Recently, the tech news bubble featured an interesting phenomenon that caught Facebook's A.I. experts by surprise: two AIs developed by the company's AI team were found communicating with each other in unexpected ways during the testing phase of the project. Although they were using what would qualify as "bad English," the people monitoring these exchanges could not understand what information was being conveyed, and some argued that the AIs may have invented their own language!

Proving or disproving that claim would require a considerable amount of data, but the possibility that these young AIs overstepped their boundaries is quite real. This is hardly a catastrophic event and not something to lose sleep over, yet it may still be a good proxy for a situation that no one would like to experience: AIs getting out of control. Because if they can communicate independently now, who knows what they could do in the future, if they are treated with the same recklessness that Facebook has demonstrated?

The outcomes of this may not be as obvious as those portrayed in sci-fi films. On the contrary, they are bound to be very subtle, so much so that they would be hard to detect, at least at first. Some would classify them as system bugs, but not the kind of bugs that cause a system failure and make a coder want to smash a computer screen. These bugs would linger in the code until they manifested in some unforeseen (and possibly unforeseeable) imbalance. Best case scenario, they would frustrate users of the popular social medium, or prompt complaints about the new system. I don't want to think about what the worst case scenario would be...

Of course, the A.I. fanboys are bound to dismiss this matter as a fluke, or as an unfortunate novelty that people turn into an issue when it isn't.
They would argue that small hiccups like this one are inevitable and that we should just power through. There is something admirable about that optimism, but this is a highly complex matter that technology leaders like Elon Musk and Bill Gates have repeatedly warned us about. This is not like a fault in a car that might cause a single road accident if left unattended; it is more like a fault in an airport's control tower that could cause planes to crash all over the place. Fortunately, airports have contingencies in place to prevent such catastrophes. Can we say the same about the A.I. field?

There are different ways to respond to this kind of situation, and I'm not saying that we should panic or start fearing A.I. That wouldn't help much, if at all. A more prudent response would be to treat this as a learning experience and an opportunity to implement fail-safes that keep this sort of A.I. behavior in check. After all, I doubt that the absence of helpful AIs on Facebook or any other social medium would drive people away from social networks, while the presence of an unpredictable AI probably would...
Zacharias Voulgaris, PhD. Passionate data scientist with a foxy approach to technology, particularly related to A.I.

April 2024