Recently I read about a “research project” that Google’s A.I. branch conducted on how AIs behave when tackling a certain simple scenario (a game of sorts). Various AIs were tested, including some more advanced ones, and the conclusion the researchers jumped to was that advanced AIs tend to be aggressive.
Let’s assume for a moment that this was a scientifically valid experiment and that the people involved followed scientific protocols closely. I know this is a big assumption, but bear with me for a while. Can we accurately deduce the aggressiveness of an AI from this kind of setting? Or is there some inherent bias in the research question itself?
It’s important to note that the problem the AIs were tested on involved picking apples from an orchard, the objective being to pick as many apples as possible. Naturally, there was a finite number of apples to begin with, though at first the orchard appeared abundant. Also, two AIs were tested at a time, each equipped with a laser capable of temporarily stopping the other player, so that more apples could be picked.
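To make this setup concrete, here is a minimal toy sketch of such a game in Python. To be clear, this is my own simplification, not the actual environment used in Google’s experiment; the class name, the apple count, and the freeze duration are all made up for illustration.

```python
class ApplePatch:
    """Toy two-player apple-gathering game, loosely inspired by the setup
    described above; all names and numbers are my own simplification."""

    def __init__(self, n_apples=100, freeze_steps=5):
        self.apples = n_apples        # finite supply of apples
        self.freeze_steps = freeze_steps
        self.frozen = [0, 0]          # turns each player remains "tagged out"

    def step(self, actions):
        """actions[i] is 'pick' or 'zap' for player i; returns per-player scores."""
        rewards = [0, 0]
        for i, action in enumerate(actions):
            if self.frozen[i] > 0:    # a zapped player sits this turn out
                self.frozen[i] -= 1
            elif action == 'zap':     # the laser: temporarily remove the rival
                self.frozen[1 - i] = self.freeze_steps
            elif action == 'pick' and self.apples > 0:
                self.apples -= 1      # the orchard is gradually depleted
                rewards[i] = 1        # the only objective: apples collected
        return rewards
```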
So, after the AIs were deployed, they went about their apple-picking endeavors. They took all the cash they could gather and politely lined up at an Apple store, all while contemplating which products to buy. Sorry, wrong experiment! In Google’s experiment the apples were actual fruits, unrelated to the tech giant that brought us the iPhone! Anyway, the AIs were given the option to collaborate or to adopt an adversarial strategy (i.e. be trigger-happy with their laser pistols). Naturally, they chose the latter, particularly when the number of apples was waning. The more advanced AIs adopted this course of action even sooner, probably because they could “see” further ahead.
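That “seeing further ahead” can be caricatured with a crude, hypothetical policy rule: the larger the agent’s lookahead, the earlier it anticipates scarcity and reaches for the laser. Again, this is just my sketch of the idea, not the actual learned policies.

```python
def greedy_policy(apples_left, total_apples, lookahead, rival_active=True):
    """Hypothetical scarcity-triggered policy (my illustration, not the
    real learned behavior): zap the rival once the anticipated shortage,
    scaled by how far ahead the agent can 'see', crosses a threshold."""
    scarcity = 1 - apples_left / total_apples  # 0 = abundant, 1 = depleted
    if rival_active and scarcity * lookahead > 1:
        return 'zap'  # a larger lookahead trips this condition sooner
    return 'pick'
```

With lookahead=5 this rule turns hostile after only a fifth of the apples are gone; with lookahead=2 it stays peaceful until half the orchard is picked. A more “advanced” agent, in this caricature, is simply one that defects earlier.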
So, based on this experiment, one might conclude that an AI is bound to grow more aggressive in order to accomplish its objective, much like an animal would (e.g. a dog that feels its territory is being threatened by some other dog that decided to pee there for some reason). In other words, intelligence can advance all it wants, but at the end of the day its bearer is bound to act like an animal, since it only cares about winning its game (i.e. optimizing its objective function). This sounds reasonable, right?
Well, no. This is a particular case where an AI is given only two options and a very rigid objective, while its perception is limited to the two-dimensional data of the game and a score. So, one could argue that the whole scenario is oversimplified and unrealistic. Besides, what would the AI do with all these apples? Does it account for the fact that some of them may go bad, or that if it decides to sell them in some form (e.g. an apple pie), the law of diminishing returns applies to the ROI of the whole endeavor? What about AI politics? What would other AIs think of one that exhibits such aggressive behavior? Would anyone ever want to collaborate with it on another project? Naturally, the AIs involved in Google’s experiment don’t think about these things (as a human probably would), since they have a one-track mind, caring only about the number of apples they collect. In such a scenario, no matter how advanced the AI is, it’s bound to seek actions that optimize the corresponding objective function, attacking anything that gets in its way, much like a short-sighted beast.
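To see just how much the objective function drives the behavior, consider swapping the raw apple count for a broader reward that prices in spoilage, diminishing returns, and reputation. Every term and weight below is invented purely for illustration.

```python
import math

def broader_reward(apples_picked, zaps_fired):
    """A hypothetical richer objective, to contrast with the one-track
    'maximize apple count'; all weights here are made up."""
    diminishing = math.log1p(apples_picked)  # each extra apple is worth less
    spoilage = 0.01 * apples_picked ** 2     # hoarded apples eventually go bad
    reputation_cost = 2.0 * zaps_fired       # other agents remember aggression
    return diminishing - spoilage - reputation_cost
```

Under an objective like this, every laser shot carries a lasting cost and hoarding stops paying off, so the very same optimization machinery that looked “aggressive” before would now favor restraint. The aggression, in other words, lives in the objective, not in the intelligence.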
Perhaps instead of taking the word of some “expert” as gospel, it would be more fruitful to ponder this matter for oneself. Also, if so inclined, one can build one’s own AI experiments and explore other alternatives in the AIs’ pursuit of apples (or some other measurable objective), as sketched below. After all, things are not so simple when it comes to AI, so it makes sense to examine this matter with sufficient depth of thought, unless of course we just opt for a sensational result to drive home a point, which may or may not bear any scientific validity.
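For instance, wiring together the ApplePatch and greedy_policy sketches from above yields a tiny, entirely hypothetical experiment of one’s own:

```python
# Player 0 always cooperates (picks); player 1 follows the scarcity rule.
env = ApplePatch(n_apples=40)
totals = [0, 0]
for _ in range(60):
    actions = ['pick',
               greedy_policy(env.apples, 40, lookahead=5,
                             rival_active=(env.frozen[0] == 0))]
    rewards = env.step(actions)
    totals = [t + r for t, r in zip(totals, rewards)]
print('cooperative player:', totals[0], 'apples;',
      'trigger-happy player:', totals[1], 'apples')
```

From there, one can vary the lookahead, plug in the broader reward, or add more players, and watch how quickly the “aggression” conclusion starts to depend on the rules of the game rather than on the intelligence of the players.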
Zacharias Voulgaris, PhD
Passionate data scientist with a foxy approach to technology, particularly related to A.I.