(image taken from computerworld.com site)
A.I. is all the rage these days, especially after Prof. Ng’s bold statement that “whoever wins A.I., wins the Internet.” Even though A.I. has been around for decades, only now has the world started to recognize its full potential, particularly in the online world. And although A.I. is not an essential aspect of data science, it integrates quite well with it, particularly in the data modeling part of the pipeline. And yes, it is foxy as a discipline, definitely more so than conventional computational methods!
So, what is this alternative bit about? Well, conventional A.I. focuses on what all computational methods focus on: getting things done! As a result, it often lacks the interpretability that other methods have. For example, decision trees, even though they are not the best classification/regression systems out there, are excellent in that respect. Wouldn’t it be great if we had this kind of transparency in a system’s operation in the A.I. realm? “Well, duh!” you would probably say. Interpretability is great and would of course be most welcome in an A.I. system. However, even though there have been serious attempts to make it a reality, the “black box” approach eventually came to dominate.
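To see why decision trees are held up as the interpretable example, consider this toy sketch (the feature names and thresholds are invented for illustration, not taken from any real system): the model is nothing but a set of human-readable rules, so every verdict comes with the exact condition that produced it.

```python
def classify_traffic(packet_rate, failed_logins):
    """Classify network traffic with an explicit, inspectable rule path.

    A hand-rolled stand-in for a small decision tree: each branch is a
    rule a human can read, so the "why" comes for free.
    """
    if failed_logins > 10:
        return "malicious", "failed_logins > 10"
    if packet_rate > 5000:
        return "suspicious", "failed_logins <= 10 and packet_rate > 5000"
    return "benign", "failed_logins <= 10 and packet_rate <= 5000"


label, rule = classify_traffic(packet_rate=6200, failed_logins=3)
print(label)  # suspicious
print(rule)   # failed_logins <= 10 and packet_rate > 5000
```

In a real project you would learn such a tree from data (e.g. with a library’s decision-tree learner) rather than hard-code it, but the interpretability property is the same: the fitted model can be printed out as rules like the ones above.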
The black box approach to A.I. is basically the exact opposite of transparency. You give a system some inputs, it spits out some outputs, and you have no idea how it came to these conclusions. You may get a confidence metric (aka probability score) as an output, but everything else in the whole process remains obscure. That’s not necessarily a bad thing, though. Sometimes you just need a result and you don’t care so much about how it came about. However, if you want to explain this result to your manager or to a colleague, then you are in trouble! Black box A.I. systems may produce some insightful results but are terrible at communicating them. If we were to imagine them as robots, a conversation with such a system would go something like this:
Human: What do you think of this network traffic signature?
AI: It is most likely malicious.
Human: Are you sure?
AI: Yes, at 87%.
Human: Why is it malicious?
AI: I’m sorry Dave. I cannot answer that question!
Human: Who is Dave?
AI: I’m sorry Dave, I cannot answer that question either!
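To make the caricature above concrete, here is a sketch of such a black-box interface in code (everything here, from the class name to the scoring, is invented purely for illustration): a verdict and a confidence score come out, and any request for an explanation fails.

```python
import random


class BlackBoxClassifier:
    """A caricature of a black-box model: answers, but no explanations."""

    def predict(self, signature):
        # The internals are deliberately opaque; picture millions of
        # weights. Here we just derive a repeatable pseudo-score from
        # the input so the example is self-contained.
        rng = random.Random(sum(signature.encode()))
        confidence = round(rng.uniform(0.5, 0.99), 2)
        label = "malicious" if confidence > 0.6 else "benign"
        return label, confidence

    def explain(self, signature):
        # This is the whole point: the "why" is simply not available.
        raise NotImplementedError("I'm sorry Dave. I cannot answer that question!")


model = BlackBoxClassifier()
label, confidence = model.predict("suspicious TCP signature")
```

Calling `model.explain(...)` raises an error, which is the programmatic equivalent of the dialogue above.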
So, even though a black box A.I. would be useful, it is doubtful whether it would be amiable enough, or what you would call the communicative type, even if some data scientists wouldn’t mind that. As a former program manager, though, I would be hesitant to rely solely on it, and I would definitely want to keep my data people around even if that system produced more accurate models.
I’m writing all this not to discredit the value of existing A.I. systems, nor to make fun of their role in the tech world. Quite the contrary. I just want to make the point that if we are to have a truly useful and affable A.I. system that can integrate well into our society, we’ll need to look into making it more communicative. This kind of system I call an alternative artificial intelligence (or A2I for short), since it would require an alternative approach to A.I. technology. In a nutshell, the design of such a system needs to include (but not be limited to) the following features:
All these are a bit high-level, but at least they paint a picture of how A.I. could evolve in a more holistic way, so as to be more amicable and less risky. Perhaps an A.I. system like this would convince even the skeptics that A.I. can be not only useful but also safe. We can imagine a conversation with such a system going something like this:
Human: What do you think of this client?
A2I: I find that Mr. X is a reliable client and that we should continue doing business with them.
Human: Are you sure?
A2I: I am fairly certain.
Human: How sure are you?
A2I: About 89% certain.
Human: Why would you classify this client as reliable?
A2I: Because X’s investments have been shown to yield an acceptable ROI. Also, X’s credit score is within acceptable parameters. Furthermore, X’s social profile appears to reflect a person who is positively poised towards your company and does not seem to be interested in your competitors.
Human: That’s quite useful. Please put all that in a report so that I can present this insight at the 2:00 meeting.
A2I: I’ll be happy to do that, Dave!
Human: Who is Dave?
A2I: Just making a joke. Thank you for your feedback!
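The kind of exchange above boils down to a model that returns its reasons along with its verdict. Here is a minimal sketch of that idea; all the feature names, thresholds, and the crude confidence formula are invented for illustration, not a real scoring method.

```python
def assess_client(roi, credit_score, sentiment):
    """Return a verdict, a confidence, and the reasons behind them.

    Each satisfied criterion contributes one human-readable reason,
    which can later be assembled into a report.
    """
    reasons = []
    if roi >= 0.05:
        reasons.append(f"investments show an acceptable ROI ({roi:.0%})")
    if credit_score >= 650:
        reasons.append(f"credit score ({credit_score}) is within acceptable parameters")
    if sentiment > 0:
        reasons.append("social profile is positively poised towards the company")

    verdict = "reliable" if len(reasons) >= 2 else "unreliable"
    confidence = 0.6 + 0.1 * len(reasons)  # crude, purely illustrative scoring
    return verdict, confidence, reasons


verdict, confidence, reasons = assess_client(roi=0.08, credit_score=720, sentiment=0.7)
print(verdict)  # reliable
for reason in reasons:
    print("-", reason)
```

The point is not the toy scoring but the interface: verdict, confidence, and reasons travel together, so the “why” question always has an answer.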
So there you have it: an alternative A.I. that is geared towards a synergy between our species and its own. Will it be 100% safe? Probably. However, just like every A.I. system ever designed, it is bound to be limited by our perceptions and our values. So, if we were to build it with a collaborative approach in mind, rather than one of winning control over something, the risk of it ever rebelling and taking control would be mitigated. Something to think about…
Zacharias Voulgaris, PhD
Passionate data scientist with a foxy approach to technology, particularly related to A.I.