We often treat a new A.I. as a child that we need to teach and pay close attention to, so that it can evolve into a mature and responsible entity. A fox-like approach, however, would be to turn things around and ask what we, as human beings, can learn from an A.I., particularly one of a more advanced level.
Of course, A.I. is still at a very rudimentary stage of its evolution, so it doesn’t have much to teach us that we couldn’t learn from another human being. However, the wise human who would make a great mentor is likely to be tied up by everyday commitments, personal and professional, making them inaccessible. Finding such a person may also take many years, assuming it is even possible given our circumstances. So learning from an A.I. may be the next best thing; plus, we don’t have to deal with the personality-related impediments that often plague human relationships, even the more professional ones.
An A.I., first and foremost, is unassuming. This is something we can all develop more of, no matter how objective we think we are. An A.I. doesn’t have any prejudices, so it deals with every situation anew, much like a child, making it better poised to find the optimum solution to the problem at hand. That’s something encouraged and often practiced in scientific ecosystems, like research centers and R&D departments, where the objective is so important that all assumptions are set aside, at least long enough for this approach to yield some measurable results.
A.I.s also tend to be very efficient, minimizing waste and unnecessary tasks. They don’t care about politics or massaging our egos. Their only focus is maximizing an objective function, given a series of constraints, and, wherever applicable, taking actions based on all this. If we were to act like that, we’d cut our time overheads significantly, since we’d concentrate on results rather than on pleasing someone who may have influence over us professionally or personally.
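To make the idea concrete, here is a minimal sketch of that framing: a hypothetical objective function maximized within a constrained range, using a simple brute-force search (the function and bounds are illustrative, not from any particular system).

```python
def objective(x):
    # Hypothetical payoff: peaks at x = 3, but the constraint below caps x at 2.
    return -(x - 3) ** 2

def maximize(func, lower, upper, steps=10_000):
    # Brute-force grid search over the feasible interval [lower, upper].
    best_x, best_val = lower, func(lower)
    for i in range(1, steps + 1):
        x = lower + (upper - lower) * i / steps
        val = func(x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

best_x, best_val = maximize(objective, 0.0, 2.0)
# The constraint binds: the best feasible choice is x = 2, not the
# unconstrained optimum x = 3.
```

An A.I. pursuing this goal simply picks the best feasible option; no politics, no ego, just the objective and its constraints.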
A third lesson we could take from A.I. is organization. Although we certainly have some organization in our lives, we have a lot to learn from the cool-headed A.I. and its organized approach to things. An A.I. tends to model its knowledge (and data) in coherent logical structures, immune to emotional or otherwise irrational influences. It deals with the facts rather than its interpretations of them. It builds functional structures rather than pretty pictures to deal with the inherent disorder of its inputs. It makes graphs and optimizes them, rather than graphics that are easy on the eyes (although there is value in those too, in a data science setting). Clearly, we don’t have to abandon our sentimental side to imitate this highly efficient approach to problem-solving, but we can try to be more detached in our work, rather than let sentimental attachments and eye candy exert influence over our process.
Perhaps if we were to treat A.I. as a potential teacher of sorts, in the things it does well, it wouldn’t seem so threatening. Maybe our fear of it is merely a projection, an objectification of our inherent fear of our own minds, which remain largely uncharted territory. A.I. doesn’t have an agenda and is not out to get us. If we treat it as an educational tool, it may prove an asset that brings about a mutually beneficial synergy. It’s up to us.
Zacharias Voulgaris, PhD
Passionate data scientist with a foxy approach to technology, particularly related to A.I.