A.I. is great, especially when applied to data science. Lately, however, many people have grown quite concerned about the various dangers it may entail. This naturally polarizes the discussion, splitting views on the topic into two main camps: those dismissing these concerns and those fearing that the end of the world is upon us. The truth probably lies somewhere in between, but given the lack of evidence, any speculation on the matter may be premature and is likely to be inaccurate.
In this post I’d like to focus on another danger that many people don’t think much about, or don’t see as a danger at all: the sense of complacency that may arise from a super-automated world. Of course, complacency is a human condition and has little to do with A.I. itself, but some may argue that A.I. is to blame for it. After all, super-automation may be possible only through this new technology becoming widespread.
This danger, which can find its way into data science too if left unchecked, is a real one. However, it is neither unique nor catastrophic. After all, every large-scale technological innovation has brought about social changes that have triggered this condition to some extent. This does not mean that we should go back to the Stone Age, however. Technology is largely neutral, and the people who make it available to the world generally have the best intentions in mind. So, blaming a new technology for this matter seems a bit irresponsible.
Moreover, the advent of a new technology can be a good thing if dealt with in a mature manner. Just as you can own a car and still make time for physical exercise, you can have access to A.I. and still be a creative and productive person. At the end of the day, it’s all a matter of power. If we give away our power, our ability to choose and to shape our lives, then we are left powerless victims of whoever has taken hold of it. In the case of A.I., if we cherish automation so much that we outsource every task to it, then we are willingly creating our own peril. If, on the other hand, we choose to maintain a presence in every process where A.I. is involved, it will not be a threat, or at least not a considerable one.
There is no doubt that A.I. can be dangerous, much like every other technological advancement. However, the crux of the problem seems to lie within us rather than in the machines that embody this technology. If we give in to a sense of complacency and allow A.I. systems to take a gradually more active part in our society, then this technology may well create more problems than it solves. If, however, we deal with this new technological advent maturely, we can still benefit from it without making ourselves obsolete or irrelevant in the process.
Zacharias Voulgaris, PhD
Passionate data scientist with a foxy approach to technology, particularly related to A.I.