When designing an A.I. system these days, people seem to focus mainly on one thing: efficiency. There is no doubt about the value of that trait, but other factors also matter if such a system is to be not only practical but also safe and useful across projects. In short, for AGI to one day become feasible, we need to start building A.I. systems that fulfill a certain set of requirements.
Transparency is the Achilles’ heel of most modern A.I. systems and a key A.I. safety concern. It is not an unsolvable problem, however: many A.I. researchers (particularly those bold enough to think outside the black box of Deep Learning systems) have tackled it, and some have proposed ways of shedding light on the outputs of the DL network crunching the data behind those cat pictures it is asked to process. Unfortunately, the transparency they add is geared mostly towards image data, which is easier to comprehend and interpret when it takes the form of complex meta-features across the layers of a DL network. Still, transparency is possible in alternative A.I. systems that use a simpler, perhaps non-network-based, architecture.
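As a toy illustration of the transparency a simpler, non-network architecture can offer, here is a minimal sketch (hypothetical example, pure Python): a one-feature decision stump whose entire learned “reasoning” can be printed as a single human-readable rule, in contrast to the opaque meta-features inside a deep network’s layers.

```python
# A transparent toy model: a one-feature decision stump. Its whole decision
# logic is a single threshold, so the model can explain itself verbatim.

def fit_stump(xs, ys):
    """Pick the threshold on a single feature that best separates the labels."""
    best = None
    for t in sorted(set(xs)):
        preds = [1 if x >= t else 0 for x in xs]
        acc = sum(p == y for p, y in zip(preds, ys)) / len(ys)
        if best is None or acc > best[1]:
            best = (t, acc)
    threshold, accuracy = best
    return threshold, accuracy

def explain(threshold):
    # The model's full "reasoning", stated as one human-readable rule.
    return f"predict 1 if feature >= {threshold}, else 0"

xs = [1.0, 2.0, 3.0, 8.0, 9.0, 10.0]
ys = [0, 0, 0, 1, 1, 1]
t, acc = fit_stump(xs, ys)
print(explain(t))  # -> predict 1 if feature >= 8.0, else 0
```

Nothing about this sketch is state of the art, but it makes the point: when a model’s parameters map directly onto a readable rule, transparency comes for free rather than being bolted on afterwards.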
It goes without saying that a system needs to be autonomous, even in its training, if it is to be considered intelligent. Humans will still play an important role by providing the A.I. with data that makes sense, as well as some general direction (e.g. the terminal goal and perhaps some instrumental goals), but the system needs to be able to figure out its own parameters automatically, using the data at hand. Otherwise, its effectiveness will be limited by the know-how of the “expert” involved, who may or may not have an in-depth understanding of the field or of how data science works.
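To make the idea concrete, here is a minimal sketch (hypothetical values, pure Python) of parameters being found automatically from data: the human supplies only the data and the goal (minimise squared error), and gradient descent does the rest, with no expert hand-tuning the slope or intercept.

```python
# "Autonomy in training" in miniature: given data and an objective,
# the system fits its own parameters via gradient descent.

def fit_line(xs, ys, lr=0.01, steps=2000):
    w, b = 0.0, 0.0  # parameters start uninformed
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]  # generated by the rule y = 2x + 1
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```

The same principle, scaled up enormously, is what lets modern systems learn millions of parameters without anyone specifying them by hand.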
For an A.I. system to be effective, it has to be scalable, i.e. deployable on a large computer network, be it a cluster or the cloud. Otherwise, the system is bound to have a very limited scope and, therefore, limited usefulness. For an A.I. system to scale well, however, its various processes need to be parallelizable, which requires a certain kind of design. DL networks are built that way, but not all A.I. systems are as easy to parallelize and scale.
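The design that makes this possible can be sketched in a few lines (hypothetical function names, Python standard library only): the workload is split into independent chunks, each chunk is processed in isolation, and the partial results are merged at the end. The same map/reduce shape that runs here on local threads is what lets a system spread across a cluster or the cloud.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for any per-chunk computation (scoring, feature extraction, ...).
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(process_chunk, chunks)  # map step, in parallel
    return sum(partials)                            # reduce step

data = list(range(1, 101))
print(parallel_sum_of_squares(data))  # -> 338350
```

The key property is that `process_chunk` needs no information from any other chunk; an algorithm whose steps depend on each other sequentially cannot be split up this way, which is exactly why some A.I. systems scale poorly.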
A capacity for fuzzy, intuitive judgement is an important aspect of our own thinking and one that hasn’t been implemented much in A.I. systems, partly because of methodological limitations and partly because it isn’t easy for most A.I. people to wrap their heads around. In essence, it is the most down-to-earth form of intuition and what allows lateral thinking. An A.I. system with this attribute would be able to think more like a human and would therefore be more easily understood and more relatable. It’s possible that this would mitigate the risks of the rigid rule-based thinking many A.I. systems exhibit today, even when it is concealed in complex architectures.
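One way to glimpse the difference is to contrast a rigid rule with a degree-based judgement, as in this minimal sketch (toy thresholds chosen purely for illustration): the crisp rule flips its answer at an arbitrary boundary, while the fuzzy version expresses how strongly something holds.

```python
def crisp_is_tall(height_cm):
    # Rigid rule: 179.9 cm and 180.1 cm get opposite answers.
    return height_cm >= 180

def fuzzy_is_tall(height_cm, low=170.0, high=190.0):
    # Degree of "tallness" ramps smoothly from 0 to 1 between low and high.
    if height_cm <= low:
        return 0.0
    if height_cm >= high:
        return 1.0
    return (height_cm - low) / (high - low)

print(crisp_is_tall(179.9), crisp_is_tall(180.1))  # opposite verdicts
print(fuzzy_is_tall(179.9), fuzzy_is_tall(180.1))  # both near 0.5, no hard flip
```

This is only a caricature of human intuition, of course, but it shows how a system can reason in shades of grey where a rule-based one sees only black and white.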
Of course, we shouldn’t neglect efficiency in this whole design. An A.I. system has to be efficient in both its application and its training. If it takes a whole data center to train, that’s not efficient, even if it is feasible for those with access to such computational resources. An efficient A.I. system should be able to perform even on a small computer cluster, even if its effectiveness will be more limited than that of the same system running with greater resources.
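A small example of this kind of frugality (hypothetical, pure Python): a streaming mean that consumes one observation at a time, so memory use stays constant no matter how large the dataset, instead of loading everything at once the way a resource-hungry design would.

```python
def streaming_mean(stream):
    # Incremental (Welford-style) mean update: O(1) memory for any input size.
    count, mean = 0, 0.0
    for x in stream:
        count += 1
        mean += (x - mean) / count
    return mean

# A generator: values are produced on demand, never all held in memory.
print(streaming_mean(float(i) for i in range(1, 10_001)))  # close to 5000.5
```

The broader point stands regardless of the example: efficiency is as much a matter of algorithmic design as of raw hardware.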
Putting It All Together
Although A.I. systems today are fascinating and, to some extent, inspiring in their potential, they could be better. Specifically, if we were to design them with the aforementioned principles in mind, they’d be more tasteful, if you catch my drift. Such systems might be not only useful and practical but also safer and easier to relate to, making their integration into our society more natural and mutually beneficial.
Zacharias Voulgaris, PhD
Passionate data scientist with a foxy approach to technology, particularly related to A.I.