Beyond the play on words here, there is an important matter that needs to be addressed, since data science is becoming increasingly influential in various aspects of our lives. Gone are the days when it was limited to the data science departments of certain companies; these days, its impact transcends the boundaries of the organizations it serves. Take, for example, the data scientists working for large companies like Facebook and Google. Their work influences a large number of people, even outside the companies themselves. Perhaps the range of this impact is hard to fathom even for the managers of these data science teams, since it is often a lasting impact that is nearly impossible to gauge without sufficient data and the time required for it to fully manifest.
Ethics is a word that is used so much that it has lost its meaning, or maybe it was never properly defined in the first place. Also, with the impersonal aspects of ethics being formalized in particular codes of conduct, it has lost its essence, since it has been reduced to a number of dos and don'ts, a set of guidelines that can be followed unconsciously and mechanically. However, ethics is the formal aspect of morality, which is founded in the values we follow. Values are real and oftentimes comprehensible things that we express in our actions, often consciously. Values like honesty, diligence, and efficiency don't require a Master's in Philosophy to comprehend, while the ethics of a modern information worker can be a bit more abstract and challenging to relate to. Values are something we have, whether we talk about them or not, and it's not too difficult to figure out what they are with a little introspection. However, even though values are a personal matter, they have a concrete effect on our work and on how we relate to the world. Good managers are aware of that and pay attention to the values of the candidates for the positions they wish to fill. The resume/CV is important, but it's not the only factor at play when hiring a professional.
Perhaps it's time to pay more attention to this aspect of the craft. Knowledge and know-how are becoming more easily accessible to everyone, particularly those who are willing to pay for them, an investment that tends to pay off. That's great, particularly for those who wish to enter this field even if their education is not aligned with the subject. Still, it's equally important to balance this aptitude with the moral strength that empowers us to deliver our data science work in a way that respects other people's privacy and doesn't abuse the information involved. At some point in our careers, it is natural to come to a crossroads where we need to either do what is expected or do what is ethically right. The former is bound to be a more tempting option, at least financially, while the latter may be void of any direct benefit. Having a solid set of positive values may help us make the right choice instead of trading the long-term benefit of the many for the short-term gain of the few.
With everyone in A.I. feeling the need to have an opinion or even a stance on Artificial General Intelligence (AGI), we often neglect the source of this concept: namely, the well-rounded intelligence that characterizes a human being, someone having all kinds of smarts. The latter I refer to as Natural General Intelligence (NGI), and one can argue that it's as important as, if not more important than, AGI, at least at this point in time, particularly for data science professionals.
But isn't this kind of intelligence just another name for genius? Not necessarily. NGI is modeled after the human being in general, even if its artificial counterpart (AGI) is often linked to super-intelligence, a kind of supergenius that may characterize an A.I. that has developed this level of intelligence. Still, it is possible to have NGI without being a modern Leonardo da Vinci or a Benjamin Franklin.
Natural General Intelligence is all about enabling your mind to develop in different aspects, not merely the ones you need for your vocation or the ones that have been essential for your survival so far. This idea is not new; it was popular during the Renaissance. Even today we use the term "Renaissance Man" to refer to an individual who is well-rounded in his or her life and can be good at different things. In this era of overspecialization, this seems to be a Utopian endeavor, at least to some people. In reality, however, it isn't. If you want to learn a musical instrument, for example, there are plenty of courses and books you can leverage, while there are even music instructors who can teach you over the internet. As for the instruments themselves, they are far more affordable than they used to be, while for certain instruments the prices continue to drop. However, more important than developing one's musical aptitude is the growth of one's emotional intelligence (EQ), particularly interpersonal skills.
What does all this have to do with data science? Well, in data science it's easy to overspecialize too (e.g. in Machine Learning, Data Engineering, NLP, etc.). However, this creates artificial barriers that may render communication with other data professionals more challenging. Of course, more often than not these issues are alleviated by a competent data science lead or a manager with sufficient data science understanding. Still, if you as a data science professional can reduce the need for external intervention when it comes to collaborating with others, that's definitely a plus, not just in terms of smoothing the professional relationships involved, but also in terms of business value. Stand-alone professionals are much sought after, since such people tend to be (or quickly become) assets. In time, these professionals can grow into versatilists and/or assume leadership positions.
From all this, it is hopefully clear that Natural General Intelligence is more tangible and significantly more feasible than any other kind of advanced intelligence capable of yielding value in an organization. What's more, an individual with NGI is bound to be more relatable and accountable, rendering the whole team he/she belongs to a more functional unit. Perhaps such a goal is more beneficial than the blind pursuit of some exotic kind of A.I. that can solve all of our problems. The latter is intriguing and worth investigating, but I wouldn't bet on it benefiting the average Joe any time soon!
In the most venerable of sciences, Physics, there are two closely linked concepts: work and energy. Work is the result of a force applied over a given distance, while energy is often seen as the capacity to produce work. Energy takes a variety of forms, which enables us to produce work through its use, be it a preexisting form (e.g. the energy stored in uranium and thorium) or a man-made one (e.g. a battery). This fundamental relationship between work and energy, which we often take for granted, applies to data science as well, if we substitute value for energy.
Value is sometimes considered the fifth V of Big Data (the other four being Volume, Velocity, Variety, and Veracity). This is quite inaccurate, though, since value is a fundamental characteristic of information, not of a particular kind of data. Information can be found even in relatively small datasets (which were considered large once, before the era of big data), so calling value a characteristic of big data can be misleading. This misconception doesn't take anything away from the idea of value, though, which is often a value instilled in many data scientists, particularly those who go beyond the techniques and methods. These data scientists penetrate the essence of the craft through the development of the data science mindset, which is the most valuable aspect of the field.
Value concerns business people too, since it is one of the outcomes of a data science project, one that ideally translates into increased revenue, be it via the development of a new product or by making a business process more efficient. Also, value can enable an organization to expand its scope, know its customers better (KYC), and liaise with other organizations more effectively. This value, which often takes the form of insights, is at the core, and oftentimes at the end, of the data science pipeline.
Value, however, can also take the form of a product, such as an API that automates a particular evaluation process or a prediction. Although the technology behind such a product is nothing spectacular (APIs have existed for a while now and they are fairly straightforward for a software engineer to develop), the data science part of the product is what brings about its real value. Without a data science engine behind it, an API is bound to be more of an ETL tool, which, although still valuable, is not of the same caliber as a data science-powered API.
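To make this concrete, here is a minimal sketch of what the "data science engine" behind such an API might look like. Everything in it is hypothetical and purely for illustration: the churn-scoring task, the feature names, and the hand-picked weights are stand-ins, not a real model or a real endpoint.

```python
import json
import math

# Hypothetical model parameters (a hand-fitted linear scorer used as a
# stand-in for whatever model the data science work would actually produce).
MODEL_WEIGHTS = {"tenure_years": 0.3, "monthly_spend": 0.01}
MODEL_BIAS = -1.0

def predict_churn_score(features: dict) -> float:
    """Return a score in [0, 1] via a logistic transform of a linear model."""
    z = MODEL_BIAS + sum(MODEL_WEIGHTS[k] * float(features.get(k, 0.0))
                         for k in MODEL_WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def handle_request(body: str) -> str:
    """What an API route handler would do: parse JSON in, predict, JSON out."""
    features = json.loads(body)
    return json.dumps({"score": round(predict_churn_score(features), 4)})

print(handle_request('{"tenure_years": 5, "monthly_spend": 120}'))
```

The API plumbing (routing, authentication, serialization) is the easy, commoditized part; the `predict_churn_score` function is where the data science value would actually live.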
Value in data science is often found in the information distilled from the data, particularly through a predictive analytics model. Elements of it, however, are already encountered in the data discovery stage of the pipeline, where the data scientist evaluates the features at hand and the metadata available. This is often done through the creation of data models, which is why it belongs to the data modeling part of the pipeline. I talk about all this in detail in the Data Science Modeling Tutorial, available on the O'Reilly (formerly known as Safari) platform.
Value in data science is a big topic, and if I were to continue, this article would be irksomely long. It would be best to continue this in another article, or even a series of articles, in the weeks to come. Cheers!
The knowledge vs. faith conundrum has been a philosophical debate for eons, yet it is usually geared towards abstract matters, such as life after death. So, how does this apply to a pragmatic field such as data science? Well, contrary to what many people think, most data science practitioners rely on faith to a great extent when dealing with data science matters. But why is that?
Unfortunately, most people learning the craft have a strict timetable to keep, so they don't have a chance to go in depth into the material covered. This increasingly severe temporal limitation is coupled with other factors, such as the plethora of "cookbooks" on the topic. These are not to be confused with actual cookbooks, which comprise various recipes, oftentimes original tried-and-tested dishes developed by experienced chefs; those cookbooks are fine and probably offer a bigger bang for your buck than the technical kind, which are basically a bunch of methods and functions, usually in a popular programming language, organized by someone who oftentimes doesn't even understand them. If you rely mainly on such sources of knowledge, you are basically putting your faith in these people and creating gaps in your understanding of the craft.
So, if you obtain technical knowledge quickly or from a source that doesn't go much in depth, you are unlikely to truly know data science. That's not to say that you shouldn't read books; far from it. Books are useful, but no matter how good they are, the best way to learn something remains the empirical approach. Going under the hood of the methods involved, implementing methods from scratch, and even experimenting with your own ideas are all good ways to learn something in more depth and remember it for longer. Also, through empirical knowledge of the craft, you become more confident about what you know and oftentimes more aware of the boundaries of your knowledge.
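As a small illustration of this empirical approach, consider re-implementing a familiar method from scratch instead of calling a library function. The example below (a plain Pearson correlation, chosen purely for illustration) is the kind of exercise that exposes what is actually happening under the hood:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient, written from scratch: the covariance
    of the two variables divided by the product of their standard deviations
    (the n terms cancel, so sums suffice)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # perfectly linear -> ~1.0
print(pearson_r([1, 2, 3, 4], [4, 3, 2, 1]))   # perfectly inverse -> ~-1.0
```

Writing even a ten-line function like this forces you to confront the definition of the method, its edge cases (what if a variable is constant?), and its assumptions, which is exactly the kind of knowledge a cookbook skips.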
There is room for faith in our field, for example when you trust what your data science lead or director tells you, when you accept advice from a mentor, or when you rely on the know-how of an academic paper written by someone who knows data science in depth. However, it's good to balance it with empirical knowledge, to the extent your time allows. Perhaps in abstract matters it's hard to obtain empirical knowledge, but on things that you can test yourself, the only limitations are man-made ones. Are you willing to transcend them?
There are many mistakes that can be made in data science, many of which can go unnoticed for a while. The reason is that, unlike coding bugs, these mistakes don't throw an error or an exception, making them harder to spot and fix. In my view, the biggest such mistake is thinking that one aspect of data science is so significantly better than the others that the latter don't matter much. I used to think like that back in my PhD days (my thesis was on Machine Learning and heuristics), but fortunately I discovered the error of my thinking and started broadening my perspective on this matter, something I continue to do as I learn more about this fascinating field.
Let's look into this more closely. For starters, there are several frameworks or tool-kits available in data science today, ranging from Statistics to Machine Learning and, lately, A.I.-based models. All of them have their own sets of advantages as well as limitations. Many Machine Learning models, for example, particularly A.I.-based ones (mainly ANNs), are very hard to interpret and are often referred to as black boxes. Stats models, on the other hand, may be easy to interpret, but they may not be as accurate, while they tend to rest on a number of assumptions that may not always hold true. That's why claiming that one of these frameworks or tool-kits is the best one, at the expense of the others, is a very shaky position.
However, with all the hype around the latest and greatest Deep Learning methods (and other A.I.-based models used in data science), it's difficult to argue against this position. Also, with Statistics having such a good reputation in academia and proven applicability across different domains, it's equally hard to argue that it's not as good a framework. This may be good in a way, since it keeps us humble, but it may also obstruct progress. How can you have the nerve to put forward something new if it doesn't comply with what is considered "the best," or with the traditional approaches to learning from data, such as Statistical Learning?
I'm not claiming to have a solution to this conundrum, by the way, and perhaps it's not something that can be answered simply. However, these kinds of riddles that plague the data science field are what can provide good food for thought and bring about a sense of genuine wonder about the prospects and the future of data science. Maybe when someone asks us what the best framework of data science is, it's better to say "I don't know" and consider using different ones in tandem, instead of flocking to one group or another of people who have made up their minds about this and are unlikely to ever change them. After all, open-mindedness is something that never gets old, at least not in a truly scientific field.
Being open-minded has been a key trait of any scientist since the beginning of Science. The scientific method is basically a practice that relies on open-mindedness, focusing on testing a hypothesis based on the evidence at hand. However, nowadays there is a trend towards a dogmatic, almost heresy-hunting attitude (for lack of a better word) when it comes to the science of data, as well as the application of A.I. in it.
Open-mindedness is not just about being open to the results of an experiment, though. That's easy. Being open to other people's ideas and beliefs is also important. It's easy to dismiss some people, especially those writing about this matter, because they lack the training you may have in the field. Still, those people may have some interesting insights, which they often express in their articles. You don't have to agree with them in order to gain from this and expand your perspective. However, dismissing an article because it makes use of this or that term (which in your opinion is not that relevant to the topic it tackles) is closed-minded.
That's not to say that we should accept everything we read, however. Some of the material out there is of low informational value and can be biased towards this or that technology, for various reasons. That's normal, since the field of data science (as well as A.I., to some extent) is closely linked to the business world and is influenced by the dynamics of the markets for tools and frameworks related to data analytics.
So, what do we do about all this? For starters, we can read an article before we dismiss it as irrelevant or otherwise problematic. Also, if we don't agree with the author about something, we can construct arguments against that point and express them without attacking the person. There are people who are incredibly toxic to the field, propagating their erroneous beliefs, but fortunately these are few. Also, they are probably beyond salvation, since they have too large a following to ever question their beliefs. Still, by going against their propaganda, we can help the people who haven't yet made up their minds on the topic.
Perhaps that's why the most important thing you can learn about data science and A.I. is to have a mindset that is congruent with your development as a professional, always maintaining an open mind. Just because there are fanatics in this field who get paid way more than they should and maintain a large following due to their charisma, it doesn't mean that theirs is the best way to go. It's not easy to be open-minded in a place where fanaticism thrives, but in the long run it's a viable strategy. After all, data science is here to stay, in one form or another, while the views on it that are now popular are bound to change.
It's funny how, when you think you know something, you often discover that you don't know it that well. This is particularly the case in data science, a field that holds more mystery than most people think. For example, a great deal of heuristics and models are based on the idea of similarity, and several metrics have been developed to gauge it. Many of them are based on distances, but others are more original, in various ways.
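To ground the distance-based kind mentioned above, here is one common textbook way to turn a distance into a similarity score in [0, 1]. To be clear, this is a generic construction for illustration only, not the original metric discussed below: identical points score 1, and the score decays towards 0 as the distance grows.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length numeric vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def distance_similarity(a, b):
    """Map a distance into (0, 1]: identical points score 1.0,
    and the score decays towards 0 as the points move apart."""
    return 1.0 / (1.0 + euclidean(a, b))

print(distance_similarity([1.0, 2.0], [1.0, 2.0]))  # identical points -> 1.0
print(distance_similarity([0.0, 0.0], [3.0, 4.0]))  # distance 5 -> ~0.167
```

Constructions like this inherit the weaknesses of the underlying distance, including the dimensionality issues mentioned below, which is precisely what makes a similarity metric free of such constraints interesting.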
During my exploration of the hidden aspects of data science (my favorite hobby), I came across the idea of a similarity metric that, unlike the distance-based ones, is not subject to dimensionality constraints, while also being fast and easy to calculate. Also, this is something original that I haven't encountered anywhere else, and I've looked around quite a bit, especially when I was writing the book "Data Science Mindset, Methodologies and Misconceptions," where I talk about similarity metrics briefly.
Anyway, I cannot explain it in detail here, because this metric makes use of operators and heuristics that are themselves original, part of my new frameworks of data analytics. Let's just say that it makes use of Math in a way that seems familiar and comprehensible but has not been used before. Also, it yields values in [0, 1], with 1 denoting complete similarity and 0 complete dissimilarity. The idea is to gauge similarity from different perspectives and combine the results, something that unfortunately only works if the data is properly normalized. Given that all conventional ways of normalizing data are inherently flawed, this metric is bound to work only in properly normalized data spaces. Because such spaces are more or less balanced (even if they have outliers), the average similarity of all the data points in them is always around 0.5 (neutral similarity), something that makes the metric very easy to interpret.
As with other metrics and heuristics, it's not the metrics themselves that are the most important thing, but the doors they open, revealing new possibilities (e.g. a new kind of discernibility metric). That's why I found the picture of the fractal above quite relevant, since it is all about self-similarity, a concept that led us to the discovery of a new kind of Mathematics related to Chaos. Interestingly, even with such advanced knowledge, we are unable to fully comprehend the chaos that reigns in modern A.I. systems, something that has its own set of problems. So, I ask you to wonder for a moment how much better A.I. would be if it were developed using comprehensible heuristics, making it transparent and interpretable. Perhaps its thinking patterns wouldn't be as dissimilar to ours, and we wouldn't see it as much of a threat.
As experience and knowledge accumulate in our minds, it's increasingly easy to lose touch with the original spark that brought us to this journey of learning in the fascinating field of data science. I'm referring to that sense of wonder that made all this otherwise dry know-how of math, programming, and data something we could lose sleep over. Because if you are really in a state of wonder, it's easy to forget to eat, postpone other tasks, and even find sleep somewhat less important, when your other option is delving more into the learning of the craft.
A sense of wonder, however, is much more than curiosity or even interest in data science. It is all that, but it's also a way of feeling, a higher sentiment if you will. Being in wonder is what incites wondering and going into more depth. It is what makes a seemingly mundane task, such as data cleaning, appear intriguing and valuable. It is what makes learning about a new model truly interesting, not just as a memory-based activity, but as something that sparks imagination and innovation. It is wonder that makes us ask "what if?" instead of just being content with what is presented to us.
Naturally, this sense of wonder is fleeting, just like the perspective we have as newcomers to data science. The more we learn, the more limited our wanderings in the vast knowledge that the field entails, since focusing on specific tasks and time frames becomes of the essence. That's normal, since as data scientists we need to be practical and attuned to the way the world works; otherwise, we'd be unemployable. Yet, at a certain point of aptitude and understanding of the craft, it is this sense of wonder that enables us to go further and grow beyond what we are expected to be.
The sense of wonder can be cultivated through a sincere wish to become better for the sake of being better, a wish nourished by our love for data science. Ambition can only take us so far; plus, after a while, it can become stressful. Wanting to become better because of a lasting motivation is therefore essential for bringing about the sense of wonder. However, we also need to make time for it and allocate resources to such endeavors. Learning through a book or a crash course may be efficient, but it's what we do beyond this that enables us to learn deeply and cultivate the sense of wonder. Liaising with people who already have this sense strong in them, such as beginners who are dedicated learners of the craft, can be a great aid too. Finally, we need to think about the craft and experiment with new ideas. If we just rely on what this or that expert says, we are bound to be limited by them. We need to study existing ideas, but also dare to venture beyond them, exploring new models and new metrics. Most of them are bound to lead nowhere, but some of them are bound to work and help us look at data science from a different angle.
Cultivating a sense of wonder isn’t easy and it’s an ongoing challenge. However, through it, new perspectives come about (such as some of the stuff I talk about in this blog periodically) while the connectedness of the various aspects of the field becomes apparent. All in all, it’s this perspective that makes the field truly wonderful, much more than a line of work. That’s something to wonder about...
I've talked about mentoring quite a bit lately, as well as in a video of mine available on Safari. Although this topic is not that much in vogue these days, I'd like to say a few more things about it and why it is relevant in data science and A.I. these days, perhaps more than ever.
First of all, mentoring is good for both parties, and it can even be profitable. Although it's doubtful you'll get rich by being a mentor, you have a lot to gain in terms of a deeper understanding of the craft once you start explaining concepts to your mentees, while there is the opportunity to revive the "beginner's mind" through this whole experience. If you are a mentee, you'll save lots of time when learning data science / A.I., since your mentor will answer your queries and even guide you towards the resources you need. If the mentor is good, they may even help you develop the mindset of data science, something that's hard to do on your own.
Also, if you are part of the Thinkful online school, you have a whole set of benefits too. As a mentor, you'll have an easier time finding a mentee and even get paid to mentor them. You'll also have a structured learning path, through the corresponding data science courses, so your mentee won't need you for everything, since the platform provides him/her with plenty of resources for all the basics of data science. As a mentee of the Thinkful school, you'll have access to vetted data science professionals who will help you learn, with your mentor supporting you further through tailored, hands-on advice and guidance throughout your data science learning.
On another note, through mentoring you get a chance to stay grounded, regardless of your role in this partnership. As a mentor, it's easy to get detached from the world, due to the more high-level way of experiencing the craft, while as a mentee it's easy to get lost in the math or programming side of things. Mentoring helps you stay close to the essence of the field, which has to do with how it is applied and the various methodologies involved in its use.
Interestingly, with today's web technologies, it's easy to experience mentoring wherever you are, as long as you have a reliable internet connection and a decent computer, particularly one with a webcam. Although mentoring in person is generally better, you don't have to depend on physical proximity in order to mentor or be mentored in the fascinating field of data science.
In the previous article, I talked about the student–mentee dichotomy in a data science context. However, it's really the teacher–mentor dichotomy that is at the root of all this, while the data science field itself has a role to play too, something many learners of the craft have forgotten. In this article, we'll explore just that, in an attempt to gain a better perspective on how true learning works and what it takes to connect to the essence of this fascinating field, one that's being tainted by those who see it as merely a career-boosting opportunity.
Although there is nothing wrong with the role of a professor in any field, especially the sciences, it's important to highlight a distinct difference between a professor and a mentor. The former is usually geared towards giving a set of lectures in order to fulfill the requirements of his/her professional position, something that may or may not be adequate for conveying the essence of the field, especially one as complex as data science. It's not that the professor doesn't care about all this, but the nature of the profession makes it incredibly difficult, if not impossible, to do it justice. After all, most of these professional educators have other priorities, such as their research.
Mentors, on the other hand, help others learn about data science not because it's our job, but because we care about the field, having other sources of income to cover our daily expenses. Of course, we may still have monetary benefits linked to mentoring, but that's generally not the key motivation for all this. Also, we share knowledge about the field based on our own experience, rather than some curriculum that may not always align with the field. Finally, the connection with the people we help (mentees) is more direct and tailored to their needs, rather than generic and impersonal.
It's important to note that these two roles, although different, may still overlap. There are professors who can act as mentors, though they usually do so outside the classroom, as in their supervisory role for a PhD student. Also, someone can be a mentor and still work part-time at a university. So, it's good to maintain a flexible view of this whole matter.
Anyway, if you are willing to learn data science in depth, it's definitely better to do so through a mentor, particularly one with a diversity of experiences in the field. But what about the mentor himself? Where does he learn about all this? In many cases, a mentor may have another mentor to learn from, though it is also possible that the data science field itself is that person's mentor. After all, data science is a living field, dynamic and ever-changing, with plenty of things to teach those who are willing to learn from it. Many of its secrets have been discovered, but there is still a lot that is uncharted territory. That's something data science can teach anyone who is willing to learn from it. All it takes is a solid understanding of the fundamentals, a strong sense of discipline, and the open-mindedness to abandon what you know for what you can know, if you maintain a beginner's mind...
Zacharias Voulgaris, PhD
Passionate data scientist with a foxy approach to technology, particularly related to A.I.