I've talked about mentoring in the past and what a good mentee looks like. Here I'd like to highlight the differences between a student and a mentee since it's easy to confuse the two.
First of all, a student is someone who is generally more passive than a mentee. The latter takes initiative and feels responsible for her progress in the field of study. The former often outsources this to the instructor(s) and focuses more on passing exams rather than actually learning.
In addition, a mentee has a closer connection with the person helping him learn, namely the mentor. The student's relationship with his instructors is more impersonal, mainly because the latter have lots of students to deal with and can't usually focus on every single one unless it's for their dissertation project or something. The mentor, however, is more dedicated to getting to know the mentee better and coaching him accordingly.
Moreover, the mentee-mentor relationship extends beyond academia, even though it can exist in a university too (e.g. in the case of a PhD program). More often than not, mentees and mentors are working professionals, and the mentee usually already has a degree.
Furthermore, mentees tend to have a more focused approach to learning, usually related to a specific field, much like an apprentice. The student, on the other hand, may study lots of different fields, as part of her curriculum at the college or university.
Interestingly, although there are several university courses on Data Science these days, most people who learn the craft tend to do so either independently or with the help of a mentor. Perhaps there is something to this beyond coincidence. That's not to say that university courses are bad, but oftentimes learning data science as a mentee tends to be more cost-effective and efficient, time-wise.
What are your thoughts and experiences on the matter?
Short answer: Nope! Longer answer: clustering can be a simple deterministic problem, provided you figure out the optimal centroids to start with. But isn’t the latter the solution of a stochastic process? Again, nope. You can meander around the feature space like a gambler, hoping to find some points that can yield a good solution, or you can tackle the whole problem scientifically. To do that, however, you have to forget everything you know about clustering and even basic statistics, since the latter are inherently limited and, frankly, somewhat irrelevant to proper clustering.
Finding the optimal clusters is a two-fold problem: 1. you need to figure out which solutions make sense for the data (i.e. a good value for K), and 2. you need to arrive at these solutions in a methodical and robust manner. The former has been resolved as a problem and it’s fairly trivial. Vincent Granville talked about it in his blog many years ago, and since he is better at explaining things than I am, I’m not going to bother with that part at all. My solution to it is a bit different, but it’s still heuristics-based. The second part of the problem is also the more challenging one, since it’s something many people have been pursuing for a while now, without much success (unless you count the super slow method of DBSCAN, with more parameters than letters in its name, as a viable solution).
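For readers who want a feel for the K-selection part of the problem, here is a minimal elbow-style sketch in plain NumPy. To be clear, this is the standard textbook heuristic, not the author's (undisclosed) approach: run k-means for increasing K and watch where the within-cluster sum of squares (WSS) stops dropping sharply.

```python
import numpy as np

def kmeans_wss(X, k, iters=100, restarts=10, seed=0):
    """Minimal Lloyd's k-means with random restarts; returns the best
    within-cluster sum of squares (WSS) found."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(restarts):
        centroids = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            # assign each point to its nearest centroid
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # move each centroid to the mean of its assigned points
            new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centroids[j] for j in range(k)])
            if np.allclose(new, centroids):
                break
            centroids = new
        labels = np.linalg.norm(X[:, None, :] - centroids[None, :, :],
                                axis=2).argmin(axis=1)
        best = min(best, float(((X - centroids[labels]) ** 2).sum()))
    return best

# three well-separated blobs, so the "elbow" should show up at K = 3
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ((0, 0), (5, 5), (10, 0))])
wss = {k: kmeans_wss(X, k) for k in range(1, 7)}
```

Plotting `wss` against K, the curve should drop steeply up to K = 3 and flatten afterwards; that kink is the usual heuristic signal for a sensible K.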
To find the optimal centroids, you need to take into account two things: the density around each centroid and the distances of each centroid to the other ones. Then you need to combine the two into a single metric, which you need to maximize. Each of these problems seems fairly trivial, but what many people don’t realize is that in practice it’s very hard, especially if you have multi-dimensional data (where conventional distance metrics fail) and lots of it (making the density calculations a major pain). Fortunately, I found a solution to both of these problems using 1. a new kind of distance metric that yields a higher U value (this is the heuristic used to evaluate distance metrics in higher-dimensional space), though with an inevitable compromise, and 2. a far more efficient way of calculating densities. The aforementioned compromise is that this metric cannot guarantee that the triangle inequality holds, but then again, this is not something you need for clustering anyway. As long as the clustering algo converges, you are fine.
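To make the two ingredients concrete, here is a toy sketch of such a combined score in plain NumPy. The particular combination (mean local density times mean centroid separation) and the `radius` parameter are made up for illustration; the author's actual metric, distance function, and density calculation are not disclosed.

```python
import numpy as np

def centroid_score(X, centroids, radius=1.0):
    """Hypothetical score combining (1) how dense the data is around each
    centroid and (2) how far apart the centroids are from one another.
    Higher is better; the exact combination is illustrative only."""
    # (1) density: fraction of points within `radius` of each centroid
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    density = (d < radius).mean(axis=0)          # one value per centroid
    # (2) separation: mean pairwise distance between distinct centroids
    cd = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=2)
    k = len(centroids)
    separation = cd.sum() / (k * (k - 1))
    # combine into a single quantity to maximize
    return density.mean() * separation

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.4, size=(100, 2)) for c in ((0, 0), (6, 6))])
good = np.array([[0.0, 0.0], [6.0, 6.0]])  # centroids sitting on the blobs
bad = np.array([[3.0, 3.0], [3.2, 3.2]])   # centroids in empty space, close together
```

Centroids placed on dense, well-separated regions score high; centroids in empty space or crowded together score low, which is exactly the behavior a metric worth maximizing should have.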
Preliminary results of this new clustering method show that it’s fairly quick (even though it searches through various values of K to find the optimum one) and computationally light. What’s more, it is designed to be fairly scalable, something that I’ll be experimenting with in the weeks to come. The reason for the scalability is that it doesn’t calculate the density of each data point, but of certain regions of the dataset only. Finding these regions is the hardest part, but you only need to do that once, before you start playing around with K values.
Anyway, I’d love to go into detail about the method but the math I use is different to anything you’ve seen and beyond what is considered canon. Then again, some problems need new math to be solved and perhaps clustering is one of them. Whatever the case, this is just one of the numerous applications of this new framework of data analysis, which I call AAF (alternative analytics framework), a project I’ve been working on for more than 10 years now. More on that in the coming months.
With all the talk about Data Science and A.I., it’s easy to forget about the person doing all this and how his awareness grows as he gets more involved in the field of data science. What’s more, all those gurus on social media will tell you anything about data science except this sort of stuff, since they prefer to have you as a dependent follower rather than an autonomous individual making his own way as a data scientist.
So, as you enter the field of data science, you are naturally focused on its applications and the tangible benefits of it. As a professional in this field, you may care about the high salary such a vocation entails or the cool stuff you may build, using data science methods. Everything else seems like something you have to put up with in order to arrive at this place where you can reap the fruits of your data science related efforts. It’s usually at this level of awareness that you see people complain about the field as being too hard, or not engaging enough after a while. Still, this level is important because it often provides you with a strong incentive to continue learning about this field, growing more aware of it.
The second level of data science awareness involves a deeper understanding of it and an appreciation of its various tools, methods, and algorithms. People who dwell at this level of awareness either come from academia or end up spending a lot of time in academic endeavors, while in the worst case, they become fanatics of this or the other technology, seeing all other technologies (and the people who prefer them) as inferior. The same goes for the methods involved, since there are data scientists who swear by the models they use and wouldn’t use any other ones unless they absolutely had to. This is the level where most people end up, since it’s quite challenging to transcend it, especially on your own.
Finally, when you reach the third level of data science awareness, you are more interested in the data and the possibilities it offers. You have a solid understanding of most of the methods used and can see beyond them since they all seem like instances of the same thing. Your interest in data engineering grows and you become more comfortable with processes that are either esoteric or mundane, for most people. Heuristics seem far more interesting, while you begin to question things that others take for granted, regarding how data should be used. The best part is that you can see through the truisms (and other useless information) of the various “experts” in the social media and value your experience and intuition more than what you may read in this or the other book on this subject.
It’s fairly easy to figure out which level you are at in your data science journey. Most importantly, it doesn’t matter as much as being aware of it and making an effort to move on, going deeper into the field. Because, just like other aspects of science, data science can be a path of sorts, rather than just the superficial field of work that many people make it appear. So, if you want to find meaning in all this, it’s really up to you!
This famous Buddhist quote is one of my personal favorites and one that Bruce Lee also used in one of his movies. Although it may seem more relevant to some Eastern philosophy or martial arts, it actually has a lot of relevance in data science too.
Through this blog, my books, and my videos, I’ve put forward some ideas and hopefully some useful knowledge for anyone interested in data science and A.I. However, it’s easy to mistake conviction for cult-like hegemony, something I’ve observed on social media a lot. Whenever someone competent enough to have a good professional role and some prestige comes about, many people choose to become his or her followers, treating that person as a guru of sorts. This, in my view, is one of the most toxic things someone can do, and it’s best avoided at all costs. That’s not to say that all those people who have followers are bad, far from it! However, the act of blindly following someone just because of their status and/or their conviction is dangerous. You may get lots of information this way, but you will lose the most important thing in your quest: initiative.
Of course, some of these people are happy to have a following and couldn’t care less about your loss of initiative. After all, they often measure their value in terms of how many followers they have, how many downloads their free book has, and how many likes they receive. This in and of itself should raise some serious red flags because no matter how much data science or A.I. know-how these individuals have, the path they are on doesn’t go anywhere good.
I’m a firm believer in free will and I value it more than anything else, especially in the domain of science. As data science (and A.I.) are part of this domain, it’s imperative to show respect to this quality, even at the expense of a large following. That’s why whenever I share something with you, be it some data science methodology, some A.I. system, some heuristic, or some ideas about our field, I expect you to experiment with it and draw your own conclusions. Don’t take my word for it, because even though I make an effort to verify everything I write about, some inaccuracies are inevitable. After all, data science and A.I. are not exact sciences!
Naturally, it takes more than experimentation to learn data science and A.I., but with some guidance, some contemplation, some skepticism, and some experimentation, it is quite doable to learn and eventually master this craft. That has been my experience both for my own journey in data science and A.I., as well as in the journeys of my mentees. Hopefully, your experience will be equally rewarding and educational...
Sounds like a bold statement, doesn’t it? Well, regardless of how it sounds, this is a project I’ve been working on for a long time and which I’ve been refining for the past couple of weeks, while also doing some additional testing. So, this is not some half-baked idea like many of the things that tech evangelists write about to promote this or the other agenda. This is the kind of stuff I’d publish a paper about if I still cared about publications.
In a nutshell, the diversity heuristic is a simple metric for measuring how diverse the points of a dataset are. This is quite different from spread metrics (e.g. standard deviation), since a spread metric focuses on the spread of a distribution and can take any positive value. Diversity, on the other hand, takes values between 0 and 1, inclusive. So, if the vast majority of the data points are crammed into one or two places, the diversity is close to 0, while if the data points are more or less evenly distributed in the data space, the diversity is close to 1. Interestingly, even a random set of points has a diversity score that’s less than 1, since perfect uniformity is super rare unless you are using a really good random number generator!
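Since the heuristic itself isn't published, here is a crude stand-in that captures the behavior described above: the fraction of cells in a coarse grid over the data's bounding box that contain at least one point. It is bounded in [0, 1], low when points are crammed into a spot or two, and high (but rarely exactly 1, even for random data) when they are spread out. The grid size `bins` is an arbitrary choice for illustration.

```python
import numpy as np

def diversity_proxy(X, bins=10):
    """Illustrative stand-in for a diversity score: the fraction of grid
    cells over the data's bounding box that contain at least one point.
    Bounded in [0, 1]; NOT the author's (undisclosed) heuristic."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against zero-width dimensions
    cells = np.floor((X - lo) / span * (bins - 1e-9)).astype(int)
    occupied = len({tuple(c) for c in cells})
    return occupied / bins ** X.shape[1]

rng = np.random.default_rng(1)
# points crammed into two tiny spots vs. points spread out evenly
crammed = np.vstack([rng.normal((0, 0), 0.01, size=(150, 2)),
                     rng.normal((10, 10), 0.01, size=(150, 2))])
spread = rng.uniform(0, 1, size=(300, 2))
```

On the crammed data only a couple of cells are ever occupied, so the score sits near 0; on the uniform data most cells are hit and the score approaches (but typically doesn't reach) 1.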
Also, this diversity metric is pretty fast because, well, if a heuristic is to be useful, it has to scale well. So, I designed it to be quite fast to compute, even for a multi-dimensional dataset. Because of this, it can be used several times without the computer overheating. As a result, it is fairly easy and computationally cheap to have a diversity-based sampling process, i.e. a sampling method that aims to optimize the yielded sample in terms of diversity. Naturally, a diverse sample is bound to retain more of the original dataset’s signal, though some information loss is inevitable. Nevertheless, the diverse sample, which usually has higher diversity than the original dataset, can be used as a proxy of the original dataset for a dimensionality reduction process, such as PCA. Interestingly, the meta-features that stem from the sample are not exactly the same as those of the original dataset, but they are good enough in terms of predictive power. So, by taking the rotation matrix of the PCA model of the sample, we can use it to reduce the original dataset, making dimensionality reduction a piece of cake.
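Here's how that pipeline might look in plain NumPy, with greedy farthest-point sampling standing in for the author's diversity-optimizing sampler (which is not public): draw a diverse sample, fit PCA on it via SVD, and reuse the resulting rotation matrix to reduce the full dataset.

```python
import numpy as np

def farthest_point_sample(X, m, seed=0):
    """Greedy diversity-seeking sampler (a stand-in for the author's
    diversity-optimized sampling): repeatedly pick the point farthest
    from everything chosen so far."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(X)))]
    d = np.linalg.norm(X - X[idx[0]], axis=1)
    for _ in range(m - 1):
        nxt = int(d.argmax())                  # farthest remaining point
        idx.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return X[idx]

def pca_rotation(X, n_components):
    """Rotation (loadings) matrix of a PCA fitted on X, via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:n_components].T                 # shape: (n_features, n_components)

rng = np.random.default_rng(7)
full = rng.normal(size=(5000, 10))             # the "large" dataset
sample = farthest_point_sample(full, 200)      # its diverse proxy
R = pca_rotation(sample, 3)                    # fit PCA on the sample only...
reduced = (full - full.mean(axis=0)) @ R       # ...then reduce the full dataset
```

The point of the design is that the expensive fit happens on 200 points instead of 5000; the rotation matrix is the only thing carried back to the original data.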
So, there you have it: diversity can be used to reduce a dataset not just in terms of the number of data points it has (sampling) but also in terms of its dimensions. I know this may sound very simple as a process, but considering the computational cost of the alternative (not using diversity-based sampling), I believe it’s a step forward. Naturally, this is just one application of this new heuristic, which can perhaps help in other aspects of data science.
Anyway, I’d love to write more about this but I’m saving it for a video I plan to do on this topic. Currently, I’m still busy with the new book so, stay tuned...
Although when people think of math in data science, it’s usually Calculus, Linear Algebra, and Graph Theory that come to mind, Geometry is also a very important aspect of our craft. After all, once we have formatted the data and turned it into a numeric matrix (or a numeric data frame), it’s basically a bunch of points in an m-dimensional space.
Of course, most people don’t linger at this stage to explore the data much since there are various tools that can do that for you. Some people just proceed to data modeling or dimensionality reduction, using PCA or some other method. However, oftentimes we need to look at the data and explore it, something that is done with Clustering to some extent. The now trending methodology of Data Visualization is very relevant here and if you think about it, it is based on Geometry.
Geometry does more than just help us visualize the data though. Many data models use geometry to make sense of the data, particularly those based on distances. I talked about distances recently, but it’s hard to do the topic justice in a blog post, especially without the context that geometry offers.
Perhaps geometry seems old-fashioned to those people used to fancy methods that other areas of math offer. However, it is through geometry that revolutionary ideas in science took root (e.g. Theory of Relativity) while cutting edge research in Quantum Physics is also using geometry as a way to understand those other dimensions and how the various fundamental particles of our world relate to each other.
In data science, geometry may not be in the limelight, unless you are doing research in the field. However, understanding it can help you gain a better appreciation of data science work and the possibilities that exist in the field. After all, a serious mistake someone can make when delving into data science is to think that the theory in a course curriculum or some book is all there is to it. When you reduce data science to a set of methods and algorithms, you are basically limiting its potential and how you can use the field as a data scientist. If, however, you maintain a sense of mystery, such as that which geometry can offer, you are bound to have a healthier relationship with the craft and a channel for new ideas. After all, data science is still in its infancy as a field, while the best data science methods are yet to come...
As the field of Data Science matures and everything in it is categorized and turned into a teaching module, compartmentalization may seem easier and more efficient as a learning strategy. After all, there is a bunch of books on specialized topics of the craft. That’s all great and for some people, it may even work satisfactorily, but that’s where the risk lies and it’s a pretty big risk too!
Learning about something specialized in data science, particularly without a good sense of context or its limitations, can be catastrophic. The old saying “for someone who only knows how to use a hammer everything starts looking like a nail” is applicable here too. Learning about a specialized aspect of data science can often make you think that this is the best approach to solving data science related problems. After all, the author seems to know what he’s talking about and some employers value this skill. However, if this know-how is out of context, it is bound to be ineffective at best and problematic at worst. Data science is an interdisciplinary field with lots of different tools in it, from various areas. Anyone who tries to dissect it and focus mainly on one of them is doing a disservice to the field and if you as a data science learner pay attention to this person, you are bound to warp your knowledge of the craft and delay your mastery of it.
Also, this overspecialization in know-how may make you think that you are better than the other data science practitioners who have not developed that niche skill yet. This will significantly limit your ability to learn from, and perhaps even cooperate with, these people. After all, you are an expert in this, so why bother with less fancy know-how at all? Well, sometimes even the more humble aspects of the field, such as feature engineering, can turn out to be more effective at solving a problem well than some fancy model, so it’s good to remember that.
That’s why I’ve always promoted the idea of the right mindset in data science, something that, no matter how the field evolves, is bound to remain stable in the years to come and help you adapt to whatever know-how becomes the norm. Also, no matter how important the algorithms are, it’s even more important to know how to create your own algorithms and change existing ones, optimizing them for the problem at hand. That’s something that no data science book teaches adequately, as the emphasis is on covering material related to certain buzzwords, sometimes without the supervision of an editor. The latter can help immensely in making the contents of a book more comprehensible and relevant to data science in general, providing you with a sense of perspective.
So, be careful with what you let enter your data science curriculum as you learn about the craft. Some books may be a waste of time while others, especially those not published through a publisher, may even hinder your development as a data scientist.
For some reason, people who delve into data science tend to focus more on certain aspects of the craft at the expense of others. One of the things that often doesn’t get nearly enough attention is the concept of distance. If you ask a data scientist (especially one who is fairly new to the craft or overspecialized in one aspect of it), they’ll tell you about the distance metrics they are familiar with and how distance is a kind of similarity metric. Although all of this is true, it portrays just one part of the picture.
I’ve delved into the topic for several years now and since my Ph.D. is based on transductive systems (i.e. data science systems that are based on distances), I’ve come to have a particular perspective on the matter, one that helps me see the incompleteness of it all. After all, no matter how many distance heuristics we develop, the way distance is perceived will remain limited until we look at it from a more holistic angle. So, let’s look at the different kinds of distances out there and how they are useful in data science.
Distances of the first kind are those most commonly used and are expressed through the various distance heuristics people have devised over the centuries. The most common ones are the Euclidean distance and the Manhattan distance. Mathematically, such a distance is defined as the norm of the vector connecting two points.
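In code, "the norm of the vector connecting two points" looks like this (a quick NumPy illustration):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 6.0, 3.0])

# both classic metrics are norms of the difference vector a - b
euclidean = np.linalg.norm(a - b)         # L2 norm -> 5.0
manhattan = np.linalg.norm(a - b, ord=1)  # L1 norm -> 7.0
```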
Another kind of distance is the normalized one. Every distance metric out there that is not in this category is crude and limited to the particular set of dimensions it was calculated in. This makes comparisons of distances between two datasets of different dimensionality impossible (if the meaning is to be maintained), even if mathematically it’s straightforward. Normalizing the matrix of distances of the various data points in a dataset requires finding the largest distance, something feasible when the number of data points is small but quite challenging otherwise. What if we need the normalized distances of a sample of data points only, because the whole dataset is too large? That’s a fundamental question that needs to be answered efficiently (i.e. at a fairly low big O complexity) if normalized distances are to be practical.
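A naive normalization, dividing every pairwise distance by the largest one, can be sketched as follows; note that the brute-force search for that maximum is exactly the scalability bottleneck the paragraph above warns about, since it requires all O(n²) pairwise distances.

```python
import numpy as np

def normalized_distance_matrix(X):
    """Pairwise Euclidean distances rescaled by the largest one, so every
    value lands in [0, 1] regardless of the dataset's dimensionality.
    Building the full matrix to find the max is the O(n^2) pain point."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return d / d.max()

rng = np.random.default_rng(3)
low_dim = normalized_distance_matrix(rng.normal(size=(100, 2)))
high_dim = normalized_distance_matrix(rng.normal(size=(100, 50)))
# after normalization, the two matrices are directly comparable,
# even though the raw distances in 50 dimensions are much larger
```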
The last and most interesting kind of distances is the weighted distance. Although this kind of distance is already well-documented, the way it has been tackled is fairly rudimentary, considering the plethora of possibilities it offers. For example, by warping the feature space based on the discernibility scores of the various features, you can improve the feature set’s predictive potential in various transductive systems. Also, using a specialized weighted distance, you can better pinpoint the signal of a dataset and refine the similarity between two data points in a large dimensionality space, effectively rendering the curse of dimensionality a non-issue. However, all this is possible only through a different kind of data analytics paradigm, one that is not limited by the unnecessary assumptions of the current one.
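A minimal sketch of a weighted Euclidean distance follows; the weights here are made-up stand-ins for the discernibility scores mentioned above, which the author does not spell out.

```python
import numpy as np

def weighted_euclidean(a, b, w):
    """Euclidean distance with per-feature weights: features with higher
    weights dominate the result, effectively warping the feature space."""
    return np.sqrt(np.sum(w * (a - b) ** 2))

a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 1.0])
uniform = np.array([1.0, 1.0, 1.0])    # reduces to plain Euclidean distance
relevance = np.array([4.0, 0.5, 0.5])  # hypothetical discernibility scores
```

With the uniform weights the result is the ordinary Euclidean distance; with the relevance weights, differences in the first feature count for much more, which is the warping effect the post describes.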
Naturally, you can combine the latter two kinds of distances for an even more robust distance measure. Whatever the case, understanding the limitations of the first kind of distances is crucial for gaining a deeper understanding of the concept and applying it more effectively.
Note that all this is my personal take on the matter. You are advised to approach this whole matter with skepticism and arrive at your own conclusions. After all, the intention of this post is to make you think more (and hopefully more deeply) about this topic, instead of spoon-feeding you canned answers. So, experiment with distances instead of limiting your thinking to the stuff that’s already been documented. Otherwise, the distance between what you can do and what you are capable of doing, in data science, will remain depressingly large...
Lately, there has been an explosion of interest in Data Science, mainly due to the appealing job prospects for someone who has the relevant know-how. It is easy, unfortunately, to slip into a state of complacency whereby data science becomes all too familiar and you find yourself using the same methods and the same processes in general when dealing with the problems you are asked to solve. This situation can be quite toxic though, even if it’s unlikely someone will tell you so. After all, as long as you deliver what you have to deliver no one cares, right? Unfortunately, no. If you stop evolving as a data scientist, chances are that you’ll become obsolete in a few years, while your approach to the problems at hand will cease to be as effective. Besides, the field evolves, as do the challenges we as data scientists have to face.
The remedy to all this is exploring data science with a renewed sense of enthusiasm, something akin to what is referred to as “beginner’s mind” in the Zen tradition. Of course, enthusiasm doesn’t come about on its own after you’ve experienced it once. You need to create the conditions for it, and what better way to do that than exploring data science further. This exploration can be in more breadth (i.e. additional aspects of the craft, including but not limited to new methods) or in more depth (i.e. understanding the inner workings of various algorithms and the variants they may have). Research in the field can go a long way when it comes to both of these exploration strategies. It’s important to note that you don’t need to publish a paper in order to do proper research. In fact, you can do perfectly adequate research with just a computer and a few datasets, as long as you know how.
It’s also good to keep breadth and depth in balance when you are exploring data science. Going too much in breadth can leave you with a more superficial knowledge of the field, while going too much in depth can make you overspecialized. What you do first, however, is totally up to you. Also, it’s important to use reliable resources when exploring the field, since nowadays it seems that everyone wants to be a data science content creator, without having the essential training or educational mindset. A good rule of thumb is to stick to content that has undergone extensive editing, such as the stuff made available through a publisher, particularly one specializing in data-related books and videos.
Whatever the case, it’s always good to explore data science in an enjoyable manner too. Find a dataset you are interested in, before starting to apply some obscure method. This way the whole process will become more manageable and perhaps even fulfilling. Fortunately, there is no shortage of datasets out there, so you have many options. Happy exploration!
These days I did something I’d been putting off for a while now, since, if it didn’t work out, it would mean that I’d have to throw away my computer, so to speak. I didn’t exactly meddle with any of the computer’s hardware but came as close to it as I could, without physically changing the machine. Namely, I tweaked the boot software and configured a new OS that I’m now using. “What’s wrong with the old OS?” you ask. Well, I’d tweaked it way too much in the past, so it was now quite unstable. Yet, even in this pitiful state, it was better than some other OSes I’ve had over the years, so it’s hard to complain about it.
Whatever the case, getting down to the nitty-gritty of a computer isn’t easy and there is a surprising lack of people out there able or willing to help out. Also, the forums, although generally useful, don’t always cover the exact issue you are looking to solve, so you basically need to rely on your own skills. Fortunately, I did a thorough back-up of all my data beforehand, so nothing could get lost. Also, I was quite meticulous with the whole process and had a back-up plan in place. A lot of shell scripting was involved and although I'm not super confident about this type of interaction with a computer, it's not as daunting as it seems either. Of course, if you do it more, like professionals in the field, it may even seem the best way to interface with a computer. I'm not there yet, but I have a deeper appreciation of the merits of this approach to interfacing than I did before.
This whole thing is akin to the engineering approach to things, where failure is always taken into account since things break more often than people think. Thinking that everything is going to be fine, just because it worked fine in someone’s presentation or tutorial is naive and doesn’t really spell out professionalism. That’s why having the right mindset about all this stuff is essential. Algorithms, equations, and coding libraries can only get you so far. After that, you are on your own and you need more than just a solid understanding of the theory but also the ability to deal with the adverse circumstances that will probably present themselves sooner rather than later.
Now, in your work as a data scientist or an A.I. professional you’ll probably have no need to do low-level work on a computer (unless you are setting up a new pipeline), but if such a challenge presents itself, you are better off facing it. And who knows, maybe you’ll do more than just upgrade your computer through this whole process, since chances are that you’ll also be upgrading yourself.
So, what did I learn from this whole experience? First of all, I now have a deeper appreciation for all those people who do the low-level work in a data science pipeline. It may appear straightforward from a high-level perspective, but when you get down to it, it isn't simple at all, even if you enjoy working on a CLI. Also, I learned that just because something isn't common enough to be on a forum or in a blog article, it doesn't mean it's not important or worth doing. The OS upgrade I did helped me realize how vast the spectrum of possibilities is when it comes to OSes and how deviating from the most popular approaches is probably the best way to go (or at least the most fox-like way!). Finally, I learned that when you've assembled something yourself, even if it's a fairly straightforward OS, it makes you appreciate it more. Most things nowadays come preassembled and we don't have to do anything to get them to work, but those things that require our own energy to come to life, be it an OS or a custom data science model, these are the things we tend to remember the most since they change us inside...
Zacharias Voulgaris, PhD
Passionate data scientist with a foxy approach to technology, particularly related to A.I.