Recently I came across an interesting platform for sharing curated content, called Wakelet. It's a British startup from Manchester, by the way, one that appears quite promising, provided they find a way to monetize the project.
Anyway, the platform is a bit like Pinterest but with more features and an offline presence too. These are the most important features, in my view:
* Very intuitive and fast to learn
* Can work with a variety of content types: videos, images, formatted text, PDFs, and website links
* Every list can be exported to a PDF
* Free to use
* No account is required to view the lists
* Lots of free images to use as thumbnails and backgrounds
* A QR code is generated for each list you wish to share
* Private lists are also an option
* Plenty of tutorials online that explain the various features and use-cases
You can check out a wake (that's what these curated lists are called) that I've made in the space of a few minutes, here. In the future I'll probably be using it more, particularly on this blog. Whatever the case, do let me know what you think of this platform and of my wake. Cheers!
If you wish to put yourself out there as a content creator, now more than ever, videos are a great way to do it. This may seem somewhat daunting to some, but with the plethora of software options out there and the ease of use of many of them, it’s just a matter of resolving to do it. Apart from the obvious benefit of personal branding, creating a data science or A.I. video can also be a lucrative endeavor.
I’m not referring to the amateur videos many people on YouTube make, in their vain attempts to gather likes and shares, much like beggars gather pitiful coins from the passers-by. If you want to create a technical video that will be worth your while, there are better and more self-respecting options to do so, options that you would be happy to include in your resume/CV. Namely, you can create a video that you promote through a respectable publisher, such as Technics Publications. Such an alternative will enable you to receive royalties every 6 months and not have to worry about promoting your work all by yourself. Of course, there is also the option of a one-time payment that some publishers offer, but this isn’t nearly as appealing since the amount of money you can potentially earn through royalties is higher and the requirements are easier to meet.
When creating a video, many people think that it’s just you standing in front of a camera, talking ad lib about a topic, perhaps using some props like a whiteboard. Although that’s one straightforward way to do it, it may not appeal to the less charismatic presenters or those who don’t consider themselves particularly photogenic. Besides, a screen-share video or a slideshow with a voice-over, always based on a script, is much easier to produce and sometimes more effective at illustrating the points you wish to make. Alternatively, you can try combining both approaches, though this may require more takes.
Whatever the case, making a video is the easy part of the whole project, relatively speaking. What ensues is the most challenging task for most people: promoting the video to your target audience. Although social media have an important role to play in all this, having some support from a publisher is priceless. After all, promoting technical content is what publishers are really good at, especially if they have a good niche in the market. Still, if you have a large enough network, it doesn’t hurt to spread the word yourself too, for additional exposure, though you are not required to do so.
If you are interested in covering a data science or A.I. related topic with a video, through a publisher, feel free to contact me directly, as I’d be happy to help you with that, particularly if you are serious about it. The world could definitely use some new content on data science and A.I., since there is way too much noise out there, confounding those who wish to study these fields. Perhaps this new content could come from you.
When I created this heuristic about a year and a half ago, I wasn't planning to make a video about it. However, after exploring its various benefits, I felt it should become better known among data science and A.I. practitioners. So, after a series of experiments and some extra research, I've made this video demonstrating the various aspects of this intriguing heuristic metric. Check it out whenever you have the chance!
Please note that Safari Books Online (O'Reilly) is a paid platform for quality content, so you need to have a subscription to it in order to view this and any other video in their entirety. However, it's a worthy investment that every data science and A.I. learner ought to consider making.
Dimensionality reduction has been a standard methodology for dealing with datasets that have a lot of features, more than a typical model can handle effectively. Reducing the number of features can also save time and storage space, while for sensitive data it can be a big plus, as it helps preserve the anonymity of the people involved. What’s more, in some cases, a reduced-dimensionality dataset can be more effective, as there is less noise in it. However, conventional dimensionality reduction methods don’t always do the trick, due to their inherent limitations. For example, PCA only considers linear relationships among the variables and yields only linear combinations of features as a solution.
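To make this limitation concrete, here is a minimal sketch (using scikit-learn and synthetic data, purely for illustration): points lying on a parabola form an intrinsically one-dimensional dataset, yet PCA still needs two linear components to describe them.

```python
# A minimal sketch of PCA's linearity limitation (synthetic data for illustration)
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)
X = np.column_stack([x, x**2 + rng.normal(0, 0.05, 1000)])  # parabolic relationship

pca = PCA(n_components=2).fit(X)
# Both components carry non-trivial variance: PCA cannot "unfold" the parabola
# into a single linear component, even though the data is intrinsically 1-D.
print(pca.explained_variance_ratio_)  # roughly [0.78, 0.22]
```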
Of course, other people are not sitting idle when it comes to this issue. Several dimensionality reduction options are being pursued, the most interesting of which is autoencoders. This AI-based method takes a data-driven approach to figuring out the nature of the data, creating new variables that represent the underlying signal by minimizing the reconstruction error. The issue with this is that it often requires a lot of data and some specialized know-how in order to configure optimally. Also, this whole process may be fairly slow, due to the large number of computations involved.
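For reference, here is a minimal autoencoder sketch in Keras (assuming a TensorFlow backend); the layer sizes, epoch count, and placeholder data are illustrative, not a tuned configuration.

```python
# A minimal autoencoder sketch; all hyper-parameters here are illustrative
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features, n_latent = 20, 3  # compress 20 features down to 3

encoder = keras.Sequential([
    keras.Input(shape=(n_features,)),
    layers.Dense(10, activation="relu"),
    layers.Dense(n_latent),  # the reduced representation
])
decoder = keras.Sequential([
    keras.Input(shape=(n_latent,)),
    layers.Dense(10, activation="relu"),
    layers.Dense(n_features),
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")  # minimize reconstruction error

X = np.random.rand(5000, n_features).astype("float32")  # placeholder for real data
autoencoder.fit(X, X, epochs=20, batch_size=64, verbose=0)
X_reduced = encoder.predict(X)  # the new, lower-dimensional variables
```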
An alternative approach has to do with feature fusion in a non-AI way. The idea is to maintain transparency to the extent this is possible, while at the same time optimizing the whole process in terms of speed. The use of multiple operators, some linear and some non-linear, is essential, while the option of dropping useless features is also very useful. Naturally, this whole process would be more effective in the presence of a target variable, but it should be able to work without one, for better applicability. Whatever the case, the use of a metric able to handle non-linear correlations is paramount, since the conventional correlation metric leaves a lot to be desired.
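Since I’m not detailing the exact method here, the snippet below is only a hypothetical sketch of what such a fusion process might look like: a few simple operators applied to feature pairs, scored against a target with the Spearman correlation (one metric that captures monotonic non-linear relationships), keeping only the strongest candidates. The function and all parameter choices are illustrative.

```python
# A hypothetical feature-fusion sketch; operators and scoring are illustrative
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def fuse_features(X, y, n_keep=5):
    """Fuse feature pairs with simple operators; keep the strongest fusions."""
    operators = {
        "sum": lambda a, b: a + b,               # linear operator
        "product": lambda a, b: a * b,           # non-linear operator
        "abs_diff": lambda a, b: np.abs(a - b),  # non-linear operator
    }
    candidates = []
    for i, j in combinations(range(X.shape[1]), 2):
        for name, op in operators.items():
            fused = op(X[:, i], X[:, j])
            score = abs(spearmanr(fused, y)[0])  # handles monotonic non-linearity
            candidates.append((score, f"{name}({i},{j})", fused))
    candidates.sort(key=lambda c: c[0], reverse=True)
    kept = candidates[:n_keep]  # drop the useless fusions
    return np.column_stack([c[2] for c in kept]), [c[1] for c in kept]

# Example usage with synthetic data, where the target depends
# non-linearly on two of the features
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 500)
X_fused, names = fuse_features(X, y)
print(names)  # product(0,1) should rank near the top
```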
Based on all this, it’s clear that the area of dimensionality reduction is still capable of enhancements. Despite the great work that has been done already, there is room for new methods that can address the limitations of the existing ones, limitations which aren’t going away any time soon. Perhaps it would be best to explore this methodology of data engineering more, instead of focusing on the latest and greatest system, which, although intriguing, may sacrifice too much (e.g. transparency) in the name of accuracy, a trade-off that may no longer be cost-effective. Something to think about...
This famous Buddhist quote is one of my personal favorites and one that Bruce Lee also used in one of his movies. Although it may seem more relevant to some Eastern philosophy or martial arts, it actually has a lot of relevance in data science too.
Through this blog, my books, and my videos, I’ve put forward some ideas and hopefully some useful knowledge for anyone interested in data science and A.I. However, it’s easy to mistake conviction for cult-like hegemony, something I’ve observed a lot on social media. Whenever someone competent enough to have a good professional role and some prestige comes along, many people choose to become his or her followers, treating that person as a guru of sorts. This, in my view, is one of the most toxic things someone can do, and it’s best avoided at all costs. That’s not to say that all those people who have followers are bad, far from it! However, the act of blindly following someone just because of their status and/or their conviction is dangerous. You may get lots of information this way, but you will lose the most important thing in your quest: initiative.
Of course, some of these people are happy to have a following and couldn’t care less about your loss of initiative. After all, they often measure their value in terms of how many followers they have, how many downloads their free book has, and how many likes they receive. This in and of itself should raise some serious red flags because no matter how much data science or A.I. know-how these individuals have, the path they are on doesn’t go anywhere good.
I’m a firm believer in free will and I value it more than anything else, especially in the domain of science. As data science (and A.I.) is part of this domain, it’s imperative to show respect to this quality, even at the expense of a large following. That’s why whenever I share something with you, be it some data science methodology, some A.I. system, some heuristic, or some ideas about our field, I expect you to experiment with it and draw your own conclusions. Don’t take my word for it, because even though I make an effort to verify everything I write about, some inaccuracies are inevitable. After all, data science and A.I. are not exact sciences!
Naturally, it takes more than experimentation to learn data science and A.I., but with some guidance, some contemplation, some skepticism, and some experimentation, it is quite doable to learn and eventually master this craft. That has been my experience both in my own journey in data science and A.I. and in the journeys of my mentees. Hopefully, your experience will be equally rewarding and educational...
Sounds like a bold statement, doesn’t it? Well, regardless of how it sounds, this is a project I’ve been working on for a long time, one that I’ve been refining for the past couple of weeks, while also doing some additional testing. So, this is not some half-baked idea like many of the things that tech evangelists write about to promote this or that agenda. This is the kind of stuff I’d publish a paper about, if I still cared about publications.
In a nutshell, the diversity heuristic is a simple metric for measuring how diverse the points of a dataset are. This is quite different from spread metrics (e.g. standard deviation), since the latter focus on the spread of a distribution and can take any positive value. Diversity, on the other hand, takes values between 0 and 1, inclusive. So, if the vast majority of the data points are crammed into one or a couple of places, the diversity is close to 0, while if the data points are more or less evenly distributed in the data space, the diversity approaches 1. Interestingly, even a random set of points has a diversity score that’s less than 1, since perfect uniformity is super rare unless you are using a really good random number generator!
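I won’t reveal the actual formula here, but to give a rough sense of how such a score can behave (this is a simplified, hypothetical stand-in, not the heuristic itself), one could compare the data’s mean nearest-neighbor distance against that of a uniform reference sample in the same bounding box:

```python
# A simplified, hypothetical stand-in for a diversity score (not the actual heuristic)
import numpy as np
from scipy.spatial import cKDTree

def diversity(X, seed=0):
    """Score in [0, 1]: ~0 for clumped data points, ~1 for evenly spread ones."""
    d_data, _ = cKDTree(X).query(X, k=2)  # k=2: nearest neighbor besides the point itself
    mean_nn = d_data[:, 1].mean()
    # Reference: uniformly random points in the data's bounding box
    rng = np.random.default_rng(seed)
    U = rng.uniform(X.min(axis=0), X.max(axis=0), size=X.shape)
    d_ref, _ = cKDTree(U).query(U, k=2)
    return min(mean_nn / d_ref[:, 1].mean(), 1.0)

# Clumped data scores near 0; evenly spread data scores near 1
rng = np.random.default_rng(42)
clumped = rng.normal(0, 0.01, (500, 2)) + rng.choice([0.0, 5.0], (500, 1))
spread = rng.uniform(0, 5, (500, 2))
print(diversity(clumped), diversity(spread))
```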
Also, this diversity metric is pretty fast because, well, if a heuristic is to be useful, it has to scale well. So, I designed it to be quite fast to compute, even for a multi-dimensional dataset. Because of this, it can be used several times without the computer overheating. As a result, it is fairly easy and computationally cheap to have a diversity-based sampling process, i.e. a sampling method that aims to optimize the yielded sample in terms of diversity. Naturally, a diverse sample is bound to cram more of the original dataset’s signal into it, though some information loss is inevitable. Nevertheless, the diverse sample, which usually has higher diversity than the original dataset, can be used as a proxy of the original dataset for a dimensionality reduction process, such as PCA. Interestingly, the meta-features that stem from the sample are not exactly the same as those of the original dataset, but they are good enough in terms of predictive power. So, by taking the rotation matrix of the PCA model of the sample, we can apply it to the original dataset, making dimensionality reduction a piece of cake.
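Here’s a rough sketch of that pipeline on synthetic data; the greedy farthest-point pass below is merely one plausible stand-in for the diversity-based sampling step, not my actual implementation:

```python
# Sketch: fit PCA on a diverse sample, then apply its rotation to the full dataset
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.decomposition import PCA

def diverse_sample(X, m, seed=0):
    """Greedily pick m points, each as far as possible from those already chosen."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(X)))]
    d = cdist(X, X[idx]).min(axis=1)  # distance of every point to the chosen set
    for _ in range(m - 1):
        idx.append(int(d.argmax()))  # farthest point from the current selection
        d = np.minimum(d, cdist(X, X[idx[-1:]]).ravel())
    return X[idx]

X = np.random.default_rng(7).normal(size=(50_000, 30))
sample = diverse_sample(X, 500)         # a small, diverse proxy of the dataset
pca = PCA(n_components=10).fit(sample)  # PCA fitted only on the proxy
X_reduced = pca.transform(X)            # its rotation applied to the full dataset
```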
So, there you have it: diversity can be used to reduce a dataset not just in terms of the number of data points it has (sampling) but also in terms of its dimensions. I know this may sound very simple as a process, but considering the computational cost of the alternative (not using diversity-based sampling), I believe it’s a step forward. Naturally, this is just one application of this new heuristic, which can perhaps help in other aspects of data science.
Anyway, I’d love to write more about this, but I’m saving it for a video I plan to do on this topic. Currently, I’m still busy with the new book, so stay tuned...
With so many ways to get a book out there, even in a fairly challenging subject such as data science, you may wonder what this process entails and what is the best way to go about it. After all, these days it’s easier than ever to reach an audience online and promote your work, all while branding yourself as a professional in the field.
Writing a book in data science is first and foremost an education initiative, targeting a particular audience. Usually, this is data science learners, though it may be other professionals involved in data science, such as managers, developers, etc. A data science book generally tries to explain what data science can do, what its various methodologies are, and how all of that can be useful for solving particular problems (emphasis on the last part!). If you see a book that focuses a lot on the methods, particularly those of a specific methodology, it may be too specialized to be of use to most audiences, unless you are targeting the particular niche that requires this know-how.
A key thing to consider when exploring the option of writing a book is the publisher. Even if you prefer to self-publish, your book must be able to compete with other books in this area, and a publisher is usually the best way to figure that out. If a publisher is interested in your book, then it’s likely to be somewhat successful. Also, if you are new to book authoring, you may want to start with a publisher, since there are a lot of things you’d never learn without one. Finally, a book released through a publisher is bound to have more credibility and a larger life-span.
Understandably, you may have explored the various deals publishers make with their authors and figured out that you’ll never make a lot of money by publishing books. Fair enough; you’ll probably never make a living by selling your words (although it is still possible). However, if your book is good, you’ll probably make enough money to justify the time you’ve put into the project. Also, remember that most publishing deals provide you with a passive income, even if the publisher wants you to promote your book to some extent. So, even though you won’t make a lot of cash, you’ll have a revenue stream for the duration of your book’s lifetime.
With all the data science material available on the web these days, acquiring all the relevant information and compiling it into a book is a fairly straightforward task. However, just because it is feasible, it doesn’t mean that it’s what the readers need. Without someone to guide you through the whole process and give you honest (and useful) feedback, it’s really hard to figure out what is necessary to put in the book, what should be included in an appendix, and what should merely be mentioned in a link. Your readers may or may not be able to provide you with this information, and if your main means of interacting with them is how many of them download your book or visit your website, you are just satisfying your ego!
A publisher's honest feedback often hurts, but that’s what gradually turns you into a real author, namely one who has some authority in his/her written works. Otherwise, you’ll be yet another writer, which is fine if you just want to talk about writing a book, or about how you have written a book that’s on Amazon, things that are bound to be forgotten quicker than you may think…
Although when people think of math in data science, it’s usually Calculus, Linear Algebra, and Graph Theory that come to mind, Geometry is also a very important aspect of our craft. After all, once we have formatted the data and turned it into a numeric matrix (or a numeric data frame), it’s basically a bunch of points in an m-dimensional space.
Of course, most people don’t linger at this stage to explore the data much, since there are various tools that can do that for you. Some people just proceed to data modeling or dimensionality reduction, using PCA or some other method. However, oftentimes we need to look at the data and explore it, something that is done with Clustering to some extent. The now-trending methodology of Data Visualization is very relevant here and, if you think about it, it is based on Geometry.
Geometry does more than just help us visualize the data, though. Many data models, for example, use geometry to make sense of the data, particularly those based on distances. I talked about distances recently, but it’s hard to do the topic justice in a blog post, especially without the context that geometry offers.
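As a quick, generic illustration of this (not tied to any particular model), even how far apart two data points are depends on the geometry you adopt, i.e. the distance metric you choose:

```python
# The same pair of points is "close" or "far" depending on the distance metric
import numpy as np
from scipy.spatial.distance import euclidean, cityblock, cosine

a = np.array([1.0, 0.0, 3.0])
b = np.array([2.0, 4.0, 1.0])
print(euclidean(a, b))  # straight-line distance: ~4.58
print(cityblock(a, b))  # Manhattan (grid) distance: 7.0
print(cosine(a, b))     # 1 - cosine similarity (angle-based): ~0.65
```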
Perhaps geometry seems old-fashioned to those used to the fancy methods that other areas of math offer. However, it is through geometry that revolutionary ideas in science took root (e.g. the Theory of Relativity), while cutting-edge research in Quantum Physics also uses geometry as a way to understand those other dimensions and how the various fundamental particles of our world relate to each other.
In data science, geometry may not be in the limelight, unless you are doing research in the field. However, understanding it can help you gain a better appreciation of data science work and the possibilities that exist in the field. After all, a serious mistake someone can make when delving into data science is to think that the theory in a course curriculum or some book is all there is to it. When you reduce data science to a set of methods and algorithms, you are basically limiting its potential and how you can use the field as a data scientist. If, however, you maintain a sense of mystery, such as that which geometry can offer, you are bound to have a healthier relationship with the craft and a channel for new ideas. After all, data science is still in its infancy as a field, while the best data science methods are yet to come...
As the field of Data Science matures and everything in it is categorized and turned into a teaching module, compartmentalization may seem easier and more efficient as a learning strategy. After all, there are plenty of books on specialized topics of the craft. That’s all great and, for some people, it may even work satisfactorily, but that’s where the risk lies, and it’s a pretty big risk too!
Learning about something specialized in data science, particularly without a good sense of context or its limitations, can be catastrophic. The old saying “for someone who only knows how to use a hammer, everything starts looking like a nail” is applicable here too. Learning about a specialized aspect of data science can often make you think that this is the best approach to solving data science-related problems. After all, the author seems to know what he’s talking about, and some employers value this skill. However, if this know-how is out of context, it is bound to be ineffective at best and problematic at worst. Data science is an interdisciplinary field with lots of different tools from various areas. Anyone who tries to dissect it and focus mainly on one of them is doing a disservice to the field, and if you, as a data science learner, pay attention to this person, you are bound to warp your knowledge of the craft and delay your mastery of it.
Also, this overspecialization in know-how may make you think that you are better than the other data science practitioners who haven’t developed that niche skill yet. This will significantly limit your ability to learn from, and perhaps even cooperate with, these people. After all, you are an expert in this, so why bother with less fancy know-how at all? Well, sometimes even the more humble aspects of the field, such as feature engineering, can turn out to be more effective at solving a problem than some fancy model, so it’s good to remember that.
That’s why I’ve always promoted the idea of the right mindset in data science, something that, no matter how the field evolves, is bound to remain stable in the years to come and help you adapt to whatever know-how becomes the norm. Also, no matter how important the algorithms are, it’s even more important to know how to create your own algorithms and change existing ones, optimizing them for the problem at hand. That’s something no data science book teaches adequately, as the emphasis is on covering material related to certain buzzwords, sometimes without the supervision of an editor. The latter can help immensely in making the contents of a book more comprehensible and relevant to data science in general, providing you with a sense of perspective.
So, be careful with what you let enter your data science curriculum as you learn about the craft. Some books may be a waste of time while others, especially those not published through a publisher, may even hinder your development as a data scientist.
Before starting the new data science book, I made a video on a very fascinating topic that I've delved into for a while now: Cryptanalysis. Although I'm not a hacker, I've researched this topic sufficiently and have even broken a few ciphers myself over the years. This video (available on Safari/O'Reilly) is a gentle introduction to the topic and ties in very well with my other Cybersecurity videos. Check it out when you have the chance!
Note that in order to view the video in its entirety, you'll need an account (e.g. through a subscription). If you are an employee of a tech company, you may have full access to the Safari platform already. The latter is a useful resource for both videos and books, all of which you can access through a mobile device too.
Zacharias Voulgaris, PhD
Passionate data scientist with a foxy approach to technology, particularly related to A.I.