Many things have changed in data science over the past few years, yet recruiting doesn’t seem to be one of them. Companies and agencies still evaluate candidates as if they were hiring software developers, placing a disproportionate emphasis on experience and other not-so-relevant factors. A data scientist, however, is much more than their coding skills or any other facet reflected in the “years of experience” metric. This is especially true now that A.I. is gaining more and more ground, transforming the field in unprecedented ways.
Back in the day, you needed to know a technology or a piece of know-how quite well before you could use it. This was particularly true of data models: you had to know their ins and outs, because chances were you’d have to code one from scratch at some point, especially in cases where the two-language problem remained unresolved.
Now, however, these issues have faded as new technologies have come along. A.I. models have proven to be better than even the best-performing conventional models, at least where sufficient data is available, and as big data becomes more widespread, having sufficient data is less of an issue. Moreover, these A.I. models are made available as part of one framework or another, so a data scientist usually just needs to write the necessary wrapper functions, a fairly easy task that can be mastered within a month or two. Therefore, a candidate with 5+ years of experience won’t necessarily be more adept at data modeling than a new data scientist trained in the latest technologies of our craft.
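To illustrate the kind of glue code this involves, here is a minimal sketch of a wrapper around a framework-provided model. The `DummyModel` class and all names are illustrative stand-ins, not any particular framework’s API; the point is that the wrapper only handles input validation and output formatting, not the modeling itself.

```python
class DummyModel:
    """Stand-in for a framework-provided model (illustrative only)."""
    def predict(self, batch):
        # Placeholder inference: sum each record's features.
        return [sum(record) for record in batch]


class ModelWrapper:
    """Thin wrapper adding input checks and a consistent output schema --
    the sort of code a data scientist writes around a framework's model."""
    def __init__(self, model):
        self.model = model

    def predict(self, records):
        if not records:
            raise ValueError("empty input")
        batch = [list(r) for r in records]   # normalize input shape
        preds = self.model.predict(batch)
        return {"predictions": preds}        # consistent output format


wrapper = ModelWrapper(DummyModel())
result = wrapper.predict([(1, 2), (3, 4)])
```

The modeling “hard part” lives entirely inside the framework’s model; the wrapper is routine plumbing, which is exactly why this skill is quick to pick up.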
As for the two-language problem, it seems to linger longer than reason would dictate. Nowadays there are various programming languages that can be used end-to-end in the data science pipeline (with Julia being one of the most prominent). Coding a method from scratch just so it can be deployed is therefore masochistic at best. Even in Python, there are APIs that make deploying a model feasible without having to translate the code to C++ or Java.
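The core of such a deployment API can be sketched in a few lines: parse a JSON request, run the model, and return a JSON response, all in the same language the model was built in. The sketch below uses only the standard library; `model_predict` is a hypothetical placeholder for a trained model, and web frameworks wrap essentially this same pattern.

```python
import json


def model_predict(features):
    """Hypothetical placeholder for a trained model's inference."""
    return sum(features) * 0.5


def handle_request(body: bytes) -> bytes:
    """Minimal endpoint logic: decode JSON, run the model, encode JSON.
    A Python web framework would route HTTP requests to code like this."""
    payload = json.loads(body)
    score = model_predict(payload["features"])
    return json.dumps({"score": score}).encode()


response = handle_request(b'{"features": [1, 2, 3]}')
```

Because the request handling and the model live in one language, there is no second codebase to translate or keep in sync.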
As we are now in the post-A.I. era, there is a growing consensus among data scientists that the increased automation A.I. provides is bound to bleed into our field too, meaning fewer and fewer low-level tasks will need to be carried out by the data scientist, as specialized A.I. will be able to do them instead. This doesn’t necessarily mean that humans won’t be in the loop, however. The data scientists involved will simply have a different role to play, one that is still uncorrelated with years of experience and other obsolete metrics.
So, the only thing standing between new, enthusiastic, competent data scientists and data science jobs is the outdated recruitment processes attached to the job market like fleas. Fortunately, there are companies like ResourceFlow that evaluate candidates more holistically, looking at both the technical and the non-technical aspects of each candidate, while also seeking a clear understanding of what the hiring company actually requires in terms of data science and A.I. expertise. Even if most recruiting companies are slow to adapt to the new realities of our field, companies like ResourceFlow give us hope about the future of data science recruitment.
Zacharias Voulgaris, PhD
Passionate data scientist with a foxy approach to technology, particularly related to A.I.