AI News, Interview questions for data scientists

Interview questions for data scientists

Data science is one of the most ill-defined fields in tech.

Your responsibility as a recruiter is to give a job description as clear as possible: do you need a data engineer, a visualization expert, a data analyst, an algorithm engineer, or a machine learning researcher?

Good candidates have basic knowledge of many topics, are hands-on, and have deep expertise in a few domains.

Those lessons are essential 😄 My opinion is that “open” 1-1 interviews are better than written in-office tests.

For experienced candidates, a written test can be a lot to ask; ask instead about their projects or their presence on Kaggle/GitHub.

If you are starting such a team, rely on someone with expertise: it is the only way to avoid wasting time with over-hyped tools.

Hiring data scientists (part 3): interview questions

[This is part 3 in my series on hiring data scientists.

If you want an overview of the skills I look for in candidates and the archetypes of candidates I see, start from the beginning.] When interviewing for a data scientist position, you need to quickly discover a lot about the candidate.

I typically only have an hour to get a feel for how a candidate fares in each of those areas (followed a week or two later by a case study if I think the candidate has potential).

Without further ado, here is the set of technical questions I ask and why: This question tests if they have a good mental model of what a linear regression is, and if they can explain it in non-technical terms.

The common way people mess it up is by being too technical: “suppose we have normally distributed errors in our dependent variables.” I’m looking for an answer that sounds something like “it’s a way of predicting a value as being proportional to some other values,” along with a simple example.

Even if they totally bomb the code but still mention aggregating by class, I consider it a pass (but I won’t ask them the medium question).

To pass at the medium level, I ask them to improve their code by making it so that the number/word pairs are part of the input, and I could pass an arbitrary number of them (for instance, I could add that 17 prints Jazz).
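
The question itself isn’t quoted in this excerpt, but based on that description (number/word pairs supplied as input, with arbitrarily many of them), a generalized solution might look roughly like the sketch below; the function name and the 1-to-n range are assumptions of mine, not the interviewer’s wording.

def print_with_pairs(n, pairs):
    # pairs is a list of (number, word) tuples, e.g. [(3, "Fizz"), (5, "Buzz"), (17, "Jazz")].
    # For each i from 1 to n, print the concatenated words of every divisor of i,
    # or the number itself if no divisor matches.
    for i in range(1, n + 1):
        words = "".join(word for divisor, word in pairs if i % divisor == 0)
        print(words if words else i)

print_with_pairs(20, [(3, "Fizz"), (5, "Buzz"), (17, "Jazz")])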

For these particular sets of graphs, if you look at the aggregate, the consumer marketing emails had a higher click-through rate; however, when you split into the different customer spend buckets, the business emails do better.

For some reason, most of the business emails went to the low-value customers with lower click rates, and so while in aggregate the business emails did worse, when you account for spend they did better.
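
The reversal described here is Simpson’s paradox. The numbers in the sketch below are purely hypothetical and chosen only to reproduce the mechanic: business emails win within each spend bucket, yet lose in aggregate because most of them went to the low-spend bucket.

import pandas as pd
# Hypothetical counts, for illustration only.
df = pd.DataFrame({
    "email_type":   ["consumer", "consumer", "business", "business"],
    "spend_bucket": ["low", "high", "low", "high"],
    "sent":         [100, 900, 900, 100],
    "clicks":       [2, 90, 27, 12],
})
df["ctr"] = df["clicks"] / df["sent"]
print(df)  # business beats consumer within each bucket (3% vs 2% low, 12% vs 10% high)
totals = df.groupby("email_type")[["clicks", "sent"]].sum()
print(totals["clicks"] / totals["sent"])  # yet consumer wins in aggregate (9.2% vs 3.9%)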

41 Essential Machine Learning Interview Questions (with answers)

We’ve traditionally seen machine learning interview questions pop up in several categories.

The third has to do with your general interest in machine learning: you’ll be asked about what’s going on in the industry and how you keep up with the latest machine learning trends.

Finally, there are company or industry-specific questions that test your ability to take your general machine learning knowledge and turn it into actionable points to drive the bottom line forward.

We’ve divided this guide to machine learning interview questions into the categories mentioned above so that you can more easily get to the information you need.

This can lead to the model underfitting your data, making it hard for it to have high predictive accuracy and for you to generalize your knowledge from the training set to the test set.

The bias-variance decomposition essentially decomposes the learning error from any algorithm by adding the bias, the variance and a bit of irreducible error due to noise in the underlying dataset.
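
Written out for squared-error loss (the standard setting for this decomposition, with f the true function, \hat{f} the learned model, and \sigma^2 the noise variance), that statement reads:

\mathbb{E}\big[(y - \hat{f}(x))^2\big] = \big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2 + \mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big] + \sigma^2 = \text{bias}^2 + \text{variance} + \text{irreducible error}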

For example, in order to do classification (a supervised learning task), you’ll need to first label the data you’ll use to train the model to classify data into your labeled groups.

K-means clustering requires only a set of unlabeled points and a chosen number of clusters k: the algorithm assigns each unlabeled point to its nearest cluster center and iteratively updates each center as the mean of the points assigned to it.
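
A minimal scikit-learn sketch of that loop (assuming scikit-learn and NumPy are available; the random data is only a placeholder):

import numpy as np
from sklearn.cluster import KMeans
X = np.random.rand(200, 2)               # unlabeled points
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_[:10])                   # cluster assignment learned for each point
print(km.cluster_centers_)               # each center is the mean of the points assigned to it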

It’s often used as a proxy for the trade-off between the sensitivity of the model (true positives) vs the fall-out or the probability it will trigger a false alarm (false positives).

More reading: Precision and recall (Wikipedia) Recall is also known as the true positive rate: the number of positives your model correctly identifies compared to the actual number of positives throughout the data.

Precision is also known as the positive predictive value: a measure of how many of the positives your model claims are actually correct, compared to the total number of positives it claims.

It can be easier to think of recall and precision in the context of a case where you’ve predicted that there were 10 apples and 5 oranges in a case of 10 apples.
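
In formula form, with TP, FP, and FN the counts of true positives, false positives, and false negatives:

\text{recall} = \frac{TP}{TP + FN}, \qquad \text{precision} = \frac{TP}{TP + FP}

In the apples example, recall is 10/10 = 100% (every actual apple was predicted), while precision is 10/15 ≈ 67% (only 10 of the 15 predicted fruits were correct).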

Mathematically, it’s expressed as the true positive rate of the condition sample divided by the sum of the false positive rate of the population and the true positive rate of the condition sample.

Say you had a 60% chance of actually having the flu after a flu test, but out of people who had the flu, the test will be false 50% of the time, and the overall population only has a 5% chance of having the flu.
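
One common reading of those numbers is P(positive | flu) = 0.6, P(positive | no flu) = 0.5, and P(flu) = 0.05; under that reading, Bayes’ theorem gives the chance of actually having the flu after a positive test as:

P(\text{flu} \mid +) = \frac{0.6 \times 0.05}{0.6 \times 0.05 + 0.5 \times 0.95} \approx 0.059

so roughly a 6% chance rather than 60%.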

(Quora) Despite its practical applications, especially in text mining, Naive Bayes is considered “Naive” because it makes an assumption that is virtually impossible to see in real-life data: the conditional probability is calculated as the pure product of the individual probabilities of components.

A clever way to think about this is to think of a Type I error as telling a man he is pregnant, while a Type II error means you tell a pregnant woman she isn’t carrying a baby.

More reading: Deep learning (Wikipedia) Deep learning is a subset of machine learning that is concerned with neural networks: how to use backpropagation and certain principles from neuroscience to more accurately model large sets of unlabelled or semi-structured data.

More reading: Using k-fold cross-validation for time-series model selection (CrossValidated) Instead of using standard k-fold cross-validation, you have to pay attention to the fact that a time series is not randomly distributed data: it is inherently ordered chronologically, so each validation fold should come after the data used to train it (an approach often called forward chaining).
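
One way to get that behavior without writing the splitter yourself is scikit-learn’s TimeSeriesSplit (assuming scikit-learn is available); every test fold comes strictly after the observations used for training:

import numpy as np
from sklearn.model_selection import TimeSeriesSplit
X = np.arange(10).reshape(-1, 1)          # observations already in chronological order
for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(X):
    print("train:", train_idx, "test:", test_idx)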

More reading: Pruning (decision trees) Pruning is what happens in decision trees when branches that have weak predictive power are removed in order to reduce the complexity of the model and increase the predictive accuracy of a decision tree model.

For example, if you wanted to detect fraud in a massive dataset with a sample of millions, a more accurate model would most likely predict no fraud at all if only a tiny minority of cases were fraud: with one fraudulent case per thousand, always predicting “no fraud” is 99.9% accurate yet catches nothing.

More reading: Regression vs Classification (Math StackExchange) Classification produces discrete values and maps a dataset to strict categories, while regression gives you continuous results that allow you to better distinguish differences between individual points.

You would use classification over regression if you wanted your results to reflect the belongingness of data points in your dataset to certain explicit categories (ex: If you wanted to know whether a name was male or female rather than just how correlated they were with male and female names.) Q21- Name an example where ensemble techniques might be useful.

They typically reduce overfitting in models and make the model more robust (unlikely to be influenced by small changes in the training data).  You could list some examples of ensemble methods, from bagging to boosting to a “bucket of models” method and demonstrate how they could increase predictive power.
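
As one concrete illustration of bagging (a sketch assuming scikit-learn; the synthetic dataset is only a placeholder), compare a single decision tree with fifty bootstrapped trees combined into one classifier:

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
X, y = make_classification(n_samples=500, random_state=0)                 # synthetic stand-in data
tree = DecisionTreeClassifier(random_state=0)
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
print(cross_val_score(tree, X, y, cv=5).mean())      # single tree: higher variance
print(cross_val_score(bagged, X, y, cv=5).mean())    # bagged trees: typically more robust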

(Quora) This is a simple restatement of a fundamental problem in machine learning: the possibility of overfitting training data and carrying the noise of that data through to the test set, thereby providing inaccurate generalizations.

There are three main methods to avoid overfitting: 1- Keep the model simpler: reduce variance by taking into account fewer variables and parameters, thereby removing some of the noise in the training data. 2- Use cross-validation techniques such as k-fold cross-validation. 3- Use regularization techniques (such as LASSO) that penalize model parameters likely to cause overfitting.

More reading: How to Evaluate Machine Learning Algorithms (Machine Learning Mastery) You would first split the dataset into training and test sets, or perhaps use cross-validation techniques to further segment the dataset into composite sets of training and test sets within the data.
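
A minimal sketch of that workflow with scikit-learn (the iris dataset and logistic regression are only placeholders):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))                      # accuracy on the held-out test set
print(cross_val_score(model, X_train, y_train, cv=5))   # 5-fold cross-validation within the training portion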

More reading: Kernel method (Wikipedia) The kernel trick involves kernel functions that enable algorithms to operate in higher-dimensional spaces without explicitly calculating the coordinates of points in that space: instead, kernel functions compute the inner products between the images of all pairs of data in a feature space.

This gives them the very useful property of working with higher-dimensional representations while being computationally cheaper than the explicit calculation of those coordinates. Many algorithms can be expressed in terms of inner products.
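
A small concrete case: for points in R^2, the polynomial kernel k(x, y) = (x \cdot y)^2 computes an inner product in a three-dimensional feature space without ever forming that space explicitly:

k(x, y) = (x_1 y_1 + x_2 y_2)^2 = \langle \varphi(x), \varphi(y) \rangle, \quad \text{where } \varphi(x) = (x_1^2, \sqrt{2}\,x_1 x_2, x_2^2)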

More reading: Writing pseudocode for parallel programming (Stack Overflow) This kind of question demonstrates your ability to think in parallelism and how you could handle concurrency in programming implementations dealing with big data.
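
A toy answer in Python, just to show the shape such pseudocode usually takes (a map step fanned out across worker processes, then a reduce step); the word-count task and the hard-coded chunks are illustrative assumptions:

from multiprocessing import Pool

def count_words(chunk):
    # map step: count the words in one chunk of text
    return len(chunk.split())

if __name__ == "__main__":
    chunks = ["the quick brown fox", "jumps over", "the lazy dog"]   # pretend these are shards of a large file
    with Pool(processes=2) as pool:
        partial_counts = pool.map(count_words, chunks)               # chunks are processed in parallel
    print(sum(partial_counts))                                       # reduce step: combine the partial results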

For example, if you were interviewing for music-streaming startup Spotify, you could remark that your skills at developing a better recommendation model would increase user retention, which would then increase revenue in the long run.

The startup metrics Slideshare linked above will help you understand exactly what performance indicators are important for startups and tech companies as they think about revenue and growth.

Your interviewer is trying to gauge if you’d be a valuable member of their team and whether you grasp the nuances of why certain things are set the way they are in the company’s data process based on company- or industry-specific conditions.

This overview of deep learning in Nature by the pioneers of deep learning themselves (from Hinton to Bengio to LeCun) can be a good reference paper and an overview of what’s happening in the field.

More reading: Mastering the game of Go with deep neural networks and tree search (Nature) AlphaGo beating Lee Sedol, the best human player at Go, in a best-of-five series was a truly seminal event in the history of machine learning and deep learning.

The Nature paper above describes how this was accomplished with “Monte-Carlo tree search with deep neural networks that have been trained by supervised learning, from human expert games, and by reinforcement learning from games of self-play.” Cover image credit: https://www.flickr.com/photos/iwannt/8596885627

109 Commonly Asked Data Science Interview Questions

For a data science interview, an interviewer will ask questions spanning a wide range of topics, requiring strong technical knowledge and communication skills on the part of the interviewee.

From this list of data science interview questions, an interviewee should be able to prepare for the tough questions, learn what answers will positively resonate with an employer, and develop the confidence to ace the interview.

We’ve broken the data science interview questions into six different categories: statistics, programming, modeling, behavior, culture, and problem-solving.

Statistical computing is the process through which data scientists take raw data and create predictions and models backed by the data.

Accordingly, it is likely a good interviewer will try to probe your understanding of the subject matter with statistics-oriented data science interview questions.

Be prepared to answer some fundamental statistics questions as part of your data science interview, including the kinds of rudimentary statistics questions reported on Glassdoor.

To test your programming skills, employers will ask two things during their data science interview questions: they’ll ask how you would solve programming problems in theory without writing out the code, and then they will also offer whiteboarding exercises for you to code on the spot.

For the latter types of questions we will cover a few examples below, but if you’re looking for in-depth practice solving coding challenges, visit Interview Cake.

The programming questions fall into several groups: general programming, big data, Python, R, and SQL. For additional Python questions that focus on looking at specific snippets of code, check out this useful resource created by Toptal.

Often, SQL questions are case-based, meaning that an employer will task you with solving an SQL problem in order to test your skills from a practical standpoint.

If you can’t describe the theory and assumptions associated with a model you’ve used, it won’t leave a good impression.  Take a look at the questions below to practice.

There are several categories of behavioral questions you’ll be asked. Before the interview, write down examples of work experience related to these topics to refresh your memory.

Of course, if you can highlight experiences having to do with data science, these questions present a great opportunity to showcase a unique accomplishment as a data scientist that you may not have discussed previously.

Asking questions that clarify points of uncertainty is a great way to show that you know how to ask the right questions (a trait that any data scientist should have).

There is no exact formula for preparing for data science interview questions, but hopefully by reviewing these common interview questions you will be able to walk into your interviews well-practiced and confident.

In an interview, what are good interview questions to ask to assess a candidate’s understanding of machine learning/data science?

“Walk me through a project that you are particularly satisfied with or proud of, starting from the problem as it was initially presented to you, the questions you asked, what you tried, and so on.” Don't be shy about interrupting to get clarification of steps or details they skip over: “Why did you choose that solution?” “What else did you consider?” “What were the trade-offs?” “That must have been frustrating.

Tell me something about your project? - Effective answers by Arunabha Bhattacharjee

Here is a video on “Tell me something about your project?” by Arunabha Bhattacharjee, which is brought to you by Freshersworld.com – The No.1 Job portal for ...

How to Succeed in any Programming Interview 2018

I'll show you the 5 steps to succeed in any technical interview. We'll go over what a great study plan looks like, resources to help you find jobs, and how you ...

Tell Me About Yourself - Learn This #1 Trick To Impress Hiring Managers ✓

Need a better job? Register to LIG: Most LIG members find their ideal employment within as little as 15 days. Watch Next: Interview ..

08 common Interview questions and answers - Job Interview Skills

08 common Interview questions and answers - Job Interview Skills 1. "Tell me a little about yourself." You should take this opportunity to show your ...

How You Can Train for America's Hottest Job: Data Scientist

NYC Boot Camp Trains You for America's Hottest Job: Data Scientist Data scientist is the hottest job in America for the second year in a row. So what does a data ...

Google Coding Interview Question and Answer #1: First Recurring Character

Find the first recurring character in the given string! A variation of this problem: find the first NON-recurring character. This variation problem and many others are ...
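
One common linear-time approach to both versions (a sketch, not necessarily the solution presented in the video):

from collections import Counter

def first_recurring_character(s):
    # Return the first character that has already been seen, or None.
    seen = set()
    for ch in s:
        if ch in seen:
            return ch
        seen.add(ch)
    return None

def first_non_recurring_character(s):
    # Variation: return the first character that appears exactly once, or None.
    counts = Counter(s)
    return next((ch for ch in s if counts[ch] == 1), None)

print(first_recurring_character("ABCA"))        # A
print(first_non_recurring_character("ABCA"))    # B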

Hot IT Skills In Demand For Software Engineers Eyeing Top Jobs

Hot IT Skills In Demand For Software Engineers Eyeing Top Jobs Read the full article on the Hottest IT Skills WITH links to the best learning resources for these ...

Job Roles For DATA ENTRY OPERATOR – Entry Level, Database, Arts, Science, WPM, Data Management

Job Roles For DATA ENTRY OPERATOR: Know more about job roles and responsibilities in DATA ENTRY. Coming to DATA ENTRY OPERATOR opportunities ...

Dandelion Mane - Really Quick Questions with a Tensorflow Engineer

I ask 67 questions to Dandelion Mane as we walk around Google HQ in Mountain View, California. Dandelion used to be my roommate and is now working on ...

Career Lunch & Learn: Getting Started in Data Science - the MIT Way

Would you like to transition into the booming field of data science? But, every time you look there's a new technique to learn, a new MOOC to view, a new ...