Data Science Interview Questions
Here's a list of the most popular technical data science interview questions you can expect to face, and how to frame your answers.
1. What is the difference between supervised and unsupervised learning?
Supervised learning
Uses known and labeled data as input
Has a feedback mechanism
The most commonly used supervised learning algorithms are decision trees, logistic regression, and support vector machines
Unsupervised learning
Uses unlabeled data as input
Has no feedback mechanism
The most commonly used unsupervised learning algorithms are k-means clustering, hierarchical clustering, and the apriori algorithm
Logistic regression measures the relationship between the dependent variable (our label of what
we want to predict) and one or more independent variables (our features) by estimating
probability using its underlying logistic function (sigmoid).
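As a rough illustration, here is a minimal Python sketch of the sigmoid and of how a fitted logistic regression turns a linear combination of features into a probability; the coefficients and the single feature are made up for the example.

import numpy as np

def sigmoid(z):
    # Squashes any real-valued input into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fitted coefficients: intercept b0 and one feature weight b1
b0, b1 = -4.0, 0.08
hours_studied = 60                              # illustrative feature value
probability = sigmoid(b0 + b1 * hours_studied)  # estimated probability of the positive class
print(round(probability, 3))                    # about 0.69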
The steps to build a decision tree are as follows:
1. Take the entire data set as input
2. Calculate the entropy of the target variable, as well as the predictor attributes
3. Calculate the information gain of all attributes (we gain information on sorting
different objects from each other)
4. Choose the attribute with the highest information gain as the root node
5. Repeat the same procedure on every branch until the decision node of each branch is
finalized
For example, let's say you want to build a decision tree to decide whether you should accept or
decline a job offer. The decision tree for this case is as shown:
A random forest is built up of a number of decision trees. If you split the data into different
packages and make a decision tree in each of the different groups of data, the random forest
brings all those trees together.
1. Randomly select 'k' features from a total of 'm' features where k << m
2. Among the 'k' features, calculate the node D using the best split point
3. Split the node into daughter nodes using the best split
4. Repeat steps two and three until leaf nodes are finalized
5. Build forest by repeating steps one to four for 'n' times to create 'n' number of trees
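As a sketch of this in practice (assuming scikit-learn and its bundled iris data purely for illustration):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# n_estimators is the number of trees ('n'); max_features limits the 'k' features tried per split
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=42)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))  # accuracy on the held-out data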
Overfitting refers to a model that is tuned only to a very small amount of data and ignores the
bigger picture. There are three main methods to avoid overfitting:
1. Keep the model simple—take fewer variables into account, thereby removing some of
the noise in the training data
2. Use cross-validation techniques, such as k-folds cross-validation
3. Use regularization techniques, such as LASSO, that penalize certain model parameters
if they're likely to cause overfitting (a brief sketch of this follows below)
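For the regularization point, a minimal sketch with scikit-learn's Lasso on synthetic data (the alpha value and the data are illustrative):

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                  # 10 features, only 2 of them informative
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

model = Lasso(alpha=0.1)                        # larger alpha means a stronger L1 penalty
model.fit(X, y)
print(model.coef_)                              # most coefficients are shrunk toward zero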
Univariate
Univariate data contains only one variable. The purpose of the univariate analysis is to describe
the data and find patterns that exist within it.
Example (a single variable, e.g., height in cm): 167.3, 170, 174.2, 178, 180
The patterns can be studied by drawing conclusions using mean, median, mode, dispersion or
range, minimum, maximum, etc.
Bivariate
Bivariate data involves two different variables. The analysis of this type of data deals with causes
and relationships and the analysis is done to determine the relationship between the two
variables.
Example: temperature and ice cream sales in the summer season
Temperature (°C)    Ice Cream Sales
20                  2,000
25                  2,100
26                  2,300
28                  2,400
30                  2,600
36                  3,100
Here, the relationship is visible from the table: temperature and sales are directly proportional
to each other. The hotter the temperature, the better the sales.
Multivariate
Multivariate data involves three or more variables. Example: attributes of a house used to predict its price, where each row lists the attribute values (such as the number of rooms and the area in square feet) followed by the price:
2    0    900      $400,000
3    2    1,100    $600,000
4    3    2,100    $1,200,000
The patterns can be studied by drawing conclusions using mean, median, and mode, dispersion
or range, minimum, maximum, etc. You can start describing the data and using it to guess what
the price of the house will be.
7. What are the feature selection methods used to select the right
variables?
There are two main methods for feature selection, i.e., filter and wrapper methods.
Filter Methods
This involves:
ANOVA
Chi-Square
The best analogy for selecting features is "bad data in, bad answer out." When we're limiting or
selecting the features, it's all about cleaning up the data coming in.
Wrapper Methods
This involves:
Forward Selection: We test one feature at a time and keep adding them until we get a
good fit
Backward Selection: We test all the features and start removing them to see what
works better
Recursive Feature Elimination: Recursively looks through all the different features
and how they pair together
Wrapper methods are very labor-intensive, and high-end computers are needed if a lot of data
analysis is performed with the wrapper method.
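As an illustration of the third wrapper method, a minimal recursive feature elimination sketch with scikit-learn (the dataset, estimator, and number of features to keep are all illustrative):

from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# Recursively drop the weakest features until only five remain
selector = RFE(estimator=LogisticRegression(max_iter=5000), n_features_to_select=5)
selector.fit(X, y)
print(selector.support_)   # boolean mask marking the selected features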
8. In your choice of language, write a program that prints the
numbers ranging from one to 50.
But for multiples of three, print "Fizz" instead of the number, and for the multiples of five, print
"Buzz." For numbers which are multiples of both three and five, print "FizzBuzz"
Note that range(51) would generate the numbers zero to 50, whereas the question asks for one to
50; therefore, the range is written as range(1, 51) in the solution below.
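One possible Python solution:

for num in range(1, 51):        # one to 50, inclusive
    if num % 15 == 0:           # multiple of both three and five
        print("FizzBuzz")
    elif num % 3 == 0:
        print("Fizz")
    elif num % 5 == 0:
        print("Buzz")
    else:
        print(num)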
If the data set is large, we can simply remove the rows with missing data values. This is the
quickest way, and we then use the rest of the data to build the model.
For smaller data sets, we can substitute missing values with the mean or average of the rest of the
data using a pandas DataFrame in Python. There are different ways to do so, such as using
df.mean() together with df.fillna().
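A minimal pandas sketch of mean imputation, using a hypothetical 'age' column as an example:

import pandas as pd

df = pd.DataFrame({"age": [22, 25, None, 31, None, 28]})   # toy data with missing values
df["age"] = df["age"].fillna(df["age"].mean())             # replace NaNs with the column mean
print(df)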
10. For the given points, how will you calculate the Euclidean
distance in Python?
plot1 = [1,3]
plot2 = [2,5]
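One way to compute it in Python with the math module:

import math

plot1 = [1, 3]
plot2 = [2, 5]

# Square root of the sum of squared coordinate differences
euclidean_distance = math.sqrt((plot1[0] - plot2[0]) ** 2 + (plot1[1] - plot2[1]) ** 2)
print(euclidean_distance)   # about 2.236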
Dimensionality reduction refers to the process of converting a data set with vast dimensions
into data with fewer dimensions (fields) to convey similar information concisely.
This reduction helps in compressing data and reducing storage space. It also reduces computation
time as fewer dimensions lead to less computing. It removes redundant features; for example,
there's no point in storing a value in two different units (meters and inches).
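As a minimal sketch, principal component analysis (PCA) in scikit-learn reduces dimensionality like this (the iris data and the choice of two components are illustrative):

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)         # four-dimensional data
pca = PCA(n_components=2)                 # keep the two directions with the most variance
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                    # (150, 2)
print(pca.explained_variance_ratio_)      # share of variance kept by each component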
To find the eigenvalues and eigenvectors of the 3×3 matrix:
-2  -4   2
-2   1   2
 4   2   5
Expanding the characteristic determinant det(A − λI) = 0 gives:
−λ³ + 4λ² + 27λ − 90 = 0, i.e., λ³ − 4λ² − 27λ + 90 = 0
Substituting λ = 3: 3³ − 4 × 3² − 27 × 3 + 90 = 0
Hence, (λ − 3) is a factor; dividing it out leaves λ² − λ − 30 = (λ − 6)(λ + 5), so the eigenvalues are 3, 6, and −5.
For the eigenvector of λ = 3, solve (A − 3I)v = 0. Setting X = 1:
−5 − 4Y + 2Z = 0
−2 − 2Y + 2Z = 0
Subtracting the second equation from the first gives −3 − 2Y = 0, so:
Y = −(3/2)
Z = −(1/2)
The eigenvector corresponding to λ = 3 is therefore (1, −3/2, −1/2).
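The result can be verified with NumPy:

import numpy as np

A = np.array([[-2, -4, 2],
              [-2,  1, 2],
              [ 4,  2, 5]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # approximately 3, -5, and 6 (order may vary)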
Monitor
Constant monitoring of all models is needed to determine their performance accuracy. When you
change something, you want to figure out how your changes are going to affect things. This
needs to be monitored to ensure it's doing what it's supposed to do.
Evaluate
Evaluation metrics of the current model are calculated to determine if a new algorithm is
needed.
Compare
The new models are compared to each other to determine which model performs the best.
Rebuild
The best-performing model is re-built on the current state of the data.
A recommender system predicts what a user would rate a specific product based on their
preferences. It can be split into two different areas:
Collaborative Filtering
As an example, Last.fm recommends tracks that other users with similar interests play often.
This is also commonly seen on Amazon after making a purchase; customers may notice the
following message accompanied by product recommendations: "Users who bought this also
bought…"
Content-based Filtering
As an example: Pandora uses the properties of a song to recommend music with similar
properties. Here, we look at content, instead of looking at who else is listening to music.
RMSE and MSE are two of the most common measures of accuracy for a linear
regression model.
MSE indicates the Mean Squared Error: the average of the squared differences between the
predicted and the actual values.
RMSE indicates the Root Mean Square Error: the square root of the MSE, which expresses the
error in the same units as the target variable.
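A minimal sketch of both measures with scikit-learn, using made-up actual and predicted values:

import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([3.0, 5.0, 7.5, 10.0])   # actual values (illustrative)
y_pred = np.array([2.5, 5.0, 8.0, 9.0])    # model predictions (illustrative)

mse = mean_squared_error(y_true, y_pred)   # average of the squared errors
rmse = np.sqrt(mse)                        # back in the units of the target
print(mse, rmse)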
We use the elbow method to select k for k-means clustering. The idea of the elbow method is to
run k-means clustering on the data set for a range of values of 'k' (the number of clusters).
The within-cluster sum of squares (WSS) is defined as the sum of the squared distances between each
member of a cluster and its centroid. Plotting WSS against k, the point where the curve bends like
an elbow indicates a good value of k.
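A minimal sketch of the elbow method with scikit-learn, using synthetic blob data purely for illustration:

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)   # toy data

wss = []
for k in range(1, 10):
    model = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    wss.append(model.inertia_)              # inertia_ is the within-cluster sum of squares

plt.plot(range(1, 10), wss, marker="o")
plt.xlabel("k")
plt.ylabel("WSS")
plt.show()                                   # look for the 'elbow' in the curve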
p-value typically ≤ 0.05: This indicates strong evidence against the null hypothesis, so you reject the null hypothesis.
p-value typically > 0.05: This indicates weak evidence against the null hypothesis, so you accept the null hypothesis.
p-value at the cutoff of 0.05: This is considered marginal, meaning it could go either way.
Example: height of an adult = abc ft. This cannot be true, as the height cannot be a string value.
In this case, outliers can be removed.
If the outliers have extreme values, they can be removed. For example, if all the data points are
clustered between zero and 10, but one point lies at 100, then we can remove this point.
Try a different model. Data detected as outliers by linear models can be fit by
nonlinear models. Therefore, be sure you are choosing the correct model.
Try normalizing the data. This way, the extreme data points are pulled to a similar
range.
You can use algorithms that are less affected by outliers; an example would
be random forests.
It is stationary when the variance and mean of the series are constant with time.
In the second graph, the waves get bigger, which means it is non-stationary and the variance is
changing with time.
You can see the values for total data, actual values, and predicted values in the confusion matrix.
The formula for accuracy is:
Accuracy = (True Positives + True Negatives) / Total Observations
= 609 / 650
= 0.93
21. Write the equation and calculate the precision and recall rate.
Precision = True Positives / (True Positives + False Positives)
= 262 / 277
= 0.94
Recall = True Positives / (True Positives + False Negatives)
= 0.90
The engine makes predictions on what might interest a person based on the preferences of other
users. In this algorithm, item features are unknown.
For example, a sales page shows that a certain number of people buy a new phone and also buy
tempered glass at the same time. Next time, when a person buys a phone, he or she may see a
recommendation to buy tempered glass as well.
23. Write a basic SQL query that lists all orders with customer
information.
Usually, we have order tables and customer tables that contain the following columns:
Order Table
Orderid
customerId
OrderNumber
TotalAmount
Customer Table
Id
FirstName
LastName
City
Country
SELECT OrderNumber, TotalAmount, FirstName, LastName, City, Country
FROM Order
JOIN Customer
ON Order.CustomerId = Customer.Id
24. You are given a dataset on cancer detection. You have built
a classification model and achieved an accuracy of 96 percent.
Why shouldn't you be happy with your model performance?
What can you do about it?
Cancer detection results in imbalanced data. In an imbalanced dataset, accuracy should not be
used as a measure of performance. It is important to focus on the remaining four percent, which
represents the patients who were wrongly diagnosed. Early diagnosis is crucial when it comes to
cancer detection, and can greatly improve a patient's prognosis.
Hence, to evaluate model performance, we should use Sensitivity (True Positive Rate),
Specificity (True Negative Rate), and the F-measure to determine the class-wise performance of
the classifier.
K-means clustering
Linear regression
K-nearest neighbor (KNN)
Decision trees
The K nearest neighbor algorithm can be used because it can compute the nearest neighbor and if
it doesn't have a value, it just computes the nearest neighbor based on all the other features.
When you're dealing with K-means clustering or linear regression, you need to do that in your
pre-processing, otherwise, they'll crash. Decision trees also have the same problem, although
there is some variance.
26. Below are the eight actual values of the target variable in the
train file. What is the entropy of the target variable?
[0, 0, 0, 1, 1, 1, 1, 1]
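With three 0s and five 1s, p(0) = 3/8 and p(1) = 5/8, so the entropy is −(3/8)·log2(3/8) − (5/8)·log2(5/8) ≈ 0.95. A quick check in Python:

import math

values = [0, 0, 0, 1, 1, 1, 1, 1]
p1 = sum(values) / len(values)    # proportion of ones = 5/8
p0 = 1 - p1                       # proportion of zeros = 3/8
entropy = -(p0 * math.log2(p0) + p1 * math.log2(p1))
print(round(entropy, 3))          # 0.954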
1. Logistic Regression
2. Linear Regression
3. K-means clustering
4. Apriori algorithm
1. K-means clustering
2. Linear regression
3. Association rules
4. Decision trees
As we are looking for grouping people together specifically by four different similarities, it
indicates the value of k. Therefore, K-means clustering (answer A) is the most appropriate
algorithm for this study.
1. One-way ANOVA
2. K-means clustering
3. Association rules
4. Student's t-test
31. What do you understand about true positive rate and false-
positive rate?
The True Positive Rate (TPR) defines the probability that an actual positive will turn
out to be positive.
The True Positive Rate (TPR) is calculated as the ratio of True Positives (TP) to all actual
positives, i.e., True Positives (TP) plus False Negatives (FN).
TPR = TP / (TP + FN)
The False Positive Rate (FPR) defines the probability that an actual negative result
will be shown as a positive one, i.e., the probability that the model will generate a false
alarm.
The False Positive Rate (FPR) is calculated as the ratio of False Positives (FP) to all actual
negatives, i.e., False Positives (FP) plus True Negatives (TN).
FPR = FP / (FP + TN)
The graph between the True Positive Rate on the y-axis and the False Positive Rate on the x-axis
is called the ROC curve and is used in binary classification.
The False Positive Rate (FPR) is calculated by taking the ratio between False Positives and the
total number of negative samples, and the True Positive Rate (TPR) is calculated by taking the
ratio between True Positives and the total number of positive samples.
In order to construct the ROC curve, the TPR and FPR values are plotted at multiple threshold
values. The area under the ROC curve (AUC) ranges between 0 and 1. A completely random
model, which is represented by the diagonal straight line, has an AUC of 0.5. The amount of
deviation the ROC has from this straight line denotes the efficiency of the model.
The image above denotes a ROC curve example.
TRUE-POSITIVE RATE: The true-positive rate gives the proportion of correct predictions of
the positive class. It is also used to measure the percentage of actual positives that are accurately
verified.
FALSE-POSITIVE RATE: The false-positive rate gives the proportion of actual negatives that are
incorrectly predicted as positive. A false positive occurs when the model predicts something is
positive when it is actually negative.
The primary and vital difference between Data Science and traditional application programming
is that in traditional programming, one has to create rules to translate the input to output. In Data
Science, the rules are automatically produced from the data.
36. What is the difference between the long format data and
wide format data?
LONG FORMAT DATA: It contains values that repeat in the first column. In this format, each
row is a one-time point per subject.
WIDE FORMAT DATA: In the Wide Format Data, the data’s repeated responses will be in a
single row, and each response can be recorded in separate columns.
Example:
NAME    HEIGHT
RAMA    182
SITA    160
Data scientists and technical analysts must convert huge amounts of raw data into usable data.
Data cleaning includes removing malformed records, outliers, inconsistent values, redundant
formatting, etc. Python libraries such as Pandas (with Matplotlib for visual inspection) are among
the most commonly used data-cleaning tools.
TensorFlow
Pandas
NumPy
SciPy
Scrapy
Librosa
Matplotlib
Variance describes how the individual values in a set of data are distributed about the mean: it is
the average of the squared differences of each value from the mean value. Data
Scientists use variance to understand the spread of a data set.
Entropy is the measure of randomness or disorder in a group of observations. It also determines
how a decision tree chooses to split data. Entropy is also used to check the homogeneity of the
given data. If the entropy is zero, then the sample of data is entirely homogeneous, and if the
entropy is one, then it indicates that the sample is equally divided between the classes.
Information gain is the expected reduction in entropy. Information gain decides the building of
the tree. Information Gain makes the decision tree smarter. Information gain includes parent
node R and a set E of K training examples. It calculates the difference between entropy before
and after the split.
K-fold cross validation is a procedure used to estimate the model's skill on new data. In k-
fold cross validation, every observation from the original dataset may appear in the training and
testing sets. K-fold cross-validation estimates the accuracy but does not, by itself, improve the
accuracy.
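A minimal sketch with scikit-learn's cross_val_score, where the iris data, the logistic regression estimator, and the five folds are all illustrative:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores)          # one accuracy score per fold
print(scores.mean())   # the cross-validated estimate of model skill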
Normal Distribution is also known as the Gaussian Distribution. The normal distribution shows
the data near the mean and the frequency of that particular data. When represented in graphical
form, normal distribution appears like a bell curve. The parameters included in the normal
distribution are Mean, Standard Deviation, Median etc.
46. What is Deep Learning?
Deep Learning is one of the essential areas of Data Science and builds on statistics. Its
algorithms are created to loosely resemble the structure of the human brain. In Deep Learning,
multiple layers are stacked on top of the raw input so that progressively higher-level features are
extracted, ending with the features that are most useful for the task.
RNN is an algorithm that works on sequential data. RNNs are used in language translation, voice
recognition, image captioning, etc. There are different types of RNN architectures, such as one-to-one,
one-to-many, many-to-one, and many-to-many. RNNs are used in Google's voice search and
Apple's Siri.
2. Look for a split that maximizes the separation of the classes. A split is any test that
divides the data into two sets.
3. Apply the split to the input data (divide step).
6. This step is called pruning. Clean up the tree if you went too far doing splits.
Root cause analysis was initially developed to analyze industrial accidents but is now widely
used in other areas. It is a problem-solving technique used for isolating the root causes of faults
or problems. A factor is called a root cause if removing it from the problem-fault sequence
prevents the final undesirable event from recurring.
Logistic regression is also known as the logit model. It is a technique used to forecast the binary
outcome from a linear combination of predictor variables.
Recommender systems are a subclass of information filtering systems that are meant to predict
the preferences or ratings that a user would give to a product.
Cross-validation is a model validation technique for evaluating how the outcomes of a statistical
analysis will generalize to an independent data set. It is mainly used in backgrounds where the
objective is to forecast and one wants to estimate how accurately a model will accomplish in
practice.
The goal of cross-validation is to set aside part of the data to test the model during the training
phase (i.e., a validation data set) in order to limit problems like overfitting and gain insight into how
the model will generalize to an independent data set.
Most recommender systems use this filtering process to find patterns and information by
collaborating perspectives, numerous data sources, and several agents.
They do not, because in some cases they reach a local minimum or a local optimum point rather
than the global optimum. This is governed by the data and the starting conditions.
This is statistical hypothesis testing for randomized experiments with two variants, A and B.
The objective of A/B testing is to detect changes to a web page that maximize or increase the
outcome of a strategy.
These are extraneous variables in a statistical model that correlates directly or inversely with
both the dependent and the independent variable. The estimate fails to account for the
confounding factor.
It is a traditional database schema with a central table. Satellite tables map IDs to physical names
or descriptions and can be connected to the central fact table using the ID fields; these tables are
known as lookup tables and are principally useful in real-time applications, as they save a lot of
memory. Sometimes, star schemas involve several layers of summarization to recover
information faster.
Eigenvectors are for understanding linear transformations. In data analysis, we usually calculate
the eigenvectors for a correlation or covariance matrix.
Selection bias, in general, is a problematic situation in which error is introduced due to a non-
random population sample.
65. What are the types of biases that can occur during sampling?
1. Selection bias
2. Undercoverage bias
3. Survivorship bias
The underlying principle of this technique is that several weak learners combine to provide a
strong learner.
This exhaustive list is sure to strengthen your preparation for data science interview questions.
Some of the popular machine learning algorithms which are low on the bias scale are -
Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and Decision Trees.
While trying to get over bias in our model, we try to increase the complexity of the machine
learning algorithm. Though it helps in reducing the bias, after a certain point, it generates an
overfitting effect on the model hence resulting in hyper-sensitivity and high variance.
Bias-Variance trade-off: To achieve the best performance, the main target of a supervised
machine learning algorithm is to have low variance and bias.
The following things are observed regarding some of the popular machine learning algorithms -
The Support Vector Machine algorithm (SVM) has high variance and low bias. In
order to change the trade-off, we can adjust the parameter C, which controls the number of
margin violations allowed in the training data; decreasing C increases the bias and decreases
the variance.
Like the SVM, the K-Nearest Neighbors (KNN) machine learning algorithm
has high variance and low bias. To change the trade-off of this algorithm, we can
increase the number of neighbors that influence the prediction by increasing the K value, thus
increasing the model bias and lowering the variance.
A Markov Chain is a process in which a state's future probability depends only on its current state.
The below diagram explains a step-by-step model of the Markov Chains whose output depends
on their current state.
A perfect example of a Markov Chain is a word-recommendation system. In this system,
the model recognizes and recommends the next word based on the immediately previous word
and not anything before that. The Markov Chain learns these word-to-word transitions from
paragraphs in the training data and generates recommendations for the current text accordingly.
R has multiple libraries like lattice, ggplot2, leaflet, etc., and so many inbuilt functions
as well.
The frequency of a certain feature’s values is denoted visually by both box plots
and histograms.
Boxplots are more often used for comparing several datasets; compared to histograms, they take
less space and contain fewer details. Histograms are used to understand the probability
distribution underlying a dataset.
NLP is short for Natural Language Processing. It deals with the study of how computers can be
programmed to learn from massive amounts of textual data. A few popular examples of NLP are
stemming, sentiment analysis, tokenization, removal of stop words, etc.
The difference between an error and a residual error is defined below:
Error: how the actual population data and the observed data differ from each other.
Residual error: how the sample population data and the observed data differ from each other.
Standardization: The technique of rescaling data values so that they have a mean of 0 and a
standard deviation of 1 is known as standardization.
Normalization: The technique of converting all data values to lie between 0 and 1 is known as
normalization. This is also known as min-max scaling.
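A minimal scikit-learn sketch of the two techniques on a toy column of values:

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

data = np.array([[10.0], [20.0], [30.0], [40.0]])     # toy values

normalized = MinMaxScaler().fit_transform(data)       # rescaled to lie between 0 and 1
standardized = StandardScaler().fit_transform(data)   # mean 0, standard deviation 1
print(normalized.ravel())
print(standardized.ravel())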
Confidence Interval: A range of values likely containing the population parameter is given by the
confidence interval. Further, it even tells us how likely that particular interval can contain the
population parameter. The Confidence Coefficient (or Confidence level) is denoted by 1-alpha,
which gives the probability or likeness. The level of significance is given by alpha.
Point Estimates: An estimate of the population parameter is given by a particular value called the
point estimate. Some popular methods used to derive Population Parameters’ Point estimators
are - Maximum Likelihood estimator and the Method of Moments.
To conclude, the bias and variance are inversely proportional to each other, i.e., an increase in
bias results in a decrease in the variance, and an increase in variance results in a decrease in bias.
To crack a data science interview is no walk in the park. It requires in-depth knowledge and
expertise in various topics. Furthermore, the projects that you have worked on can significantly
boost your potential in a lot of interviews. In order to help you with your interviews, we have
compiled a set of questions for you to relate to. Since data science is an extensive field, there are
no limitations on the type of questions that can be inquired. With that being said, you can answer
each of these questions depending on the projects you have worked on and the industries you
have been in. Try to answer each one of these sample questions and then share your answer with
us through the comments.
Pro Tip: No matter how basic a question may seem, always try to view it from a technical
perspective and use each question to demonstrate your unique technical skills and abilities.
One of the popular and versatile machine learning algorithms is the Random Forest. It's an
ensemble method that combines multiple decision trees, providing high accuracy, handling both
classification and regression tasks, and reducing overfitting. Its ability to handle large datasets
and diverse feature types makes it a powerful choice in various applications.
The most important skill that makes a good data scientist is a strong foundation in statistics. Data
scientists need to understand statistical concepts to analyze and interpret data accurately, draw
meaningful insights, and make data-driven decisions. This skill allows them to select appropriate
modeling techniques, handle uncertainty, and effectively communicate findings to stakeholders,
ensuring the success of data-driven projects.
Data science is popular today due to the explosion of data and the potential to extract valuable
insights from it. Organizations across various industries recognize the importance of data-driven
decision-making to gain a competitive edge. Moreover, advancements in technology and
accessible tools have made data science more approachable, attracting professionals from diverse
backgrounds to harness data's power for innovation and problem-solving.
79. Explain the most challenging data science project that you
worked on.
The most challenging data science project I encountered involved analyzing vast amounts of
unstructured text data from various sources. Extracting meaningful insights required advanced
natural language processing techniques, sentiment analysis, and topic modeling. Additionally,
handling data quality issues and ensuring scalable processing posed significant hurdles.
Collaborating with domain experts and iteratively refining models were crucial to deliver
accurate and actionable results.
For projects, I can provide support individually, in small teams, or as part of larger teams. My
adaptability allows me to assist in diverse settings, leveraging my capabilities to meet project
requirements effectively and contribute to successful outcomes, regardless of team size.
82. What are some unique skills that you can bring to the team
as a data scientist?
As a data scientist, I bring expert knowledge in machine learning, statistical modeling, and data
visualization. My ability to translate complex data into actionable insights is valuable. I have
proficiency in programming languages like Python, R, and SQL, crucial for data manipulation
and analysis. Additionally, my experience with big data platforms and tools, along with strong
problem-solving skills, uniquely position me to contribute.
83. Were you always in the data science field? If not, what made
you change your career path and how did you upgrade your
skills?
No, I switched to the data science field recently because of the ever-increasing opportunities in
the domain.
84. If we give you a random data set, how will you figure out
whether it suits the business needs or not?
To ensure a random dataset suits business needs, first understand the business objectives and key
performance indicators. Then, assess the dataset's relevance, quality, and completeness with
respect to these objectives. If necessary, perform exploratory data analysis to uncover patterns or
trends. Confirm that the dataset contains actionable insights that can drive business decisions.
85. Given a chance, if you could pick a career other than being a
data scientist, what would you choose?
The role of a Data Engineer is a vital and rewarding profession. They are responsible for
designing, building, and managing the data infrastructure. They create the architecture that
enables data generation, processing, storage, and retrieval. Their work allows data scientists to
perform analyses and make meaningful contributions.
86. Given the constant change in the data science field, how
quickly can you adapt to new technologies?
I'm a keen learner and always ready to upskill. I think I will be able to adapt to new technologies
in no time.
87. Have you ever been in a conflict with your colleagues
regarding different strategies to go about a project? How were
you able to resolve it?
3. Find Common Ground: Identify shared goals or priorities that everyone agrees on.
5. Compromise: Recognize that a perfect solution may not exist and compromise might
be needed.
6. Feedback and Follow-up: Regularly review the strategy's progress and adjust as
needed.
88. Can you break down an algorithm you have used on a recent
project?
89. What tools did you use in your last project and why?
2. Libraries: Pandas, NumPy, Scikit-learn for data processing and machine learning.
3. Visualization Tools: Matplotlib, Seaborn, Tableau for data visualization.
90. What is your most favored strategy to clean a big data set
and why?
My most favored strategy is iterative cleaning, where data is cleaned in stages or chunks, rather
than all at once. This approach, often combined with automation tools, is efficient and
manageable for large datasets. It allows for quality checks at each stage, minimizes the risk of
data loss, and enables timely error detection.