Data Science Interview Preparation (30 Days of Interview Preparation)
# DAY 04
Q1. What is upsampling and downsampling with examples?
Q2. What is Hypothesis Testing?
Null hypothesis: the two sample means are equal. Alternate hypothesis: the two sample means are not equal.
To test the null hypothesis, a test statistic is calculated. The test statistic is then compared with a critical value, and if it is found to be greater than the critical value, the null hypothesis is rejected.
Critical Value:-
Critical values are the points beyond which we reject the null hypothesis. The critical value tells us the probability of N samples belonging to the same distribution: the higher the critical value, the lower the probability of the N samples belonging to the same distribution.
Critical values can be used to do hypothesis testing in the following way.
IMP- If the test statistic is lower than the critical value, accept (fail to reject) the null hypothesis; otherwise, reject it.
1-Chi-Square Test:-
A chi-square test is used to check whether there is a relationship between two categorical variables.
The chi-square test determines whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories. Chi-square is also called a non-parametric test, as it does not use any distribution parameter.
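As a minimal sketch (the contingency table below is hypothetical), a chi-square test of independence can be run with SciPy's chi2_contingency:

```python
# A minimal sketch of a chi-square test of independence using SciPy,
# with a hypothetical 2x2 contingency table (the counts are made up).
import numpy as np
from scipy.stats import chi2_contingency

# Observed frequencies (rows: group, columns: category)
observed = np.array([[30, 10],
                     [20, 40]])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p-value = {p_value:.4f}, dof = {dof}")

# Reject the null hypothesis (no relationship) at the 5% significance level
if p_value < 0.05:
    print("Significant relationship between the two categorical variables")
else:
    print("No significant relationship detected")
```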
2-ANOVA Test:-
ANOVA, also called analysis of variance, is used to compare multiple (three or more) samples with a single test.
It is useful when there are more than two populations. ANOVA compares the variance within and between the groups. If the between-group variation is much larger than the within-group variation, the means of the different samples will not be equal. If the between and within variations are approximately the same size, then there will be no significant difference between the sample means.
Assumptions of ANOVA:
1- All populations involved follow a normal distribution.
2- All populations have the same variance (or standard deviation).
3- The samples are randomly selected and independent of one another.
ANOVA uses the means of the samples or the populations to reject or support the null hypothesis. Hence it is called a parametric test.
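A minimal one-way ANOVA sketch using SciPy's f_oneway, with made-up samples for three groups:

```python
# A minimal sketch of a one-way ANOVA on three hypothetical samples.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
group_a = rng.normal(loc=50, scale=5, size=30)
group_b = rng.normal(loc=52, scale=5, size=30)
group_c = rng.normal(loc=49, scale=5, size=30)

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.3f}, p-value = {p_value:.4f}")

# Null hypothesis: all group means are equal
if p_value < 0.05:
    print("At least one group mean differs significantly")
else:
    print("No significant difference between group means")
```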
3-Z Statistic:-
In a z-test, the samples are assumed to be normally distributed. A z-score is calculated with the population parameters "population mean" and "population standard deviation", and it is used to validate the hypothesis that the sample drawn belongs to the same population.
The statistic used for this hypothesis test is called the z-statistic, and its score is calculated as z = (x̄ − μ) / (σ / √n), where x̄ = sample mean, μ = population mean, σ = population standard deviation and n = sample size, so σ / √n is the standard error of the mean. If the test statistic is lower than the critical value, accept the hypothesis; otherwise, reject it.
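A minimal sketch of this z-test, assuming the population mean and standard deviation are known (all numbers below are hypothetical):

```python
# A minimal sketch of a one-sample z-test with known population parameters.
import numpy as np
from scipy.stats import norm

pop_mean = 100.0      # population mean (mu)
pop_std = 15.0        # population standard deviation (sigma)

rng = np.random.default_rng(42)
sample = rng.normal(loc=103, scale=15, size=50)   # hypothetical sample
n = len(sample)

# z = (x_bar - mu) / (sigma / sqrt(n))
z_stat = (sample.mean() - pop_mean) / (pop_std / np.sqrt(n))

# Two-tailed test at the 5% significance level
critical_value = norm.ppf(1 - 0.05 / 2)     # ~1.96
print(f"z = {z_stat:.3f}, critical value = {critical_value:.3f}")

if abs(z_stat) > critical_value:
    print("Reject the null hypothesis")
else:
    print("Fail to reject the null hypothesis")
```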
4-T Statistic:-
A t-test is used to compare the means of the given samples. Like the z-test, the t-test also assumes a normal distribution of the samples. A t-test is used when the population parameters (mean and standard deviation) are unknown.
There are three versions of the t-test:
1- One-sample t-test, which tests the mean of a single group against a known mean.
2- Independent two-sample t-test, which compares the means of two independent groups.
3- Paired t-test, which compares the means of the same group measured at two different times.
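A minimal sketch of an independent two-sample t-test with SciPy's ttest_ind on hypothetical samples:

```python
# A minimal sketch of an independent two-sample t-test; the population
# parameters are not assumed known, and the data is made up.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
sample_1 = rng.normal(loc=5.0, scale=1.0, size=25)
sample_2 = rng.normal(loc=5.5, scale=1.0, size=25)

t_stat, p_value = ttest_ind(sample_1, sample_2)
print(f"t = {t_stat:.3f}, p-value = {p_value:.4f}")
```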
5-F Statistic:-
The F-test is designed to test whether two population variances are equal. It compares the ratio of the two variances; therefore, if the variances are equal, the ratio of the variances will be 1.
The F-distribution is the ratio of two independent chi-square variables divided by their respective degrees of freedom.
F = s1² / s2², where s1² > s2² (the larger sample variance is placed in the numerator).
If the null hypothesis is true, then the F test statistic given above can be simplified, and this ratio of sample variances is the test statistic used. If the null hypothesis is false, then we reject the null hypothesis that the ratio is equal to 1, along with our assumption that the variances are equal.
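A minimal sketch of this variance F-test; since SciPy has no single-call two-sample variance F-test, the ratio and p-value are computed directly from the F-distribution (the data is hypothetical):

```python
# A minimal sketch of an F-test for equality of two variances.
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(1)
sample_1 = rng.normal(loc=0, scale=2.0, size=30)
sample_2 = rng.normal(loc=0, scale=1.5, size=25)

# Place the larger sample variance in the numerator so that F >= 1
var_1, var_2 = np.var(sample_1, ddof=1), np.var(sample_2, ddof=1)
if var_1 >= var_2:
    df1, df2 = len(sample_1) - 1, len(sample_2) - 1
else:
    var_1, var_2 = var_2, var_1
    df1, df2 = len(sample_2) - 1, len(sample_1) - 1

f_stat = var_1 / var_2
p_value = 2 * f.sf(f_stat, df1, df2)   # two-tailed p-value
print(f"F = {f_stat:.3f}, p-value = {p_value:.4f}")
```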
Central Limit Theorem (CLT):-
According to the Central Limit Theorem, if we draw sufficiently large samples from any population, the sampling distribution of the sample mean is approximately normal, and its mean equals the population mean:
µX̄ = µ
Where,
µX̄ = mean of the sample means, µ = population mean,
and the standard deviation of the sample mean is denoted as:
σX̄ = σ / √n
Where,
σX̄ = standard deviation of the sample mean, σ = population standard deviation, n = sample size.
A sufficiently large sample size can predict the characteristics of a population accurately. For example, if we take uniformly distributed data, the plot of the sample means is normally distributed. Even for randomly (exponentially) distributed data, the plot of the sample means is normally distributed.
The advantage of the CLT is that we need not worry about the distribution of the actual data, since the sample means will always be normally distributed. With this, we can create confidence intervals and perform t-tests and ANOVA tests on the given samples.
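A minimal simulation sketch of the CLT: sample means drawn from an exponential (non-normal) distribution cluster around the population mean with standard error σ/√n (all parameters below are hypothetical):

```python
# A minimal sketch illustrating the Central Limit Theorem: means of many
# samples from a skewed exponential distribution are approximately normal.
import numpy as np

rng = np.random.default_rng(0)
sample_size = 50
n_samples = 5_000

# Each row is one sample; take the mean of every sample
sample_means = rng.exponential(scale=2.0, size=(n_samples, sample_size)).mean(axis=1)

# Mean of sample means ~ population mean (2.0); std ~ sigma / sqrt(n)
print("mean of sample means:", sample_means.mean())
print("expected standard error:", 2.0 / np.sqrt(sample_size))
print("observed std of sample means:", sample_means.std())
```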
Q. What is the difference between Machine Learning and Deep Learning?
Machine Learning is a technique to learn from data and then apply what has been learnt to make an informed decision. The main difference between deep learning and machine learning is that machine-learning models become better progressively, but the model still needs some guidance: if a machine-learning model returns an inaccurate prediction, the programmer needs to fix that problem explicitly, whereas a deep-learning model corrects it by itself.
> Machine Learning can perform well with small datasets. | Deep Learning does not perform as well with smaller datasets.
> Machine Learning generally breaks the problem into smaller chunks, solves them, and then combines the results. | Deep Learning generally focuses on solving the problem end to end.
> Results are more interpretable. | Results may be more accurate but less interpretable.
> Solves comparatively less complex problems. | Solves more complex problems.
Though traditional ML algorithms solve a lot of our cases, they are not useful when working with high-dimensional data, that is, when we have a large number of inputs and outputs. For example, in the case of handwriting recognition, we have a large amount of input, with different types of inputs associated with different styles of handwriting. The second major challenge is to tell the computer which features it should look for that will play an important role in predicting the outcome, as well as achieving better accuracy while doing so.
Typical deep-learning applications include image recognition and object detection, and a commonly used activation function is the sigmoid (logistic) function.
Q. What is Gradient Descent? What is its mathematical expression?
Gradient descent is an optimisation algorithm used to minimise some function by iteratively moving in the direction of steepest descent, as defined by the negative of the gradient. In machine learning, we use gradient descent to update the parameters of our model. Parameters refer to the coefficients in linear regression and the weights in neural networks.
The size of these steps is called the learning rate. With a high learning rate, we can cover more ground with each step, but we risk overshooting the lowest point since the slope of the hill is constantly changing. With a very low learning rate, we can confidently move in the direction of the negative gradient because we are recalculating it so frequently. A lower learning rate is more precise, but calculating the gradient is time-consuming, so it will take a very long time to get to the bottom.
Math
Given the linear model prediction y = m·x + b, the mean squared error cost function is:
f(m, b) = (1/N) Σ (yi − (m·xi + b))²
Now let's run gradient descent using our new cost function. There are two parameters in the cost function we can control: m (weight) and b (bias). Since we need to consider the impact each one has on the final prediction, we use partial derivatives. We calculate the partial derivative of the cost function with respect to each parameter and store the results in a gradient.
Math
f'(m, b) = [df/dm, df/db] = [(1/N) Σ −2·xi·(yi − (m·xi + b)), (1/N) Σ −2·(yi − (m·xi + b))]
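A minimal sketch of gradient descent for linear regression using the cost and gradient above; the toy data, learning rate, and epoch count are illustrative choices:

```python
# A minimal sketch of gradient descent for simple linear regression (MSE cost).
import numpy as np

def train(x, y, lr=0.01, epochs=5000):
    m, b = 0.0, 0.0                      # weight and bias
    n = len(x)
    for _ in range(epochs):
        y_pred = m * x + b
        # Partial derivatives of the MSE cost with respect to m and b
        dm = (-2.0 / n) * np.sum(x * (y - y_pred))
        db = (-2.0 / n) * np.sum(y - y_pred)
        # Move in the direction of the negative gradient
        m -= lr * dm
        b -= lr * db
    return m, b

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0                        # toy data with true relationship y = 2x + 1
m, b = train(x, y)
print(f"learned m = {m:.3f}, b = {b:.3f}")   # should approach 2 and 1
```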
An advantage of gradient descent is that it does not need any special mention of the features of the function to be learned.
He initialisation:
1. Draw a sample from a standard normal distribution.
2. Multiply the sample with the square root of (2/ni), where ni is the number of input units for that layer.
Xavier initialisation:
1. Draw a sample from a standard normal distribution.
2. Multiply the sample with the square root of (1/ni), where ni is the number of input units for that layer.
Q13: What is an optimiser in deep learning, and which one is the best?
Gradient Descent
θ = θ − η·∇J(θ; x, y)
Where θ is the weight parameter, η is the learning rate, and ∇J(θ; x, y) is the gradient of the cost function with respect to the weight parameter θ.
In batch gradient descent, we use the entire dataset to compute the gradient of the cost function for each iteration of gradient descent and then update the weights.
Mini-batch gradient descent is widely used, converges faster, and is more stable.
Q. What is an Autoencoder?
An autoencoder has an input layer, a hidden layer which is also known as the encoding layer, and a decoding layer. The network is trained to reconstruct its inputs, which forces the hidden layer to try to learn good representations of the inputs.
An autoencoder neural network is an unsupervised machine-learning algorithm that applies backpropagation, setting the target values to be equal to the inputs. An autoencoder is trained to attempt to copy its input to its output. Internally, it has a hidden layer which describes a code used to represent the input.
Autoencoder Components:
1- Encoder: In this, the model learns how to reduce the input dimensions and compress the input data into an encoded representation.
2- Code: This part of the network contains the reduced (compressed) representation of the input that is fed into the decoder.
3- Decoder: In this, the model learns how to reconstruct the data from the encoded representation so that it is as close to the original input as possible.
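A minimal Keras sketch of a fully connected autoencoder (encoder plus decoder), assuming TensorFlow/Keras is available; the layer sizes and dummy data are hypothetical:

```python
# A minimal sketch of a fully connected autoencoder in Keras.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784          # e.g. flattened 28x28 images
encoding_dim = 32        # size of the compressed code

inputs = keras.Input(shape=(input_dim,))
encoded = layers.Dense(encoding_dim, activation="relu")(inputs)    # encoder
decoded = layers.Dense(input_dim, activation="sigmoid")(encoded)   # decoder

autoencoder = keras.Model(inputs, decoded)
# Targets are set equal to the inputs, so the network learns to reconstruct them
autoencoder.compile(optimizer="adam", loss="mse")

x_train = np.random.rand(1000, input_dim).astype("float32")   # dummy data
autoencoder.fit(x_train, x_train, epochs=5, batch_size=64, verbose=0)
```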
Types of Autoencoders:
Convolutional layers are the major building blocks which are used in
convolutional neural networks.
1. Input Layer: It holds the raw input image with width 32, height 32, and depth 3.
2. Convolution Layer: It computes the output volume by taking the dot product between the filters and patches of the input image. If we use 12 filters, the resultant volume will be of dimension 32 x 32 x 12.
3. Activation Layer: It applies an element-wise activation function, such as ReLU, to the output of the convolution layer; the volume remains 32 x 32 x 12.
4. Pool Layer: This layer is periodically inserted within the convnet, and its main function is to reduce the size of the volume, which makes the computation faster, reduces memory, and also prevents overfitting. Two common types of pooling layers are max pooling and average pooling. If we use a max pool with 2 x 2 filters and stride 2, the resultant volume will be of dimension 16 x 16 x 12.
Pooling Layer
The most common form is a pooling layer with filters of size 2 x 2 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations. Every MAX operation would, in this case, be taking a max over four numbers (a little 2 x 2 region in some depth slice). The depth dimension remains unchanged.
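A minimal Keras sketch mirroring the stack described above (32 x 32 x 3 input, a convolution with 12 filters, ReLU, and 2 x 2 max pooling with stride 2), assuming TensorFlow/Keras is available:

```python
# A minimal sketch of the CNN layer stack described above; model.summary()
# confirms the 32x32x12 convolution output and the 16x16x12 pooled output.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),                                       # input layer
    layers.Conv2D(12, kernel_size=3, padding="same", activation="relu"),  # 32x32x12
    layers.MaxPooling2D(pool_size=2, strides=2),                          # 16x16x12
])
model.summary()
```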
LeNet in 1998
AlexNet in 2012
VGG in 2014
VGG was submitted in 2014 and became the runner-up in the ImageNet contest that year. It is widely used because its architecture is simpler and more uniform than AlexNet's.
GoogleNet in 2014
In 2014, several great models were developed like VGG, but the winner of
the ImageNet contest was GoogleNet.
ResNet in 2015
There are 152 layers in the Microsoft ResNet. The authors showed empirically that if you keep on adding layers, the error rate keeps decreasing, in contrast to "plain nets", where adding more layers resulted in higher training and test errors.
Learning Rate
The learning rate controls how much we adjust the weights with respect to the loss gradient. The learning rate is usually initialised to a small value and tuned as a hyperparameter.
The lower the value of the learning rate, the slower the convergence to the global minimum.
Very high values of the learning rate may not allow gradient descent to converge, because the updates overshoot the minimum.
Since our goal is to minimise the cost function to find the optimised values for the weights, we run multiple iterations with different weights and calculate the cost to arrive at the minimum cost.
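A minimal sketch of how the learning rate affects convergence, using gradient descent on the toy cost f(w) = w² (the learning-rate values are illustrative):

```python
# A minimal sketch: gradient descent on f(w) = w**2 with different learning rates.
def gradient_descent(lr, steps=25, w=5.0):
    for _ in range(steps):
        grad = 2 * w          # derivative of w**2
        w = w - lr * grad
    return w

print("small lr (0.01):", gradient_descent(0.01))   # converges slowly toward 0
print("good lr  (0.1):", gradient_descent(0.1))     # converges quickly toward 0
print("large lr (1.1):", gradient_descent(1.1))     # diverges (overshoots the minimum)
```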
----------------------------------------------------------------------------------------------------
------------------------