
2marks ML

The document covers various fundamental concepts in machine learning, including definitions of machine learning, reinforcement learning, and hyperparameter tuning. It explains gradient and gradient descent as optimization techniques, compares linear and logistic regression, and discusses the lazy learning nature of the KNN algorithm. Additionally, it touches on ensemble learning and the workings of the perceptron model.


1) Define Machine Learning with a suitable example.

2) What is meant by Reinforcement Learning?

3) What is a gradient? How is Gradient Descent useful in Machine Learning?
Gradient:
 A gradient represents the rate of change of a function with respect to its input variables. In other words, it tells us how much the function value changes as we move along each input dimension.

Gradient Descent: Purpose and Usage
 Gradient descent is an optimization technique used to find the minimum (or maximum) of a function.
 In machine learning, we often use gradient descent to minimize a cost function (also known as an error or loss function).
 Here is how it works:
1. Cost Function:
 A model has parameters (coefficients) that affect its predictions.
 The cost function measures the discrepancy between the model's predictions and the actual outcomes.
2. Objective:
 The goal is to find the parameter values that minimize the cost.
3. Iterative Process:
 Start with initial parameter values.
 Calculate the gradient of the cost function with respect to each parameter.
 Update the parameters in the direction that reduces the cost (opposite to the gradient).
 Repeat until convergence (when the cost reaches a minimum).
4. Learning Rate:
 The learning rate controls the step size during parameter updates.
 A smaller rate ensures stability but may slow convergence; a larger rate can overshoot the minimum.
5. Applications:
 Linear regression, logistic regression, neural networks, and other models use gradient descent for training.
 It helps models learn from data by adjusting parameters effectively.
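The iterative process above can be sketched in a few lines of Python. This is an illustrative example (not part of the original notes) that minimizes the simple function f(x) = x^2, whose gradient is 2x:

```python
# Minimal gradient-descent sketch: minimize f(x) = x**2.
# Each step moves x in the direction opposite to the gradient,
# scaled by the learning rate.

def gradient_descent(start, learning_rate=0.1, steps=100):
    x = start
    for _ in range(steps):
        grad = 2 * x                   # gradient of f(x) = x**2
        x = x - learning_rate * grad   # step against the gradient
    return x

x_min = gradient_descent(start=5.0)
print(x_min)  # converges toward the minimum at x = 0
```

Note how a smaller `learning_rate` would need more steps to converge, while a rate above 1.0 would make the updates overshoot and diverge, matching point 4 above.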

4) Compare and Contrast Linear Regression and Logistic Regression.

5) Define Ensemble Learning.


6) Is KNN a Lazy Learning Algorithm? Justify your answer.
KNN is a classification algorithm, also called the K-Nearest Neighbor classifier. K-NN is a lazy learner because it does not learn a discriminative function from the training data; it memorizes the training dataset instead. There is no training time in K-NN, but the prediction step is expensive: each time we want to make a prediction, K-NN searches for the nearest neighbors in the entire training set. In contrast, an eager learner has a model fitting (training) step, while a lazy learner has no training phase.
7) State Hyperparameter Tuning.
A machine learning model is a mathematical model with several parameters that need to be learned from the data. By training the model on existing data, we can fit the model parameters.
However, there is another kind of parameter, known as a hyperparameter, that cannot be learned directly from the regular training process. Hyperparameters are usually fixed before training begins. They express important properties of the model, such as its complexity or how fast it should learn.
Hyperparameter Tuning
Hyperparameter tuning is the process of selecting the optimal values for a machine learning model's hyperparameters. Hyperparameters are settings that control the learning process of the model, such as the learning rate, the number of neurons in a neural network, or the choice of kernel in a support vector machine. The goal of hyperparameter tuning is to find the values that lead to the best performance on a given task.
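As a minimal illustration (not from the original notes), the simplest tuning strategy is a grid search: evaluate each candidate hyperparameter value and keep the one with the best score. The `validation_loss` function here is a hypothetical stand-in for training and evaluating a real model.

```python
# Minimal grid-search sketch over one hyperparameter.
# validation_loss is a made-up stand-in that pretends a
# learning rate of 0.1 gives the lowest validation loss.

def validation_loss(learning_rate):
    return (learning_rate - 0.1) ** 2

candidates = [0.001, 0.01, 0.1, 1.0]
best = min(candidates, key=validation_loss)
print(best)  # -> 0.1
```

In practice, libraries such as scikit-learn provide `GridSearchCV` to automate this search with cross-validation over multiple hyperparameters at once.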
8) How does Perceptron work?
