ML.3-Regression Techniques

1. The document discusses various machine learning techniques, including linear regression, nonlinear functions, derivatives, and gradient descent.
2. It explains linear and nonlinear regression as well as linear and nonlinear functions. Various activation functions are also covered, including sigmoid, tanh, ReLU, and others.
3. Derivatives and how to find the extreme points of functions are explained. Gradient descent is then introduced as an optimization algorithm for minimizing loss functions in machine learning models.

Humanity – Service – Liberation

Regression Techniques
Machine Learning
CONTENTS

• Linear Regression
• Linear Problems
• Gradient Descent

Linear Regression

• Linear Problems
• Linear Regression
• Nonlinear Regression
• Derivatives and Finding Extreme Points
• Gradient Descent

Linear and Nonlinear Functions
• A linear function represents a systematic, constant-rate increase or decrease and
can be drawn as a straight line.
• Example: Linear Regression (a minimal sketch follows below)
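As an illustration of fitting a straight line to data, here is a minimal linear regression sketch in NumPy. The synthetic data, its true slope and intercept, and the noise level are assumptions made up for this example, not taken from the slides.

```python
import numpy as np

# Synthetic 1-D data: y = 2x + 1 plus Gaussian noise (assumed for illustration)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=50)

# Design matrix with a bias column, then ordinary least squares
X = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"fitted line: y = {w:.3f} * x + {b:.3f}")
```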
Linear and Nonlinear Functions

[Figure: searching for the minimal loss on a loss function curve]
Linear and Nonlinear Functions
• A non-linear function is one whose output does not increase or decrease in a
systematic or sequential way.
• Activation functions are an important concept in machine learning, especially in
deep learning. They decide whether a neuron should be activated and introduce a
non-linear transformation into the neural network. Their main purpose is to
transform a neuron's input signal into an output that is fed to the neurons in
the next layer.
• Example: Activation Functions
Linear and Nonlinear Functions

[Figure: overview of common activation functions and their advantages]
Linear and Nonlinear Functions
• Sigmoid Activation Function:
  • Output range: [0, 1]
  • Not zero-centered
  • Requires an exponential operation

• Hyperbolic Tangent Activation Function (tanh):
  • Output range: [-1, 1]
  • Zero-centered

• Rectified Linear Unit Activation Function (ReLU):
  • Does not saturate in the positive region
  • Converges faster than some other activation functions
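The slides do not print the formulas, so here is a small NumPy sketch of the three functions above using their standard definitions; it is a reference sketch added here, not part of the original deck.

```python
import numpy as np

def sigmoid(x):
    # sigma(x) = 1 / (1 + e^-x); output in (0, 1), not zero-centered
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # tanh(x) = (e^x - e^-x) / (e^x + e^-x); output in (-1, 1), zero-centered
    return np.tanh(x)

def relu(x):
    # ReLU(x) = max(0, x); does not saturate for positive inputs
    return np.maximum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sigmoid(x), tanh(x), relu(x), sep="\n")
```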
Linear and Nonlinear Functions
• Leaky ReLU:
  • An improvement over the ReLU activation function
  • Has all the properties of ReLU
  • Never suffers from the dying-ReLU problem

• Maxout:
  • Piecewise linear by construction
  • Never saturates or dies
  • Expensive, since it doubles the number of parameters

• ELU (Exponential Linear Units):
  • No dying-ReLU situation
  • Outputs closer to zero mean than Leaky ReLU
  • More computation because of the exponential function
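Again the formulas are not in the deck; below is a minimal NumPy sketch of these three variants, assuming their usual definitions. The alpha values and the two-piece form of maxout are illustrative choices, not specified in the slides.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # LeakyReLU(x) = x if x > 0 else alpha * x; the small slope avoids dead units
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # ELU(x) = x if x > 0 else alpha * (e^x - 1); outputs are closer to zero mean
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def maxout(x, w1, b1, w2, b2):
    # Maxout over two linear pieces: max(w1*x + b1, w2*x + b2);
    # doubles the parameters relative to a single linear unit
    return np.maximum(w1 * x + b1, w2 * x + b2)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(leaky_relu(x), elu(x), maxout(x, 1.0, 0.0, 0.3, 0.5), sep="\n")
```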
Derivatives and Finding Extreme Points
• Suppose we have a function y = f(x) that depends on x. The derivative of this
function measures the rate at which the value y changes as x changes.
• In geometry, slope represents the steepness of a line. It answers the question: how
much does y, or f(x), change for a given change in x?
• Using this definition we can easily calculate the slope between two points. But
what is the slope at a single point on the line, rather than between two points?
There is no obvious "rise over run" to calculate in that case; derivatives answer
this question (see the definition below).
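For reference, the standard limit definition that formalizes this idea (added here; it is not printed in the original slides):

```latex
f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
```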
Derivatives and Finding Extreme Points

Finding Extreme Points

[Figure: extreme points of a function, located where the derivative is zero]
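The slide content here was graphical; for completeness, these are the standard first- and second-derivative conditions such a figure typically illustrates:

```latex
f'(x_0) = 0 \;\text{(critical point)}, \qquad
f''(x_0) > 0 \Rightarrow \text{local minimum}, \qquad
f''(x_0) < 0 \Rightarrow \text{local maximum}.
```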
Derivatives and Finding Extreme Points

Partial derivative

[Figure: partial derivative of a multivariable function, holding the other variables fixed]
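For a function of several variables, the partial derivative measures the rate of change along one variable while holding the others fixed; the gradient collects these into a vector. These are the standard definitions, added here for reference:

```latex
\frac{\partial f}{\partial x_i}(\mathbf{x}) =
\lim_{h \to 0} \frac{f(x_1,\dots,x_i+h,\dots,x_n) - f(x_1,\dots,x_n)}{h},
\qquad
\nabla f = \left( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n} \right)
```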
Gradient Descent
• A gradient is a vector that stores the partial derivatives of a multivariable
function. It helps us calculate the slope at a specific point on a curve for
functions with multiple independent variables.
• The gradient vector generates the line orthogonal to the tangent hyperplane.
You then take the opposite of this vector (hence "descent") and multiply it by
the learning rate lr.
Gradient Descent
• The projection of this vector onto the parameter space (here: the x-axis) gives
you the new (updated) parameter. You then repeat this operation several times to
move down the cost (error) function, with the goal of reaching a value of w where
the cost function is minimal.
• The parameter is thus updated as follows at each step:
parameter <-- parameter - lr * gradient
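Here is a minimal sketch of this update rule, fitting the slope w of y = w*x by gradient descent on a mean-squared-error loss. The toy data, learning rate, and step count are assumptions chosen for the example:

```python
import numpy as np

# Toy data generated from y = 3x (assumed for the example)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 * x

w = 0.0    # initial parameter
lr = 0.01  # learning rate

for step in range(200):
    y_hat = w * x
    # MSE loss L(w) = mean((w*x - y)^2), so dL/dw = mean(2 * (w*x - y) * x)
    grad = np.mean(2.0 * (y_hat - y) * x)
    # The update rule from the slide: parameter <-- parameter - lr * gradient
    w = w - lr * grad

print(f"learned w = {w:.4f} (target 3.0)")
```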
Gradient Descent

[Figure: successive gradient descent steps moving down the cost function toward its minimum]
SUMMARY

• Linear Regression
• Linear and Nonlinear Functions
• Derivatives and Finding Extreme Points
• Gradient Descent

Humanity – Service – Liberation

Enjoy the Course…!

