Machine Learning - Unit 1
Machine Learning
Ms. Deepa A
Assistant Professor
Machine Learning
• Algorithms with the ability to learn without being explicitly programmed.
✔ A program is said to learn when its performance at some task (T), as measured by some performance measure (P), improves with experience (E).
• Kaggle (an online platform for machine-learning datasets and competitions)
UNIT 1
INTRODUCTION
• What is Machine Learning?
• Types of Machine Learning
• Supervised Learning: Regression and Classification
• Machine Learning Process
• Some Terminology
• Testing ML algorithms
• Turning data into probabilities
• Naïve Bayes Classifier
• The Brain and the Neuron
• Neural Networks
• Perceptron
Types of Machine Learning
Example (regression): given a set of (x, y) data points, find y when x = 0.44.
Contd..
• Regression is a statistical technique that relates a dependent variable to one or more independent variables.
• The ultimate goal of a regression algorithm is to fit a best-fit line or curve through the data (a small sketch of this follows).
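A minimal sketch of this idea, using NumPy's least-squares polynomial fit on made-up (x, y) points (the data values are illustrative, not from the lecture); it then predicts y at x = 0.44 as in the earlier example:

import numpy as np

# Illustrative (x, y) training points; the actual data from the lecture is not reproduced here.
x = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
y = np.array([0.1, 0.35, 0.72, 0.95, 1.30, 1.55])

# Fit a straight line (degree-1 polynomial) by least squares: the "best-fit line".
slope, intercept = np.polyfit(x, y, deg=1)

# Use the fitted line to predict y at x = 0.44.
prediction = slope * 0.44 + intercept
print(f"Predicted y at x = 0.44: {prediction:.3f}")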
Parameter and Model Selection
• Selecting the best algorithm and model architecture suited for a particular task
• Outputs: the output vector y
• Activation function: g(·)
• Error: E
Weight Space
• Since we are using a neural network to implement the solution, we need to find the distance between the input and the neuron in weight space. This is computed using the Euclidean distance between the input vector and the neuron's weight vector (written out below).
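In the usual notation (assumed here, since the slide leaves the formula implicit), with x the input vector and w the neuron's weight vector:

d(x, w) = √( Σ_i (x_i - w_i)² )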
1. True Positive: an instance for which both the predicted and actual values are positive.
2. True Negative: an instance for which both the predicted and actual values are negative.
3. False Positive: an instance for which the predicted value is positive but the actual value is negative.
4. False Negative: an instance for which the predicted value is negative but the actual value is positive.
• Confusion Matrix
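A minimal sketch of counting these four quantities and arranging them as a 2×2 confusion matrix (the labels and variable names are illustrative, not from the slides):

# Illustrative binary labels: 1 = positive, 0 = negative.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # true negatives
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # false positives
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # false negatives

# 2x2 confusion matrix: rows = actual class, columns = predicted class.
confusion = [[tp, fn],
             [fp, tn]]
print(confusion)                              # [[3, 1], [1, 3]]
print("accuracy:", (tp + tn) / len(actual))   # 0.75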
• Too large a value for the learning rate tends to make the network unstable, so that it never settles down, while too small a value means learning takes a long time. We therefore use a moderate learning rate, typically 0.1 < η < 0.4, depending on how much error we expect in the inputs.
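For reference, η scales each weight change in the standard perceptron learning rule (the rule itself is not spelled out on this slide):

w_ij ← w_ij - η · (y_j - t_j) · x_i

where x_i is the i-th input, y_j the neuron's output, and t_j the target. A larger η therefore means larger jumps in weight space at every update step.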
Bias Input
• When we discussed the McCulloch and Pitts neuron, we gave each neuron a firing threshold θ that determined what value it needed before it should fire.
• This threshold should be adjustable, so that we can change the value
that the neuron fires at.
• In a network, if all of the input values are zero, the sum reaching the neuron is zero no matter what values the weights were set to, so adjusting the weights alone cannot change the output.
• Instead, suppose we fix the threshold of every neuron at zero. We then add an extra input weight to the neuron, with the value of the input to that weight always being fixed (usually ±1).
Contd..
• Usually we take -1, so that the neuron still receives a non-zero input even when all other inputs are zero. This input is called a bias node.
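A minimal sketch of adding that bias input in code, assuming the inputs are held in a NumPy array (the example inputs are simply the four patterns of a two-input logic function):

import numpy as np

# Illustrative input matrix: 4 examples, 2 features each.
inputs = np.array([[0, 0],
                   [0, 1],
                   [1, 0],
                   [1, 1]])

# Add the bias input: an extra column fixed at -1 for every example.
# The threshold is then held at zero, and the bias weight is learned like any other weight.
bias_column = -np.ones((inputs.shape[0], 1))
inputs_with_bias = np.concatenate((inputs, bias_column), axis=1)
print(inputs_with_bias)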
The Perceptron Learning Algorithm
• The perceptron algorithm is divided into two parts: a training phase
and a recall phase
• Training phase: complexity O(Tmn), since the O(mn) pass over the weights is repeated for T training iterations
• Recall phase: complexity O(mn), a single forward pass over the m inputs and n neurons
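A minimal sketch of both phases, assuming a single layer of threshold neurons, a -1 bias input, and the weight update w ← w - η(y - t)x; the names here are my own, not from the slides:

import numpy as np

def train(inputs, targets, eta=0.25, iterations=10):
    # Training phase: repeat the O(m*n) forward pass and weight update T times -> O(T*m*n).
    inputs = np.concatenate((inputs, -np.ones((inputs.shape[0], 1))), axis=1)  # append -1 bias input
    weights = np.random.rand(inputs.shape[1], targets.shape[1]) * 0.1 - 0.05   # small random initial weights
    for _ in range(iterations):
        outputs = np.where(inputs @ weights > 0, 1, 0)    # threshold activation g(.)
        weights -= eta * inputs.T @ (outputs - targets)   # w <- w - eta * (y - t) * x
    return weights

def recall(inputs, weights):
    # Recall phase: a single forward pass over the m*n weights -> O(m*n).
    inputs = np.concatenate((inputs, -np.ones((inputs.shape[0], 1))), axis=1)
    return np.where(inputs @ weights > 0, 1, 0)

# Illustrative use on the four inputs of a two-input logic function (OR-style targets assumed):
# w = train(np.array([[0, 0], [0, 1], [1, 0], [1, 1]]), np.array([[0], [1], [1], [1]]))
# print(recall(np.array([[0, 0], [0, 1], [1, 0], [1, 1]]), w))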
Example of Perceptron Learning: Logic Functions
Contd..
• Initially assign the weights small random values: w0 = -0.05, w1 = -0.02, w2 = 0.02, where w0 is the weight on the -1 bias input.
• For the input (0, 0), the value reaching the neuron is (-0.05)*(-1) + (-0.02)*0 + 0.02*0 = 0.05. Since 0.05 > 0, the neuron fires and the output is 1, which is wrong: the target for (0, 0) is 0.
• Hence we apply the perceptron learning rule with learning rate η = 0.25 (the value implied by the updated weights): w0 ← -0.05 - 0.25*(1 - 0)*(-1) = 0.2, while w1 and w2 are unchanged because their inputs were 0.
• Re-evaluating (0, 0) with the new weights: 0.2*(-1) + (-0.02)*0 + 0.02*0 = -0.2. Since -0.2 ≤ 0, the neuron does not fire and the output is 0, which is now correct.
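A minimal sketch that reproduces this single update step, assuming the bias input is -1, the target for (0, 0) is 0, and η = 0.25 (the value implied by w0 moving from -0.05 to 0.2):

eta = 0.25
x = [-1, 0, 0]                 # bias input -1, then the inputs (0, 0)
w = [-0.05, -0.02, 0.02]       # initial weights w0, w1, w2
target = 0

# Forward pass: weighted sum and threshold activation.
h = sum(wi * xi for wi, xi in zip(w, x))      # 0.05
y = 1 if h > 0 else 0                         # fires -> 1 (wrong)

# Perceptron learning rule: w_i <- w_i - eta * (y - t) * x_i
w = [wi - eta * (y - target) * xi for wi, xi in zip(w, x)]   # w0 becomes 0.2

# Re-evaluate the same input with the updated weights.
h = sum(wi * xi for wi, xi in zip(w, x))      # -0.2
y = 1 if h > 0 else 0                         # does not fire -> 0 (correct)
print(w, y)                                   # [0.2, -0.02, 0.02] 0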