ML Unit 1
Introduction To Machine
Learning
By
D JAYANARAYANA REDDY
Assistant Professor
Department of CSE
UNIT - 1
1. Introduction
What is Machine Learning?
The field of study that gives computers the ability to learn without
being explicitly programmed.
Ex: online shopping -- the system adapts to the user based on data.
(Figure: Deep Learning is a subset of Machine Learning, which in turn is a subset of Artificial Intelligence.)
Well Posed Learning Problems
• A computer program is said to learn from experience 'E' with respect to some task 'T' and performance measure 'P',
• if its performance at T, as measured by P, improves with experience E (learning by experience).
• Examples: 1) Checkers playing problem
2) Handwriting recognition problem
3) Robot driving learning problem
Problem | Task (T) | Performance (P) | Experience (E)
Handwriting recognition learning | Classifying handwritten images and text | Better classification | A database of handwritten text
Robot driving learning problem | Driving the car on a 4-lane highway | Source to destination, average distance travelled (long & safe) | Images, vehicles on road
2. Perspectives and Issues of Machine Learning
• The perspective of machine learning involves searching a very large space of
possible hypotheses to determine the one that best fits the observed data and
any prior knowledge held by the learner.
Issues in Machine Learning
1. What algorithm should be used?
2. Which algorithms perform best for which types of problems?
3. How much training and testing data is sufficient?
4. What kind of methods should be used?
5. What methods can be used to reduce learning overhead?
6. Which methods should be used for which types of data?
Designing a Learning System
• To build a successful learning system, a proper design is needed; the following steps
should be followed:
1. Choosing the training experience
2. Choosing the target function
3. Choosing a representation for target function
4. Choosing a learning algorithm for approximating target function.
5. Final Design
Step 1: Choosing a Training Experience
• In choosing a training experience, 3 attributes are considered:
1. Type of feedback -> direct/indirect
2. Degree of learner control
3. Distribution of examples
Type of feedback: whether the training experience provides
direct or indirect feedback regarding the choices made by the performance
system -- e.g., learning to drive.
Degree: the degree to which the learner controls the sequence of training
examples -- with a trainer, partially guided, or completely self-trained.
Distribution of examples: How well it represents the distribution of
examples over which the performance of final system is measured.
Step 2: Choosing a Target Function
• What type of knowledge will be learnt, and how will it be used by the performance system? The
running example is the checkers game.
• The set of all moves allowed by the rules (made diagonally) is called the set of legal moves. In checkers:
• Travel only in forward direction
• Only one move per chance
• Only in diagonal direction
• Jump over opponent
• Target function -> V(b)
• Board state -> b
• Set of legal board states -> B
1. If b is a final board state that is won, then V(b) = 100
2. If b is a final board state that is lost, then V(b) = -100
3. If b is a final board state that is a draw, then V(b) = 0
4. If b is not a final board state, then V(b) = V(b'), where b' is the best final board state that can be reached starting from b and playing optimally until the end of the game.
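The four rules above can be sketched directly in Python. The helpers passed in (`is_final`, `winner`, `best_successor`) are hypothetical placeholders, and the sketch assumes the learner plays black; a real checkers program would implement them with full game-tree search.

```python
# Sketch of the ideal target function V(b), assuming hypothetical
# helpers: is_final(b), winner(b) -> "black"/"red"/None (draw),
# and best_successor(b) -> best reachable state under optimal play.
def V(b, is_final, winner, best_successor):
    if is_final(b):
        w = winner(b)
        if w == "black":          # final state that is won
            return 100
        if w == "red":            # final state that is lost
            return -100
        return 0                  # draw
    # Not final: value of the best final state reachable from b
    return V(best_successor(b), is_final, winner, best_successor)
```

Note that rule 4 makes this definition non-operational in practice: evaluating it requires searching to the end of the game, which is why the next step approximates V with a simpler representation.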
Step 3: Choosing a representation for Target Function
• For any board state b, the approximate target function value V̂(b) is calculated as a linear combination of the following board features:
• Features:
x1 - > No. of black pieces on board
x2 -> No. of red pieces on board
x3 ->No. of black kings on board
x4 -> No. of red kings on board
x5 -> No. of black pieces threatened by red
(blacks which can be beaten by red)
x6 -> No. of red pieces threatened by black
(red which can be beaten by black)
V̂(b) = w0 + w1·x1 + w2·x2 + w3·x3 + w4·x4 + w5·x5 + w6·x6
Where w1 to w6 are numerical coefficients (weights) of each feature
and w0 is an additive constant.
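A minimal sketch of this linear representation; the weight values in the example are made up for illustration only.

```python
def v_hat(weights, features):
    """Linear board evaluation: w0 + w1*x1 + ... + w6*x6.

    weights  - [w0, w1, ..., w6]
    features - [x1, ..., x6] computed from the board state b
    """
    value = weights[0]                      # additive constant w0
    for w, x in zip(weights[1:], features):
        value += w * x
    return value

# Illustrative (made-up) weights, applied to the features of the
# winning board state b = (x1=3, x2=0, x3=1, x4=0, x5=0, x6=0).
weights = [0.5, 1.0, -1.0, 2.0, -2.0, -0.5, 0.5]
b_features = [3, 0, 1, 0, 0, 0]
```

With these weights, `v_hat(weights, b_features)` evaluates to 0.5 + 3 + 2 = 5.5; the learning step below adjusts the weights so such winning states score near +100.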
Step 4: Choosing a Learning algorithm for approximating the Target
Function
• To learn the target function V̂ we need a set of training examples, each describing a particular
board state b and a training value Vtrain(b).
• Ordered pair = <b, Vtrain(b)>
• Example: black won the game (i.e., x2 = 0, which means no red pieces remain)
• b = (x1=3, x2=0, x3=1, x4=0, x5=0, x6=0)
• <(x1=3, x2=0, x3=1, x4=0, x5=0, x6=0), +100>. We need to do 2 steps in this phase.
1. Estimating Training Values
At every step, the training value of a board state is estimated from its successor (the board state after our move and the opponent's next move):
Vtrain(b) ← V̂(Successor(b))
That is, the current board state is assigned the approximate value of the next board state it leads to, where V̂ represents the learner's current approximation of the target function.
2. Adjusting the Weights
There are several algorithms for finding the weights of a linear function.
Here we use the LMS (Least Mean Squares) rule to minimize the
error.
If the error is 0, no weight change is needed.
If the error is positive, each weight is increased in proportion to its feature value.
If the error is negative, each weight is decreased in proportion to its feature value.
Error E = Vtrain(b) − V̂(b)
For each weight: wi ← wi + η · E · xi, where η is a small learning-rate constant.
Step 5: Final Design
• The final design of our checkers learning system can be naturally described by four
distinct program modules that represent the central components in many learning
systems: the Performance System, the Critic, the Generalizer, and the Experiment
Generator. These four modules are summarized in the figure below.
2. Concept Learning
Concept learning can be viewed as the task of searching through a large space of
hypotheses implicitly defined by the hypothesis representation.
Example:
Features (Binary Valued Attributes)
Size – large, small -> x1
Color – black, blue -> x2
Screentype – Flat, Folded -> x3
Shape – Square, rectangle -> x4
Concept = <x1, x2, x3, x4>
Tablet = <large, black, flat, square>
Smart phone = < small, blue, folded, rectangle>
Number of possible instances = 2⁴ = 16 (4 attributes, each with 2 values)
Where <ɸ, ɸ, ɸ, ɸ> -> Reject all (most specific hypothesis)
<?, ?, ?, ?> -> Accept all (most general hypothesis)
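A small sketch of how such hypotheses classify instances: '?' accepts any value for an attribute, while 'ɸ' accepts none. The function name `matches` is our own choice.

```python
def matches(hypothesis, instance):
    """True if the instance satisfies every attribute constraint."""
    for h, x in zip(hypothesis, instance):
        if h == "ɸ" or (h != "?" and h != x):
            return False
    return True

# The two example concepts from above: <x1, x2, x3, x4>
tablet = ("large", "black", "flat", "square")
accept_all = ("?", "?", "?", "?")     # most general hypothesis
reject_all = ("ɸ", "ɸ", "ɸ", "ɸ")     # most specific hypothesis
```

For instance, `matches(accept_all, tablet)` is True, `matches(reject_all, tablet)` is False, and the hypothesis `("large", "?", "?", "?")` accepts the tablet but not the smart phone.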
Age | Competition | Type | Profit
mid | No | h/w | Up
mid | No | s/w | Up
new | Yes | s/w | Up
new | No | h/w | Up
new | No | s/w | Up
Information Gain(S, Outlook) = H(S) − I(Outlook) = 0.94 − 0.693 = 0.247
Second Attribute - Temperature
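The 0.94 and 0.247 figures come from the standard 14-example play-tennis data (9 "yes", 5 "no", with Outlook splitting it into Sunny 2+/3−, Overcast 4+/0−, Rain 3+/2−). A short sketch that reproduces them:

```python
from math import log2

def entropy(pos, neg):
    """H(S) for a set with pos positive and neg negative examples."""
    total = pos + neg
    h = 0.0
    for count in (pos, neg):
        if count:  # 0 * log2(0) is taken as 0
            p = count / total
            h -= p * log2(p)
    return h

h_s = entropy(9, 5)                           # ≈ 0.940
# Weighted entropy of the subsets after splitting on Outlook
i_outlook = ((5 / 14) * entropy(2, 3)         # Sunny
             + (4 / 14) * entropy(4, 0)       # Overcast (pure)
             + (5 / 14) * entropy(3, 2))      # Rain
gain = h_s - i_outlook                        # ≈ 0.247
```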
Here, when Outlook = Rain and Wind = Strong, it is a pure class of category
"no". And When Outlook = Rain and Wind = Weak, it is again a pure class
of category "yes".
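Because both branches are pure, the Rain subtree needs only a single Wind test. As a sketch (the nested-dict representation is our own choice, not from the slides):

```python
# Subtree for Outlook = Rain: Wind = Strong -> "no", Wind = Weak -> "yes"
rain_subtree = {"Wind": {"Strong": "no", "Weak": "yes"}}

def classify(tree, example):
    """Walk a nested-dict decision tree down to a leaf label."""
    if not isinstance(tree, dict):
        return tree                      # reached a leaf
    attribute = next(iter(tree))         # attribute tested at this node
    return classify(tree[attribute][example[attribute]], example)
```

For a rainy day, `classify(rain_subtree, {"Wind": "Strong"})` returns "no" and `classify(rain_subtree, {"Wind": "Weak"})` returns "yes", matching the pure classes above.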
Appropriate Problems for Decision Tree Learning
Decision Tree is best suited to problems with the following characteristics:
1. Instances are represented by attribute-value pairs.
2. The target function has discrete output values.
3. The training data may contain errors.
4. The training data may contain missing attribute values.
7. Hypothesis space search in Decision Tree Learning