
UNIT-1

INTRODUCTION
WELL-POSED LEARNING PROBLEMS
• Well Posed Learning Problem – A computer
program is said to learn from experience E with
respect to some class of tasks T and performance
measure P, if its performance at tasks in T , as
measured by P , improves with experience E.
• Any problem can be a well-posed learning
problem if it has three features –
– Task(T)
– Performance Measure (P)
– Experience (E)
• Example:
A checkers learning problem
– Task(T) – Playing checkers game
– Performance Measure(P) – percent of games won
against opponent
– Training Experience(E) – playing Practice games
against itself
DESIGNING A LEARNING SYSTEM
• According to Arthur Samuel, “Machine
Learning enables a machine to automatically
learn from data, improve performance from
experience, and predict things without being
explicitly programmed.”
Training Data → ML Algorithm → Logical / Mathematical Model → Output
• Designing a Learning System in Machine Learning :
– Steps for designing Learning system
• Step-1:Choosing the Training Experience
– The data or experience that we feed to the
algorithm has a significant impact on the
success or failure of the model, so the training
data or experience should be chosen wisely.
• Attributes that impact the success or failure of
the model:
– Whether the training experience provides
direct or indirect feedback regarding the choices
made.
– The degree to which the learner controls the
sequence of training examples.
– How well the training experience represents the
distribution of examples over which the final
performance will be measured.
• Step-2: Choosing the target function
According to the knowledge fed to the
algorithm, the machine will choose a
NextMove function that describes which
legal move should be taken.
• Step 3- Choosing a Representation for the Target
function
– Once the algorithm knows all the possible legal
moves, the next step is to choose a representation
for the optimized move, e.g. linear equations,
hierarchical graph representation, tabular form,
etc. The NextMove function will then select, out of
these moves, the one with the highest success rate.
• X1: number of black pieces on the board
• X2: number of white pieces on the board
• X3: number of major black pieces on the board
• X4: number of major white pieces on the board
• X5: number of black pieces threatened by white
• X6: number of white pieces threatened by black
• Step 4- Choosing a Function Approximation
Algorithm:
– An optimized move cannot be chosen from the
training data alone. The learner has to work
through a set of training examples; from these
examples it approximates which moves should
be chosen, and the machine then receives
feedback on the outcome.
• Linear Equation Algorithm
• V = w0 + w1·x1 + w2·x2 + w3·x3 + w4·x4 + w5·x5 + w6·x6
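The linear evaluation function above, together with a weight-tuning step, can be sketched in Python. The LMS update rule, the learning rate eta, and the sample feature values are illustrative assumptions (the LMS rule is the common textbook choice; the slides do not specify one):

```python
# A minimal sketch of the linear evaluation function V from the slide,
# plus an LMS weight-update step. eta and the sample values are assumed.

def evaluate(weights, features):
    """V = w0 + w1*x1 + ... + w6*x6 for a board described by x1..x6."""
    w0, rest = weights[0], weights[1:]
    return w0 + sum(w * x for w, x in zip(rest, features))

def lms_update(weights, features, v_train, eta=0.01):
    """One LMS step: w_i <- w_i + eta * (V_train - V_hat) * x_i, with x0 = 1."""
    v_hat = evaluate(weights, features)
    error = v_train - v_hat
    xs = [1.0] + list(features)
    return [w + eta * error * x for w, x in zip(weights, xs)]

weights = [0.5] * 7                 # w0..w6, arbitrary starting values
board = [3, 4, 1, 2, 0, 1]          # x1..x6 as defined on the slide
print(evaluate(weights, board))     # -> 6.0
weights = lms_update(weights, board, v_train=100)
```

Each update nudges the weights so that the estimated value V moves toward the training value for that board.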
• Step 5- Final Design
– The final design is produced at the end, after the
system has gone through a number of examples:
failures and successes, correct and incorrect
decisions, and what the next step should be.
Perspectives and Issues in Machine
Learning
• ISSUES:
• What algorithms exist for learning general target functions from
specific training examples?
• How much training data is sufficient?
• When and how can prior knowledge held by the learner guide the
process of generalizing from examples?
• What is the best strategy for choosing a useful next training
experience, and how does the choice of this strategy alter the
complexity of the learning problem?
• What is the best way to reduce the learning task to one or more
function approximation problems?
• How can the learner automatically alter its representation to improve
its ability to represent and learn the target function?
Introduction to Concept Learning and the
General-to-Specific Ordering
• Concepts or Categories
– “birds”
– “car”
– “situations in which I should study more in order
to pass the exam”
– Concept
• some subset of objects or events defined over a larger
set, or a boolean valued function defined over this
larger set.
– Learning
• inducing general functions from specific training
examples
– Concept Learning
• acquiring the definition of a general category given a
sample of positive and negative training examples of
the category
A Concept Learning Task
• Target Concept
– “days on which Aldo enjoys water sport”
• Hypothesis
– a vector of 6 constraints over the attributes (Sky,
AirTemp, Humidity, Wind, Water, Forecast)
– each constraint is “?” (any value), a single
required value, or “0” (no value)
– e.g. <?, Cold, High, ?, ?, ?>
• Training examples for the target concept
EnjoySport
• This hypothesis states the condition: AirTemp =
Cold and Humidity = High
• e.g. a test instance <Sunny, Cold, High, Low,
Warm, Same> would be classified Yes
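The hypothesis representation above can be checked mechanically. A minimal sketch, assuming hypotheses are tuples in which "?" matches any value:

```python
# A hypothesis is a tuple of six constraints over
# (Sky, AirTemp, Humidity, Wind, Water, Forecast);
# "?" accepts any value (assumed encoding; "0" would accept none).

def matches(hypothesis, instance):
    """True iff every constraint accepts the corresponding attribute value."""
    return all(c == "?" or c == v for c, v in zip(hypothesis, instance))

h = ("?", "Cold", "High", "?", "?", "?")
print(matches(h, ("Sunny", "Cold", "High", "Strong", "Warm", "Same")))  # True
print(matches(h, ("Sunny", "Warm", "High", "Strong", "Warm", "Same")))  # False
```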

Instance  Sky    AirTemp  Humidity  Wind    Water  Forecast  EnjoySport
A         Sunny  Warm     Normal    Strong  Warm   Same      No
B         Sunny  Warm     High      Strong  Warm   Same      Yes
C         Rainy  Cold     High      Strong  Warm   Change    No
D         Sunny  Warm     High      Strong  Cool   Change    Yes
• Problem: learning the days on which Manikanta
enjoys the sport
• Task T: learn to predict the value of
‘EnjoySport’ for an arbitrary day, based on the
values of the attributes of that day.
• Performance P: Total percent of days
(Enjoysport) correctly predicted
• Experience E: A set of days with given labels
(Enjoy sport : yes/no)
• Given :
– instances (X): the set of items over which the concept is
defined.
– target concept (c) : c : X → {0, 1}
– training examples (positive/negative) : <x,c(x)>
– training set D: available training examples
– set of all possible hypotheses: H
• Determine:
– a hypothesis h in H such that h(x) = c(x) for all x in X
• A hypothesis hi over an instance x:
• hi(x): <x1, x2, x3, x4, x5, x6>
• where x1, x2, x3, x4, x5, x6 are the values of Sky,
AirTemp, Humidity, Wind, Water, and Forecast
• h1 ← 1st row in the table
• h2 ← 2nd row in the table
• h1(x) = 1 for x = <Sunny, Warm, High, Strong, Warm, Same>
• <?, Cold, High, ?, ?, ?>
Inductive Learning Hypothesis
• Inductive Learning Hypothesis
– Any hypothesis found to approximate the target
function well over a sufficiently large set of
training examples will also approximate the
target function well over unobserved examples.

Training datasets / instances → Inductive Learning System → Concept description
Concept Learning as Search
• Issue of Search
– to find the hypothesis that best fits the
training examples
• Sizes of the spaces in EnjoySport
• attributes: Sky, AirTemp, Humidity, Wind, Water,
Forecast (target: EnjoySport, yes or no)
– 3*2*2*2*2*2 = 96: instance space
– 5*4*4*4*4*4 = 5120: syntactically distinct hypotheses
within H
– 1+4*3*3*3*3*3 = 973: semantically distinct hypotheses
• Most general hypothesis = <?, ?, ?, ?, ?, ?>
• Most specific hypothesis = <∅, ∅, ∅, ∅, ∅, ∅> (rejects every instance)
• Search Problem
– efficient search in hypothesis space(finite/infinite)
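The space sizes for EnjoySport can be re-derived with a few lines of Python (Sky has 3 values, the other five attributes have 2 each):

```python
# Re-deriving the quoted space sizes for the EnjoySport task.

from math import prod

values = [3, 2, 2, 2, 2, 2]          # possible values per attribute

instance_space = prod(values)                  # 96 distinct instances
syntactic = prod(v + 2 for v in values)        # each attribute also allows "?" and "0"
semantic = 1 + prod(v + 1 for v in values)     # "?" only, plus the single
                                               # all-empty hypothesis
print(instance_space, syntactic, semantic)     # 96 5120 973
```

Any hypothesis containing even one "0" classifies every instance as negative, which is why all of them collapse into a single semantically distinct hypothesis.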
General-to-Specific Ordering of Hypotheses
• Many algorithms for concept learning organize the search
through the hypothesis space by relying on a general-to-specific
ordering of hypotheses.
• By taking advantage of this naturally occurring structure over the
hypothesis space, we can design learning algorithms that
exhaustively search even infinite hypothesis spaces without
explicitly enumerating every hypothesis.
• Consider two hypotheses
– h1 = (Sunny, ?, ?, Strong, ?, ?) =1*2*2*1*2*2=16 instances
– h2 = (Sunny, ?, ?, ?, ?, ?) =1*2*2*2*2*2=32 instances
• Now consider the sets of instances that are classified
positive by hl and by h2.
– Because h2 imposes fewer constraints on the instance, it
classifies more instances as positive.
– In fact, any instance classified positive by hl will also be
classified positive by h2.
– Therefore, we say that h2 is more general than hl.
More-General-Relation
• h2 ≥ h1 and h2 ≥ h3 (where h3 could be, e.g.,
<Sunny, ?, ?, ?, Cool, ?>)
• but there is no more-general relation between
h1 and h3
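The more-general-than-or-equal-to relation can be sketched as a constraint-by-constraint check; for this attribute language, h2 ≥ h1 holds iff every constraint in h2 is at least as permissive as the matching one in h1. The hypothesis h3 below is an assumed example in the spirit of the usual textbook figure:

```python
# Sketch of the more-general-than-or-equal-to relation over
# "?"/single-value constraints. h3 is an assumed example.

def more_general_or_equal(h2, h1):
    """True iff h2 classifies positive every instance h1 classifies positive."""
    return all(c2 == "?" or c2 == c1 for c2, c1 in zip(h2, h1))

h1 = ("Sunny", "?", "?", "Strong", "?", "?")
h2 = ("Sunny", "?", "?", "?", "?", "?")
h3 = ("Sunny", "?", "?", "?", "Cool", "?")

print(more_general_or_equal(h2, h1))                                 # True
print(more_general_or_equal(h1, h3), more_general_or_equal(h3, h1))  # False False
```

The last line shows that h1 and h3 are incomparable: the relation is a partial order, not a total one.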
Remarks on Version Spaces, Candidate-
Elimination, and Inductive Bias
• The version space learned by the Candidate-
Elimination algorithm converges toward the
correct target hypothesis provided that:
– there are no errors in the training examples, and
– there is some hypothesis in H that correctly
describes the target concept.
• What happens if the training dataset
contains errors?
• Instance 1: x1 = <Sunny, Warm, Normal, Strong,
Warm, Same>
S1 = {<Sunny, Warm, Normal, Strong, Warm, Same>}
G1 = {<?, ?, ?, ?, ?, ?>}
• Instance 2: x2 = <Sunny, Warm, High, Strong,
Warm, Same>
S2 = {<Sunny, Warm, ?, Strong, Warm, Same>}
G2 = {<?, ?, Normal, ?, ?, ?>}
• Instance 3: x3 = <Rainy, Cold, High, Strong,
Warm, Change>
S3 = {<?, ?, ?, Strong, Warm, ?>}
G3 = {<?, ?, Normal, ?, ?, ?>}
• Instance 4: x4 = <Sunny, Warm, High, Strong,
Cool, Change>
S4 = {<?, ?, ?, Strong, ?, ?>}
G4 = {<?, ?, Normal, ?, ?, ?>}
• With an erroneous training example, the
boundaries eventually collapse:
• S = { }
• G = { }
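The generalization of the specific boundary S in the trace above can be sketched for this attribute language: every constraint that disagrees with a positive example is relaxed to "?":

```python
# Minimal sketch of generalizing the specific boundary S by a positive
# example: constraints that disagree with the example are relaxed to "?".

def generalize(s, positive_instance):
    return tuple(c if c == v else "?" for c, v in zip(s, positive_instance))

s1 = ("Sunny", "Warm", "Normal", "Strong", "Warm", "Same")   # S1 above
x2 = ("Sunny", "Warm", "High", "Strong", "Warm", "Same")     # instance 2
print(generalize(s1, x2))  # ('Sunny', 'Warm', '?', 'Strong', 'Warm', 'Same')
```

This reproduces the S1 → S2 step of the trace; the Humidity constraint is the only one that disagrees, so only it is relaxed.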
• What happens if the target hypothesis is not
present in H?
• Then all the instances are treated as positive
examples, and most of our predictions turn
out to be false on the training examples.
• H = { <Sunny, Warm, ?, Strong, ?, ?> }
• H1:<sunny,warm,?,strong, ?,?>
• H2:<sunny, ?,?,strong,?,?>
• H3:<sunny,warm,?,?,?,?>
• H4:<?,warm,?,strong,?,?>
• H5:<Sunny,?,?,?,?,?>
• H6:<?,warm,?,?,?,?>
• Ex1: sunny, warm, normal, strong, cool, change
• H1:<sunny,warm,?,strong, ?,?>
• H2:<sunny, ?,?,strong,?,?>
• H3:<sunny,warm,?,?,?,?>
• H4:<?,warm,?,strong,?,?>
• H5:<Sunny,?,?,?,?,?>
• H6:<?,warm,?,?,?,?>
• h1, h2, h3, h4, h5, h6 all match
• Decision / output → Yes
• Ex2: rainy, cold, normal, light, warm, same
• H1:<sunny,warm,?,strong, ?,?>
• H2:<sunny, ?,?,strong,?,?>
• H3:<sunny,warm,?,?,?,?>
• H4:<?,warm,?,strong,?,?>
• H5:<Sunny,?,?,?,?,?>
• H6:<?,warm,?,?,?,?>
• Output → No (no hypothesis matches)
• Ex3: sunny, warm, normal, light, warm, same
• H1:<sunny,warm,?,strong, ?,?>
• H2:<sunny, ?,?,strong,?,?>
• H3:<sunny,warm,?,?,?,?>
• H4:<?,warm,?,strong,?,?>
• H5:<Sunny,?,?,?,?,?>
• H6:<?,warm,?,?,?,?>
• Only h3, h5, h6 match (3 votes Yes, 3 votes No)
• Ex4: Sunny, Cold, Normal, Strong, Warm, Same
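The voting scheme used in the examples above can be sketched directly: a new instance is classified by counting how many of the six version-space hypotheses match it:

```python
# Classifying new instances by voting over the six version-space
# hypotheses listed above ("?" matches any attribute value).

def matches(h, x):
    return all(c == "?" or c == v for c, v in zip(h, x))

version_space = [
    ("Sunny", "Warm", "?", "Strong", "?", "?"),   # h1
    ("Sunny", "?", "?", "Strong", "?", "?"),      # h2
    ("Sunny", "Warm", "?", "?", "?", "?"),        # h3
    ("?", "Warm", "?", "Strong", "?", "?"),       # h4
    ("Sunny", "?", "?", "?", "?", "?"),           # h5
    ("?", "Warm", "?", "?", "?", "?"),            # h6
]

def vote(x):
    """Return (yes_votes, no_votes) over the version space."""
    yes = sum(matches(h, x) for h in version_space)
    return yes, len(version_space) - yes

print(vote(("Sunny", "Warm", "Normal", "Strong", "Cool", "Change")))  # (6, 0)
print(vote(("Rainy", "Cold", "Normal", "Light", "Warm", "Same")))     # (0, 6)
print(vote(("Sunny", "Warm", "Normal", "Light", "Warm", "Same")))     # (3, 3)
```

A unanimous vote gives a confident Yes or No, while the 3–3 split on the third example shows the version space is genuinely undecided about that instance.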
