MACHINE LEARNING
SWAPNA.C

Machine Learning, Learning, Intelligence
* Def. (Learning): the acquisition of knowledge or skills through study, experience, or being taught.
* Def. (Intelligence): the ability to learn and use concepts to solve problems.

Machine Learning and Artificial Intelligence
* Def. (AI): the science of making machines do things that would require intelligence if done by humans (Minsky, 1986).
* Def. (Machine Learning): an area of AI concerned with the development of techniques that allow machines to learn.

Why Machine Learning? Why Artificial Intelligence?
* To build machines exhibiting intelligent behaviour (i.e., able to reason, predict, and adapt) while helping humans work, study, and entertain themselves.
* Recent progress in algorithms and theory.
* A growing flood of online data.
* Ever more computational power is available.
* A budding industry.

Relevant disciplines and examples of their influence on machine learning:
* Artificial intelligence
* Bayesian methods
* Computational complexity theory
* Control theory
* Information theory
* Philosophy
* Psychology and neurobiology
* Statistics

Well-Posed Learning Problems
* Def.: A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.
* Def. 2 (Hadamard, 1902): A (machine learning) problem is well-posed if a solution to it exists, if that solution is unique, and if that solution depends on the data/experience but is not sensitive to (reasonably small) changes in the data/experience.
* For example, a computer program that learns to play checkers might improve its performance, as measured by its ability to win at the class of tasks involving playing checkers games, through experience obtained by playing games against itself.

Successful Applications
* Learning to recognize spoken words. All of the most successful speech recognition systems employ machine learning in some form; for example, the SPHINX system.
* Learning to drive an autonomous vehicle. Machine learning methods have been used to train computer-controlled vehicles to steer correctly when driving on a variety of road types; for example, the ALVINN system.
* Learning to classify new astronomical structures. Machine learning methods have been applied to a variety of large databases to learn general regularities implicit in the data. For example, decision tree learning algorithms have been used by NASA to learn how to classify celestial objects from the second Palomar Observatory Sky Survey (Fayyad et al., 1995).
* Learning to play world-class backgammon. The most successful computer programs for playing games such as backgammon are based on machine learning algorithms; for example, TD-GAMMON, the world's top computer program for backgammon.

To have a well-defined learning problem, identify three features: the class of tasks, the measure of performance to be improved, and the source of experience (see the sketch after the checkers example below).

A checkers learning problem:
* Task T: playing checkers
* Performance measure P: percent of games won against opponents
* Training experience E: playing practice games against itself
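To make the three-part specification concrete, here is a minimal sketch that records a learning problem as its task T, performance measure P, and training experience E. The class and field names are illustrative assumptions, not anything defined in the slides.

```python
from dataclasses import dataclass

@dataclass
class LearningProblem:
    """A well-defined learning problem: task T, performance measure P, experience E."""
    task: str                 # T: the class of tasks
    performance_measure: str  # P: how improvement is measured
    training_experience: str  # E: the source of experience

# The checkers learning problem above, expressed as such a record.
checkers = LearningProblem(
    task="playing checkers",
    performance_measure="percent of games won against opponents",
    training_experience="playing practice games against itself",
)
```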
Some other learning problems:

A handwriting recognition learning problem:
* Task T: recognizing and classifying handwritten words within images
* Performance measure P: percent of words correctly classified
* Training experience E: a database of handwritten words with given classifications

A robot driving learning problem:
* Task T: driving on public four-lane highways using vision sensors
* Performance measure P: average distance travelled before an error (as judged by a human overseer)
* Training experience E: a sequence of images and steering commands recorded while observing a human driver

Designing a Learning System
1. Choosing the training experience: examples of best moves, game outcomes, ...
2. Choosing the target function: board -> move, board -> value, ...
3. Choosing a representation for the target function: e.g., a linear function with weights (the hypothesis space)
4. Choosing a learning algorithm for approximating the target function: a method for parameter estimation

1. Choosing the Training Experience
* One key attribute is whether the training experience provides direct or indirect feedback regarding the choices made by the performance system.
* A second important attribute is the degree to which the learner controls the sequence of training examples.
* A third important attribute is how well the training experience represents the distribution of examples over which the final system performance P must be measured.

2. Choosing the Target Function
The next design choice is to determine exactly what type of knowledge will be learned and how this will be used by the performance program. One candidate is a function ChooseMove: B -> M that maps a board state to the best move; the choice adopted here is an evaluation target function V: B -> R that assigns a numerical score to any board state. Let us therefore define the target value V(b) for an arbitrary board state b in B as follows:
1. if b is a final board state that is won, then V(b) = 100
2. if b is a final board state that is lost, then V(b) = -100
3. if b is a final board state that is drawn, then V(b) = 0
4. if b is not a final state in the game, then V(b) = V(b'), where b' is the best final board state that can be achieved starting from b and playing optimally until the end of the game (assuming the opponent also plays optimally).

3. Choosing a Representation for the Target Function
Let us choose a simple representation: for any given board state, the function V̂ will be calculated as a linear combination of the following board features:
* x1: the number of black pieces on the board
* x2: the number of red pieces on the board
* x3: the number of black kings on the board
* x4: the number of red kings on the board
* x5: the number of black pieces threatened by red (i.e., which can be captured on red's next turn)
* x6: the number of red pieces threatened by black
Thus, our learning program will represent V̂(b) as a linear function of the form
V̂(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6
where w0 through w6 are numerical coefficients, or weights, to be chosen by the learning algorithm. Learned values for the weights w1 through w6 will determine the relative importance of the various board features in determining the value of the board, whereas the weight w0 provides an additive constant to the board value. A code sketch of this representation follows the partial design summary below.

Partial design of a checkers learning program:
* Task T: playing checkers
* Performance measure P: percent of games won in the world tournament
* Training experience E: games played against itself
* Target function: V: Board -> R
* Target function representation: V̂(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6
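As a concrete illustration of the linear representation above, the sketch below computes V̂(b) = w0 + w1x1 + ... + w6x6 from the six board features. The function name, the weight values, and the plain-list layout are illustrative assumptions; extracting the features from an actual board is assumed to happen elsewhere.

```python
from typing import Sequence

def v_hat(weights: Sequence[float], features: Sequence[float]) -> float:
    """Linear evaluation of a board state: V̂(b) = w0 + w1*x1 + ... + w6*x6.

    weights  -- (w0, w1, ..., w6), chosen by the learning algorithm.
    features -- (x1, ..., x6): black pieces, red pieces, black kings,
                red kings, black pieces threatened, red pieces threatened.
    """
    w0, *ws = weights
    return w0 + sum(w * x for w, x in zip(ws, features))

# Example: an arbitrary (illustrative) weight vector and the board features
# used later in the notes (black has 3 pieces and 1 king, red has none left).
weights = [0.0, 1.0, -1.0, 2.0, -2.0, -0.5, 0.5]   # w0..w6
features = [3, 0, 1, 0, 0, 0]                       # x1..x6
print(v_hat(weights, features))                     # -> 5.0
```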
4. Choosing a Function Approximation Algorithm
In order to learn the target function V̂ we require a set of training examples, each describing a specific board state b and the training value Vtrain(b) for b. In other words, each training example is an ordered pair of the form ⟨b, Vtrain(b)⟩. For instance, the following training example describes a board state b in which black has won the game (note that x2 = 0 indicates red has no remaining pieces), and for which the target function value Vtrain(b) is therefore +100:
⟨(x1 = 3, x2 = 0, x3 = 1, x4 = 0, x5 = 0, x6 = 0), +100⟩
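The training example just described can be written down directly as a (features, target value) pair; the list layout below is an assumed representation used only to feed the LMS sketch that follows later.

```python
# Each training example pairs the board features (x1..x6) with Vtrain(b).
# This is the board described above: black has won (x2 = 0), so Vtrain(b) = +100.
training_examples = [
    ((3, 0, 1, 0, 0, 0), +100.0),
]
```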
The function approximation procedure involves two steps: estimating training values and adjusting the weights.

4.1 Estimating Training Values
Recall that according to our formulation of the learning problem, the only training information available to the learner is whether each game was eventually won or lost. One approach is to assign the training value Vtrain(b) for any intermediate board state b to be V̂(Successor(b)), where V̂ is the learner's current approximation to V and Successor(b) denotes the next board state following b for which it is again the program's turn to move.
Rule for estimating training values:
Vtrain(b) <- V̂(Successor(b))

4.2 Adjusting the Weights
All that remains is to specify the learning algorithm for choosing the weights wi that best fit the set of training examples {⟨b, Vtrain(b)⟩}. As a first step we must define what we mean by the best fit to the training data. One common approach is to define the best hypothesis, or set of weights, as the one that minimizes the squared error E between the training values and the values predicted by the hypothesis V̂:
E = Σ over ⟨b, Vtrain(b)⟩ in the training examples of (Vtrain(b) - V̂(b))²
Several algorithms are known for finding the weights of a linear function that minimize E defined in this way. In our case, we require an algorithm that will incrementally refine the weights as new training examples become available and that will be robust to errors in these estimated training values. One such algorithm is called the Least Mean Squares, or LMS, training rule. For each observed training example it adjusts the weights a small amount in the direction that reduces the error on that training example.

LMS weight update rule. For each training example ⟨b, Vtrain(b)⟩:
* Use the current weights to calculate V̂(b).
* For each weight wi, update it as wi <- wi + η (Vtrain(b) - V̂(b)) xi
Here η is a small constant (e.g., 0.1) that moderates the size of the weight update. To see intuitively why this update rule works, notice that when the error (Vtrain(b) - V̂(b)) is zero, no weights are changed. When (Vtrain(b) - V̂(b)) is positive (i.e., when V̂(b) is too low), each weight is increased in proportion to the value of its corresponding feature; this raises the value of V̂(b), reducing the error. Notice also that if the value of some feature xi is zero, its weight is not altered regardless of the error, so the only weights updated are those whose features actually occur on the training example board.
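A minimal sketch of one LMS update step, reusing the hypothetical v_hat function, weights, and training_examples from the earlier sketches; the learning rate eta = 0.1 follows the example value above.

```python
def lms_update(weights, board_features, v_train, eta=0.1):
    """One LMS step: wi <- wi + eta * (Vtrain(b) - V̂(b)) * xi.

    A constant feature x0 = 1 accompanies the bias weight w0, so every
    weight is updated by the same expression.
    """
    error = v_train - v_hat(weights, board_features)
    xs = [1.0] + list(board_features)   # x0 = 1 pairs with w0
    return [w + eta * error * x for w, x in zip(weights, xs)]

# When the error is zero no weight changes, and a feature whose value is
# zero leaves its weight untouched, exactly as argued above.
for feats, v_train in training_examples:
    weights = lms_update(weights, feats, v_train)
```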
4.3 Final Design
The final design of our checkers learning system can be naturally described by four distinct program modules that represent the central components of many learning systems.
* Performance System: the module that must solve the given performance task, in this case playing checkers, by using the learned target function(s).
* Critic: takes as input the history or trace of the game and produces as output a set of training examples of the target function.
* Generalizer: takes as input the training examples and produces an output hypothesis that is its estimate of the target function.
* Experiment Generator: takes as input the current hypothesis (the currently learned function) and outputs a new problem (i.e., an initial board state) for the Performance System to explore. Its role is to pick new practice problems that will maximize the learning rate of the overall system. In our example, the Experiment Generator follows a very simple strategy: it always proposes the same standard initial game board to begin a new game.
[Figure 1.1: Final design of the checkers learning program.]
[Figure 1.2: Summary of choices in designing the checkers learning program.]
A rough sketch of how these four modules fit together in a training loop is given at the end of these notes.

Perspective and Issues in Machine Learning
Perspective:
* Machine learning involves searching a very large space of possible hypotheses to determine the one that best fits the observed data.
Issues:
* Which algorithms perform best for which types of problems and representations?
* How much training data is sufficient?
* Can prior knowledge be helpful even when it is only approximately correct?
* What is the best strategy for choosing a useful next training experience?
* What specific function should the system attempt to learn?
* How can the learner automatically alter its representation to improve its ability to represent and learn the target function?
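Tying the final design together, here is a rough sketch of how the four modules could be wired into a self-play training loop. The three callables (generate_board, play_game, make_examples) are hypothetical placeholders standing in for a checkers engine, not an API described in the slides; the Generalizer step reuses the lms_update sketch above.

```python
def train_checkers_learner(num_games, initial_weights,
                           generate_board, play_game, make_examples):
    """Final-design loop: Experiment Generator -> Performance System ->
    Critic -> Generalizer, feeding the learned hypothesis back in."""
    hypothesis = list(initial_weights)                     # weights w0..w6
    for _ in range(num_games):
        board = generate_board()                           # Experiment Generator
        game_history = play_game(board, hypothesis)        # Performance System
        examples = make_examples(game_history, hypothesis) # Critic: pairs
        # (features, Vtrain(b)) estimated via Vtrain(b) <- V̂(Successor(b))
        for feats, v_train in examples:                    # Generalizer (LMS)
            hypothesis = lms_update(hypothesis, feats, v_train)
    return hypothesis
```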