AIML Book

The document outlines the syllabus for the Artificial Intelligence and Machine Learning course (CS 3491) for B.E. IV Semester CSE and B.Tech. IT branches at Anna University, Chennai. It covers key topics such as problem-solving, probabilistic reasoning, supervised and unsupervised learning, ensemble techniques, and neural networks, along with practical exercises. The syllabus aims to provide students with a comprehensive understanding of AI and ML concepts and applications.

ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING (CS 3491)
For B.E. IV Semester CSE and B.Tech. IT Branches
As per the Latest Syllabus of Anna University, Chennai (Regulations - 2021), with 12 Practical Exercises
Dr. S.N. SANGEETH / S. JOTHIMANI
SUCHITRA PUBLICATIONS (A GROUP OF LAKSHMI PUBLICATIONS)

SYLLABUS
ANNA UNIVERSITY, CHENNAI
For B.E. CSE and B.Tech. IT Branches
ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

UNIT I: PROBLEM SOLVING (9)
Introduction to AI - AI Applications - Problem-solving agents - Search algorithms - Uninformed search strategies - Heuristic search strategies - Local search and optimization problems - Adversarial search - Constraint satisfaction problems (CSP).

UNIT II: PROBABILISTIC REASONING (9)
Acting under uncertainty - Bayesian inference - Naive Bayes models - Probabilistic reasoning - Bayesian networks - Exact inference in BN - Approximate inference in BN - Causal networks.

UNIT III: SUPERVISED LEARNING (9)
Introduction to machine learning - Linear Regression Models: least squares, single & multiple variables, Bayesian linear regression, gradient descent - Linear Classification Models: discriminant function - Probabilistic discriminative model - Logistic regression - Probabilistic generative model - Naive Bayes - Maximum margin classifier - Support vector machine - Decision tree - Random forests.

UNIT IV: ENSEMBLE TECHNIQUES AND UNSUPERVISED LEARNING (9)
Combining multiple learners: model combination schemes, voting - Ensemble learning: bagging, boosting, stacking - Unsupervised learning: K-means - Instance-based learning: KNN - Gaussian mixture models and Expectation Maximization.
UNIT V: NEURAL NETWORKS (9)
Perceptron - Multilayer perceptron, activation functions, network training - Gradient descent optimization - Stochastic gradient descent, error backpropagation, from shallow networks to deep networks - Unit saturation (aka the vanishing gradient problem) - ReLU, hyperparameter tuning, batch normalization, regularization, dropout.

PRACTICAL EXERCISES
1. Implementation of Uninformed Search Algorithms (BFS, DFS)
2. Implementation of Informed Search Algorithms (A*, memory-bounded A*)
3. Implement Naive Bayes Models
4. Implement Bayesian Networks
5. Build Regression Models
6. Build Decision Trees and Random Forests
7. Build SVM Models
8. Implement Ensembling Techniques
9. Implement Clustering Algorithms
10. Implement EM for Bayesian Networks
11. Build Simple NN Models
12. Build Deep Learning NN Models

References:
1. Stuart Russell and Peter Norvig, "Artificial Intelligence: A Modern Approach", Fourth Edition, Pearson Education, 2021.
2. Ethem Alpaydin, "Introduction to Machine Learning", Fourth Edition, MIT Press, 2020.

Web references: towardsdatascience.com, kdnuggets.com, javatpoint.com

CONTENTS

UNIT I: PROBLEM SOLVING
1.1. Introduction: 1.1.1 Definition; 1.1.2 Examples of AI; 1.1.3 History of AI
1.2. AI Applications
1.3. Problem-Solving Agents: 1.3.1 Agents and Environments; 1.3.2 Types of Agents; 1.3.3 Problem-Solving Agents; 1.3.4 Search
1.4. Example Problems: 1.4.1 Toy Problems; 1.4.2 Real-World Problems
1.5. Search Algorithms: 1.5.1 Uninformed Search Strategies; 1.5.2 Informed (Heuristic) Search Strategies; 1.5.3 Heuristic Functions
1.6. Local Search Algorithms and Optimization Problems
1.7. Adversarial Search: 1.7.1 Games; 1.7.2 Optimal Decisions in Games; 1.7.3 Alpha-Beta Pruning; 1.7.4 Imperfect, Real-Time Decisions; 1.7.5 Games that Include an Element of Chance
1.8. Constraint Satisfaction Problems (CSP): Varieties of CSPs

UNIT II: PROBABILISTIC REASONING
- Semantics of Bayesian Networks; Constructing Bayesian Networks; Compactness and Node Ordering; Conditional Independence Relations in Bayesian Networks
- Exact Inference in Bayesian Networks: Inference by Enumeration; The Variable Elimination Algorithm; Complexity of Exact Inference
- Approximate Inference: 2.6.1 Direct Sampling Methods; 2.6.2 Inference by Markov Chain Simulation; 2.6.3 Causal Networks
- Exercises; Two Marks Questions with Answers (Part - A); Review Questions

UNIT III: SUPERVISED LEARNING
3.1. Introduction to Machine Learning: 3.1.1 Classification of Machine Learning
3.2. Linear Regression: 3.2.1 Types of Linear Regression; 3.2.2 Linear Regression Terminologies
3.3. Simple and Multivariable Linear Regression: 3.3.1 Simple Linear Regression; 3.3.2 Multivariate Linear Regression
3.4. Bayesian Linear Regression
3.5. Linear Classification Models: Discriminant Function: 3.5.1 Discriminant Functions
3.6. Probabilistic Discriminative Model
3.7. Probabilistic Generative Model
3.8. Naive Bayes Classifier Algorithm: 3.8.1 Bayes' Theorem; 3.8.2 Working of Naive Bayes Classifier; 3.8.3 Advantages; 3.8.4 Disadvantages; 3.8.5 Applications; 3.8.6 Types of Naive Bayes Model
3.9. Maximum Margin Classifier
3.10. Support Vector Machine: 3.10.2 Hyperplane and Support Vectors in the SVM Algorithm; 3.10.3 Linear SVM
3.11. Decision Tree: 3.11.1 Decision Tree Terminologies; 3.11.2 Attribute Selection Measures; 3.11.3 Information Gain; 3.11.4 Entropy; 3.11.5 Gini Index; 3.11.6 Pruning: Getting an Optimal Decision Tree; 3.11.7 Advantages; 3.11.8 Disadvantages
3.12. Random Forest Algorithm: 3.12.1 Working of Random Forest; 3.12.2 Essential Features; 3.12.3 Difference between Decision Tree & Random Forest; 3.12.4 Important Hyperparameters; 3.12.5 Important Terms to Know; 3.12.6 Case Example; 3.12.7 Applications; 3.12.8 Advantages; 3.12.9 Disadvantages; 3.12.10 When to Avoid Using Random Forests
Two Marks Questions with Answers (Part - A); Review Questions

UNIT IV: ENSEMBLE TECHNIQUES AND UNSUPERVISED LEARNING
4.1. Introduction
4.2. Combining Multiple Learners
4.3. Model Combination Schemes
4.4. Voting
4.5. Ensemble Learning: 4.5.1 Max Voting; 4.5.2 Averaging; 4.5.3 Weighted Average
4.6. Stacking: 4.6.1 Architecture of Stacking
4.7. Blending
4.8. Voting
4.9. Bagging: 4.9.1 Bootstrapping; Basic Concepts behind Bagging; Applications of Bagging; The Bagging Algorithm; Advantages and Disadvantages; Decoding the Hyperparameters; Implementing the Bagging Algorithm
4.10. Boosting: Boosting vs Bagging; Working of Boosting Algorithm; Types of Boosting; 4.10.4 Ensembles in Machine Learning
4.11. Unsupervised Learning: 4.11.1 Types of Unsupervised Learning Algorithms; Uses of Unsupervised Learning; Working of Unsupervised Learning; Advantages; Disadvantages
4.12. K-Means: The K-Means Algorithm; Working of the K-Means Algorithm; Choosing the Value of "K Number of Clusters"
4.14. Instance-Based Learning (KNN): 4.14.1 Need of KNN Algorithm; 4.14.2 Working of KNN; 4.14.3 Selection of K in KNN Algorithm; 4.14.4 Ways to Perform KNN; 4.14.5 Advantages; 4.14.6 Disadvantages; 4.14.7 Python Implementation of the KNN Algorithm
4.15. Gaussian Mixture Models: 4.15.1 The Gaussian Distribution; 4.15.2 Probability Density Function; 4.15.3 Gaussian Mixture Model; 4.15.4 Uses of the Variance-Covariance Matrix; 4.15.5 K-Means vs Gaussian Mixture Model
4.16. Expectation Maximization: 4.16.1-4.16.2 The EM Algorithm; 4.16.3 Convergence in the EM Algorithm; 4.16.4 Steps in the EM Algorithm; 4.16.5 Gaussian Mixture Model (GMM); 4.16.6 Example; 4.16.7 Applications; 4.16.8 Advantages; 4.16.9 Disadvantages
Two Marks Questions with Answers (Part - A); Review Questions

UNIT V: NEURAL NETWORKS
5.1. Introduction: 5.1.1 Difference between AI, ML, and DL; 5.1.2 Need for Deep Learning: Limitations of Traditional Machine Learning Algorithms and Techniques; 5.1.3 Working of Neural Networks
5.2. Perceptron: 5.2.1 Binary Classifier; 5.2.2 Perceptron Function; 5.2.3 Basic Components of Perceptron; 5.2.4 Working of Perceptron; 5.2.5 Characteristics of Perceptron; 5.2.6 Types of Perceptron Models; 5.2.7 Advantages of Multi-Layer Perceptron; 5.2.8 Disadvantages of Multi-Layer Perceptron; 5.2.9 Limitations of the Perceptron Model
5.3. Multilayer Perceptron: 5.3.1 Limitations of Single-Layer Perceptron; 5.3.2 History of Multi-Layer ANN; 5.3.3 Multi-Layer ANN; 5.3.4 Implementation
5.4. Activation Functions: 5.4.1 Properties of Activation Functions; 5.4.2 Types of Activation Functions
5.5. Network Training: 5.5.1 Gradient Descent Optimization
5.6. Stochastic Gradient Descent: 5.6.1 SGD Algorithm; 5.6.2 Training a Neural Network with Stochastic Gradient Descent; 5.6.3 Learning Rate and Batch Size; 5.6.4 Example: Red Wine Quality
5.7. Error Backpropagation: 5.7.1 Working of Backpropagation
5.8. From Shallow Networks to Deep Networks: 5.8.1 Deep Nets and Shallow Nets; 5.8.2 Choosing a Deep Net; 5.8.3 Restricted Boltzmann Networks or Autoencoders (RBNs); 5.8.4 Deep Belief Networks (DBNs); 5.8.5 Generative Adversarial Networks (GANs)
5.8.6 Recurrent Neural Networks (RNNs); 5.8.7 Convolutional Deep Neural Networks (CNNs)
5.9. Unit Saturation (aka the Vanishing Gradient Problem): 5.9.1 Sigmoid Function; 5.9.2 Forward Propagation; 5.9.3 Back Propagation; 5.9.4 Method to Overcome the Problem
5.10. ReLU: 5.10.2 ReLU Activation Function and Formula; 5.10.3 Why is ReLU a Good Activation Function; 5.10.4 Advantages of the ReLU Activation Function; 5.10.5 Disadvantages of the ReLU Activation Function
5.11. Hyperparameter Tuning: 5.11.1 Steps to Perform Hyperparameter Tuning; 5.11.2 Hyperparameter Types; 5.11.3 Data Leakage; 5.11.4 Methods for Tuning Hyperparameters
5.12. Batch Normalization: 5.12.1 Training; 5.12.2 Working of Batch Normalization; 5.12.3 Advantages of Batch Normalization
5.13. Regularization: 5.13.1 Regularization Techniques; 5.13.2 Working of Regularization; 5.13.3 Ridge Regression

UNIT I
PROBLEM SOLVING

Introduction to AI - AI Applications - Problem solving agents - Search algorithms - Uninformed search strategies - Heuristic search strategies - Local search and optimization problems - Adversarial search - Constraint satisfaction problems (CSP).

1.1. INTRODUCTION
Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry. Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits such as:
- Knowledge
- Reasoning
- Problem-solving
- Perception
- Learning
- Planning
- Ability to manipulate and move objects

Artificial Intelligence is concerned with the design of intelligence in an artificial device. The term was coined by John McCarthy in 1956. Intelligence is the ability to acquire, understand and apply knowledge to achieve goals in the world.
- AI is the study of mental faculties through the use of computational models.
- AI is the study of intellectual/mental processes as computational processes.
- AI programs will demonstrate a high level of intelligence to a degree that equals or exceeds the intelligence required of a human in performing some tasks.
- AI is unique, sharing borders with Mathematics, Computer Science, Philosophy, Psychology, Biology, Cognitive Science, and many others.
- Although there is no clear definition of AI or even intelligence, it can be described as an attempt to build machines that, like humans, can think and act, and are able to learn and use knowledge to solve problems on their own.

1.1.1. DEFINITION
- "The study of how to make computers do things at which, at the moment, people are better."
- "Artificial Intelligence is the ability of a computer to act like a human."
- Systems that think like humans; systems that act like humans; systems that think rationally; systems that act rationally.

1.1.2. EXAMPLES OF AI
- Automation: What makes a system or process function automatically?
- Machine learning: The science of getting a computer to act without programming.
- Machine vision: The science of allowing computers to see.
- Natural language processing (NLP): The processing of human, and not computer, language by a computer program.
- Robotics: A field of engineering focused on the design and manufacture of robots.
- Self-driving cars: These use a combination of computer vision, image recognition, and deep learning to build automated skill at piloting a vehicle while staying in a given lane and avoiding unexpected obstructions, such as pedestrians.

1.1.3. HISTORY OF AI
Important research that laid the groundwork for AI:
- In 1931, Goedel laid the foundation of theoretical computer science: he published the first universal formal language and showed that mathematics itself is either flawed or allows for unprovable statements.
- In 1936, Turing reformulated Goedel's result and the Church-Turing extension thereof.
- In 1956, John McCarthy coined the term "Artificial Intelligence" as the topic of the Dartmouth Conference, the first conference devoted to the subject.
- In 1957, the General Problem Solver (GPS) was demonstrated by Newell, Shaw & Simon.
- In 1958, John McCarthy (MIT) invented the Lisp language.
- In 1959, Arthur Samuel (IBM) wrote the first game-playing program, for checkers, to achieve sufficient skill to challenge a world champion.
- In 1963, Ivan Sutherland's MIT dissertation on Sketchpad introduced the idea of interactive graphics into computing.
- In 1966, Ross Quillian (PhD dissertation, Carnegie Institute of Technology, now CMU) demonstrated semantic nets.
- In 1967, the Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford) demonstrated how to interpret mass spectra of organic chemical compounds: the first successful knowledge-based program for scientific reasoning.
- In 1967, Doug Engelbart invented the mouse at SRI.
- In 1968, Marvin Minsky & Seymour Papert published Perceptrons, demonstrating the limits of simple neural nets.
- In 1972, Prolog was developed by Alain Colmerauer.
- In the mid-1980s, neural networks became widely used with the backpropagation algorithm (first described by Werbos in 1974).
- In the 1990s, there were major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning,
Companies are applying machine learning to make better ‘and faster diagnoses than humans. AI in business. Robotic process automation is being applied to highly repetitive tasks normally performed by humans. Mechine learning ‘algorithms are being integrated into analytics and CRM platforms to ‘uncover information on how to better serve customers. Al in education. Al can automate grading, giving educators more time. Al ‘can assess students and adapt to their needs, helping them work at their ‘own pace. AT tutors can provide additional support to students, ensuring they stay on track. AT could change where and how students learn, perhaps even replacing some teachers, AI in finance. Al in personal finance applications, such as Mint or Turbo ‘Tax, is disrupting financial institutions. Applications such as these collect personal data and provide financial advice. Other programs, such as IBM Watson, have been applied to the process of buying « home, Today, the software performs much of the trading on Wall Street. AL in law. The discovery process, sifting through of documents, in law is often overwhelming for humans. Automating this process is a more efficient use of time. Startups are also building question-and-enswer computer assistants that éan sift programmed-to-answer questions by “examining the taxonomy and ontology associated with a database. Problem Solving 15] Al in manufacturing, This is an area that has been at the forefront of incorporating robots into the workflow. Industrial robots used to perform single tasks and were seperated from. human workers, but as technology advanced that changed. ‘The applications of Al are shown in Figure 1.1. ‘Consumer Marketing Have you ever used any kind of ereditfATM/store card while shopping? 
if'so, you have very likely been “input” to an Al algorithm All of this information is recorded digitally Companies like Nielsen gather this information weekly and search for pattems general changes in consumer behavior + tracking responses to new products * identifying customer segments: targeted marketing, e.g, they find out that consumers with sports cars who buy textbooks respond well to offers of new credit cards, ‘Algorithms (“data mining”) search data for patterns based on mathematical theories of leaming eee * Identification Technologies ID ecardse.g., ATM cards can be a nuisance and security risk: cards ean be los, stolen, passwords forgotten ete + Biometric Identification, walk up to a locked door + Camera + Fingerprint device + Microphone ‘+ Computer uses biometric signature for identification + Face, eyes, fingerprints, voice patra ‘© This works by comparing data from person at door with stored library © Leaming algorithms can Team the matching process by analyzing a large Hbrary databése off-line, can improve: its performance. . Anifcial Ineligence and Machine Learning ‘Intrusion Detection + Computer security - we each have specific patterns of computer use times of day, lengths of sessions, command used, sequence of commands, ete ‘+ would like to lear the “signature” of each authorized user + can identify non-euthorized users + How can the program automatically identify users? . ‘© record user's commands and time intervals ‘© characterize the patterns for each user ‘© model the variability in these pattems ‘© classify (online) any new user by similarity to stored patterns ParontDiscptnes of Al ‘Corpora Prison] J mane | JPsyetioay] [Cem Cog. S. | ‘Aca tater ~ Reasoring " Lssrning = Panring «Perception 1 Koowiedge acquisition = Ineigent seat) * Unceriainty management = thers Subjocs covered under Al eS Gare Language & iapef{ Robotics & Paya || Prome | Understanding || nongoson Fig, Ld. 
Machine Translation
- Language problems in international business: e.g., at a meeting of Japanese, Korean, Vietnamese and Swedish investors, there is no common language.
- If you are shipping your software manuals to 127 countries, the solution is to hire translators to translate; it would be much cheaper if a machine could do this!
- How hard is automated translation? Very difficult! E.g., for English to Russian, not only must the words be translated, but the ...

1.3. PROBLEM-SOLVING AGENTS

1.3.1. AGENTS AND ENVIRONMENTS
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. This simple idea is illustrated in Figure 1.2.
- A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth, and other body parts for actuators.
- A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.
- A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.

[Fig. 1.2. Agents and environments]

Percept
We use the term percept to refer to the agent's perceptual inputs at any given instant.

Percept Sequence
An agent's percept sequence is the complete history of everything the agent has ever perceived.

Agent Function
Mathematically speaking, we say that an agent's behaviour is described by the agent function that maps any given percept sequence to an action.

Agent Program
Internally, the agent function for an artificial agent is an abstract mathematical description; the agent program is a concrete implementation, running on the agent architecture.

To illustrate these ideas, we will use a very simple example: the vacuum-cleaner world shown in Figure 1.3.

[Fig. 1.3. The vacuum-cleaner world]

This particular world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this agent function is shown in Figure 1.4.

Percept Sequence             | Action
[A, Clean]                   | Right
[A, Dirty]                   | Suck
[B, Clean]                   | Left
[B, Dirty]                   | Suck
[A, Clean], [A, Clean]       | Right
[A, Clean], [A, Dirty]       | Suck

Fig. 1.4. Partial tabulation of the agent function for the vacuum-cleaner world

Example Agent Program
function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left

1.3.2. TYPES OF AGENTS
Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All these agents can improve their performance and generate better actions over time. These are given below:
- Simple reflex agent
- Model-based reflex agent
- Goal-based agent
- Utility-based agent
- Learning agent

Simple Reflex Agent
- The simple reflex agents are the simplest agents, as shown in Figure 1.5. These agents take decisions on the basis of the current percepts and ignore the rest of the percept history.
- These agents only succeed in a fully observable environment.
- The simple reflex agent does not consider any part of the percept history during its decision and action process.
- The simple reflex agent works on the condition-action rule, which means it maps the current state to an action. For example, a room-cleaner agent works only if there is dirt in the room.

Problems with the simple reflex agent design approach:
- They have very limited intelligence.
- They do not have knowledge of non-perceptual parts of the current state.
- They are mostly too big to generate and to store.
- They are not adaptive to changes in the environment.

[Fig. 1.5. Simple reflex agent]

Model-Based Reflex Agent
- The model-based agent, as shown in Figure 1.6, can work in a partially observable environment and track the situation.
- A model-based agent has two important factors:
  - Model: knowledge about "how things happen in the world"; an agent built on such knowledge is called a model-based agent.
  - Internal state: a representation of the current state based on the percept history.
- These agents have the model, which is knowledge of the world, and based on the model they perform actions.
- Updating the agent state requires information about:
  - how the world evolves;
  - how the agent's actions affect the world.

[Fig. 1.6. Model-based agent]

The model-based reflex agent keeps track of the current state of the world using an internal model. It then chooses an action in the same way as the reflex agent.

function REFLEX-AGENT-WITH-STATE(percept) returns an action
    static: state, a description of the current world state
            rules, a set of condition-action rules
            action, the most recent action
    state <- UPDATE-STATE(state, action, percept)
    rule <- RULE-MATCH(state, rules)
    action <- RULE-ACTION[rule]
    return action

[Fig. 1.7. Model-based reflex agent]

Goal-Based Agents
- The knowledge of the current state of the environment is not always sufficient to decide for an agent what to do.
- The agent needs to know its goal, which describes desirable situations.
- Goal-based agents, as shown in Figure 1.8, expand the capabilities of the model-based agent by having the "goal" information.
- They choose an action so that they can achieve the goal.
- These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not. Such considerations of different scenarios are called searching and planning, which makes an agent proactive.

[Fig. 1.8. Goal-based agents]

Utility-Based Agents
- These agents are similar to the goal-based agent but provide an extra component of utility measurement, which makes them different by providing a measure of success at a given state.
- Utility-based agents act based not only on goals but also on the best way to achieve the goal.
- The utility-based agent is useful when there are multiple possible alternatives, and an agent has to choose in order to perform the best action.
- The utility function maps each state to a real number to check how efficiently each action achieves the goals, as shown in Figure 1.9.

[Fig. 1.9. Utility-based agent]

Learning Agent
- A learning agent in AI is the type of agent that can learn from its past experiences, i.e., it has learning capabilities. It is shown in Figure 1.10.
- It starts to act with basic knowledge and then is able to act and adapt automatically through learning.
- A learning agent has mainly four conceptual components, which are:
  1. Learning element: it is responsible for making improvements by learning from the environment.
  2. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
  3. Performance element: it is responsible for selecting external actions.
  4. Problem generator: this component is responsible for suggesting actions that will lead to new and informative experiences.
- Hence, learning agents are able to learn, analyze performance, and look for new ways to improve performance.

[Fig. 1.10. Learning agent]

1.3.3. PROBLEM-SOLVING AGENTS
Search is one of the operational tasks that characterize AI programs best. Almost every AI program depends on a search procedure to perform its prescribed functions. Problems are typically defined in terms of states, and solutions correspond to goal states.

Problem solving using the search technique performs two sequences of steps:
1. Define the problem: the given problem is identified with its required initial and goal states.
2. Analyze the problem: the best search technique for the given problem is chosen from the different AI search techniques, which derive one or more goal states in a minimum number of states.

1.3.3.1. Types of Problem
In general, a problem can be classified under any one of the following four types, which depend on two important properties:
- the amount of knowledge the agent has of the state and action descriptions;
- how the agent is connected to its environment through its percepts and actions.

The four different types of problems are:
(i) Single-state problem
(ii) Multiple-state problem
(iii) Contingency problem
(iv) Exploration problem

1.3.3.2. Problem-Solving Agents
The problem-solving agent is one kind of goal-based agent, where the agent decides what to do by finding the sequence of actions that lead to desirable states. Each action changes the state, and the aim is to find the sequence of actions and states that lead from the initial (start) state to a final (goal) state.

A well-defined problem can be described by:
- Initial state
- Operator or successor function: for any state x, returns s(x), the set of states reachable from x with one action
- State space: all states reachable from the initial state by any sequence of actions
- Path: a sequence through the state space
- Path cost: a function that assigns a cost to a path; the cost of a path is the sum of the costs of the individual actions along the path
- Goal test: a test to determine if a state is the goal state

The complexity that arises here is the knowledge about the formulation (from current state to outcome action) of the agent. If the agent und...

- The goal test can be explicit, e.g., x = "at Bucharest", or implicit, e.g., NoDirt(x).
- The path cost is additive, e.g., the sum of distances, the number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be >= 0.
- A solution is a sequence of actions leading from the initial state to a goal state.
- Problem formulation is the process of deciding what actions and states to consider, and it follows goal formulation.

Example: Route-finding problem
On holiday in Romania; currently in Arad. The flight leaves tomorrow from Bucharest.
- Formulate goal: be in Bucharest.
- Formulate problem: states: various cities; actions: drive between cities.
- Find solution: a sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest.

- Search is the process of finding the different possible sequences of actions that lead to a state of known value, and choosing the best one from the states.
- Solution: a search algorithm takes a problem as input and returns a solution in the form of an action sequence.
- Execution phase: if the solution exists, the action it recommends can be carried out.

1.3.4. SEARCH
An agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best sequence. The process of looking for a sequence of actions from the current state to reach the goal state is called search.

The simple "formulate, search, execute" design for the agent is given below. Once the solution has been executed, the agent will formulate a new goal.

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
    inputs: percept, a percept
    static: seq, an action sequence, initially empty
            state, some description of the current world state
            goal, a goal, initially null
            problem, a problem formulation
    state <- UPDATE-STATE(state, percept)
    if seq is empty then do
        goal <- FORMULATE-GOAL(state)
        problem <- FORMULATE-PROBLEM(state, goal)
        seq <- SEARCH(problem)
    action <- FIRST(seq); seq <- REST(seq)
    return action

The search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the execution phase consists of carrying out the recommended actions.

The agent design assumes the environment is:
- Static: the entire process is carried out without paying attention to changes that might be occurring in the environment.
- Observable: the initial state is known, and the agent's sensors detect all aspects that are relevant to the choice of action.
- Discrete: with respect to the state of the environment and the percepts and actions, so that alternate courses of action can be taken.
- Deterministic: the next state of the environment is completely determined by the current state and the actions executed by the agent. Solutions to the problem are a single sequence of actions.

An agent carries out its plan with eyes closed. This is called an open-loop system, because ignoring the percepts breaks the loop between the agent and the environment.

Well-Defined Problems and Solutions
A problem can be formally defined by four components:
- The initial state that the agent starts in. The initial state for our agent in the example problem is described by In(Arad).
- A successor function returns the possible actions available to the agent. Given a state x, SUCCESSOR-FN(x) returns a set of (action, successor) ordered pairs, where each action is one of the legal actions in state x, and each successor is a state that can be reached from x by applying the action. For example, from In(Arad) the successor function would include pairs such as {Go(Zerind), In(Zerind)}.
- State space: the set of all states reachable from the initial state. The state space forms a graph in which the nodes are states and the arcs between nodes are actions. A path in the state space is a sequence of states connected by a sequence of actions.
- The goal test determines whether a given state is a goal state.
- A path cost function assigns a numeric cost to each path. For the Romania problem, the cost of a path might be its length in kilometres. The step cost of taking action a to go from state x to state y is denoted by c(x, a, y). It is assumed that the step costs are non-negative.

A solution to the problem is a path from the initial state to a goal state. An optimal solution has the lowest path cost among all solutions.

[Fig. 1.11. A simplified road map of part of Romania]

1.4. EXAMPLE PROBLEMS
The problem-solving approach has been applied to a vast array of task environments. Some best-known problems are summarized below. They are distinguished as toy or real-world problems. A toy problem is intended to illustrate various problem-solving methods. It can be easily used by different researchers to compare the performance of algorithms. A real-world problem is one whose solutions people actually care about.

1.4.1. TOY PROBLEMS
1. Vacuum World Example
- States: the agent is in one of two locations, each of which might or might not contain dirt. Thus, there are 2 x 2^2 = 8 possible world states.
- Initial state: any state can be designated as the initial state.
- Successor function: this generates the legal states that result from trying the three actions (Left, Right, Suck). The complete state space is shown in Figure 1.12.
- Goal test: this tests whether all the squares are clean.
- Path cost: each step costs one, so the path cost is the number of steps in the path.

[Fig. 1.12. The state space for the vacuum world. Links denote actions: L = Left, R = Right, S = Suck]

2. 8-Puzzle Example
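The REFLEX-VACUUM-AGENT pseudocode of Section 1.3.1 translates directly into Python. The small `run` simulator below is an illustrative addition (not from the book) that drives the agent through the two-square world and records its actions:

```python
def reflex_vacuum_agent(percept):
    """Condition-action rules for the two-square vacuum world (Fig. 1.4)."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

def run(world, location, steps=4):
    """Simulate the agent: world maps square -> 'Dirty'/'Clean'.

    Returns the sequence of actions taken. A minimal sketch for
    illustration; the real environment would also score performance.
    """
    trace = []
    for _ in range(steps):
        action = reflex_vacuum_agent((location, world[location]))
        trace.append(action)
        if action == "Suck":
            world[location] = "Clean"
        elif action == "Right":
            location = "B"
        else:
            location = "A"
    return trace

print(run({"A": "Dirty", "B": "Dirty"}, "A"))  # -> ['Suck', 'Right', 'Suck', 'Left']
```

Note that the agent keeps oscillating between the squares even after both are clean, which is exactly the "no percept history, not adaptive" limitation of simple reflex agents discussed in Section 1.3.2.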
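The route-finding formulation (states = cities, actions = drive between cities) can be solved with breadth-first search, the first uninformed strategy covered in Section 1.5. The sketch below uses a partial adjacency list of the Fig. 1.11 map; it records connectivity only, not road distances, so BFS returns the path with the fewest driving actions rather than the fewest kilometres:

```python
from collections import deque

# Partial road map of Romania (Fig. 1.11): connectivity only, no distances.
roads = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Timisoara": ["Arad", "Lugoj"],
    "Lugoj": ["Timisoara", "Mehadia"],
    "Mehadia": ["Lugoj"],
    "Sibiu": ["Arad", "Fagaras", "Oradea", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Bucharest": ["Fagaras", "Pitesti"],
}

def bfs(start, goal):
    """Breadth-first search: returns the path with the fewest actions."""
    frontier = deque([[start]])   # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:      # goal test on expansion
            return path
        for city in roads[path[-1]]:
            if city not in visited:
                visited.add(city)
                frontier.append(path + [city])
    return None                   # no path exists

print(bfs("Arad", "Bucharest"))   # -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

With step costs equal to road distances, uniform-cost or A* search (Section 1.5) would be needed to find the cheapest route instead of the shortest action sequence.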
It can choose to i, move right, suck up the dirt, or do nothing. One very simple agent )p the following: ifthe current square is dirty, then suck, otherwise move to are. A partial tabulation of this agent function is shown in Figare 1.4 [Percept Sequence ‘Aetion TA, Clean} Right [A, Dirty] _ ‘Suck [B, Clean] Left {B, Diy) Suck {A, Clean}, [A, Clean] Right [A, Clean}, [A, Dirty] Suck Partial tabulation of agent function for the vacuum cleaner world * * ° * * Problem Solving function Reflex-VACUUM-AGENT (locations, status)) returns an action else if location ~ B then return Left Example Agent Program if'starus » Dirty then return Suck else if location = A then retum Right 1.3.2, TYPES OF AGENTS Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All these agents can improve their performance and generate better action overtime. These are given below: Simple Reflex Agent Model-based reflex agent Goal-based agents Usility-based agent Learning agent ‘Simple Reflex Agent ° The Simple reflex agents are the simplest agents as shown in figure 1.5. ‘These agents take decisions on the basis of the current precepts and ignore the rest of the percept history. ‘These agents only succeed in the filly observable environment. ‘The Simple reflex agent does not consider any part of percept history ‘during their decision and action process. ‘The Simple reflex agent works on the Condition-action rule, which means it maps the current state to action. Such as a Room Cleaner agent, works ‘only if there is dirt in the room. Problems with the simple reflex agent design approach: ‘They have very limited intelligence ‘They do not have knowledge of non-perceptual pars of the curent state Mostly too big to generate and store. ‘Not adaptive to changes in the environment. | Problem Solsing | hoo | © Internal State: I is a representation of the cumeat state fo ‘Sengors | Poon percept history. 
i 4 | © These agents have the model, "which is knowledge of the ‘aid? | irs ar | based on the model they perform actions. || © Updating the agent state requires information about: | | oe Vo eseent How the agents action affects the word, | hea ‘The model-based reflex agent which keeps track ofthe euvent state ofthe wild Zi | Actustrs —| using an internal model. It then chooses an action in the same way 3s Fig. 15. Simple reflec agents agent, Hane ese see Ecce fe function REFLEX-AGENT-WITH-STATE(percep) returns an actin 4 The Model-based agent as shown in figure 1.6 can work in a partially state: rules, a set of condition-action rules | observable environment, and track the situation. state, a description of the current world siate i action, the most recent action. | i statee- UPDATE-STATE(state, action, percept) | rulee~ RULE-MATCH( state, rule) | action RULE-ACTION[rule} \ return action Fig. 1.7. Model-based.reex agent i ‘Concon aon nes} —»| Goal-based agents ul | 4 The knowledge of the current state environinent is not always st al decide an agent what to do. © Theagent needs to know its goal which describes desirable situa Goal-based agents as shown in Figure 1.8 expand the capebilfi ‘model-based agent by having the "goal" information. wwewuonnus 4 i i bs, ot i [ Fig. 1.6, Model-based agent | agent proactive | | They choose action so that they can achieve the goal, / 4 Apel apt to npr os 2 pesmat ny mnt cate eiay elec pal 9 Model: itis knowledge about "how things happenin the world," so it ‘before deciding whether the goal is achieved or not. Such consi a sof is called a Model-based agent. different scenarios are called searching and planning, which iakes jan | | Arifcial ineligence and Machine Learning Problem Solving aaEcE {3} [ ‘+ These agents are similar to the poal-based agent but provide 21 extra | Sensors =} Prva component of utility measurement which makes them differem by { Providing a measure of success at a given state. 
| ees ‘at ie wor Utility-based agent act based not only on goals but also on the best way fo a Sheree achieve the goal. Fi HA ester octone to Ly Watt ib | fi The Utility-based agent is useful when there are multiple. possible bt —=]tke io action | . # ‘alternatives, and an agent bas to choose in order to perform the best action. i 3 "The uiilty function maps each state to a real number to check how i (sr) naa 3 efficiently each aetion achieves the goals as shown in figure 1.9. Steece now Learning Agent i | ae A leaming agent in Al is the type of agent that can leam from its past | eure experiences, ort has learning eapebiites. Its showm inthe figure 1.10. | pont : It starts to act with basic knowledge and then is able to act and adapt | Fig. 1.8. Goal-based agents ‘astomatically through leaming. Urea agont | I {i f Precepts Hy Sensors LH. [| [Bow the wore evatos What the word eee is tae rom ; What my actions do eee pee } fesse TL ae i ee i (A weitico aston g : T : li i aa Tow fap a Ht urity }-——=]veinaucrastet | - EU HI I ae What ion Vd Bote core eee | Fig. 1.10 learning agent | A Teaming agent has mainly four conceptual components, which are: 1. Learning element: It is responsible for making improvements by Jeaming from the environment Artificial Intelligence and Machine Learning Problem Solving \ as ° 133. The learning clement takes feedback from the critic which describes that how well the agent is doing with respect to a fixed ‘performance standard. Performance element: Itis responsible for selecting extemal ation 4. Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences. Hence, leaming agents are able to lear, analyze performance, and look for * new ways to improve the performance. PROBLEM-SOLVING AGENTS Search is one of the operational tasks that characterize Al programs best. Almost every Al program depends on a search procedure to perform its prescribed functions. 
Problems are typically defined in terms of state, and solution corresponds to goal slates. Problem-solving using the search technique performs two sequences of steps: 13.3, in types: The 1. Define the problem ~The given problem is identified with its required intial and goal state 2. Analyze the problem - The best search technique for the given: the problem is chosen from different AI search techniques which derive one or ‘more goal states in & minimum number of states. 4. Types of Problem ‘general, the problem can be classified under any one of the following four ‘which depend on two important properties. They are + Amount of knowledge, ofthe agent on the state and action description. How the agent is connected to its environment through its precepts and actions? ¢ four different types of problems ar: (D. Single-state problem (i Multiple state problem (if) Contingency probiem (i») Exploration problem 1.3.3.2. Problem-solving Agents Hee The problem-solving agent is one kind of gosl-based agent, whére ile agent decides what to do by finding the sequence of actions that lead to desirable Hates, Each action changes the state and the aim isto find the sequerice of ations states that lead from the inital (start) state to a final (goal) state. A wellidefiged problem can be described by: % Initial state ‘Operator or successor function - for any state x returas s(%), states reachable from x with one action State space - all states reachable ftom the initial by any se actions Path - sequence through state space i Path cost — the function that assigns a cost to a path. The cost the sum of the costs of individual actions along the path Goal test-test to determine if et the gol state ‘The complexity that arises here isthe knowledge about the formulatic (Gom current state to outcome action) of the agent. 
If the agent und ZerindsZerindl], ...} | goal test, can be Artificial Intelligence and Machine Learning explicit, eg, x = at Bucharest” implicit, eg, No Ding) path cost (additive) ‘eg, sum of distances, number of actions executed, etc, #(;.a;y) is the step cost, assumed 10 be =90 lution is a sequence of sctions leading from the initial state toa goal state. |) Problem formulation - is the process of deciding what actions and states ||| toconsider and follows goal formulation. mle: Route finding problem (On holiday in Romania : curently in Arad Flight leaves tomorrow from Bucharest Formulate goal: be in Bucharest Formulate Problem: States: various cities actions: drive between cities Find Solution: 7 Sequence of cities, eg, Arad, Sibiu, Fagunss, Bucharest | I I | | | | i I {il Search - is the process of finding the different possible sequences of actions that lead to a state of known value, and choosing the best one from. the states, ) Solution - a search algorithm takes a problem as input and retums a solution in the form of an action sequence. Execution phase - if the solution exists, the action it recommends can be carried out. ARCH Agent with several immediate options of unknown value can decide what to do ining different possible sequences of actions that lead tothe states of known \d then choosing the best sequence. The process of looking for a sequence of from the current state to reach the goal state is called search. le “formulate, search, exeeute” design for the agent is given below. Once jon has been executed, the agent will formulate a new goal. three mS Problem Solving function SIMPLE-PROBLEM.SOLVING-AGENT (percep#) returns an action inputs: percepr, a percept stati:seq, an action sequence, initially empty state, some description of the current world state goal, a goal, initially mall Problem, @ problem formulation state UPDATE-STATE(stare, percept) if seq is empty then do ‘goal-FORMULATE-GOAL (state) probleme-FORMULATE-PROBLEM(state, goa!) 
    seq ← SEARCH(problem)
  action ← FIRST(seq); seq ← REST(seq)
  return action

The search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the execution phase consists of carrying out the recommended actions.

The agent design assumes that the environment is:
• Static: the entire process is carried out without paying attention to changes that might be occurring in the environment.
• Observable: the initial state is known, and the agent's sensors detect all aspects that are relevant to the choice of action.
• Discrete: with respect to the state of the environment and to percepts and actions, so that alternative courses of action can be taken.
• Deterministic: the next state of the environment is completely determined by the current state and the actions executed by the agent. Solutions to the problem are then a single sequence of actions.

An agent carries out its plan with eyes closed. This is called an open-loop system, because ignoring the percepts breaks the loop between the agent and the environment.

Well-defined problems and solutions

A problem can be formally defined by four components:
• The initial state that the agent starts in. The initial state for our example problem is described by In(Arad).
• A successor function, which returns the possible actions available to the agent. Given a state x, SUCCESSOR-FN(x) returns a set of (action, successor) ordered pairs, where each action is one of the legal actions in state x and each successor is a state that can be reached from x by applying the action. For example, from the state In(Arad), the successor function for the Romania problem would return {(Go(Sibiu), In(Sibiu)), (Go(Timisoara), In(Timisoara)), (Go(Zerind), In(Zerind))}.
  - State space: the set of all states reachable from the initial state. The state space forms a graph in which the nodes are states and the arcs between nodes are actions. A path in the state space is a sequence of states connected by a sequence of actions.
• The goal test, which determines whether a given state is a goal state.
• A path cost function, which assigns a numeric cost to each path. For the Romania problem, the cost of a path might be its length in kilometres. The step cost of taking action a to go from state x to state y is denoted by c(x, a, y). The step costs for Romania are shown in figure 1.11. It is assumed that step costs are non-negative.

A solution to the problem is a path from the initial state to a goal state. An optimal solution has the lowest path cost among all solutions.

1.4. EXAMPLE PROBLEMS

The problem-solving approach has been applied to a vast array of task environments. Some of the best-known problems are summarized below. They are distinguished as toy or real-world problems. A toy problem is intended to illustrate various problem-solving methods and can easily be used by different researchers to compare the performance of algorithms. A real-world problem is one whose solutions people actually care about.

1.4.1. TOY PROBLEMS

1. Vacuum world example
• States: the agent is in one of two locations, each of which might or might not contain dirt. Thus there are 2 × 2² = 8 possible world states.
• Initial state: any state can be designated as the initial state.
• Successor function: this generates the legal states that result from the three actions (Left, Right, Suck). The complete state space is shown in figure 1.12.
• Goal test: this checks whether all the squares are clean.
• Path cost: each step costs one, so the path cost is the number of steps in the path.

Fig. 1.11. A simplified road map of part of Romania
Fig. 1.12. The state space for the vacuum world. Arcs denote actions: L = Left, R = Right, S = Suck
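The vacuum-world formulation above is small enough to write out in full. In this sketch (the tuple representation is an assumption, not the book's notation), a state is (agent_location, dirt_at_A, dirt_at_B), which gives exactly the 2 × 2² = 8 world states noted in the text.

```python
from itertools import product

# A state is (agent_location, dirt_at_A, dirt_at_B): 2 x 2 x 2 = 8 states.
STATES = list(product(("A", "B"), (True, False), (True, False)))

def successor(state, action):
    """Apply one of the three actions: 'Left', 'Right', or 'Suck'."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":                 # clean only the current square
        return (loc,
                False if loc == "A" else dirt_a,
                False if loc == "B" else dirt_b)
    raise ValueError(f"unknown action: {action}")

def goal_test(state):
    """Goal: all squares are clean, regardless of the agent's location."""
    return not state[1] and not state[2]
```

Each step costs one, so the path cost of a plan is simply the number of actions applied.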
2. The 8-puzzle example

Fig. 1.13. A typical instance of the 8-puzzle (start state and goal state)

A typical instance of the 8-puzzle is shown in figure 1.13; the problem formulation is as follows:

• States: a state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
• Initial state: any state can be designated as the initial state. It can be noted that any given goal can be reached from exactly half of the possible initial states.
• Successor function: this generates the legal states that result from trying the four actions (blank moves Left, Right, Up, or Down).
• Goal test: this checks whether the state matches the goal configuration.
• Path cost: each step costs 1, so the path cost is the number of steps in the path.

The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test problems for new search algorithms in AI. This general class is known to be NP-complete. The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved. The 15-puzzle (on a 4 × 4 board) has around 1.3 trillion states, and random instances can be solved optimally in a few milliseconds by the best search algorithms. The 24-puzzle (on a 5 × 5 board) has around 10²⁵ states, and random instances are still quite difficult to solve optimally with current machines and algorithms.

3. The 8-queens problem

The goal of the 8-queens problem is to place 8 queens on the chessboard such that no queen attacks any other. (A queen attacks any piece in the same row, column, or diagonal.) Figure 1.14 shows an attempted solution that fails: the queen in the rightmost column is attacked by the queen at the top left.

An incremental formulation involves operators that augment the state description, starting with an empty state; for the 8-queens problem, this means each action adds a queen to the state.
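The state counts quoted for the 8-queens formulations can be checked by direct enumeration. The sketch below assumes the improved column-by-column representation discussed in this section (a state is a tuple of queen rows, one per leftmost column, with no queen attacking another) and counts every such partial arrangement, including the empty board.

```python
def safe(placed, row):
    """True if a queen in the next column at `row` attacks no placed queen."""
    col = len(placed)
    return all(r != row and abs(r - row) != abs(c - col)
               for c, r in enumerate(placed))

def count_states(placed=()):
    """Count all non-attacking arrangements of 0..8 queens, one per column."""
    total = 1                       # the current arrangement is itself a state
    if len(placed) < 8:
        for row in range(8):
            if safe(placed, row):
                total += count_states(placed + (row,))
    return total
```

Running `count_states()` gives 2,057, matching the figure quoted in this section for the improved formulation (of which 92 are complete 8-queen solutions).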
A complete-state formulation starts with all 8 queens on the board and moves them around. In either case, the path cost is of no interest because only the final state counts.

The first incremental formulation one might try is the following:
• States: any arrangement of 0 to 8 queens on the board is a state.
• Initial state: no queens on the board.
• Successor function: add a queen to any empty square.
• Goal test: 8 queens are on the board, none attacked.

Fig. 1.14. The 8-queens problem

In this formulation, we have 64 · 63 · ... · 57 ≈ 3 × 10¹⁴ possible sequences to investigate.

A better formulation would prohibit placing a queen in any square that is already attacked:
• States: arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost n columns, with no queen attacking another.
• Successor function: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen.

This formulation reduces the 8-queens state space from 3 × 10¹⁴ to just 2,057, and solutions are easy to find. For 100 queens, the initial formulation has roughly 10⁴⁰⁰ states, whereas the improved formulation has about 10⁵² states. This is a huge reduction, but the improved state space is still too big for the algorithms to handle.

1.4.2. REAL-WORLD PROBLEMS

A real-world problem is one whose solutions people actually care about. Such problems tend not to have a single agreed-upon description, but an attempt is made to give a general flavor of their formulation. The following are some real-world problems:
• Route-finding problem
• Touring problems
• Travelling salesman problem
• VLSI layout
• Robot navigation
• Automatic assembly sequencing
• Internet searching

1. ROUTE-FINDING PROBLEM
• The route-finding problem is defined in terms of specified locations and transitions along links between them.
• Route-finding algorithms are used in a variety of applications: routing in computer networks, military operations planning, and airline travel planning systems.

2. AIRLINE TRAVEL PROBLEM

The airline travel problem is specified as follows:
• States: each is represented by a location (e.g., an airport) and the current time.
• Initial state: this is specified by the problem.
• Successor function: this returns the states resulting from taking any scheduled flight (further specified by seat class and location), leaving later than the current time plus the within-airport transit time, from the current airport to another.
• Goal test: are we at the destination by some prespecified time?
• Path cost: this depends upon the monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.

3. TOURING PROBLEMS

Touring problems are closely related to route-finding problems, but with an important difference. Consider, for example, the problem "Visit every city shown on the Romania map at least once, starting and ending in Bucharest." As with route-finding, the actions correspond to trips between adjacent cities. The state space, however, is quite different.
• The initial state would be "In Bucharest; visited {Bucharest}".
• An intermediate state would be "In Vaslui; visited {Bucharest, Urziceni, Vaslui}".
• The goal test would check whether the agent is in Bucharest and whether all 20 cities have been visited.

4. TRAVELLING SALESPERSON PROBLEM (TSP)

The TSP is a touring problem in which each city must be visited exactly once. The aim is to find the shortest tour. The problem is known to be NP-hard. Enormous efforts have been expended to improve the capabilities of TSP algorithms. These algorithms are also used in tasks such as planning the movements of automatic circuit-board drills and of stocking machines on shop floors.

5. VLSI LAYOUT

A VLSI layout problem requires positioning millions of components and connections on a chip to minimize area, minimize circuit delays, minimize stray capacitances, and maximize manufacturing yield.
The layout problem is split into two parts: cell layout and channel routing.

6. ROBOT NAVIGATION

Robot navigation is a generalization of the route-finding problem. Rather than following a discrete set of routes, a robot can move in a continuous space, so the search space becomes multi-dimensional. Advanced techniques are required to make the search space finite.

7. AUTOMATIC ASSEMBLY SEQUENCING

One example includes the assembly of intricate objects such as electric motors. The aim of assembly problems is to find an order in which to assemble the parts of some object. If the wrong order is chosen, there will be no way to add some part later without undoing some of the work already done. Another important assembly problem is protein design, in which the goal is to find a sequence of amino acids that will fold into a three-dimensional protein with the right properties to cure some disease.

8. INTERNET SEARCHING

In recent years there has been increased demand for software robots that perform Internet searching, looking for answers to questions, for related information, or for shopping deals. The searching techniques consider the Internet as a graph of nodes (pages) connected by links.

MEASURING PROBLEM-SOLVING PERFORMANCE

• The output of a problem-solving algorithm is either a failure or a solution. (Some algorithms might get stuck in an infinite loop and never return an output.)
• An algorithm's performance can be measured in four ways:
  - Completeness: is the algorithm guaranteed to find a solution when there is one?
  - Optimality: does the strategy find the optimal solution?
  - Time complexity: how long does it take to find a solution?
  - Space complexity: how much memory is needed to perform the search?

1.5. SEARCH ALGORITHMS

Search Tree

Having formulated some problems, we now need to solve them. This is done by a search through the state space. A search tree is generated by the initial state and the successor function that together define the state space.
In general, we may have a search graph rather than a search tree, when the same state can be reached from multiple paths. Figure 1.15 shows the partial search trees for finding a route from Arad to Bucharest. Nodes that have been expanded are shaded; nodes that have been generated but not yet expanded are outlined in bold; nodes that have not yet been generated are shown in faint dashed lines.

Fig. 1.15. Partial search trees for finding a route from Arad to Bucharest

The root of the search tree is a search node corresponding to the initial state, In(Arad). The first step is to test whether this is a goal state. The current state is then expanded by applying the successor function to it, thereby generating a new set of states. In this case, we get three new states: In(Sibiu), In(Timisoara), and In(Zerind). Now we must choose which of these three possibilities to consider further. This is the essence of search: following up on one option now and putting the others aside for later, in case the first choice does not lead to a solution.

Search Strategy

The general graph-search algorithm is described informally in figure 1.16.

function GRAPH-SEARCH(problem, fringe) returns a solution, or failure
  closed ← an empty set
  fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node ← REMOVE-FIRST(fringe)
    if GOAL-TEST[problem](STATE[node]) then return SOLUTION(node)
    if STATE[node] is not in closed then
      add STATE[node] to closed
      fringe ← INSERT-ALL(EXPAND(node, problem), fringe)

Fig. 1.16. The general graph-search algorithm

The choice of which state to expand is determined by the search strategy. There is an infinite number of paths in this state space, so the search tree has an infinite number of nodes.
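The GRAPH-SEARCH skeleton above can be rendered in a few lines of Python. This sketch makes two assumptions beyond the pseudocode: the fringe is a FIFO queue (i.e., nodes are expanded in breadth-first order), and `problem` exposes `initial_state`, `goal_test(state)`, and `successors(state)` returning (action, next_state) pairs.

```python
from collections import deque

def graph_search(problem):
    """GRAPH-SEARCH with a FIFO fringe; returns an action list or None."""
    closed = set()
    fringe = deque([(problem.initial_state, [])])  # (state, actions so far)
    while fringe:                                  # EMPTY?(fringe) check
        state, path = fringe.popleft()             # REMOVE-FIRST
        if problem.goal_test(state):
            return path                            # SOLUTION(node)
        if state not in closed:
            closed.add(state)
            for action, nxt in problem.successors(state):
                fringe.append((nxt, path + [action]))
    return None                                    # failure
```

The closed set is what distinguishes graph search from tree search: a state is expanded at most once, even when it is reachable along many paths.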
A node is a data structure with five components:
• STATE: the state in the state space to which the node corresponds;
• PARENT-NODE: the node in the search tree that generated this node;
• ACTION: the action that was applied to the parent to generate the node;
• PATH-COST: the cost, denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers; and
• DEPTH: the number of steps along the path from the initial state.

It is important to remember the distinction between nodes and states. A node is a bookkeeping data structure used to represent the search tree. A state corresponds to a configuration of the world.

Fig. 1.17. Nodes are data structures from which the search tree is constructed. Each has a parent and an action. Arrows point from child to parent

Fringe

A fringe is a collection of nodes that have been generated but not yet expanded. Each element of the fringe is a leaf node, that is, a node with no successors in the tree. The fringe of each tree consists of those nodes with bold outlines. The collection of these nodes is implemented as a queue. The tree-search algorithm is shown in figure 1.18.

function TREE-SEARCH(problem, fringe) returns a solution, or failure
  fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if fringe is empty then return failure
    node ← REMOVE-FRONT(fringe)
    if GOAL-TEST[problem] applied to STATE[node] succeeds then return node
    fringe ← INSERT-ALL(EXPAND(node, problem), fringe)

function EXPAND(node, problem) returns a set of nodes
  successors ← the empty set
  for each (action, result) in SUCCESSOR-FN[problem](STATE[node]) do
    s ← a new NODE
    STATE[s] ← result; PARENT-NODE[s] ← node; ACTION[s] ← action
    PATH-COST[s] ← PATH-COST[node] + STEP-COST(node, action, s)
    DEPTH[s] ← DEPTH[node] + 1
    add s to successors
  return successors

1.5.1. UNINFORMED SEARCH STRATEGIES

Uninformed search strategies have no additional information about states beyond that provided in the problem definition. Strategies that know whether one non-goal state is "more promising" than another are called informed (heuristic) search strategies.

1. Breadth-first search

Table 1.2 shows the time and memory requirements for breadth-first search, assuming branching factor b = 10, 10,000 nodes generated per second, and 1000 bytes of storage per node.

Depth | Nodes   | Time        | Memory
2     | 1,100   | .11 seconds | 1 megabyte
4     | 111,100 | 11 seconds  | 106 megabytes
6     | 10⁷     | 19 minutes  | 10 gigabytes
8     | 10⁹     | 31 hours    | 1 terabyte
10    | 10¹¹    | 129 days    | 101 terabytes
12    | 10¹³    | 35 years    | 10 petabytes
14    | 10¹⁵    | 3,523 years | 1 exabyte

Table 1.2. Time and memory requirements for breadth-first search

Time complexity for BFS
• Assume every state has b successors. The root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b² at the second level.
• Each of these generates b more nodes, yielding b³ nodes at the third level, and so on.
• Now suppose that the solution is at depth d. In the worst case, we would expand all but the last node at level d, generating b^(d+1) − b nodes at level d + 1.
• Then the total number of nodes generated is b + b² + b³ + ... + b^d + (b^(d+1) − b) = O(b^(d+1)). (For example, with b = 10 and d = 2: 10 + 100 + (1000 − 10) = 1,100 nodes, matching Table 1.2.)

Space complexity
• Every node that is generated must remain in memory, because it is either part of the fringe or an ancestor of a fringe node. The space complexity is therefore the same as the time complexity.

2. Uniform-cost search
• Instead of expanding the shallowest node, uniform-cost search expands the node n with the lowest path cost.
• Uniform-cost search does not care about the number of steps a path has, but only about their total cost.

Properties of uniform-cost search

Completeness:
• Uniform-cost search is complete: if there is a solution, UCS will find it (provided every step cost is at least some small positive constant ε).

Time complexity:
• Let C* be the cost of the optimal solution, and let ε be the cost of each step toward the goal. Then the number of steps along the optimal path is ≈ C*/ε + 1, as we start from state 0 and end at C*/ε.
• Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
Space complexity:
• By the same logic, the worst-case space complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).

Optimality:
• Uniform-cost search is always optimal, as it only expands the path with the lowest path cost.
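Uniform-cost search is usually implemented with a priority queue ordered by g(n). The sketch below assumes a graph given as `{state: {neighbour: step_cost}}`; the counter is only a tie-breaker so the heap never has to compare states of equal cost.

```python
import heapq
from itertools import count

def uniform_cost_search(graph, start, goal):
    """Expand the frontier node with the lowest path cost g(n)."""
    tie = count()                              # tie-breaker for equal costs
    frontier = [(0, next(tie), start, [start])]
    explored = set()
    while frontier:
        cost, _, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path                  # lowest-cost path pops first
        if state in explored:
            continue
        explored.add(state)
        for nbr, step in graph.get(state, {}).items():
            if nbr not in explored:
                heapq.heappush(frontier,
                               (cost + step, next(tie), nbr, path + [nbr]))
    return None                                # failure
```

Because the goal test is applied when a node is popped (not when it is generated), the first goal returned is guaranteed to be optimal. On the standard Romania map fragment, this finds Arad to Bucharest via Rimnicu Vilcea and Pitesti at cost 418, beating the 450 km route through Fagaras.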
