AI Unit 1 Part (III) of (III)
CHAPTER 3: PROBLEM SOLVING & STATE SPACE SEARCH

Problem Solving Techniques in AI

The process of problem solving is frequently used to achieve objectives or resolve particular situations. In computer science, the term "problem solving" refers to artificial intelligence methods, which may include formulating the problem appropriately, using algorithms, and conducting root-cause analyses that identify reasonable solutions. AI problem solving often involves investigating potential solutions through reasoning techniques, making use of mathematical models, and applying modeling frameworks. The same issue may have several solutions, each obtained with a different algorithm, and certain issues have their own particular remedies; everything depends on how the situation is framed.

Cases Involving Artificial Intelligence Issues

Artificial intelligence is used by programmers all around the world to automate systems for effective resource and time management. Games and puzzles pose some of the most frequent problem-solving tasks in daily life, and AI algorithms can tackle them effectively. Various problem-solving methods are used to create solutions for a variety of complex puzzles, including mathematical challenges such as crypt-arithmetic and magic squares, logical puzzles such as Boolean formulae and N-Queens, and well-known games like Sudoku and Chess. The following are some of the most common problems that artificial intelligence has addressed:

• Chess
• N-Queen problem
• Tower of Hanoi problem
• Travelling Salesman Problem
• Water-Jug problem
• Sudoku
• Crypt-arithmetic problems
• Magic squares
• Logical puzzles, and so on.

Intelligent Agents in Problem Solving

Based on their degree of perceived intelligence and capability, five main types of artificial intelligence agents are deployed today:

• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agents

These agents make the mapping of states to actions easier. Such agents frequently make mistakes when moving on to the subsequent phase of a complicated issue, so standardized problem-solving criteria are applied in such cases. These agents can tackle problems using methods such as tree search and heuristic algorithms.

Approaches for Resolving Problems

The effective approaches of artificial intelligence make it useful for resolving complicated issues; the fundamental problem-solving methods used throughout AI are listed below. The process of problem solving using searching consists of the following steps:

1. Define the problem
2. Analyze the problem
3. Identify possible solutions
4. Choose the optimal solution
5. Implementation

Heuristics

The heuristic approach relies on experimentation and trial procedures to understand a problem and create a solution. Heuristics do not always offer the ideal answer to a particular issue; they do, however, provide effective means of achieving short-term objectives.
Consequently, developers turn to heuristics when conventional techniques are unable to solve a problem effectively. Heuristics are employed in conjunction with optimization algorithms to increase efficiency, because on their own they offer only momentary alternatives while compromising precision.

Searching Algorithms

Searching is one of the fundamental ways in which AI solves problems. Searching algorithms are used by rational or problem-solving agents to select the most appropriate answers; such agents typically work with simple state representations, and finding a solution is frequently their main objective. Depending on the quality of the solutions they produce, searching algorithms are characterized by completeness, optimality, time complexity, and space complexity.

Evolutionary Computing

This approach to problem solving makes use of the well-established idea of evolution. The notion of "survival of the fittest" underlies evolutionary theory: when a creature successfully reproduces in a tough or changing environment, its coping mechanisms are eventually passed down to later generations, leading to a variety of new species. By combining several traits suited to that severe environment, the mutated offspring are not just clones of the old ones. The most notable example of how development changes and expands is humanity itself, which has evolved as a consequence of the accumulation of advantageous mutations over countless generations.

Genetic Algorithms

Genetic algorithms are based on evolutionary theory. These programs employ a technique called directed random search. To combine the two fittest candidates and produce a desirable offspring, the developers calculate a fitness factor: the overall fitness of each individual is determined by first gathering the population and then assessing each individual according to how well it matches the intended need. A variety of selection methods are then used to retain the best participants:

1. Rank Selection
2. Tournament Selection
3. Steady-State Selection
4. Roulette Wheel Selection (Fitness Proportionate Selection)
5. Elitism

Search Algorithm Terminologies

• Search: searching is a step-by-step procedure to solve a search problem in a given search space. A search problem has three main factors:
  a. Search Space: the set of possible solutions that a system may have.
  b. Start State: the state from which the agent begins the search.
  c. Goal Test: a function that observes the current state and returns whether the goal state has been achieved.
• Search Tree: a tree representation of the search problem. The root of the search tree is the root node, which corresponds to the initial state.
• Actions: a description of all the actions available to the agent.
• Transition Model: a description of what each action does.
• Path Cost: a function that assigns a numeric cost to each path.
• Solution: an action sequence that leads from the start node to the goal node.
• Optimal Solution: a solution that has the lowest cost among all solutions.
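To make these terms concrete, the following minimal Python sketch shows one possible way to represent a search problem as data. The states, actions, and costs used here are toy values invented for illustration; they are not part of the original text.

# A minimal sketch of a search problem, using the terminology above (toy data).
class SearchProblem:
    def __init__(self, start, goal, graph, costs):
        self.start = start      # start state
        self.goal = goal        # used by the goal test
        self.graph = graph      # transition model: state -> reachable states
        self.costs = costs      # step cost of each (state, next_state) move

    def actions(self, state):
        return self.graph.get(state, [])   # actions available from a state

    def result(self, state, action):
        return action                      # here an "action" is simply the next state

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, next_state):
        return self.costs.get((state, next_state), 1)

# Example search space (assumed toy values).
problem = SearchProblem(
    start="S", goal="G",
    graph={"S": ["A", "B"], "A": ["G"], "B": ["G"]},
    costs={("S", "A"): 1, ("S", "B"): 4, ("A", "G"): 5, ("B", "G"): 1},
)
print(problem.actions("S"), problem.goal_test("G"))   # ['A', 'B'] True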
Properties of Search Algorithms

The following four properties are used to compare the efficiency of search algorithms:

• Completeness: a search algorithm is complete if it is guaranteed to return a solution whenever at least one solution exists for an arbitrary input.
• Optimality: if the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all solutions, it is called an optimal solution.
• Time Complexity: a measure of the time an algorithm takes to complete its task.
• Space Complexity: the maximum storage space required at any point during the search, relative to the complexity of the problem.

Types of Search Algorithms

Based on the search problem, search algorithms can be classified into uninformed (blind) search and informed (heuristic) search:

• Uninformed search: breadth-first search, uniform cost search, depth-first search, depth-limited search, iterative deepening depth-first search, bidirectional search.
• Informed search: best-first search, A* search.

Uninformed/Blind Search

The uninformed search does not use any domain knowledge, such as closeness or the location of the goal. It operates in a brute-force way, since it only has information about how to traverse the tree and how to identify leaf and goal nodes. Uninformed search explores the search tree without any information about the search space, such as initial-state operators and a test for the goal, so it is also called blind search. It examines each node of the tree until it reaches the goal node. It can be divided into five main types: breadth-first search, uniform cost search, depth-first search, iterative deepening depth-first search, and bidirectional search.

Informed Search

Informed search algorithms use domain knowledge: problem information is available that can guide the search. Informed search strategies can find a solution more efficiently than an uninformed search strategy. Informed search is also called heuristic search. A heuristic is a method that is not always guaranteed to find the best solution but is guaranteed to find a good solution in reasonable time. Informed search can solve much more complex problems that could not be solved otherwise; a classic example is the travelling salesman problem. The two main informed search algorithms are:

1. Greedy search
2. A* search

Uninformed Search Algorithms

Uninformed search is a class of general-purpose search algorithms that operate in a brute-force way. Uninformed search algorithms have no additional information about the state or search space other than how to traverse the tree, so they are also called blind search. The various types of uninformed search algorithms are:

1. Breadth-first search
2. Depth-first search
3. Depth-limited search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional search

Breadth-first Search

• Breadth-first search is the most common search strategy for traversing a tree or graph. It searches breadthwise in a tree or graph, hence the name breadth-first search.
• The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
• The breadth-first search algorithm is an example of a general-graph search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.

Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, BFS will find the minimal solution, i.e. the one requiring the fewest steps.

Disadvantages:
• It requires a lot of memory, since each level of the tree must be kept in memory in order to expand the next level.
• BFS needs a lot of time if the solution is far away from the root node.

Example: In the tree of the accompanying figure, BFS traverses from root node S toward goal node K. Because BFS searches in layers, it follows the path shown by the dotted arrows, visiting the nodes level by level (S, then A and B, then C and D, and so on) until the goal node K is reached.

Time Complexity: the time complexity of BFS is given by the number of nodes traversed until the shallowest solution node. With d the depth of the shallowest solution and b the branching factor (successors per node):

T(b) = 1 + b + b^2 + ... + b^d = O(b^d)

Space Complexity: the space complexity of BFS is given by the memory size of the frontier, which is O(b^d).

Completeness: BFS is complete; if the shallowest goal node is at some finite depth, BFS will find a solution.

Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
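A minimal Python sketch of breadth-first search using a FIFO frontier of paths is given below. The adjacency list is assumed toy data, since the figure from the example above is not reproduced in the text.

from collections import deque

def breadth_first_search(graph, start, goal):
    # Expand nodes level by level using a FIFO queue of paths (the frontier).
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()          # shallowest path first
        node = path[-1]
        if node == goal:                   # goal test on the shallowest candidate
            return path
        for successor in graph.get(node, []):
            if successor not in visited:
                visited.add(successor)
                frontier.append(path + [successor])
    return None                            # no solution exists

# Toy graph (illustrative): returns the shallowest path from S to the goal K.
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"], "E": ["K"], "D": ["K"]}
print(breadth_first_search(graph, "S", "K"))   # ['S', 'A', 'D', 'K']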
Depth-first Search

• Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
• It is called depth-first search because it starts from the root node and follows each path to its greatest depth before moving to the next path.
• DFS uses a stack data structure for its implementation.
• The process of the DFS algorithm is otherwise similar to the BFS algorithm.

Note: Backtracking is an algorithm technique for finding all possible solutions using recursion.

Advantages:
• DFS requires very little memory, as it only needs to store the stack of nodes on the path from the root node to the current node.
• It takes less time to reach the goal node than the BFS algorithm (if it traverses along the right path).

Disadvantages:
• There is the possibility that many states keep re-occurring, and there is no guarantee of finding a solution.
• The DFS algorithm searches deep down and may sometimes enter an infinite loop.

Example: In the search tree of the accompanying figure, depth-first search follows the order root node → left node → right node. It starts searching from root node S and traverses A, then B, then D and E; after traversing E it backtracks, since E has no other successor and the goal node has not yet been found. After backtracking it traverses node C and then G, where it terminates because the goal node has been found.

Completeness: the DFS algorithm is complete within a finite state space, as it will expand every node within a limited search tree.

Time Complexity: the time complexity of DFS is equivalent to the number of nodes traversed by the algorithm:

T(b) = 1 + b + b^2 + ... + b^m = O(b^m)

where m is the maximum depth of any node; this can be much larger than d, the depth of the shallowest solution.

Space Complexity: DFS needs to store only a single path from the root node, so its space complexity is equivalent to the size of the fringe set, which is O(bm).

Optimality: DFS is non-optimal, as it may generate a large number of steps or a high-cost path to reach the goal node.
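The following sketch shows depth-first search with an explicit stack instead of recursion. The graph is again assumed toy data, intended only to echo the S-to-G flavour of the example above.

def depth_first_search(graph, start, goal):
    # Follow one path to its greatest depth before backtracking (LIFO stack).
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()                 # most recently added path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for successor in graph.get(node, []):
            if successor not in visited:   # the last successor pushed is explored first
                stack.append(path + [successor])
    return None

# Toy graph (illustrative data).
graph = {"S": ["A", "C"], "A": ["B", "D"], "B": [], "D": ["E"], "C": ["G"]}
print(depth_first_search(graph, "S", "G"))   # one deep path; here ['S', 'C', 'G']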
Depth-Limited Search Algorithm

A depth-limited search is similar to depth-first search with a predetermined limit ℓ. Depth-limited search removes the drawback of infinite paths in depth-first search: a node at the depth limit is treated as if it had no further successor nodes. Depth-limited search can terminate with two conditions of failure:

• Standard failure value: indicates that the problem has no solution at all.
• Cutoff failure value: indicates that there is no solution for the problem within the given depth limit.

Advantages: depth-limited search is memory efficient.

Disadvantages:
• Depth-limited search also suffers from incompleteness.
• It may not be optimal if the problem has more than one solution.

Completeness: the DLS algorithm is complete if the solution lies above the depth limit.

Time Complexity: the time complexity of the DLS algorithm is O(b^ℓ), where ℓ is the depth limit.

Space Complexity: the space complexity of the DLS algorithm is O(b × ℓ).

Optimality: depth-limited search can be viewed as a special case of DFS, and it is not optimal even when the depth limit is larger than d.

Uniform-cost Search Algorithm

Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is attached to each edge. The primary goal of uniform-cost search is to find a path to the goal node with the lowest cumulative cost. Uniform-cost search expands nodes according to their path cost from the root node. It can be used for any graph or tree where an optimal cost is required. A uniform-cost search algorithm is implemented with a priority queue, giving maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of every edge is the same.

Advantages:
• Uniform-cost search is optimal, because at every state the path with the least cost is chosen.

Disadvantages:
• It does not care about the number of steps involved in the search, only about the path cost, so the algorithm may get stuck in an infinite loop.

Completeness: uniform-cost search is complete; if a solution exists, UCS will find it.

Time Complexity: let C* be the cost of the optimal solution and ε the cost of each step toward the goal. The number of steps is then C*/ε + 1 (we add 1 because we start from state 0 and end at C*/ε). Hence the worst-case time complexity of uniform-cost search is O(b^(1 + C*/ε)).

Space Complexity: by the same logic, the worst-case space complexity of uniform-cost search is O(b^(1 + C*/ε)).

Optimality: uniform-cost search is always optimal, as it only selects the path with the lowest path cost.
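A hedged sketch of uniform-cost search with a priority queue keyed by cumulative path cost follows; the weighted graph is invented purely for illustration.

import heapq

def uniform_cost_search(graph, start, goal):
    # graph: state -> list of (successor, step_cost); expand the cheapest path first.
    frontier = [(0, start, [start])]          # priority queue keyed by cumulative cost
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path                  # lowest-cost path to the goal
        for successor, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best_cost.get(successor, float("inf")):
                best_cost[successor] = new_cost
                heapq.heappush(frontier, (new_cost, successor, path + [successor]))
    return None

# Toy weighted graph (assumed values).
graph = {"S": [("A", 1), ("G", 12)], "A": [("B", 3), ("C", 1)], "B": [("D", 3)],
         "C": [("D", 1), ("G", 2)], "D": [("G", 3)]}
print(uniform_cost_search(graph, "S", "G"))    # (4, ['S', 'A', 'C', 'G'])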
Iterative Deepening Depth-first Search

The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found: it performs depth-first search up to a certain "depth limit" and keeps increasing the depth limit after each iteration until the goal node is found. This search algorithm combines the benefits of breadth-first search's shallow-first search and depth-first search's memory efficiency. Iterative deepening is a useful uninformed search when the search space is large and the depth of the goal node is unknown.

Advantages:
• It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency.

Disadvantages:
• The main drawback of IDDFS is that it repeats all the work of the previous phase.

Example: the accompanying tree structure illustrates iterative deepening depth-first search. The IDDFS algorithm performs successive iterations, each one level deeper, until it finds the goal node; in the fourth iteration of this example the goal node is found.

Completeness: this algorithm is complete if the branching factor is finite.

Time Complexity: if b is the branching factor and d the depth of the shallowest goal, the worst-case time complexity is O(b^d).

Space Complexity: the space complexity of IDDFS is O(bd).

Optimality: the IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth of the node.

Bidirectional Search Algorithm

The bidirectional search algorithm runs two simultaneous searches: one from the initial state, called the forward search, and one from the goal node, called the backward search. Bidirectional search replaces a single search graph with two small sub-graphs, one starting from the initial vertex and the other starting from the goal vertex. The search stops when the two graphs intersect. Bidirectional search can use search techniques such as BFS, DFS, DLS, and so on.

Advantages:
• Bidirectional search is fast.
• Bidirectional search requires less memory.

Disadvantages:
• Implementation of the bidirectional search tree is difficult.
• In bidirectional search, the goal state must be known in advance.

Example: in the search tree of the accompanying figure, the bidirectional search algorithm divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the forward direction and from goal node 16 in the backward direction; the algorithm terminates at node 9, where the two searches meet.

Completeness: bidirectional search is complete if BFS is used in both searches.

Time Complexity: the time complexity of bidirectional search using BFS is O(b^(d/2)).

Space Complexity: the space complexity of bidirectional search is O(b^(d/2)).

Optimality: bidirectional search is optimal.

Informed Search Algorithms

So far we have discussed uninformed search algorithms, which look through the search space for all possible solutions without any additional knowledge about the search space. An informed search algorithm, by contrast, has extra knowledge, such as how far we are from the goal, the path cost, and how to reach the goal node. This knowledge helps agents explore less of the search space and find the goal node more efficiently. Informed search is more useful for large search spaces. Because it uses the idea of a heuristic, it is also called heuristic search.

Heuristic Function

A heuristic is a function used in informed search to find the most promising path. It takes the current state of the agent as input and produces an estimate of how close the agent is to the goal. The heuristic method does not always give the best solution, but it is guaranteed to find a good solution in reasonable time. The heuristic function estimates how close a state is to the goal; it is represented by h(n) and estimates the cost of an optimal path from the given state to the goal state. The value of the heuristic function is always positive. Admissibility of the heuristic function is expressed as:

h(n) ≤ h*(n)

Here h(n) is the heuristic cost and h*(n) is the actual cost of an optimal path from n to the goal: the heuristic estimate should be less than or equal to the actual cost.
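As one concrete illustration of an admissible heuristic h(n), the sketch below computes the Manhattan-distance heuristic for the 8-puzzle; the particular tile positions are assumed toy data.

def manhattan_distance(state, goal):
    # state and goal map each tile (1..8) to its (row, column) on a 3x3 board.
    # Each tile must move at least this many squares, so the sum never
    # overestimates the true number of moves: h(n) <= h*(n), i.e. admissible.
    return sum(abs(state[t][0] - goal[t][0]) + abs(state[t][1] - goal[t][1])
               for t in state)

goal  = {1: (0, 0), 2: (0, 1), 3: (0, 2), 4: (1, 0), 5: (1, 1), 6: (1, 2), 7: (2, 0), 8: (2, 1)}
state = {1: (0, 0), 2: (0, 1), 3: (0, 2), 4: (1, 0), 5: (2, 1), 6: (1, 2), 7: (2, 0), 8: (1, 1)}
print(manhattan_distance(state, goal))   # 2: tiles 5 and 8 are each one move away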
Pure Heuristic Search

Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n). It maintains two lists, OPEN and CLOSED: the CLOSED list holds the nodes that have already been expanded, and the OPEN list holds the nodes that have not yet been expanded. On each iteration, a node n with the lowest heuristic value is expanded, all of its successors are generated, and n is placed on the CLOSED list. The algorithm continues until a goal state is found.

Within informed search we will discuss two main algorithms:
• Best-First Search algorithm (greedy search)
• A* Search algorithm

Best-first Search Algorithm (Greedy Search)

The greedy best-first search algorithm always selects the path that appears best at the moment. It is a combination of depth-first search and breadth-first search: it uses the heuristic function to guide the search and thereby takes advantage of both algorithms. With best-first search, at each step we can choose the most promising node: we expand the node that is closest to the goal node, where the closeness is estimated by the heuristic function, i.e.

f(n) = h(n)

where h(n) is the estimated cost from node n to the goal. The greedy best-first algorithm is implemented with a priority queue.

Best-first search algorithm:
• Step 1: Place the starting node in the OPEN list.
• Step 2: If the OPEN list is empty, stop and return failure.
• Step 3: Remove the node n with the lowest value of h(n) from the OPEN list and place it in the CLOSED list.
• Step 4: Expand node n and generate its successors.
• Step 5: Check each successor of node n and determine whether any of them is a goal node. If any successor is a goal node, return success and terminate the search; otherwise proceed to Step 6.
• Step 6: For each successor node, the algorithm evaluates the function f(n) and checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.
• Step 7: Return to Step 2.

Advantages:
• Best-first search can switch between BFS-like and DFS-like behaviour, gaining the advantages of both algorithms.
• It is often more efficient than BFS or DFS alone.

Disadvantages:
• It can behave like an unguided depth-first search in the worst case.
• It can get stuck in a loop, like DFS.
• The algorithm is not optimal.

Example: consider the following search problem, which we traverse using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), given in the table below:

node    h(n)
A       12
B       4
C       7
D       3
E       8
F       2
H       4
I       9
S       13
G       0

In this search example we use the two lists OPEN and CLOSED. We expand node S and put it in the CLOSED list; the subsequent iterations are:

Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]; then Open [E, A], Closed [S, B, F]
Iteration 3: Open [I, G, E, A], Closed [S, B, F]; then Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path is S → B → F → G.

Time Complexity: the worst-case time complexity of greedy best-first search is O(b^m).

Space Complexity: the worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.

Completeness: greedy best-first search is incomplete, even if the given state space is finite.

Optimality: the greedy best-first search algorithm is not optimal.
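A Python sketch of greedy best-first search is shown below. It reuses the h(n) values from the table above, but the graph edges are assumptions, since the example's figure is not reproduced in the text.

import heapq

def greedy_best_first_search(graph, h, start, goal):
    # Always expand the node on the OPEN list with the lowest heuristic value h(n).
    open_list = [(h[start], start, [start])]
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for successor in graph.get(node, []):
            if successor not in closed:
                heapq.heappush(open_list, (h[successor], successor, path + [successor]))
    return None

# Heuristic values from the table above; edges are assumed for illustration.
graph = {"S": ["A", "B"], "B": ["E", "F"], "F": ["I", "G"], "A": ["C", "D"]}
h = {"S": 13, "A": 12, "B": 4, "C": 7, "D": 3, "E": 8, "F": 2, "I": 9, "G": 0}
print(greedy_best_first_search(graph, h, "S", "G"))   # ['S', 'B', 'F', 'G']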
A* Search Algorithm

A* search is the most widely known form of best-first search. It uses the heuristic function h(n) together with g(n), the cost to reach node n from the start state. It combines features of UCS and greedy best-first search, which lets it solve problems efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function; it expands a smaller search tree and provides an optimal result faster. A* is similar to UCS except that it uses g(n) + h(n) instead of g(n).

In the A* search algorithm we use the search heuristic as well as the cost to reach the node, so we combine both costs as follows; this sum is called the fitness number:

f(n) = g(n) + h(n)

where g(n) is the cost to reach node n from the start state and h(n) is the estimated cost from n to the goal. At each point in the search space, only the node with the lowest value of f(n) is expanded, and the algorithm terminates when the goal node is found.

Algorithm of A* search:
• Step 1: Place the starting node in the OPEN list.
• Step 2: Check whether the OPEN list is empty; if it is, return failure and stop.
• Step 3: Select the node from the OPEN list with the smallest value of the evaluation function (g + h). If node n is the goal node, return success and stop; otherwise continue.
• Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute its evaluation function and place it in the OPEN list.
• Step 5: Otherwise, if node n' is already in OPEN or CLOSED, attach it to the back pointer which reflects the lowest g(n') value.
• Step 6: Return to Step 2.

Advantages:
• The A* search algorithm performs better than the other search algorithms discussed above.
• A* search is optimal and complete.
• It can solve very complex problems.

Disadvantages:
• It does not always produce the shortest path, as it relies partly on heuristics and approximation.
• The A* search algorithm has some complexity issues.
• Its main drawback is the memory requirement: it keeps all generated nodes in memory, so it is not practical for many large-scale problems.

Example: we traverse the following graph using the A* algorithm. The heuristic value of each state is given in the table below, and we calculate f(n) for each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach a node from the start state. We use the OPEN and CLOSED lists.

State   h(n)
S       5
A       3
B       4
C       2
D       6
G       0

Solution:
Iteration 1: {(S → A, 4), (S → G, 10)}
Iteration 2: {(S → A → C, 4), (S → A → B, 7), (S → G, 10)}
Iteration 3: {(S → A → C → G, 6), (S → A → C → D, 11), (S → A → B, 7), (S → G, 10)}
Iteration 4 gives the final result: S → A → C → G is the optimal path, with cost 6.

Points to remember:
• The A* algorithm returns the path that occurs first; it does not search all remaining paths.
• The efficiency of A* depends on the quality of the heuristic.
• The A* algorithm expands all nodes that satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.

Complete: the A* algorithm is complete as long as:
• the branching factor is finite, and
• the cost of every action is fixed.

Optimal: the A* search algorithm is optimal if it satisfies the following two conditions:
• Admissibility: the first condition required for optimality is that h(n) be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
• Consistency: the second condition, consistency, is required only for A* graph search.
If the heuristic function is admissible, A* tree search will always find the least-cost path.

Time Complexity: the time complexity of A* depends on the heuristic function; the number of nodes expanded can be exponential in the depth of the solution d, so the time complexity is O(b^d), where b is the branching factor.

Space Complexity: the space complexity of the A* search algorithm is O(b^d).
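The sketch below implements A* with a priority queue ordered by f(n) = g(n) + h(n). The edge costs and heuristic values are assumed so as to reproduce the iteration trace of the worked example; they are illustrative rather than taken from the original figure.

import heapq

def a_star_search(graph, h, start, goal):
    # graph: node -> list of (successor, step_cost); expand lowest f(n) = g(n) + h(n).
    open_list = [(h[start], 0, start, [start])]     # entries are (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for successor, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(successor, float("inf")):
                best_g[successor] = new_g
                heapq.heappush(open_list,
                               (new_g + h[successor], new_g, successor, path + [successor]))
    return None

# Assumed edge costs and heuristic values, chosen to match the worked example above.
graph = {"S": [("A", 1), ("G", 10)], "A": [("B", 2), ("C", 1)],
         "B": [("D", 5)], "C": [("D", 3), ("G", 4)], "D": [("G", 2)]}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}
print(a_star_search(graph, h, "S", "G"))   # (6, ['S', 'A', 'C', 'G'])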
Heuristics Function

Heuristics refers to a problem-solving and decision-making approach in which individuals or entities consider past results or experiences and the minimal relevant details to reach a practical conclusion. These strategies use mental shortcuts and generalized concepts to find immediate, efficient, short-term solutions. A heuristic model acts as a rule of thumb in cases where there is no time for careful consideration of every aspect of a situation. Though this technique does not always give accurate, rational, or optimal results, it satisfies the need for workable conclusions or decisions in complex scenarios. Its purpose is to solve a problem or achieve an outcome quickly, with minimal mental effort and without analyzing everything thoroughly.

Key Points:
• Heuristics is a problem-solving and decision-making strategy in which individuals or entities analyze previous results or experiences and the bare minimum of relevant facts to arrive at a practical conclusion.
• These methods rely on mental shortcuts to come up with quick, effective, short-term solutions. They work well when there is not enough time or resources to think through all of the options.
• While these quick fixes or conclusions may not always be the ideal way to solve personal or corporate problems, they usually suffice for the time being.
• The four common types of heuristics are affect, anchoring, availability, and representativeness.

Heuristics are problem-solving techniques for reaching a satisfactory solution using mental shortcuts, based on previous outcomes in similar situations. These are short-term results, letting individuals or entities tackle issues for the time being. Herbert A. Simon, a Nobel laureate and American economist, proposed the concept in the 1950s. He claimed that humans try to make rational decisions but are ultimately influenced by cognitive heuristics.
However, the technique gained traction following Israeli psychologist Daniel Kahneman's book "Thinking, Fast and Slow", in which he hypothesized that these biases have an impact on how people think and make decisions.

Heuristics may be the most effective method for resolving problems, whether personal or professional. These short-term results provide enough time to make decisions that will solve difficulties permanently. The strategy has been found effective, if not always accurate, as a short-term technique for dealing with problems or making decisions. Nevertheless, it has also been regarded as a method of reducing effort: many theories call it an approach indicative of cognitive laziness, and those who do not want to put in much mental effort opt for this method when making judgments or solving problems. For some psychologists, the technique can cause cognitive biases, since the findings or choices reached in previous cases become the basis for the conclusions or decisions of others. The definition of heuristics has become more prominent with the advent of technology, which makes all types of data available online: people can research, examine, and evaluate data from multiple sources to reach conclusions and make decisions quickly and efficiently.

Importance of Heuristics Function

Heuristics are infamous for producing cognitive biases due to data limitations, which cause people and corporations to make poor decisions. Understanding the notion is therefore critical in order to avoid such a predicament and engage in more adaptive activities. It is worth noting that mental shortcuts, i.e. readily and widely available information, aid in making quick decisions in difficult circumstances or in finding appropriate solutions to complex problems with limited time and resources. The approach is applied through various methods, including historical data analysis, trial and error, past formulas, guesswork, and elimination processes.

Types of Heuristics Function

Here are the main sorts of heuristics, based on the sources, contexts, and problems from which one derives solutions or decisions:

Affect

This type emphasizes the instant emotions generated in individuals in response to a stimulus: any positive or negative feeling experienced at a particular moment and in a specific situation. In short, this strategy signifies how emotions or reactionary feelings triggered by previous experiences can influence a decision. This emotionally driven approach, sometimes known as gut feeling, is common in situations that require evaluating the benefits and risks of something in a short time. For example, when readers come across an article that promotes a product or service, clicking affiliate links will redirect them to the store's website. Some people may want to buy something or look at the offerings.
On the other hand, some visitors may read the article hoping to learn something new, but they quickly abandon the site if it appears to be promotional.

Anchoring

In this approach, individuals or entities make judgments based on the very first piece of information they receive, called the "anchor". Since the decision is usually made in a hurry, it may be inaccurate: impulsive decision-makers forget or ignore other factors, making not-so-good choices. For example, an instant message about winning the latest automobile in exchange for a particular sum of money seems intriguing; people pay, only to be stuck in a deceptive bargain out of sheer excitement.

Availability

This is a process in which persons or entities recall previous related instances and evaluate their effectiveness in resolving problems. This happens because the most significant sources available for reaching valid conclusions are the readily accessible ones. However, people are more likely to make poor decisions or produce incorrect solutions because of this process. For example, someone wants to invest in cryptocurrencies but is unsure which one to buy. They can look into the historical performance of the most popular digital coins on the market and then make investment decisions depending on which has been the most successful.

Representativeness

This technique makes individuals or entities evaluate the likelihood of a solution to a problem, or of a conclusion in a situation, based on a similar past event that acts as representative data. It thus provides a reasonable probability of selecting the most effective alternative under uncertainty. For example, a company going through a financial crisis can review how other organizations in comparable situations have recovered; this helps determine which methods or techniques to employ to achieve effective results.

Heuristics Examples

Let us consider the following examples to understand the concept better.

Example #1

In the first heuristics example we take the case of a mother named Maria. Maria was terrified by the daily news reports of child abductions in the neighborhood; she became fearful and began locking her children inside. Maria became more cautious and vigilant as a result of the availability heuristic, and she devised a strategy to prevent her children from becoming the next to be kidnapped from the neighborhood.

Example #2

Availability-heuristic decision making also plays a role in artificial intelligence (AI). It encourages cognitive bias or rationalization, leading to a plausible interpretation of the resources or data available. Even though this approach offers an effective means of making a rational decision, some issues may still be hard to avoid. Memory leakage is a notable example: this technique is adopted for problem solving or decision making, but it ends up using obsolete information. The unconscious nature of biases makes them hard to detect in AI decisions, which might lead to failures at a later stage.

Constraint Satisfaction Problems

We have seen many techniques, such as local search and adversarial search, for solving different problems. The objective of every problem-solving technique is the same: to find a solution that reaches the goal. However, in adversarial search and local search there were no constraints on the agents while solving the problems and reaching their solutions. As the name suggests, constraint satisfaction means solving a problem under certain constraints or rules.
Constraint satisfaction is a technique in which a problem is solved when its values satisfy certain constraints or rules of the problem. This type of technique leads to a deeper understanding of the problem structure as well as its complexity. Constraint satisfaction depends on three components:

• X: a set of variables.
• D: a set of domains in which the variables reside; there is a specific domain for each variable.
• C: a set of constraints that the variables must satisfy.

In constraint satisfaction, domains are the spaces where the variables reside, following the problem-specific constraints. These are the three main elements of a constraint satisfaction technique. Each constraint consists of a pair (scope, rel): the scope is a tuple of the variables that participate in the constraint, and rel is a relation listing the combinations of values the variables can take in order to satisfy the constraint.

Solving Constraint Satisfaction Problems

The requirements for solving a constraint satisfaction problem (CSP) are:
• a state space, and
• the notion of a solution.

A state in the state space is defined by assigning values to some or all of the variables, e.g. {X1 = v1, X2 = v2, ...}. An assignment of values to variables can be of three kinds:

• Consistent or Legal Assignment: an assignment that does not violate any constraint or rule.
• Complete Assignment: an assignment in which every variable is assigned a value and the solution to the CSP remains consistent.
• Partial Assignment: an assignment that assigns values to only some of the variables.

Types of Domains in CSP

There are two types of domains used by the variables:
• Discrete Domain: an infinite domain, in which a variable can take any of an unlimited number of values.
• Finite Domain: a domain containing a finite set of values that one specific variable can take.

Constraint Types in CSP

With respect to the variables, there are the following basic types of constraints:
• Unary constraints: the simplest type of constraint, restricting the value of a single variable.
• Binary constraints: constraints that relate exactly two variables; for example, the requirement that X2 lie between X1 and X3 can be expressed as two binary constraints (X1 < X2 and X2 < X3).
• Global constraints: constraints involving an arbitrary number of variables.

Some special solution algorithms are used for particular constraint forms:
• Linear constraints: commonly used in linear programming, where each variable (taking an integer value) appears in linear form only.
• Non-linear constraints: used in non-linear programming, where variables appear in non-linear form.

Note: a special kind of constraint that occurs in real-world problems is the preference constraint.
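Putting the pieces together, the following sketch encodes a small CSP as the triple (X, D, C) and tests partial assignments for consistency. Map colouring with made-up region names is used purely as an illustration.

# A small CSP written as the triple (X, D, C); region names and colours are toy data.
variables = ["WA", "NT", "SA", "Q"]                          # X: the set of variables
domains = {v: ["red", "green", "blue"] for v in variables}   # D: one domain per variable

# C: binary constraints as (scope, rel) pairs - here "neighbouring regions differ".
neighbours = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"), ("SA", "Q")]
constraints = [((a, b), lambda x, y: x != y) for a, b in neighbours]

def satisfies_all(assignment):
    # A (possibly partial) assignment is consistent if no constraint is violated.
    for (a, b), rel in constraints:
        if a in assignment and b in assignment and not rel(assignment[a], assignment[b]):
            return False
    return True

print(satisfies_all({"WA": "red", "NT": "green"}))   # True  (consistent partial assignment)
print(satisfies_all({"WA": "red", "NT": "red"}))     # False (violates WA != NT)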
Constraint Propagation

In local state spaces there is only one choice: to search for a solution. In a CSP we have two choices:
• we can search for a solution, or
• we can perform a special type of inference called constraint propagation.

Constraint propagation is a special type of inference that helps reduce the number of legal values for the variables. The idea behind constraint propagation is local consistency: variables are treated as nodes and each binary constraint is treated as an arc in the given problem. The following local consistencies are used:

• Node consistency: a single variable is node-consistent if all the values in its domain satisfy the unary constraints on that variable.
• Arc consistency: a variable is arc-consistent if every value in its domain satisfies the binary constraints on the variable.
• Path consistency: a pair of variables is path-consistent with respect to a third variable if every consistent assignment to the pair can be extended to that third variable while satisfying all the binary constraints; it is similar to arc consistency.
• K-consistency: this type of consistency is used to define stronger forms of propagation; here we examine the k-consistency of the variables.

CSP Problems

Constraint satisfaction covers problems that impose constraints that must be respected while solving the problem. CSPs include the following problems:

• Graph colouring: the constraint is that no two adjacent regions (nodes) may have the same colour (a small backtracking solver for this problem is sketched after this list).
• Sudoku: the constraint is that no digit from 1-9 may be repeated in the same row, column, or 3×3 box.
• N-queens problem: place N queens on an N×N board so that no queen is attacking another queen. Since a queen can move horizontally, vertically, or diagonally, the constraint is that no two queens may share a row, column, or diagonal. The problem can also be solved with a genetic algorithm for an n-queens problem (with n between 8 and 30).
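Before turning to the crypt-arithmetic problem below, here is a hedged sketch of the simplest CSP solving strategy, backtracking search, applied to the graph-colouring problem just described; the graph and colours are toy data.

def backtracking_search(variables, domains, conflicts, assignment=None):
    # Assign one variable at a time, depth-first; backtrack when a constraint
    # (here: adjacent nodes must not share a colour) is violated.
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                          # complete, consistent assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(other) != value for other in conflicts[var]):
            assignment[var] = value
            result = backtracking_search(variables, domains, conflicts, assignment)
            if result is not None:
                return result
            del assignment[var]                    # undo and try the next value
    return None

# Toy graph-colouring instance: four nodes, three colours, illustrative adjacency.
nodes = ["A", "B", "C", "D"]
domains = {n: ["red", "green", "blue"] for n in nodes}
adjacent = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
print(backtracking_search(nodes, domains, adjacent))
# {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}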
So, we need to think more and assign some other value Qe 5 E ———> +0 +0 N 6 28 Note: When we will solve further, we will get one carry, so after applying it, the answer will be satisfied. + Further, adding the next two terms N and R we get, Ke > +R 7 N48 But, we have already assigned E->5, Thus, the above result does not satisfy the values Because we are getting a different value for E. So, we need to think more. Again, after solving the whole problem, we will get a carryover on this term, so our answer will be satisfied. a ——> +R +8 - Where 1 will be carry forward to the above term Let’s move ahead, + Again, on adding the last two terms, ic., the rightmost terms D and E, we get ¥ as its result > 7 +E +5 ‘Where 1 will be carry forward to the above term 29 Keeping all the constraints in mind, the final resultant is as follows: SEND +MORE MONEY Below is the representation of the assignment of the digits to the alphabets =] o| 2] oO] 2] x] ef of H| S| of alo < Latin square Problem: In this game, the task is to search the pattern which is occurring several times in the game. They may be shuffled but will contain the same digits Latin Squence Problem 30 * Crossword: In crossword problem, the constraint is that there should be the correct formation of the words, and it should be meaningful Iterative Improvement Algorithms Iterative best improvement is a local search algorithm that selects a successor of the current assignment that most improves some evaluation function. If there are several possible successors ‘That most improve the evaluation function, one is chosen at random. Iterative improvement algorithms = iterative refinement = local search Usable when the solutions are states, not paths. Start with a complete configuration and make modifications to improve its quality. Classes of Iterative Improvement Algorithms Hill Climbing Local Beam Search Simulated Annealing Genetic Algorithm Hill Climbing Algorithm © Hill climbing algorithm is a local search algorithm which continuously moves in the direction of incteasing elevation/value to find the peak of the mountain or best solution to the problem, It terminates when it reaches a peak value winere no neighbor has a higher value. © Hill climbing algorithm is a technique which is used for optimizing the mathematical problems, One of the widely discussed examples of Hill climbing algorithm is Traveling-salesman Problem in which we need to minimize the distance traveled by the salesman. a1 It is also called greedy local search as it only looks to its good immediate neighbor state and not ‘beyond that A node of bill climbing algorithm has two components which are state and value. Hill Climbing is mostly used when a good heuristic is available. In this algorithm, we don’t need to maintain and handle the search tree or graph as it only keeps a single current state. Features of Hill Climbing: Following are some main features of Hill Climbing Algorithm: Generate and Test variant: Hill Climbing is the variant of Generate and Test method, The Generate and Test method produce feedback which helps to decide which direction to move in the search space. Greedy approach: Hill-climbing algorithm search moves in the direction which optimizes the cost. No backtracking: It does not backtrack the search space, as it does not remember the previous states. 
Latin Square Problem

In this game the task is to search for the pattern that occurs several times in the puzzle; the rows may be shuffled but will contain the same digits.

Crossword

In a crossword problem, the constraint is that the words must be formed correctly and must be meaningful.

Iterative Improvement Algorithms

Iterative best improvement is a local search algorithm that selects a successor of the current assignment that most improves some evaluation function. If several possible successors most improve the evaluation function, one is chosen at random. Iterative improvement algorithms (also called iterative refinement or local search) are usable when the solutions are states, not paths: we start with a complete configuration and make modifications to improve its quality.

Classes of Iterative Improvement Algorithms
• Hill climbing
• Local beam search
• Simulated annealing
• Genetic algorithms

Hill Climbing Algorithm

• Hill climbing is a local search algorithm that continuously moves in the direction of increasing elevation/value in order to find the peak of the mountain, i.e. the best solution to the problem. It terminates when it reaches a peak value where no neighbour has a higher value.
• Hill climbing is a technique used for optimizing mathematical problems. One of the widely discussed examples is the travelling salesman problem, in which we need to minimize the distance travelled by the salesman.
• It is also called greedy local search, as it only looks at its good immediate neighbour state and not beyond that.
• A node of the hill climbing algorithm has two components: state and value.
• Hill climbing is mostly used when a good heuristic is available.
• In this algorithm we do not need to maintain and handle a search tree or graph, as it keeps only a single current state.

Features of Hill Climbing
• Generate-and-test variant: hill climbing is a variant of the generate-and-test method. The generate-and-test method produces feedback that helps decide which direction to move in the search space.
• Greedy approach: the hill climbing search moves in the direction that optimizes the cost.
• No backtracking: it does not backtrack the search space, as it does not remember previous states.

State-space Diagram for Hill Climbing

The state-space landscape is a graphical representation of the hill climbing algorithm: a graph between the various states of the algorithm and the objective function/cost. On the Y-axis we take the function, which can be an objective function or a cost function, and on the X-axis the state space. If the function on the Y-axis is cost, the goal of the search is to find the global minimum (or a local minimum). If the function on the Y-axis is an objective function, the goal of the search is to find the global maximum (or a local maximum).

Different regions in the state-space landscape:
• Local maximum: a state that is better than its neighbouring states, but for which there exists another state that is higher still.
• Global maximum: the best possible state in the state-space landscape; it has the highest value of the objective function.
• Current state: the state in the landscape diagram where the agent is currently present.
• Flat local maximum: a flat region of the landscape where all the neighbouring states of the current state have the same value.
• Shoulder: a plateau region that has an uphill edge.

Types of Hill Climbing Algorithm
1. Simple hill climbing
2. Steepest-ascent hill climbing
3. Stochastic hill climbing

1. Simple Hill Climbing

Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates one neighbour node state at a time and selects the first one that improves the current cost, setting it as the current state. It checks only one successor state: if that successor is better than the current state, it moves there; otherwise it stays in the same state. This algorithm has the following features:
• it is less time-consuming;
• it gives a less optimal solution, and the solution is not guaranteed.

Algorithm for simple hill climbing:
• Step 1: Evaluate the initial state; if it is a goal state, return success and stop.
• Step 2: Loop until a solution is found or there are no new operators left to apply.
• Step 3: Select and apply an operator to the current state.
• Step 4: Check the new state:
  a. If it is a goal state, return success and quit.
  b. Else, if it is better than the current state, make it the current state.
  c. Else, if it is not better than the current state, return to Step 2.
• Step 5: Exit.

2. Steepest-Ascent Hill Climbing

The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. It examines all the neighbouring nodes of the current state and selects the neighbour node that is closest to the goal state. This algorithm consumes more time, as it searches multiple neighbours.

Algorithm for steepest-ascent hill climbing:
• Step 1: Evaluate the initial state; if it is a goal state, return success and stop; otherwise make the initial state the current state.
• Step 2: Loop until a solution is found or the current state does not change.
  a. Let SUCC be a state such that any successor of the current state will be better than it.
  b. For each operator that applies to the current state: apply the operator and generate a new state; evaluate the new state; if it is a goal state, return it and quit, otherwise compare it to SUCC; if it is better than SUCC, set SUCC to the new state.
  c. If SUCC is better than the current state, set the current state to SUCC.
• Step 3: Exit.

3. Stochastic Hill Climbing

Stochastic hill climbing does not examine all of its neighbours before moving. Rather, this search algorithm selects one neighbour node at random and decides whether to make it the current state or to examine another state.
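A compact sketch of steepest-ascent hill climbing on a one-dimensional toy objective is given below; the objective function and neighbour relation are assumptions chosen only to show the climb towards a peak.

import random

def steepest_ascent_hill_climbing(initial, neighbours, value):
    # Repeatedly move to the best neighbour; stop when no neighbour is better,
    # i.e. at a local or global maximum of the objective function.
    current = initial
    while True:
        candidates = neighbours(current)
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):
            return current                 # peak reached: no uphill move left
        current = best

# Toy objective: maximise f(x) = -(x - 7)^2 over the integers, from a random start.
value = lambda x: -(x - 7) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(steepest_ascent_hill_climbing(random.randint(0, 20), neighbours, value))   # 7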
Problems in Hill Climbing Algorithm

1. Local maximum: a local maximum is a peak state in the landscape that is better than each of its neighbouring states, but another state exists that is higher than the local maximum.
Solution: the backtracking technique can be a solution to the local maximum problem in the state-space landscape. Keep a list of promising paths so that the algorithm can backtrack through the search space and explore other paths as well.

2. Plateau: a plateau is a flat area of the search space in which all the neighbouring states of the current state have the same value; because of this, the algorithm cannot find the best direction in which to move. A hill climbing search might get lost in the plateau area.
Solution: take big steps, or very small steps, while searching in order to escape the plateau. Randomly select a state far away from the current state, so that the algorithm may find a non-plateau region.

3. Ridges: a ridge is a special form of local maximum. It is an area that is higher than its surrounding areas but itself has a slope, and it cannot be reached in a single move.
Solution: with bidirectional search, or by moving in different directions, we can mitigate this problem.

Simulated Annealing

A hill climbing algorithm that never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum. If the algorithm instead performs a random walk, moving to random successors, it may be complete but is not efficient. Simulated annealing is an algorithm that yields both efficiency and completeness. In metallurgy, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state. The same idea is used in simulated annealing, where the algorithm picks a random move instead of the best move. If the random move improves the state, the algorithm follows that path; otherwise, it follows the downhill move only with a probability of less than 1, or it chooses another path.
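The following sketch illustrates simulated annealing on a small continuous toy objective: improving moves are always accepted, worsening moves are accepted with probability exp(delta / T), and the temperature T is gradually cooled. All numeric settings here are illustrative assumptions.

import math
import random

def simulated_annealing(initial, neighbour, value, t_start=10.0, cooling=0.95, steps=1000):
    # Pick a random move; always accept improvements, and accept worse moves
    # with probability exp(delta / T), where the temperature T cools toward zero.
    current = initial
    t = t_start
    for _ in range(steps):
        candidate = neighbour(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        t = max(t * cooling, 1e-6)          # cooling schedule
    return current

# Toy objective with local maxima; the random moves can escape them while T is high.
value = lambda x: math.sin(x) + math.sin(3 * x) / 3
neighbour = lambda x: x + random.uniform(-0.5, 0.5)
best = simulated_annealing(random.uniform(0, 6), neighbour, value)
print(round(best, 2), round(value(best), 2))   # a point near a high value of the objective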
