CS26110 - Artificial Intelligence: This Question Examines The Area of Uninformed Search
illustrate clearly your description. Iterative deepening combines depth-first search (DFS) and breadth-first search (BFS) to create a search that, like BFS, is complete and optimal, while keeping the time complexity of BFS and the space complexity of DFS. This is better than plain BFS because the space complexity of DFS is far lower than that of BFS. Iterative deepening search (IDS) repeatedly traverses the tree from the starting node, A in the case below. IDS uses a depth limit (usually starting at 0) that is increased by one after each unsuccessful iteration (by unsuccessful iteration I mean an iteration that does not find the goal), and each new iteration starts again from the beginning (node A). Within any single iteration, a depth-limited DFS is actually being performed. As an example of iterative deepening I will take the following tree:
I am setting the goal to node G. For the 1st and 2nd levels, iterative deepening will look just like breadth-first search. Each time the solution is not found, the search goes one level deeper, starting again from the beginning. The number of steps is counted in the images below:
1st level:
2nd level:
3rd level:
For the third level we can see that depth-first search is being used, but only down to the current depth limit (3 in our case). The total number of steps required to reach node G by iterative deepening is 11.
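To make the description above concrete, here is a minimal Python sketch of iterative deepening, assuming the tree is stored as an adjacency dictionary. The tree literal and the function names are my own illustration, since the figure itself is not reproduced here, but the control flow (a depth-limited DFS restarted with an increasing limit) is exactly the process described above.

# A small stand-in tree; NOT the exact tree from the figure above.
EXAMPLE_TREE = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F', 'G'],
    'D': [], 'E': [], 'F': [], 'G': [],
}

def depth_limited_search(tree, node, goal, limit):
    """Depth-first search that refuses to descend below the given depth limit."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in tree[node]:
        path = depth_limited_search(tree, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening_search(tree, start, goal, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        path = depth_limited_search(tree, start, goal, limit)
        if path is not None:
            return path
    return None

print(iterative_deepening_search(EXAMPLE_TREE, 'A', 'G'))  # ['A', 'C', 'G']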
I am setting the goal to node P and I will count how many steps are required to get there. The images below should also illustrate how the tree is expanded.
1st level:
2nd level:
3rd level:
4th level:
(b) Using the Big O notation for time and space complexity, show how Iterative Deepening is related to Depth First and Breadth First searches.
Time complexity:
o BFS: O(b^d), where b = average branching factor and d = goal depth;
o DFS: O(b^m), where b = average branching factor and m = maximum depth;
o ID: O(b^d), where b = average branching factor and d = goal depth;
Space complexity:
o BFS: O(b^d), where b = average branching factor and d = goal depth;
o DFS: O(b*m), where b = average branching factor and m = maximum depth;
o ID: O(b*d), where b = average branching factor and d = goal depth;
If a goal exists then it must be at a finite depth, so d will be a finite value, irrespective of the maximum depth of the graph. That means the time and space complexity of BFS is a finite number even when the graph's maximum depth is infinite. We cannot say the same thing about DFS: if the maximum depth of the graph is infinite and the search follows an infinite path, then the time and space cost of DFS will be infinite without ever finding the goal. By combining DFS and BFS we obtain IDS, and we can clearly see above that IDS has the time complexity of BFS and the space complexity of DFS.
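The claim that IDS keeps the time complexity of BFS can be backed up by writing out the exact worst-case node counts; this is a standard derivation rather than something taken from the figures above. BFS expands each node down to the goal depth d once, while IDS re-expands level i on each of its last (d + 1 - i) iterations:

N_BFS = 1 + b + b^2 + ... + b^d = O(b^d)
N_IDS = (d + 1) + d*b + (d - 1)*b^2 + ... + 1*b^d = O(b^d)

The repeated work only inflates the total by a bounded factor (roughly b/(b - 1)), so both searches sit in the same Big O class, while IDS only ever stores one path of length at most d plus its siblings, which gives the b*d space bound listed above.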
(c) Use your answer from the previous part to calculate the time complexity for ID search where the branching factor for the problem is 8 and the depth of the shallowest solution is 6. Show all your working.
I know that the time complexity for ID can be calculated using the formula O(b^d), where b = branching factor and d = goal depth (or shallowest solution depth). Therefore the time complexity for the problem presented will be:
O(b^d) = O(8^6) = O(8 * 8 * 8 * 8 * 8 * 8) = O(262144)
(d) Is the above value for time complexity more or less than for Breadth First Search? Explain the difference. Give an example for comparison.
The above value for time complexity is the same as the value for the time complexity of BFS. There is no difference in the Big O class: in the worst case BFS explores every node down to the goal depth, and so does IDS. IDS does re-expand the shallower levels on each iteration, but the deepest level dominates the total, so the repeated work only changes the constant factor, not the order of growth. For example, with b = 8 and d = 6:
O(b^d) = O(8^6) = O(262144) for both BFS and IDS.
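To put numbers on that comparison, here is a small Python check of the exact worst-case expansion counts for b = 8 and d = 6, using the standard sums given in part (b); the specific totals are my own illustration rather than figures from the coursework.

# Worst-case node expansions of BFS versus IDS for b = 8, d = 6.
b, d = 8, 6

bfs_nodes = sum(b**i for i in range(d + 1))                # 1 + b + ... + b^d
ids_nodes = sum((d + 1 - i) * b**i for i in range(d + 1))  # level i expanded (d+1-i) times

print("BFS expansions:", bfs_nodes)   # 299593
print("IDS expansions:", ids_nodes)   # 342391
print("overhead ratio:", round(ids_nodes / bfs_nodes, 3))  # about 1.143, i.e. b/(b-1)

Both totals are of the order of 8^6 = 262144, which is why the two searches share the same O(b^d) time complexity.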
2. Figure 1 shows a number of cities linked by roads. It is only possible to travel between the cities by using the marked roads. The road distance between two cities is indicated by the number next to the link. The straight-line distance from each city to the goal city of G is given in brackets after each node label.
(a) For each of the search strategies given in i., ii., and iii. below, show how you would search from A to G, stating the evaluation function used, the order in which the nodes are expanded, the calculated function value for each new node, and the choice of the next node to expand.
i. Hill Climbing
The evaluation function f(x) = the road distance from the current node to its neighbour x; the search always moves to the unvisited neighbour with the smallest value.
The order in which the nodes are expanded: A-D-G (total distance: 30).
Leaving from node A towards goal node G: f(B) = 8; f(D) = 7. Of these two nodes, D is chosen. From D, the only unvisited neighbour is the goal node G, so we go straight to it.
ii. Greedy Search
Greedy search will always pick the node that appears closest to the solution (using the straight-line distance to the goal node, given in brackets next to each node letter). The evaluation function f(x) = the straight-line distance from node x to the goal node.
The order in which the nodes are expanded: A-B-E-B-C-G (total distance: 57).
Leaving from node A: f(B) = 16; f(D) = 20. Of these two, node B is picked as 16 < 20.
Leaving from node B: f(E) = 12; f(C) = 15. Of these two, node E is picked as 12 < 15.
Node E is a dead end, so the algorithm backtracks to node B and goes towards node C. From node C it goes straight to the goal node G.
iii. A* Search
Functions used:
g(n) = the cost to reach node n (total path cost so far);
h(n) = the estimated cost to get from node n to the goal (straight-line distance);
f(n) = g(n) + h(n), the estimated cost of the cheapest solution through n.
Node  Road distances to neighbours   Straight-line distance to G
A     B: 8, D: 7                     -
B     A: 8, E: 12, C: 5              16
C     B: 5, G: 20                    15
D     A: 7, G: 23                    20
E     B: 12                          12
G     C: 20, D: 23                   0
From A:
To D: f(D) = g(D) + h(D) = 7 + 20 = 27
To B: f(B) = g(B) + h(B) = 8 + 16 = 24
B is chosen and f(D) = 27 is remembered.
From B:
To E: f(E) = g(E) + h(E) = (8 + 12) + 12 = 32
To C: f(C) = g(C) + h(C) = (8 + 5) + 15 = 28
f(D) = 27 is better than either of these, so D is expanded next, remembering f(E) = 32 and f(C) = 28.
From D:
To G: f(G) = g(G) + h(G) = (7 + 23) + 0 = 30
f(C) = 28 is better, so we continue with that option (remembering the path just discovered: A-D-G: 30).
From C:
To G: f(G) = g(G) + h(G) = (8 + 5 + 20) + 0 = 33
This is worse than the remembered complete path A-D-G with cost 30 (and f(E) = 32 is also worse), so A* returns the path A-D-G with a total distance of 30.
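For reference, here is a minimal Python sketch of A* on this graph, assuming the road distances and straight-line estimates tabulated above; the straight-line distance for A is not given in the question, so 0 is used as a harmless placeholder (A is expanded first regardless).

import heapq

# Road distances between neighbouring cities (from the table above).
ROADS = {
    'A': {'B': 8, 'D': 7},
    'B': {'A': 8, 'E': 12, 'C': 5},
    'C': {'B': 5, 'G': 20},
    'D': {'A': 7, 'G': 23},
    'E': {'B': 12},
    'G': {'C': 20, 'D': 23},
}

# Straight-line distances to the goal G, as used in the trace above.
H = {'A': 0, 'B': 16, 'C': 15, 'D': 20, 'E': 12, 'G': 0}

def a_star(start, goal):
    """A*: always expand the frontier entry with the smallest f(n) = g(n) + h(n)."""
    frontier = [(H[start], 0, start, [start])]  # (f, g, node, path so far)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in ROADS[node].items():
            new_g = g + cost
            if new_g < best_g.get(neighbour, float('inf')):
                best_g[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + H[neighbour], new_g, neighbour, path + [neighbour]))
    return None, float('inf')

print(a_star('A', 'G'))  # (['A', 'D', 'G'], 30)

The expansion order produced by this sketch (A, B, D, C, then G) matches the hand-worked trace, and the returned path A-D-G with cost 30 is the optimal route.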
In the figure above some of the values might not match the actual distances; for this example I assume the values shown in the graph are correct. The number on each node represents the straight-line distance to the goal and the number on each edge represents the road distance between the two nodes it connects. The coloured path is the path chosen by a greedy search algorithm. It is obviously very inefficient, the total path summing to 47 when the optimal solution sums to 14. Therefore, unless the graph is so large that A* search runs into space-related problems, I will always use A* search instead of greedy search.
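As a quick illustration of the same point on the Figure 1 graph from part (a), here is a small greedy best-first sketch over the same road and straight-line tables used in the A* example above. Note that this frontier-based version does not physically backtrack the way the walked route A-B-E-B-C-G does; it still commits to B first, though, and returns a costlier route (A-B-C-G, 33) than the A* result (A-D-G, 30).

import heapq

# Same road network and straight-line estimates as in the A* sketch above.
ROADS = {
    'A': {'B': 8, 'D': 7}, 'B': {'A': 8, 'E': 12, 'C': 5},
    'C': {'B': 5, 'G': 20}, 'D': {'A': 7, 'G': 23},
    'E': {'B': 12}, 'G': {'C': 20, 'D': 23},
}
H = {'A': 0, 'B': 16, 'C': 15, 'D': 20, 'E': 12, 'G': 0}

def greedy_best_first(start, goal):
    """Greedy best-first: always expand the frontier node with the smallest h(n)."""
    frontier = [(H[start], start, [start])]  # (h, node, path so far)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            cost = sum(ROADS[a][b] for a, b in zip(path, path[1:]))
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for neighbour in ROADS[node]:
            if neighbour not in visited:
                heapq.heappush(frontier, (H[neighbour], neighbour, path + [neighbour]))
    return None, float('inf')

print(greedy_best_first('A', 'G'))  # (['A', 'B', 'C', 'G'], 33) versus A*'s (['A', 'D', 'G'], 30)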