
SEARCHING ALGORITHMS AND A COMPARISON METHODOLOGY

USED IN STUDYING ARTIFICIAL INTELLIGENCE


M. Petrova, D. Atanasova
University of Ruse "Angel Kanchev" (BULGARIA)

Abstract
This article presents research on, and an analysis and comparison of, algorithms for searching for solutions in a state space, as used in the training of students. The aim of the study is to develop a methodology for comparing these algorithms. The methodology will be presented visually in a web-based virtual laboratory for Artificial Intelligence studies. Embedding the methodology in a web-based virtual laboratory will support the educational process in the subject, with trainees being able to conduct self-training from their homes. The study of these algorithms and the creation of the methodology will improve the perception of information, both making learning easier and clarifying the differences between the algorithms, their advantages and their disadvantages. The need to develop a methodology to integrate into the study of the subject stems from the rapid development of this area and the need for teachers and learners to keep up with its pace.

The advantages of creating a methodology in which these algorithms are presented visually are that:
- Students will be able to better understand search algorithms in the field of artificial intelligence;
- They will be able to understand clearly how each algorithm functions and works;
- They will be able to compare the algorithms more easily because of the visual image they will acquire.

Keywords: Searching Algorithms, Artificial Intelligence Education, Comparison Methodology.

1 INTRODUCTION
“Search is a problem-solving technique that systematically explores a space of problem states” (Luger, G.F., Artificial Intelligence: Structures and Strategies for Complex Problem Solving).
Artificial Intelligence is a fast-growing field that is continually being improved and extended. It contains many areas that involve many algorithms and methods of information processing, and the scientific world is constantly working on improving these algorithms and creating new ones. This article aims to examine and compare search algorithms in the state space of artificial intelligence problems and to provide guidance on their advantages and disadvantages. Figure 1 shows the basic algorithms used in this area.
General search in state space - solving many tasks (though not all) that are traditionally considered intellectual can be reduced to a successive transition from one description (formulation) of the problem to another, equivalent to the first or simpler than it, until a description is reached that is considered a solution of the task. [1]
Structure of the state space [2]
Data structures:
- Trees: only one path to a given node
- Graphs: several paths to a given node
- Operators: directed arcs between nodes
The search process explores the state space.
In the worst case all possible paths between the initial state and the goal state are explored.
State space search is divided into three main types of tasks, which in turn contain different algorithms
and methods for their solution. They are shown in Figure 1.
[Figure 1 - a diagram of search in state space, divided into three types of tasks:
- Search for a path to a specific goal:
  - Uninformed search: Depth-first search, Breadth-first search;
  - Heuristic search: Best-First Search, Beam Search, A* search, Hill Climbing;
- Constraint Satisfaction Problems (CSP): Generate and test, Backtracking, Forward checking;
- Search for a winning strategy in games for two players: Minimax Algorithm, Alpha-beta pruning.]

Figure 1. Presentation of the state space

These algorithms are thoroughly studied by young professionals in this field. For this reason, their characteristics will be examined, together with how each algorithm can be generated and visualized in various software tools, so that a comparison methodology can be created to facilitate learning and studying them. Below is a brief presentation of the three types of tasks, along with the advantages and disadvantages of their algorithms.

2 PRESENTATION OF SEARCH TASKS IN STATE SPACE AND CONCLUSIONS

2.1 Search for a path to a specific goal


2.1.1 Uninformed search
Generating and traversing state graphs in uninformed search involves two major algorithms, as seen in Figure 1: Depth-first search (DFS) and Breadth-first search (BFS). Both algorithms include three basic steps:
Step 1 - Initialization
Step 2 - Search
Step 3 - Display the result

The first step involves defining the start and goal states and creating lists of explored and unexplored successors. In the second step, the search for the selected goal state begins. The difference between the two algorithms appears when the successors are added to the list: in depth-first traversal the successors are added to the front of the list, while in breadth-first traversal they are added to the back of the list. This can be confirmed when the third step is reached and the traversal results are displayed.
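
To make the difference described above concrete, here is a minimal sketch in Python: the same routine behaves as depth-first or breadth-first search depending only on whether new paths are added to the front or the back of the frontier list. The example graph, start node and goal node are illustrative assumptions and are not taken from the article.

```python
from collections import deque

def search(graph, start, goal, depth_first=True):
    """Generic uninformed search: DFS when successors are added to the front
    of the frontier, BFS when they are added to the back."""
    frontier = deque([[start]])              # list of paths; the first one is taken next
    explored = set()
    while frontier:
        path = frontier.popleft()            # always take the path at the front
        node = path[-1]
        if node == goal:
            return path                      # path from start to goal
        if node in explored:
            continue
        explored.add(node)
        successors = [path + [s] for s in graph.get(node, []) if s not in explored]
        if depth_first:
            frontier.extendleft(reversed(successors))   # add to the front (stack behaviour)
        else:
            frontier.extend(successors)                  # add to the back (queue behaviour)
    return None                              # no solution exists

# Illustrative graph (an assumption for the example, not from the article)
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'E': ['G'], 'F': ['G']}
print(search(graph, 'A', 'G', depth_first=True))    # ['A', 'B', 'E', 'G']
print(search(graph, 'A', 'G', depth_first=False))   # breadth-first traversal of the same graph
```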

Conclusions for algorithms with uninformed search:


- Depth-first search (DFS) and Breadth-first search (BFS) are common search algorithms for trees or graphs.
- In DFS, the search starts at the tree root and explores each branch completely before moving to the next one. DFS is suitable for searching trees that are deeper than they are wide. It is suitable for solving puzzles that have only one solution (such as labyrinths).
- In BFS, on the other hand, the traversal also starts from the root of the tree, but every node at the current level is searched before moving to the next level. BFS is suitable for searching trees that are wider than they are deep. It is suitable for problems that involve ordering hierarchies and for finding the shortest path between two nodes in a graph.
- DFS and BFS are two methods for traversing trees or graphs in search of nodes with specific properties. Both methods allow the reconstruction of a path from the initial node to a node of a solution.
- Because they both do the same job, it can be difficult to decide whether DFS or BFS is the better approach. For problems where we are looking for solutions that are close to the starting node, BFS is the better choice, as it examines the closest nodes first. For many problems, the choice of approach depends on the structure of the tree or graph.
- Using this type of algorithm for an uninformed search, a solution can always be found if one exists.

Since trees and graphs can contain many nodes, memory requirements can determine whether to use DFS or BFS. For trees, BFS requires that all nodes at a given level be stored, while DFS requires that the entire path from the root to the leaf be stored. Since trees are often balanced, that is, structured in a way that minimizes depth, DFS typically requires less memory. However, some decision trees and other trees can have branches that are effectively infinite, in which case BFS would be the better choice.

When these algorithms are visualized during the Artificial Intelligence course, the learner has the opportunity to build the necessary tree himself and, while solving various tasks on it, to see visually how its traversal happens, acquiring over time the ability to choose easily the most appropriate algorithm for solving a particular task.

2.1.2 Heuristic search


Methods that prune a portion of the state graph in informed search allow the entire state space not to be generated, but they do not guarantee that a solution will be found when one exists.
- Hill Climbing [3] - This method uses a function that measures the degree of closeness of a given state to the desired goal. Given a large set of input data combined with a good heuristic function, it tries to find a good solution to the problem. This solution may not be the global optimum. It is a variant of the generate-and-test algorithm, because it takes the feedback from the test procedure. This feedback is then used by the generator when deciding on the next move in the search space.
- Best-first search (BFS) [4] is a generic algorithm that expands nodes in non-decreasing order of cost. Different cost functions f(n) give rise to different variants. For example, if f(n) = depth(n), then best-first search becomes breadth-first search. If f(n) = g(n), where g(n) is the cost of the path from the root to node n, then best-first search becomes Dijkstra's single-source shortest-path algorithm. If f(n) = g(n) + h(n), where h(n) is the heuristic estimate of the cost of the path from node n to a goal, then best-first search becomes A* [4]. This method uses an evaluation function to decide which neighbouring nodes are most promising and then examines them. A priority queue is used to store the nodes by cost, so the procedure is a variation of BFS.
- Beam search [5] runs an underlying state-space search method, such as best-first search (BFS) or depth-first search (DFS). What sets beam search apart from its underlying search is the use of heuristic rules to prune search alternatives before exploring them. Note that these heuristic pruning rules are different from the pruning rule based on monotonic node costs and an upper bound on the cost of an optimal goal. [5] This algorithm keeps memory requirements bounded by storing only a limited number of alternative nodes, because inappropriate nodes can be removed at each step of the search. The advantage of this algorithm is that it can reduce the amount of computation and hence the search time.
- The A* algorithm [6] and its linear-space versions are the common methods for finding shortest paths in large graphs. A* keeps an open list of nodes that have been generated but not yet expanded, and chooses from it the most promising node (the best node) for expansion. When a node is expanded, it is moved from the open list to the closed list, and its neighbours are generated and put in the open list. The search terminates when a goal node is chosen for expansion or when the open list is empty. The cost function of A* is f(n) = g(n) + h(n), where g(n) is the distance travelled from the initial state to n, and h(n) is a heuristic estimate of the cost from node n to the goal. If h(n) never overestimates the actual cost from node n to the goal, we say that h(n) is admissible. When an admissible heuristic h(n) is used, A* has been proved to be admissible, complete and optimally effective. In other words, with such a heuristic, A* is guaranteed to always return the shortest path. (A minimal sketch of this cost function is shown after this list.)
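
As an illustration of the cost function f(n) = g(n) + h(n) and of the open/closed lists described above, a minimal A* sketch in Python follows. The graph, the edge costs and the heuristic values are illustrative assumptions; the code follows the description only in outline and is not the implementation used in the virtual laboratory.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: f(n) = g(n) + h(n); the open list is kept as a priority queue."""
    open_list = [(h[start], 0, start, [start])]        # entries: (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)    # most promising node first
        if node == goal:
            return path, g                              # shortest path and its cost
        if node in closed:
            continue
        closed.add(node)                                # move node to the closed list
        for neighbour, cost in graph.get(node, []):
            if neighbour not in closed:
                g2 = g + cost
                heapq.heappush(open_list,
                               (g2 + h[neighbour], g2, neighbour, path + [neighbour]))
    return None, float('inf')                           # open list empty: no solution

# Illustrative graph with edge costs and an admissible heuristic (assumptions)
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 3)]}
h = {'S': 4, 'A': 3, 'B': 2, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (['S', 'A', 'B', 'G'], 6)
```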

Conclusions on Informed Search Algorithms


- These methods allow the entire state space not to be generated, but they do not guarantee that a solution will be found when one exists.
- In the Best-First method the nearest node is selected first, whereas Hill Climbing examines the successors and chooses the one closest to the solution.
- Best-First Search calculates the value of all neighbouring nodes and then selects the best one, while Hill Climbing calculates the value of each neighbouring node in turn and moves on as soon as it finds a node better than the current one.
- Best-First Search is about finding the goal, so it is about choosing the best node among the possible ones and continuing to move towards the goal. Hill Climbing, in contrast, is about maximizing the evaluation: we choose the node that provides the highest climb, so, unlike Best-First Search, the value of the parent node is also taken into account. If we cannot go higher, we simply give up; in this case we may even fail to reach the goal.
- Hill Climbing does not look beyond the immediate neighbours of the current state. It deals only with the best neighbouring node to expand, and the best neighbour is decided by evaluation functions (a minimal sketch of this behaviour is given after this list).
- Best-first search moves to the next state based on the heuristic function f(n) = h(n) with the lowest heuristic value. It does not take into account the cost of the path to that particular state. All it cares about is which successor of the present state has the lowest heuristic value.
- The A* search algorithm visits the next state based on the function f(n) = g(n) + h(n), where the h component is the same heuristic as in Best-first search and the g component is the cost of the path from the initial state to the particular state. Hence, it does not choose the next state with only the lowest heuristic value, but the one that gives the lowest value when both the heuristic and the cost of reaching that state are considered.
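
To make the hill-climbing behaviour referred to above concrete, a minimal sketch in Python is given below: at each step only the immediate neighbours of the current state are evaluated, the best one is chosen, and the search gives up when no neighbour is better. The state space and the evaluation function are illustrative assumptions.

```python
def hill_climbing(start, neighbours, value):
    """Steepest-ascent hill climbing: look only at the immediate neighbours
    of the current state and stop at a local optimum."""
    current = start
    while True:
        best = max(neighbours(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current                   # no neighbour is better: give up here
        current = best

# Illustrative one-dimensional example (an assumption for the sketch):
# maximise value(x) = -(x - 7) ** 2 over the integers by moving +/- 1.
value = lambda x: -(x - 7) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climbing(0, neighbours, value))   # 7 (the global maximum in this convex example)
```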
In [7] a similar comparison of the algorithms is made according to different criteria; the result is summarized in Table 1.
Comparison criteria:
- Completeness - an indicator of whether the search algorithm is exhaustive;
- Optimality - an indicator of whether the solution found is optimal;
- Time - how much time the algorithm needs to find a solution;
- Memory - how much memory is needed to perform the search.
where:
- d = depth of the solution in the search tree;
- b = branching factor of the search tree;
- n = the subset of b that the algorithm will actually expand.
- In Table 1, these attributes form the basis for decision making;
- Each of the algorithms discussed has its weaknesses and strengths.
Table 1. Comparison of algorithms by different criteria

Algorithm       Complete   Optimal   Time      Memory
Breadth-first   yes        yes       O(b^d)    O(b^d)
Depth-first     no         no        O(b^d)    O(b)
Hill climbing   no         no        O(b^d)    O(1)-O(b^d)
Best-first      yes        no        O(b^d)    O(b^d)
A*              yes        yes       O(b^d)    O(b^d)
Beam search     no         no        O(n^d)    O(n^d)

- From the table, it can be seen that by the time criterion all the algorithms are the same except for A* and beam search. Beam search has a time estimate of O(n^d), unlike the common estimate of O(b^d) for the other algorithms. This is because beam search is a modification of A* that looks only at the best n branches at each node. This speeds up the processing, but at the cost of assuming that it is not necessary to pass through a less promising node to reach the goal.
- The memory requirements of the search algorithms vary more widely than their time requirements. In many cases, the memory behaviour of a search algorithm comes close to that of breadth-first or depth-first traversal.
- As regards the completeness criterion, it can be noted that three of the algorithms are complete. These algorithms are similar, and two of them derive from the third: the Best-First method is a variation of breadth-first search, while A* uses the same heuristic as Best-first search. Applying this heuristic is also what distinguishes these two algorithms under the optimality criterion.

2.2 Constraint Satisfaction Problems, CSP


2.2.1 Generate and test
CSP can be solved using the generate-and-test paradigm. In this paradigm, each possible combination of values of the variables is systematically generated and then tested to see whether it satisfies all the constraints. The first combination that satisfies all the constraints is the solution. [8] This algorithm is easy to implement, but it can take a long time before a solution is found.
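
A minimal sketch of the generate-and-test paradigm might look as follows: every combination of values is generated systematically and the first one that satisfies all the constraints is returned. The variables, domains and constraints are illustrative assumptions, not taken from the article.

```python
from itertools import product

def generate_and_test(domains, constraints):
    """Generate every combination of values and test it against all constraints."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):      # generate
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):           # test
            return assignment                                  # first combination that fits
    return None                                                # no combination satisfies everything

# Illustrative CSP (assumption): X, Y, Z in 1..3, with X < Y and X + Y == Z
domains = {'X': [1, 2, 3], 'Y': [1, 2, 3], 'Z': [1, 2, 3]}
constraints = [lambda a: a['X'] < a['Y'], lambda a: a['X'] + a['Y'] == a['Z']]
print(generate_and_test(domains, constraints))   # {'X': 1, 'Y': 2, 'Z': 3}
```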

2.2.2 Backtracking
This is the most common algorithm for resolving constraint satisfaction (CSP). Connect variables with
values in a specific order. After connecting each successive variable with a value, it is checked
whether the variables currently associated meet the constraints: if constraints are satisfied, continue to
select a value for the next unrelated variable if the constraints are not met, a new value is selected for
the last associated variable.
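
A minimal backtracking sketch for the same kind of problem is shown below: variables are bound to values in a fixed order, each partial assignment is checked against the constraints, and the search backs up to the previously bound variable when a constraint is violated. The helper `partial` and the example problem are assumptions made for the illustration.

```python
def backtracking(domains, constraints, assignment=None):
    """Bind variables one by one; undo the last binding when the constraints fail."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment                              # every variable is bound
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):    # check the partial assignment
            result = backtracking(domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                            # backtrack
    return None

# Illustrative CSP (assumption): X, Y, Z in 1..3, with X < Y and X + Y == Z.
# Each constraint simply ignores variables that are not yet bound.
def partial(check, *vars):
    return lambda a: check(*(a[v] for v in vars)) if all(v in a for v in vars) else True

domains = {'X': [1, 2, 3], 'Y': [1, 2, 3], 'Z': [1, 2, 3]}
constraints = [partial(lambda x, y: x < y, 'X', 'Y'),
               partial(lambda x, y, z: x + y == z, 'X', 'Y', 'Z')]
print(backtracking(domains, constraints))   # {'X': 1, 'Y': 2, 'Z': 3}
```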

2.2.3 Forward checking


Forward checking keeps track of the remaining legal values of the unbound variables. The search is terminated if at some step a variable has no legal values left. Forward checking propagates information (restrictions) from bound to unbound variables, but does not detect all cases that will lead to failure.
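
The following sketch illustrates forward checking: after each binding, values that are no longer possible are removed from the domains of the unbound variables, and the current value is abandoned as soon as some domain becomes empty. The binary "must differ" constraints and the small map-colouring instance are assumptions made for the illustration.

```python
def forward_checking(domains, neighbours, assignment=None):
    """After each binding, prune conflicting values from the domains of
    unbound neighbours; abandon the value when any domain becomes empty."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        pruned = {}                                    # remember prunings so they can be undone
        for other in neighbours[var]:
            if other not in assignment and value in domains[other]:
                domains[other].remove(value)
                pruned.setdefault(other, []).append(value)
        if all(domains[v] for v in domains if v not in assignment and v != var):
            assignment[var] = value
            result = forward_checking(domains, neighbours, assignment)
            if result is not None:
                return result
            del assignment[var]
        for other, values in pruned.items():           # undo the pruning before the next value
            domains[other].extend(values)
    return None

# Illustrative map-colouring instance (an assumption): three mutually
# adjacent regions must all receive different colours.
domains = {'A': ['red', 'green', 'blue'], 'B': ['red', 'green', 'blue'], 'C': ['red', 'green', 'blue']}
neighbours = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B']}
print(forward_checking(domains, neighbours))   # {'A': 'red', 'B': 'green', 'C': 'blue'}
```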

2.3 Search for a winning strategy in games for two players


This type of procedure is used to solve complex (intelligent) tasks that are played by two or more players and that require each player to predict the opponent's next move. The game is played with complete information about the possible moves of each player, without taking into account incidental factors that may affect them.
2.3.1 MiniMax
MiniMax [2, 9] is a game-tree search algorithm that is divided into two main stages. The first stage is for the first player (the computer) and the second stage is for the second player (the human). The algorithm tries to find the best move for the computer even if the person plays the moves that are best for him. This means that it maximizes the result on the moves chosen by the computer while minimizing the result on the moves where the person chooses his best move. It requires building the entire tree of states, which may be too large.
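
A minimal sketch of the MiniMax idea follows: the maximizing player (the computer) and the minimizing player (the person) alternate levels of the game tree, and the whole tree is evaluated. The toy game tree and its leaf values are illustrative assumptions.

```python
def minimax(node, maximizing):
    """Evaluate a game tree: the computer maximizes the score, the person minimizes it."""
    if isinstance(node, (int, float)):          # leaf: a heuristic evaluation of the position
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Illustrative game tree (an assumption): inner lists are moves, numbers are leaf values.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))   # 3: the best result the computer can guarantee
```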

2.3.2 Alpha-Beta
The Alpha-Beta [2, 10] algorithm is a modification of the MiniMax algorithm. Knuth and Moore proved that many of the branches can be pruned from the game tree, which reduces the time needed to search the tree while giving the same result as the MiniMax algorithm. The basic idea of this algorithm is to remove the branches of the game tree that cannot influence the final decision.
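
For comparison, a sketch of the alpha-beta modification is given below: it returns the same value as MiniMax for the same toy tree, but carries the bounds alpha and beta and cuts off branches that cannot influence the final decision. As above, the tree is an illustrative assumption.

```python
def alphabeta(node, alpha=float('-inf'), beta=float('inf'), maximizing=True):
    """MiniMax with alpha-beta pruning: gives the same value as MiniMax,
    but skips branches that cannot change the final decision."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                      # cut-off: the minimizer will avoid this branch
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break                          # cut-off: the maximizer will avoid this branch
    return value

tree = [[3, 5], [2, 9], [0, 7]]            # same illustrative tree as in the MiniMax sketch
print(alphabeta(tree))                      # 3, the same result with fewer nodes examined
```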
Conclusions for both methods
- The disadvantage of the minimax procedure is that each leaf of the state tree must be visited twice: the first time to find its successors and the second time to evaluate its heuristic value.
- Minimax is too slow in games like chess, because the player has many possible moves to choose from, and the deeper we go (as the game progresses), the slower it gets.
- Alpha-Beta: the utility estimates of the nodes are generally not exact but rough estimates of the value of a position, and as a result large errors can be made.
- Alpha-Beta: if the evaluation function is not good, it is possible for a player not to play optimally every time and not to choose the best possible move.
Examples of games that use these two methods are chess, sea chess (tic-tac-toe) and other games of this type. For learners to learn the methods better and to understand their advantages and disadvantages, it is appropriate for their visual presentation to use one of these example games. Sea chess is the most suitable, because a game does not last long, yet it is still enough to demonstrate the methods.

3 METHODOLOGY AND VIRTUAL LABORATORY


The design of a virtual lab in this area should be consistent with the current methods and principles for developing such systems. Consideration should be given to how training in the discipline is conducted and which aspects are emphasized. Because of the short time available for training, the following methodology is recommended for the practical exercises.

3.1 Methodology
Methodology for conducting the practical exercises in the subject Artificial Intelligence
1. For each of the algorithms described above, the teacher devises suitable tasks of varying difficulty, giving learners tasks from the easiest to the most difficult.
2. The teacher presents the tasks to the students, explaining their context theoretically, and further illustrates them by showing block diagrams of the different algorithms.
3. The student reads the lectures, listens to the instructions from the lecturer and starts solving the tasks practically by performing the following steps.
3.1. Reads the task condition.
3.2. Considers possible solutions (which algorithms can be applied to the task) and, if necessary, makes a comparison, which is discussed with the lecturer.
3.3. Chooses a solution - an algorithm.
3.3.1. If no algorithm has been chosen, returns to step 3.2.
3.3.2. If an algorithm has been selected, it is applied to solve the task through the web-based virtual lab.
3.4. Solves the task.
3.4.1. If the task is not solved correctly:
3.4.1.1. Reviews the solution.
3.4.1.2. Detects the error.
3.4.1.3. Returns to step 3.4.
3.4.2. If the task is solved correctly, continues.
3.5. Delivers the result.
3.6. Saves the task.
3.7. Discusses the result with the lecturer.
3.7.1. If the teacher considers that the learning material has not been mastered well, a new task is given and the student returns to step 3.1.
3.7.2. If the teacher considers that the learning material has been mastered theoretically and practically, the student passes to the next topic of the subject.
3.8. End

3.2 Model of training


Model of training in the discipline Artificial Intelligence for Practical Exercises
1. Preliminary preparation - read the task condition carefully and, if necessary, more than once. In this way the trainee chooses the most appropriate algorithm for the task from the lectures. This is the first step at which the degree to which the lecture material has been learned can be determined.
2. Task solving - solving tasks of the respective type during the practical exercises. Here the trainees apply the knowledge acquired through the methodology described above.
3. Repeated solving of tasks of one type - this step reinforces the learner's knowledge. It can also be of great help to the lecturer, who can spot gaps and supplement or correct the tasks.
4. Analysis of achievement and feedback - here conclusions are drawn about the learners' progress, which is also a good indicator of the teaching methods and methodology of the teacher.
Step 4, "Achievement and feedback analysis", may also be appropriate to carry out after each of the first three steps, so that gaps are filled in advance, immediately before the relevant exercises of the given type are completed.

3.3 Web-based virtual lab


A virtual lab that is web-based and can visualize each algorithm will significantly improve learning. Learners will have the opportunity to ask questions about the illustrative examples they receive. This will provide better feedback between the learner and the trainer. A clear framework for each algorithm will be built, and the algorithms will be mastered even more thoroughly. Because the lab is web-based, it will be accessible from anywhere with an internet connection, which is also a great advantage.
The Web-based Virtual Lab includes four core modules that will greatly improve student education.

Theoretical presentation
For each algorithm, there will be a theoretical presentation, emphasizing its peculiarities. Through this
module, learners will be able to catch up with a missed lesson.

Graphic presentation
In this module, all algorithms are represented graphically, with each node annotated at the places where there are peculiarities. Using this module together with the theoretical module, learners will begin to form a visual picture of each algorithm.

Sample Tasks and Solutions


Here are the main tasks that are given as examples for the search algorithms. Each task contains a condition, represented graphically for the algorithm, and a visualization of the execution of the algorithm. Example tasks are:
- Chess;
- Sea chess (tic-tac-toe);
- The eight queens puzzle;
- The travelling salesman problem.
These tasks are used as examples by many professors in the field.
Through these tasks, learners get a real idea of what they want them to implement and learn as
knowledge in the virtual lab.

Standalone tasks
This is the most used module in our laboratory. It allows learners to choose:
- which algorithm to exercise;
- how to display the selected algorithm - as a graph or as a tree;
- how to execute the chosen algorithm - step by step or all steps at once. The search path that the algorithm follows to reach the target is coloured in a specific colour. The student can also record his or her own way of solving the problem and then see the correct answer if it is wrong.
- When the task is complete, it is possible to save it in the learner's profile, in the form of a text document and in the form of a picture.
- When a task is solved by two or more algorithms, the learner has the opportunity to examine the two solutions side by side and to compare the two algorithms, seeing the differences between them.

4 CONCLUSIONS
Through this laboratory and the methodology described above, training is expected to improve significantly. The combination of the methodology and the capabilities available in the lab benefits both sides. Because the lab is web-based, the trainees have much more time for exercises. In this way, during the practical exercises they have the opportunity to ask many more questions or to raise cases for discussion. Each task they solve can be sent to the tutor for review or correction without having to wait until the next exercise. The lecturer gets a real idea of the progress of each student and sees clearly where each of them is confused, and is thus able to address and correct these mistakes in the lab during classes.

ACKNOWLEDGEMENTS
The study was supported by contract of University of Ruse “Angel Kanchev”, № BG05M2OP001-
2.009-0011-С01, „Support for the development of human resources for research and innovation at the
University of Ruse “Angel Kanchev”. The project is funded with support from the Operational Program
„Science and Education for Smart Growth 2014 - 2020" financed by the European Social Fund of the
European Union.

REFERENCES
[1] Шишков, Д. & Нишева, М. (1995). Изкуствен интелект. Добрич: Интеграл (in Bulgarian).
[2] Artificial Intelligence Solving problems by searching - Fall 2008 - professor: Luigi Ceccaroni –
PowerPoint presentation
[3] GeeksforGeeks - a computer science portal for geeks, visited 10.04.2018,
https://www.geeksforgeeks.org/introduction-hill-climbing-artificial-intelligence/
[4] https://aaai.org/Papers/AAAI/1991/AAAI91-067.pdf
[5] Zhang, W. (1998, July). Complete anytime beam search. In AAAI/IAAI (pp. 425-430).
[6] Felner, A., Stern, R., Ben-Yair, A., Kraus, S., & Netanyahu, N. (2004). PHA*: finding the
shortest path with A* in an unknown physical environment. Journal of Artificial Intelligence
Research, 21, 631-670.
[7] Chandel, A. & Sood, M. (2014). Searching and optimization techniques in artificial intelligence: A comparative study & complexity analysis. International Journal of Advanced Research in Computer Engineering and Technology (IJARCET), 3(3).
[8] Kumar, V. (1992). Algorithms for constraint-satisfaction problems: A survey. AI magazine, 13(1),
32.
[9] Elnaggar, A. A., Abdel, M., Gadallah, M., & El-Deeb, H. (2014). A comparative study of game
tree searching methods. Int. J. Adv. Comput. Sci. Appl., 5(5), 68-77.
[10] Knuth, D. E. (2000). Selected papers on analysis of algorithms. Stanford, CA: Center for the
Study of Language and Information.
