
Introduction to AI

What is artificial intelligence?

Artificial Intelligence is the branch of computer science concerned with making computers behave like humans.

Major AI textbooks define artificial intelligence as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment
and takes actions which maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making
intelligent machines,especially intelligent computer programs."

The definitions of AI given in some textbooks can be categorized into four approaches:

The four approaches in more detail are as follows :

(a) Acting humanly: The Turing Test approach

o Test proposed by Alan Turing in 1950.

o The computer is asked questions by a human interrogator.

The computer passes the test if the human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or not. To pass, the computer needs to possess the following capabilities:

 Natural language processing to enable it to communicate successfully in English.
 Knowledge representation to store what it knows or hears.
 Automated reasoning to use the stored information to answer questions and to draw new conclusions.
 Machine learning to adapt to new circumstances and to detect and extrapolate patterns.

To pass the complete Turing Test, the computer will also need:

 Computer vision to perceive objects, and
 Robotics to manipulate objects and move about.

(b) Thinking humanly: The cognitive modeling approach

We need to get inside the actual workings of the human mind:

(a) through introspection, trying to catch our own thoughts as they go by;

(b) through psychological experiments.

Allen Newell and Herbert Simon, who developed GPS, the "General Problem Solver", compared the reasoning steps of their program to traces of human subjects solving the same problems.

The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to try to construct precise and testable theories of the workings of the human mind.

(c) Thinking rationally: The "laws of thought" approach

The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking", that is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises, for example: "Socrates is a man; all men are mortal; therefore Socrates is mortal." These laws of thought were supposed to govern the operation of the mind; their study initiated the field called logic.

(d) Acting rationally: The rational agent approach

An agent is something that acts. Computer agents are not mere programs; they are also expected to have the following attributes: (a) operating under autonomous control, (b) perceiving their environment, (c) persisting over a prolonged time period, and (d) adapting to change. A rational agent is one that acts so as to achieve the best outcome.

The foundations of Artificial Intelligence

The various disciplines that contributed ideas, viewpoints, and techniques to AI are given below:

Philosophy(428 B.C. – present)

Aristotle (384-322 B.C.) was the first to formulate a precise set of laws governing the rational part of the mind. He developed an informal system of syllogisms for proper reasoning, which allowed one to generate conclusions mechanically, given initial premises.

Table 1.1 A crude comparison of the computer and the human brain:

                        Computer                    Human Brain
Computational units     1 CPU, 10^8 gates           10^11 neurons
Storage units           10^10 bits RAM,             10^11 neurons,
                        10^11 bits disk             10^14 synapses
Cycle time              10^-9 sec                   10^-3 sec
Bandwidth               10^10 bits/sec              10^14 bits/sec
Memory updates/sec      10^9                        10^14

Brains and digital computers perform quite different tasks and have different properties. Table 1.1 shows that there are roughly 1000 times more neurons in the typical human brain (10^11) than there are gates in the CPU of a typical high-end computer (10^8). Moore's Law predicts that the CPU's gate count will equal the brain's neuron count around 2020.

Psychology (1879 - present)

The origins of scientific psychology are traced back to the work of the German physiologist Hermann von Helmholtz (1821-1894) and his student Wilhelm Wundt (1832-1920). In 1879, Wundt opened the first laboratory of experimental psychology at the University of Leipzig. In the US, the development of computer modeling led to the creation of the field of cognitive science. The field can be said to have started at a workshop in September 1956 at MIT.

Computer engineering (1940-present)

For artificial intelligence to succeed, we need two things: intelligence and an artifact. The computer has been the artifact of choice. AI also owes a debt to the software side of computer science, which has supplied the operating systems, programming languages, and tools needed to write modern programs.

Control theory and Cybernetics (1948-present)

Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water clock with a regulator that kept the flow of water running through it at a constant, predictable pace. Modern control theory, especially the branch known as stochastic optimal control, has as its goal the design of systems that maximize an objective function over time.

Linguistics (1957-present)

Modern linguistics and AI, then, were "born" at about the same time, and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing.

History of Artificial Intelligence

The gestation of artificial intelligence (1943-1955)

There were a number of early examples of work that can be characterized as AI, but it was Alan Turing who first articulated a complete vision of AI in his 1950
article "Computing Machinery and Intelligence." Therein, he introduced the Turing test, machine learning, genetic algorithms, and reinforcement
learning.
The birth of artificial intelligence (1956)

McCarthy convinced Minsky, Claude Shannon, and Nathaniel Rochester to help him bring together U.S. researchers interested in automata theory, neural
nets, and the study of intelligence. They organized a two-month workshop at Dartmouth in the summer of 1956. Perhaps the longest-lasting thing to come
out of the workshop was an agreement to adopt McCarthy's new name for the field: artificial intelligence.

Early enthusiasm, great expectations (1952-1969)

The early years of AI were full of successes, in a limited way.

General Problem Solver (GPS) was a computer program created in 1957 by Herbert Simon and Allen Newell to build a universal problem solver machine.
The order in which the program considered subgoals and possible actions was similar to that in which humans approached the same problems.
Thus, GPS was probably the first program to embody the "thinking humanly" approach. Herbert Gelernter (1959) constructed the Geometry Theorem Prover,
which was able to prove theorems that many students of mathematics would find quite tricky. Lisp was invented by John McCarthy in 1958 while he was at the
Massachusetts Institute of Technology (MIT). In 1963, McCarthy started the AI lab at Stanford. Tom Evans's ANALOGY program (1968) solved geometric analogy problems that appear in IQ tests.

A dose of reality (1966-1973)

From the beginning, AI researchers were not shy about making predictions of their coming successes. The following statement by Herbert Simon in 1957 is often quoted: "It is not my aim to surprise or shock you, but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until, in a visible future, the range of problems they can handle will be coextensive with the range to which the human mind has been applied."

Knowledge-based systems: The key to power? (1969-1979)

Dendral was an influential pioneer project in artificial intelligence (AI) of the 1960s, and the computer software expert system that it produced. Its primary aim
was to help organic chemists in identifying unknown organic molecules, by analyzing their mass spectra and using knowledge of chemistry. It was done at
Stanford University by Edward Feigenbaum, Bruce Buchanan, Joshua Lederberg, and Carl Djerassi.

AI becomes an industry (1980-present)

In 1981, the Japanese announced the "Fifth Generation" project, a 10-year plan to build intelligent computers running Prolog. Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988.

The return of neural networks (1986-present)

Psychologists including David Rumelhart and Geoff Hinton continued the study of neural-net models of memory.

AI becomes a science (1987-present)


In recent years, approaches based on hidden Markov models (HMMs) have come to dominate the area. Speech technology and the related field of
handwritten character recognition are already making the transition to widespread industrial and consumer applications. The Bayesian network formalism was
invented to allow efficient representation of, and rigorous reasoning with, uncertain knowledge.

The emergence of intelligent agents (1995-present)

One of the most important environments for intelligent agents is the Internet.

What can AI do today?

Autonomous planning and scheduling: A hundred million miles from Earth, NASA's Remote Agent program became the first on-board autonomous
planning program to control the scheduling of operations for a spacecraft (Jonsson et al., 2000). Remote Agent generated plans from high-level goals specified
from the ground, and it monitored the operation of the spacecraft as the plans were executed, detecting, diagnosing, and recovering from problems as they occurred.

Game playing: IBM's Deep Blue became the first computer program to defeat the world champion in a chess match when it bested Garry Kasparov by a
score of 3.5 to 2.5 in an exhibition match (Goodman and Keene, 1997).

Autonomous control: The ALVINN computer vision system was trained to steer a car to keep it following a lane. It was placed in CMU's NAVLAB computer-controlled minivan and used to navigate across the United States; for 2850 miles it was in control of steering the vehicle 98% of the time.

Diagnosis: Medical diagnosis programs based on probabilistic analysis have been able to perform at the level of an expert physician in several areas of
medicine.

Logistics Planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994),
to do automated logistics planning and scheduling for transportation. This involved up to 50,000 vehicles, cargo, and people at a time, and had to account for
starting points, destinations, routes, and conflict resolution among all parameters. The AI planning techniques allowed a plan to be generated in hours that
would have taken weeks with older methods. The Defense Advanced Research Project Agency (DARPA) stated that this single application more than paid
back DARPA's 30-year investment in AI.

Robotics: Many surgeons now use robot assistants in microsurgery. HipNav (DiGioia et al., 1996) is a system that uses computer vision techniques to create
a three-dimensional model of a patient's internal anatomy and then uses robotic control to guide the insertion of a hip replacement prosthesis.

Language understanding and problem solving: PROVERB (Littman et al., 1999) is a computer program that solves crossword puzzles better than most
humans, using constraints on possible word fillers, a large database of past puzzles, and a variety of information sources including dictionaries and online
databases such as a list of movies and the actors that appear in them.
Task Classification of AI
The domain of AI is classified into Formal tasks, Mundane tasks, and Expert tasks.

Task Domains of Artificial Intelligence

Mundane (Ordinary) Tasks
 Perception (Computer Vision; Speech, Voice)
 Natural Language Processing (Understanding, Language Generation, Language Translation)
 Common Sense Reasoning
 Planning
 Robotics (Locomotion)

Formal Tasks
 Mathematics (Geometry, Logic, Integration and Differentiation)
 Games (Go, Chess (Deep Blue), Checkers)
 Verification
 Theorem Proving

Expert Tasks
 Engineering (Fault Finding, Manufacturing, Monitoring)
 Scientific Analysis
 Financial Analysis
 Medical Diagnosis
 Creativity

Humans learn mundane (ordinary) tasks from birth: they learn by perceiving, speaking, using language, and moving about. They learn formal tasks and expert tasks later, in that order.

For humans, the mundane tasks are the easiest to learn. The same was assumed to hold before anyone tried to implement mundane tasks in machines, and early AI work was concentrated in the mundane task domain.

Later, it turned out that machines require more knowledge, more complex knowledge representation, and more complicated algorithms to handle mundane tasks. This is why AI work now prospers more in the expert task domain: expert tasks need specialist knowledge without common sense, which is easier to represent and handle.

Refer to the ppt for tic-tac-toe problem solving approaches.


Problem formulation
Involves four steps:
 Define the problem precisely. The definition must include a precise specification of the initial situation as well as the final situation(s) that count as acceptable solutions to the problem.
 Analyse the problem.
 Isolate and represent the task knowledge that is required to solve the problem.
 Choose the best problem-solving technique(s) and apply it (them) to the particular problem.

Problem Definition
 A problem is defined by its 'elements' and their 'relations'. To provide a formal description of a problem, we need to do the following:
a. Define a state space that contains all the possible configurations of the relevant objects (including, perhaps, some impossible ones).
b. Specify one or more states that describe possible situations from which the problem-solving process may start. These states are called initial states.
c. Specify one or more states that would be acceptable solutions to the problem. These states are called goal states.
d. Specify a set of rules that describe the actions (operators) available.

The problem can then be solved by using the rules, in combination with an appropriate control strategy, to move through the problem space until a path from an initial state to a goal state is found. This process is known as 'search'.
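The formal description above can be sketched in Python. This is a minimal illustration, not from the text: the `Problem` class, the breadth-first control strategy, and the toy state space are all assumptions made here for the example.

```python
from collections import deque

# Minimal state-space problem: an initial state, a set of goal states,
# and rules (operators) that map a state to its successor states.
class Problem:
    def __init__(self, initial, goals, rules):
        self.initial = initial        # initial state
        self.goals = set(goals)       # acceptable goal states
        self.rules = rules            # state -> list of successor states

    def is_goal(self, state):
        return state in self.goals

# A control strategy (here: breadth-first) moves through the state
# space until a path from the initial state to a goal state is found.
def search(problem):
    frontier = deque([[problem.initial]])
    visited = {problem.initial}
    while frontier:
        path = frontier.popleft()
        if problem.is_goal(path[-1]):
            return path
        for nxt in problem.rules(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no path exists

# Toy example: states 0..4, operators add 1 or 2, goal is 4.
p = Problem(0, [4], lambda s: [s + 1, s + 2] if s < 4 else [])
print(search(p))  # → [0, 2, 4]
```

The control strategy is interchangeable: swapping the queue for a stack gives depth-first search over the same problem definition.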

Chess Game
 Legal moves
 Positions that represent a win
 It is not only the play but also the winning condition that terminates the game
 The figure below shows one legal move of the chess game

Goal
o The opponent does not have a legal move
o The opponent's king is under attack

Practical Difficulties
o No person could ever supply a complete set of such rules. It would take too long and could certainly not be done without mistakes.
o No program could easily handle all those rules. Although a hashing scheme could be used to find the relevant rules for each move fairly quickly, storing them is in fact a difficulty.

The state-space representation for the chess game is well organized. Each state corresponds to a legal position of the board. We can play chess by starting at an initial state, using a set of rules to move from one state to another, and finally ending at one of the final states.

Advantage of State Space Search

• It allows for a formal definition of a problem as the need to convert some given situation into some desired situation using a set of
permissible operations.

• It permits us to define the process of solving a particular problem as a combination of known techniques and search. Search is a very
important process in the solution of hard problems for which no more direct techniques are available.

Water Jug Problem

You are given two jugs, a 4-litre one and a 3-litre one. Neither has any measuring markers on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 litres of water into the 4-litre jug?
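One way to attack this as a state-space search is sketched below. The state encoding (litres in the 4-litre jug, litres in the 3-litre jug), the operator set, and the breadth-first control strategy are choices made here for illustration, not given in the text.

```python
from collections import deque

# Breadth-first search over (4-litre, 3-litre) jug states.
# Operators: fill a jug, empty a jug, or pour one into the other.
def water_jug(goal=2):
    def successors(state):
        x, y = state  # x = litres in 4-litre jug, y = litres in 3-litre jug
        results = [(4, y), (x, 3), (0, y), (x, 0)]             # fill / empty
        pour = min(x, 3 - y); results.append((x - pour, y + pour))  # 4 -> 3
        pour = min(y, 4 - x); results.append((x + pour, y - pour))  # 3 -> 4
        return results

    frontier = deque([[(0, 0)]])
    visited = {(0, 0)}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == goal:   # exactly `goal` litres in the 4-litre jug
            return path
        for s in successors(path[-1]):
            if s not in visited:
                visited.add(s)
                frontier.append(path + [s])
    return None

# Prints a shortest sequence of (4-litre, 3-litre) states
# ending with 2 litres in the 4-litre jug.
print(water_jug())
```

Because breadth-first search explores states level by level, the first goal it pops is reached by a shortest operator sequence (here, six operations).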

refer problem formulation_3.pdf


Search algorithms

• Two types of search: uninformed and informed.

• Uninformed search: Also called blind, exhaustive, or brute-force search; it uses no information about the problem to guide the search and therefore may not be very efficient.
• Informed search: Also called heuristic or intelligent search; it uses information about the problem to guide the search, usually an estimate of the distance to a goal state, and is therefore more efficient, but such guidance may not always be available.
Refer 8-puzzle problem solving using heuristic functions

Refer search_4.pdf

Heuristic search techniques

GENERATE-AND-TEST ALGORITHM

Generate-and-test is a very simple search algorithm that is guaranteed to find a solution if done systematically and a solution exists.

ALGORITHM: GENERATE-AND-TEST
1. Generate a possible solution.
2. Test to see if this is the expected solution.
3. If the solution has been found, quit; else go to step 1.

The potential solutions that need to be generated vary depending on the kind of problem. For some problems the possible solutions may be particular points in the problem space; for others, paths from the start state.

• Exhaustive generate-and-test: An exhaustive search of the problem space in which solutions are generated systematically. This will definitely find a solution if one exists, but it is not efficient for problems with a large space, as it may take a long time.
• Random generate-and-test: Candidates are generated at random; there is no guarantee that a solution will ever be found.
• Heuristic generate-and-test: Does not consider paths that seem unlikely to lead to a solution.
• Plan generate-and-test: Uses a planning process to restrict the set of candidates before testing them.

SYSTEMATIC/HEURISTIC GENERATE-AND-TEST
A depth-first search tree with backtracking can be used to implement a systematic generate-and-test procedure. If some intermediate states are likely to appear often in the tree, it is better to modify the procedure to traverse a graph rather than a tree.

GENERATE-AND-TEST AND PLANNING

Exhaustive generate-and-test is very useful for simple problems. For complex problems, however, even heuristic generate-and-test is not a very effective technique. It can be made effective by combining it with other techniques that restrict the space in which to search. The AI program DENDRAL, for example, uses the plan-generate-and-test technique. First, the planning process uses constraint-satisfaction techniques to create lists of recommended and contraindicated substructures. Then the generate-and-test procedure uses those lists and needs to explore only a limited set of structures. Constrained in this way, generate-and-test proved highly effective. A major weakness of planning is that it often produces inaccurate solutions, as there is no feedback from the world. But if it is used to produce only pieces of solutions, the lack of detailed accuracy becomes unimportant.

Hill climbing algorithm

Hill climbing is a variant of depth-first (generate-and-test) search. Feedback is used here to decide on the direction of motion in the search space. In depth-first search, the test function merely accepts or rejects a solution. In hill climbing, the test function is augmented with a heuristic function that estimates how close a given state is to a goal state.

Hill climbing is a mathematical optimization technique belonging to the family of local search. It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by incrementally changing a single element of the solution. If the change produces a better solution, another incremental change is made to the new solution, repeating until no further improvements can be found.

For example, hill climbing can be applied to the travelling salesman problem. It is easy to find an initial solution that visits all the cities, but it will likely be very poor compared to the optimal solution. The algorithm starts with such a solution and makes small improvements to it, such as switching the order in which two cities are visited.
Two variations of hill climbing are:
1. Simple hill climbing:
2. Steepest Ascent Hill Climbing.
Simple hill climbing: the simplest way to implement hill climbing.
Algorithm:
1. Evaluate the initial state. If it is a goal state, then return it and quit; otherwise make it the current state and go to step 2.
2. Loop until a solution is found or there are no new operators left to be applied:
a. Select and apply a new operator.
b. Evaluate the new state:
i. If it is a goal state, then return it and quit.
ii. If it is better than the current state, then make it the new current state.
iii. If it is not better than the current state, then continue the loop.
 Hill climbing = generate-and-test + heuristics
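A minimal sketch of simple hill climbing in Python. The one-dimensional heuristic and the ±1 operators are assumptions invented for this example; the point is the control structure, which accepts the first better successor.

```python
# Simple hill climbing (a sketch): accept the FIRST successor that is
# better than the current state, stopping when no operator improves.
# Illustrative heuristic: maximize f(x) = -(x - 7)^2 over integers.
def f(x):
    return -(x - 7) ** 2

def simple_hill_climb(start):
    current = start
    while True:
        improved = False
        for op in (+1, -1):              # operators, tried in order
            new = current + op
            if f(new) > f(current):      # first better successor wins
                current = new
                improved = True
                break
        if not improved:                 # no operator improves: stop
            return current

print(simple_hill_climb(0))  # → 7 (the only maximum of this f)
```

With this single-peaked f the climb always reaches the global maximum; with a multi-peaked heuristic the same loop would stop at whichever local maximum it reaches first, which is the weakness discussed below.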
Steepest-Ascent Hill Climbing

Steepest-ascent hill climbing is better than simple hill climbing in that it chooses the best node out of all the successor nodes and then proceeds, unlike simple hill climbing, which chooses the first better node it encounters. Simple hill climbing can thereby miss successor nodes with higher heuristic values than the one chosen, leading to a poor selection and a fall into a local maximum.

The algorithm is as follows:

1. Evaluate the initial state. If it is also a goal state, return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no change to the current state:
a) Let SUCC be a state such that any possible successor of the current state will be better than SUCC.
b) For each operator that applies to the current state:
i) Apply the operator and generate a new state.
ii) Evaluate the new state. If it is a goal state, then return it and quit. Else, if it is better than SUCC, set SUCC to this state; otherwise leave SUCC as it was.
c) If SUCC is better than the current state, then set the current state to SUCC.
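The loop above can be sketched as follows; the key difference from simple hill climbing is that all successors are evaluated before moving. The heuristic and neighbourhood below are invented for illustration, not taken from the text.

```python
# Steepest-ascent hill climbing (a sketch): evaluate ALL successors and
# move to the best one (SUCC), not merely the first improvement found.
def steepest_ascent(start, f, neighbours):
    current = start
    while True:
        # SUCC = best of all successors of the current state
        succ = max(neighbours(current), key=f, default=current)
        if f(succ) <= f(current):   # a full iteration produced no change
            return current
        current = succ

# Illustrative example: maximize f over integers with steps of +-1, +-2.
f = lambda x: -(x - 5) ** 2
neighbours = lambda x: [x - 2, x - 1, x + 1, x + 2]
print(steepest_ascent(0, f, neighbours))  # → 5
```

Note that the termination test ("a complete iteration produces no change") is exactly step 2 of the algorithm: when the best successor is no better than the current state, the climb stops.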

But even this algorithm shares a disadvantage with simple hill climbing: it cannot backtrack to its parent node. If it gets trapped in a local maximum, nothing can help it get out of that situation.

Hill climbing vs steepest ascent hill climbing


Both variants are greedy, but in simple hill climbing the algorithm moves to the first better value it finds and so can more easily end up in a local maximum/minimum, whereas steepest-ascent hill climbing examines all successors and moves to the best value.

In the tree shown in the accompanying figure, simple hill climbing ends up taking the path to B, as the cost to B (0.5) is the minimum of the immediate costs, but steepest-ascent hill climbing takes the path to D, as its overall cost (2 + 1 = 3) is minimum.
Hill Climbing: Disadvantages

Hill climbing can get stuck at a local maximum, on a plateau, or on a ridge.

Ways Out

• Backtrack to some earlier node and try going in a different direction.
• Make a big jump to try to get into a new section of the space.
• Move in several directions at once.
Refer chapter3.pdf for blocks world problem and simulated annealing.

Best First Search algorithm

The hill climbing algorithm does not look ahead beyond the immediate neighbours of the current state; it is concerned only with the best neighbouring node to expand, as decided by the evaluation function. Best First Search, by contrast, looks ahead of the immediate neighbours to find the best path to the goal (using heuristic evaluation) and then proceeds with the best one. The difference lies in the approaches of local search versus systematic search algorithms.

In BFS and DFS, when we are at a node, we can consider any adjacent node as the next node; both blindly explore paths without considering any cost function. The idea of Best First Search is to use an evaluation function to decide which adjacent node is most promising and then explore it. Best First Search falls under the category of heuristic search, or informed search. We use a priority queue to store the costs of nodes.

Algorithm:
1. OPEN = {initial state}.
2. Loop until a goal is found or there are no nodes left in OPEN:
− Pick the best node in OPEN.
− Generate its successors.
− For each successor:
 if new: evaluate it, add it to OPEN, and record its parent;
 if generated before: change the recorded parent if the new path is better, and update its successors.
Let us consider the example below.


We start from source "S" and search for goal "I" using the given costs and Best First Search.

pq initially contains S.

We remove S from pq and add the unvisited neighbours of S to pq.
pq now contains {A, C, B} (C is placed before B because C has lower cost).

We remove A from pq and add the unvisited neighbours of A to pq.
pq now contains {C, B, E, D}.

We remove C from pq and add the unvisited neighbours of C to pq.
pq now contains {B, H, E, D}.

We remove B from pq and add the unvisited neighbours of B to pq.
pq now contains {H, E, D, F, G}.

We remove H from pq. Since our goal "I" is a neighbour of H, we return.

Analysis:
The worst-case time complexity of Best First Search is O(n log n), where n is the number of nodes: in the worst case we may have to visit all nodes before reaching the goal. Note that the priority queue is implemented using a min- (or max-) heap, in which insert and remove operations take O(log n) time. The performance of the algorithm depends on how well the cost or evaluation function is designed.
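The trace above can be reproduced with a priority-queue implementation. The edge structure below follows the trace (S→A,B,C; A→D,E; C→H; B→F,G; H→I); the numeric costs are assumed values chosen only to match the stated queue orderings, since the figure with the real costs is not reproduced here.

```python
import heapq

# Example graph: node -> list of (neighbour, cost). Costs are assumptions
# chosen so the priority-queue orderings match the worked trace.
graph = {'S': [('A', 1), ('C', 2), ('B', 3)],
         'A': [('E', 4), ('D', 5)],
         'C': [('H', 3.5)],
         'B': [('F', 6), ('G', 7)],
         'H': [('I', 0)],
         'E': [], 'D': [], 'F': [], 'G': [], 'I': []}

def best_first_search(start, goal):
    pq = [(0, start)]                 # OPEN: priority queue ordered by cost
    visited = {start}
    order = []                        # order in which nodes are expanded
    while pq:
        cost, node = heapq.heappop(pq)   # pick the best node in OPEN
        order.append(node)
        if node == goal:
            return order
        for nxt, c in graph[node]:
            if nxt not in visited:       # add unvisited neighbours to OPEN
                visited.add(nxt)
                heapq.heappush(pq, (c, nxt))
    return order

print(best_first_search('S', 'I'))  # → ['S', 'A', 'C', 'B', 'H', 'I']
```

The expansion order matches the trace: S, then A, C, B, H, and finally the goal I.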

CONSTRAINT SATISFACTION

Many problems in AI can be considered problems of constraint satisfaction, in which the goal state satisfies a given set of constraints. Constraint satisfaction problems can be solved using any of the search strategies. The general form of the constraint satisfaction procedure is as follows:

Until a complete solution is found or until all paths have led to dead ends, do:
1. Select an unexpanded node of the search graph.
2. Apply the constraint inference rules to the selected node to generate all possible new constraints.
3. If the set of constraints contains a contradiction, then report that this path is a dead end.
4. If the set of constraints describes a complete solution, then report success.
5. If neither a contradiction nor a complete solution has been found, then apply the rules to generate new partial solutions. Insert these partial solutions into the search graph.

Example: consider the cryptarithmetic problem

  SEND
+ MORE
----------
 MONEY
----------
Assign a decimal digit to each of the letters in such a way that the answer to the problem is correct. If the same letter occurs more than once, it must be assigned the same digit each time. No two different letters may be assigned the same digit.

CONSTRAINTS:
1. No two letters can be assigned the same digit.
2. Only a single digit can be assigned to a letter.
3. Assumptions can be made at various levels such that they do not contradict each other.
4. The problem can be decomposed into separate constraints; a constraint satisfaction approach may be used.
5. Any of the search techniques may be used.
6. Backtracking may be performed as applicable to the applied search technique.
7. The rules of arithmetic must be followed.

Initial state of the problem:

D = ?  E = ?  Y = ?  N = ?
R = ?  O = ?  S = ?  M = ?
C1 = ?  C2 = ?  C3 = ?

C1, C2, and C3 stand for the carry variables of the successive columns, from right to left.
Goal state: the digits must be assigned to the letters in such a manner that the sum is satisfied.

Solution process:
We follow the depth-first method to solve the problem.
1. Initial guess: M = 1, because the sum of two single digits can generate at most a carry of 1.
2. In the leftmost columns, S + M plus any incoming carry produces O, so O must be 0 or 1. We conclude that O = 0, because the digit 1 is already assigned to M and no two letters may share a digit (rule 1).
3. With M = 1 and O = 0, S must be 8 or 9, again depending on the carry received from the earlier column.

The same process is repeated further: the problem is decomposed into constraints, and each constraint is satisfied by guessing possible digits for the letters, given the guesses already made. The rest of the process can be shown in the form of a tree, explored by depth-first search; branches that lead to contradictions (for example, one requiring D > 6 where no consistent digit remains) are abandoned and the search backtracks.
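The deductions above (M = 1, O = 0) can also be combined with brute-force generate-and-test over the remaining letters. This sketch is not the depth-first constraint tree itself, only a simple way to verify the puzzle's solution; fixing M and O first shrinks the candidate space enough for exhaustive search.

```python
from itertools import permutations

# Brute-force search for SEND + MORE = MONEY. Following the deductions
# in the text, M = 1 and O = 0 are fixed; the remaining six letters take
# distinct digits from the eight digits still available (2..9).
def solve():
    M, O = 1, 0
    for S, E, N, D, R, Y in permutations([2, 3, 4, 5, 6, 7, 8, 9], 6):
        send  = 1000*S + 100*E + 10*N + D
        more  = 1000*M + 100*O + 10*R + E
        money = 10000*M + 1000*O + 100*N + 10*E + Y
        if send + more == money:          # test the arithmetic constraint
            return send, more, money
    return None

print(solve())  # → (9567, 1085, 10652)
```

The unique solution is S=9, E=5, N=6, D=7, R=8, Y=2, giving 9567 + 1085 = 10652, which agrees with the partial deductions made in steps 1-3 above.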
Performance measures and analysis of search algorithms

Four properties used to compare search algorithms:

Completeness

Whether the algorithm always finds a solution if one exists.

Example: BFS and uniform-cost search (with positive step costs) are complete, because they will eventually examine every reachable node.

Time complexity

The number of nodes generated or expanded.

Space complexity

The maximum number of nodes held in memory.

Optimality

Whether the algorithm always finds a least-cost solution.

Measures

Time and space complexity are measured in terms of:

 the maximum branching factor of the search tree (denoted b)
 the depth of the least-cost solution (d)
 the maximum depth of the state space (m)
