PAI (21AI54) Module 1 Notes
MODULE 1
Chapter 1 – INTRODUCTION
1. What is AI?
2. Foundations of AI
3. History of AI
What is AI?
➢ According to the father of Artificial Intelligence, John McCarthy, Artificial
Intelligence is “The science and engineering of making intelligent machines,
especially intelligent computer programs”.
➢ Artificial Intelligence is a way of making a computer, a computer-controlled robot,
or software think intelligently, in a manner similar to the way intelligent humans think.
➢ AI is accomplished by studying how the human brain thinks, and how humans learn,
decide, and work while trying to solve a problem, and then using the outcomes of
this study as a basis for developing intelligent software and systems.
2. Mathematics
➢ What are the formal rules to draw valid conclusions?
➢ What can be computed?
➢ How do we reason with uncertain information?
Philosophers staked out some of the fundamental ideas of AI, but the leap to a formal science
required a level of mathematical formalization in three fundamental areas: logic,
computation, and probability.
The idea of formal logic can be traced back to the philosophers of ancient Greece, but its
mathematical development really began with the work of George Boole (1815–1864), who
worked out the details of propositional, or Boolean, logic (Boole, 1847).
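As an illustration (not part of the original notes), propositional logic makes "valid conclusions" mechanically checkable: enumerate every truth assignment of the symbols and confirm that the conclusion is true whenever all the premises are. A minimal Python sketch:

from itertools import product

def entails(premises, conclusion, symbols):
    # Premises and the conclusion are functions from a truth assignment (dict) to bool.
    # The premises entail the conclusion if no assignment makes every premise
    # true while making the conclusion false.
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(p(model) for p in premises) and not conclusion(model):
            return False  # counterexample found
    return True

# Modus ponens: from P and (P implies Q), conclude Q.
premises = [lambda m: m["P"], lambda m: (not m["P"]) or m["Q"]]
conclusion = lambda m: m["Q"]
print(entails(premises, conclusion, ["P", "Q"]))  # True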
Computations - The first nontrivial algorithm is thought to be Euclid’s algorithm for
computing greatest common divisors. Fundamental results such as Gödel's incompleteness
theorem can also be interpreted as showing that some functions on the integers cannot be
represented by an algorithm; that is, they cannot be computed.
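For reference, a short Python sketch of Euclid's algorithm mentioned above (the implementation details are illustrative, not from the notes):

def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b);
    # when the remainder reaches zero, a holds the greatest common divisor.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12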
Besides logic and computation, the third great contribution of mathematics to AI is the
theory of probability. The Italian Gerolamo Cardano (1501–1576) first framed the idea of
probability, describing it in terms of the possible outcomes of gambling events.
4. Neuroscience
➢ How do brains process information?
Neuroscience is the study of the nervous system, particularly the brain. It has also long been
known that human brains are somehow different; in about 335 B.C. Aristotle wrote, “Of all
the animals, man has the largest brain in proportion to his size.” Paul Broca’s (1824–1880)
study of aphasia (speech deficit) in brain-damaged patients in 1861 demonstrated the
existence of localized areas of the brain responsible for specific cognitive functions. By that
time, it was known that the brain consisted of nerve cells, or neurons, but it was
not until 1873 that Camillo Golgi (1843–1926) developed a staining technique allowing the
observation of individual neurons in the brain.
5. Psychology
➢ How do humans and animals think and act?
The origins of scientific psychology are usually traced to the work of the German physicist
Hermann von Helmholtz (1821–1894) and his student Wilhelm Wundt (1832–1920). In
1879, Wundt opened the first laboratory of experimental psychology, at the University of
Leipzig. Wundt insisted on carefully controlled experiments in which his workers would
perform a perceptual or associative task while introspecting on their thought processes.
Biologists studying animal behavior, on the other hand, lacked introspective data and
developed an objective methodology, as described by H. S. Jennings (1906) in his influential
work Behavior of the Lower Organisms.
Applying this viewpoint to humans, the behaviorism movement, led by John Watson
(1878–1958), rejected any theory involving mental processes on the grounds that
introspection could not provide reliable evidence.
Cognitive psychology, which views the brain as an information-processing
device, can be traced back at least to the works of William James (1842–1910). Kenneth Craik
specified the three key steps of a knowledge-based agent: (1) the stimulus must be translated
into an internal representation, (2) the representation is manipulated by cognitive processes
to derive new internal representations, and (3) these are in turn retranslated back into action.
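The three steps can be read as a simple processing loop. A purely illustrative Python skeleton (all names here are hypothetical placeholders, not from the text):

def craik_cycle(stimulus, perceive, reason, act):
    # (1) The stimulus is translated into an internal representation.
    representation = perceive(stimulus)
    # (2) Cognitive processes manipulate it to derive new internal representations.
    new_representation = reason(representation)
    # (3) The result is retranslated back into action.
    return act(new_representation)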
6. Computer engineering
➢ How can we build an efficient computer?
For artificial intelligence to succeed, we need two things: intelligence and an artifact. The
computer has been the artifact of choice. The modern digital electronic computer was
invented independently and almost simultaneously by scientists in three countries embattled
in World War II.
The first operational computer was the electromechanical Heath Robinson, built in 1940
by Alan Turing’s team for a single purpose: deciphering German messages. The first
operational programmable computer was the Z-3, the invention of Konrad Zuse in Germany
in 1941. The first electronic computer, the ABC, was assembled by John Atanasoff and his
student Clifford Berry between 1940 and 1942 at Iowa State University. The first
programmable machine was a loom, devised in 1805 by Joseph Marie Jacquard (1752–1834),
that used punched cards to store instructions for the pattern to be woven.
8. Linguistics
➢ How does language relate to thought?
Modern linguistics and AI, then, were “born” at about the same time, and grew up together,
intersecting in a hybrid field called computational linguistics or natural language
processing.
The problem of understanding language soon turned out to be considerably more complex
than it seemed in 1957. Understanding language requires an understanding of the subject
matter and context, not just an understanding of the structure of sentences. This might seem
obvious, but it was not widely appreciated until the 1960s. Much of the early work in
knowledge representation (the study of how to put knowledge into a form that a computer
can reason with) was tied to language and informed by research in linguistics, which was
connected in turn to decades of work on the philosophical analysis of language.
➢ We use the term percept to refer to the agent’s perceptual inputs at any given
instant.
➢ An agent’s percept sequence is the complete history of everything the agent has
ever perceived.
➢ An agent’s behavior is described by the agent function that maps any given
percept sequence to an action.
➢ We use rectangles to denote the current internal state of the agent’s decision process,
and ovals to represent the background information used in the process.
➢ The agent program is shown in Figure 2.10.
➢ Search and planning are the subfields of AI devoted to finding action sequences that
achieve the agent’s goals.
➢ Notice that decision making of this kind is fundamentally different from the
condition–action rules of reflex agents, in that it involves consideration of the
future—both “What will happen if I do such-and-such?” and “Will that make me
happy?”
➢ The reflex agent brakes when it sees brake lights. A goal-based agent, in principle,
could reason that if the car in front has its brake lights on, it will slow down.
➢ Although the goal-based agent appears less efficient, it is more flexible because the
knowledge that supports its decisions is represented explicitly and can be modified.
➢ For the reflex agent, on the other hand, we would have to rewrite many condition–
action rules.
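To make the contrast concrete, here is a purely illustrative Python sketch (the function names and the toy world model are hypothetical, not from the text). The reflex agent reacts to the current percept with a condition–action rule; the goal-based agent consults a model of "What will happen if I do such-and-such?" and picks an action whose predicted outcome satisfies its goal:

def reflex_driver(percept):
    # Condition-action rule: act directly on what is perceived right now.
    if percept.get("car_in_front_braking"):
        return "brake"
    return "drive"

def goal_based_driver(percept, goal="avoid_collision"):
    def predict(state, action):
        # Crude hand-written model of the future, used for illustration only.
        if state.get("car_in_front_braking") and action == "drive":
            return {"collision": True}
        return {"collision": False}

    # Consider each candidate action and choose one whose predicted outcome meets the goal.
    for action in ("brake", "drive"):
        if goal == "avoid_collision" and not predict(percept, action)["collision"]:
            return action

percept = {"car_in_front_braking": True}
print(reflex_driver(percept))      # brake
print(goal_based_driver(percept))  # brake (reached by reasoning about the future)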
Learning agents
➢ A learning agent can be divided into four conceptual components, as shown in
Figure 2.15.
➢ The most important distinction is between the learning element, which is responsible
for making improvements, and the performance element, which is responsible for
selecting external actions.
➢ The performance element is what we have previously considered to be the entire
agent: it takes in percepts and decides on actions.
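A minimal illustrative skeleton of these two components in Python (class and parameter names are hypothetical, not from the text):

class LearningAgent:
    def __init__(self, performance_element, learning_element):
        # The performance element maps percepts to actions; on its own it is
        # what earlier sections treated as the entire agent.
        self.performance_element = performance_element
        # The learning element is responsible for making improvements,
        # modelled here as a callback that can adjust the performance element.
        self.learning_element = learning_element

    def step(self, percept, feedback=None):
        action = self.performance_element(percept)
        if feedback is not None:
            self.learning_element(percept, action, feedback)
        return action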