
Artificial Intelligence

Dr. Piyush Joshi


IIIT Sri City
a) Exams (70%)
1. Mid Term-1 (20%)
2. Mid Term-2 (20%)
3. End Term (30%)
b) Assignments (20%)
1. Assignment-1 (10%)
Assigned: After Mid Term-1
Submission: Before Mid Term-2
2. Assignment- 2 (10%)
Assigned: After Mid Term-2
Submission: Before End Term
c) Quizzes (10%)
Introduction to AI
We call ourselves Homo sapiens, "wise man": we identify ourselves by our ability to think.

Intelligence: the ability to perceive, understand, predict, and manipulate the world.


AI definitions
Acting humanly: The Turing Test approach
The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a
machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that
of a human.
Turing proposed that a computer can be said to possess artificial intelligence if it can mimic
human responses under specific conditions.
To act humanly, the computer would need to possess the following capabilities:
Natural language processing
Knowledge representation
Automated reasoning
Machine learning
To pass the total Turing Test, the computer additionally needs:
Computer vision, to perceive objects
Robotics, to manipulate objects and move about
Thinking humanly: The cognitive modeling
approach
If we are going to say that a given program thinks like a human, we must have some
way of determining how humans think. We need to get inside the actual workings of
human minds.
There are three ways to do this: through introspection—trying to catch our own
thoughts as they go by; through psychological experiments—observing a person
in action; and through brain imaging—observing the brain in action.
Once we have a sufficiently precise theory of the mind, it becomes possible to
express the theory as a computer program.
If the program’s input–output behavior matches corresponding human behavior, that
is evidence that some of the program’s mechanisms could also be operating in
humans.
Thinking rationally: The “laws of thought”
approach
The Greek philosopher Aristotle was one of the first to attempt to codify “right
thinking,” that is, irrefutable reasoning processes.

Patterns for argument structures that always yielded correct conclusions.

For example, “Socrates is a man; all men are mortal; therefore, Socrates is mortal.”

These laws of thought were supposed to govern the operation of the mind; their
study initiated the field called logic.
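The syllogism pattern above is mechanical enough to automate. A minimal sketch (the facts, rule, and predicate names are illustrative, not from the slides) derives the conclusion by simple forward chaining:

```python
# Forward chaining over one rule pattern: premise(x) -> conclusion(x).
# Facts are (predicate, subject) pairs.

facts = {("man", "Socrates")}       # "Socrates is a man"
rules = [("man", "mortal")]         # "all men are mortal"

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for pred, subj in list(facts):
            # Apply the rule to every matching fact not yet derived.
            if pred == premise and (conclusion, subj) not in facts:
                facts.add((conclusion, subj))
                changed = True

print(("mortal", "Socrates") in facts)  # True: "therefore, Socrates is mortal"
```

This is exactly the shape of reasoning that the "laws of thought" tradition hoped to capture: conclusions follow from premises by pattern alone, independent of subject matter.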
Acting rationally: The rational agent approach

An agent is just something that acts.

Of course, all computer programs do something, but computer agents are expected to do
more:
operate autonomously
perceive their environment
persist over a prolonged time period
adapt to change
create and pursue goals.

A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.
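The definition above can be sketched as a tiny agent skeleton: pick the action with the highest expected utility given the current percept. Everything here (the thermostat domain, the percept format, the utility model) is an illustrative assumption:

```python
# A minimal rational-agent sketch: a thermostat that prefers temperatures
# near 20 degrees. The utility model and action effects are toy assumptions.

def expected_utility(action, percept):
    temp = percept["temperature"]
    effect = {"heat": +1, "cool": -1, "off": 0}[action]
    # Higher utility = predicted temperature closer to the 20-degree target.
    return -abs((temp + effect) - 20)

def rational_agent(percept, actions=("heat", "cool", "off")):
    # Acting rationally: choose the action with the best expected outcome.
    return max(actions, key=lambda a: expected_utility(a, percept))

print(rational_agent({"temperature": 18}))  # heat
print(rational_agent({"temperature": 23}))  # cool
```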
Foundation of AI
We discuss disciplines that contributed ideas, viewpoints, and techniques to AI.
Philosophy
Mathematics
Neuroscience
Psychology
Computer engineering
Control theory
Linguistics
Foundation of AI : Philosophy
Can formal rules be used to draw valid conclusions?
How does the mind arise from a physical brain?
Where does knowledge come from?
How does knowledge lead to action?
Aristotle (384–322 B.C.) was the first to formulate a precise set of laws governing the rational
part of the mind. He developed an informal system of syllogisms for proper reasoning, which
in principle allowed one to generate conclusions.
René Descartes (1596–1650) gave the first clear discussion of the distinction between mind
and matter and of the problems that arise.
He held that there is a part of the human mind (or soul or spirit) that is outside of
nature, exempt from physical laws.
The final element in the philosophical picture of the mind is the connection between
knowledge and action. This question is vital to AI because intelligence requires action as
well as reasoning.
Foundation of AI : Mathematics
What are the mathematical formal rules to draw valid conclusions?
What can be computed?
How do we reason with uncertain information?
Philosophers staked out some of the fundamental ideas of AI, but the leap to a formal
science required a level of mathematical formalization in three fundamental areas: logic,
computation, and probability.

Thomas Bayes (1702–1761) proposed a rule for updating probabilities in the light of new evidence.
Bayes’ rule underlies most modern approaches to uncertain reasoning in AI systems

The idea of formal logic can be traced back to the philosophers of ancient Greece, but its
mathematical development really began with the work of George Boole (1815–1864), who worked
out the details of propositional, or Boolean, logic (Boole, 1847).
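Boole's insight was that reasoning over propositions reduces to calculation over truth values, which makes logical laws mechanically checkable. A quick sketch verifies one of De Morgan's laws by enumerating every truth assignment:

```python
# Brute-force check of De Morgan's law: not(A and B) == (not A) or (not B).
from itertools import product

def de_morgan_holds(a, b):
    return (not (a and b)) == ((not a) or (not b))

# Enumerate all four assignments of truth values to A and B.
assert all(de_morgan_holds(a, b) for a, b in product([True, False], repeat=2))
print("De Morgan's law holds for all truth assignments")
```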
Foundation of AI : Neuroscience
How do brains process information?

Neuroscience is the study of the nervous system, particularly the brain.

The measurement of intact brain activity began in 1929 with the invention by Hans Berger of the
electroencephalograph (EEG).

The recent development of functional magnetic resonance imaging (fMRI) (Ogawa et al., 1990;
Cabeza and Nyberg, 2001) is giving neuroscientists unprecedentedly detailed images of brain activity,
enabling measurements that correspond in interesting ways to ongoing cognitive processes.

Note: Even with a computer of unlimited capacity, we still would not know how to achieve the
brain’s level of intelligence.
Foundation of AI : Computer engineering
How can we build an efficient computer?

For artificial intelligence to succeed, we need two things: intelligence and an artifact.

The computer has been the artifact of choice. The first operational computer was the
electromechanical Heath Robinson, built in 1940 by Alan Turing’s team for a single
purpose: deciphering German messages.

Of course, there were calculating devices before the electronic computer.

Charles Babbage's (1792–1871) Analytical Engine was far more ambitious: it
included addressable memory, stored programs, and conditional jumps.
Foundation of AI : Control theory
How can artifacts operate under their own control?

Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water
clock with a regulator that maintained a constant flow rate. This invention changed the
definition of what an artifact could do.

Other examples of self-regulating feedback control systems include the steam engine
governor, created by James Watt (1736–1819), and the thermostat, invented by
Cornelis Drebbel (1572–1633), who also invented the submarine.

Foundation of AI : Linguistics
How does language relate to thought?
Understanding language requires an understanding of the subject matter and context, not
just an understanding of the structure of sentences.
The History of AI
• The gestation of artificial intelligence (1943–1955)

The first work that is now generally recognized as AI was done by Warren McCulloch and Walter Pitts (1943).

They drew on three sources:

knowledge of the basic physiology and function of neurons in the brain;
a formal analysis of propositional logic;
Turing's theory of computation.

They showed, for example, that any computable function could be computed by some network of connected
neurons, and that all the logical connectives (and, or, not, etc.) could be implemented by simple net structures.

McCulloch and Pitts also suggested that suitably defined networks could learn.
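The claim that logical connectives reduce to simple net structures can be sketched directly. A McCulloch-Pitts unit fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold; the particular weights and thresholds below are one standard illustrative choice:

```python
# A McCulloch-Pitts threshold unit over binary inputs.
def mp_unit(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# The basic connectives fall out of simple weight/threshold settings:
AND = lambda a, b: mp_unit((a, b), (1, 1), 2)   # fires only if both inputs fire
OR  = lambda a, b: mp_unit((a, b), (1, 1), 1)   # fires if either input fires
NOT = lambda a:    mp_unit((a,),   (-1,),  0)   # inhibitory weight inverts

print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0
```

Since any computable function can be built from such connectives plus memory, networks of these units are, in principle, as powerful as any computer.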

Alan Turing's most influential work was his 1950 article "Computing Machinery and Intelligence,"
in which he introduced the Turing Test, machine learning, genetic algorithms, and reinforcement learning.
The birth of Artificial Intelligence (1956)
John McCarthy brought together U.S. researchers interested in automata theory, neural
nets, and the study of intelligence.

He organized a two-month workshop at Dartmouth in the summer of 1956, devoted
specifically to the development of AI.

The conclusion: from then on, AI was treated as a separate field, because

it aims to duplicate human faculties such as creativity, self-improvement, and language
use, and none of the other fields were addressing these issues;
AI is the only field that attempts to build machines that will function autonomously in
complex, changing environments.
Early enthusiasm, great expectations
(1952–1969)
The intellectual establishment, by and large, preferred to believe that “a machine can never
do X.” (a long list of X’s gathered by Turing.)

AI researchers naturally responded by demonstrating one X after another.

John McCarthy referred to this period as the “Look, Ma, no hands!” era.

Starting in 1952, Arthur Samuel wrote a series of programs for checkers (draughts) that
eventually learned to play at a strong amateur level.

In MIT AI Lab Memo No. 1, McCarthy defined the high-level language Lisp, which was to
become the dominant AI programming language for the next 30 years.
A dose of reality (1966–1973)
The following statement by Herbert Simon in 1957 is often quoted:
It is not my aim to surprise or shock you—but the simplest way I can summarize is to
say that there are now in the world machines that think, that learn and that create.
Moreover, their ability to do these things is going to increase rapidly until—in a visible
future—the range of problems they can handle will be coextensive with the range to which
the human mind has been applied.
Simon also made more concrete predictions: that within 10 years a computer would be chess
champion, and a significant mathematical theorem would be proved by machine.
These predictions came true (or approximately true) within 40 years rather than 10.
Yet these early AI systems failed miserably when tried on wider selections of
problems and on more difficult ones.
AI becomes an industry (1980–present)
The first successful commercial expert system, R1, began operation at the Digital
Equipment Corporation (McDermott, 1982).

Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in
1988, including hundreds of companies building expert systems, vision systems, robots, and
software and hardware specialized for these purposes.

Soon after that came a period called the "AI Winter," in which many companies fell by the
wayside as they failed to deliver on extravagant promises.
The return of neural networks
(1986–present)
In the mid-1980s at least four different groups reinvented the back-propagation
learning algorithm originally developed in 1969 by Bryson and Ho.

The algorithm was applied to many learning problems in computer science and psychology.
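At its core, back-propagation is the chain rule applied layer by layer. A minimal sketch (the 2-2-1 sigmoid network, weights, and input below are illustrative assumptions): compute the loss gradient for one hidden-layer weight analytically, then confirm it against a numerical finite-difference estimate.

```python
# Back-propagation sketch: analytic gradient vs. numerical check.
import math

sig = lambda z: 1 / (1 + math.exp(-z))

def forward(w1, w2, x):
    # Hidden layer (2 sigmoid units), then one sigmoid output unit.
    h = [sig(w1[i][0] * x[0] + w1[i][1] * x[1]) for i in range(2)]
    y = sig(w2[0] * h[0] + w2[1] * h[1])
    return h, y

def loss(w1, w2, x, t):
    _, y = forward(w1, w2, x)
    return 0.5 * (y - t) ** 2

w1 = [[0.3, -0.2], [0.5, 0.4]]   # hidden-layer weights (illustrative)
w2 = [0.7, -0.6]                 # output-layer weights
x, t = (1.0, 0.5), 1.0           # one training example

# Backward pass: propagate the error derivative through each layer.
h, y = forward(w1, w2, x)
dy = (y - t) * y * (1 - y)            # dLoss/d(output pre-activation)
dh0 = dy * w2[0] * h[0] * (1 - h[0])  # propagate into hidden unit 0
grad_analytic = dh0 * x[0]            # dLoss/dw1[0][0]

# Numerical check by a small finite-difference perturbation.
eps = 1e-6
w1_plus = [[w1[0][0] + eps, w1[0][1]], w1[1]]
grad_numeric = (loss(w1_plus, w2, x, t) - loss(w1, w2, x, t)) / eps

print(abs(grad_analytic - grad_numeric) < 1e-5)  # True
```

Gradient descent then nudges every weight against its gradient; stacking this backward pass over many layers and examples is all the algorithm amounts to.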

AI adopts the scientific method (1987–present)

AI was founded in part as a rebellion against the limitations of existing fields like
control theory and statistics, but now it is embracing those fields.

The emergence of intelligent agents (1995–present): with the rise of the Internet,
AI systems became common in Web-based applications.
The availability of very large data sets
(2001–present)
Throughout the 60-year history of computer science, the emphasis has been on the
algorithm as the main subject of study.

But some recent work in AI suggests that for many problems, it makes more sense to worry
about the data and be less picky about what algorithm to apply.

This is true because of the increasing availability of very large data sources: for
example, trillions of words of English and billions of images from the Web
(Kilgarriff and Grefenstette, 2006); or billions of base pairs of genomic sequences (Collins et
al., 2003).

Reporters have noticed the surge of new applications and have written that “AI Winter” may
be yielding to a new Spring (Havenstein, 2005).
Applications of AI
Robotic vehicles
Speech recognition
Autonomous planning and scheduling
Game playing
Spam fighting
Logistics planning
Robotics
Machine Translation
