Module 1 (Part 1)
The approaches to AI can be organized along two dimensions: thinking vs. acting, and humanly vs. rationally. This gives four approaches: Thinking Humanly, Thinking Rationally, Acting Humanly, and Acting Rationally.
Acting Humanly: The Turing Test Approach
• The Turing test, proposed by Alan Turing (1950), provides a definition of intelligence.
• A computer passes the test if a human interrogator, after posing some written questions,
cannot tell whether the written responses come from a person or from a computer.
To pass the Turing Test, a computer would need to possess the following capabilities:
– Natural Language Processing to enable it to communicate successfully in a human language
– Knowledge Representation to store what it knows or hears
– Automated Reasoning to use the stored information to answer questions and to draw new
conclusions
– Machine Learning to adapt to new circumstances and to detect and extrapolate patterns
Thinking Humanly: The Cognitive Modeling Approach
• If we are going to say that a given program thinks like a human, we must have some way of
determining how humans think.
• Cognition refers to “the mental action or process of acquiring knowledge and understanding
through thought, experience, and the senses.”
• Once we have a sufficiently precise theory of the mind, it becomes possible to express the
theory as a computer program.
• Cognitive science brings together computer models from AI and experimental techniques from
psychology to construct precise and testable theories of the human mind.
Thinking Rationally: The “Laws of Thought” Approach
• Syllogisms provided patterns for argument structures that always yielded correct conclusions
when given correct premises
• For example,
“Socrates is a man; all men are mortal; therefore, Socrates is mortal.”
• These laws of thought were supposed to govern the operation of the mind; this resulted in the
field called logic.
• There are two main obstacles to this approach:
– First, it is not easy to take informal knowledge and state it in the formal terms required by logical
notation, particularly when the knowledge is less than 100% certain.
– Second, there is a big difference between solving a problem “in principle” and solving it in practice. Even
problems with just a few hundred facts can exhaust the computational resources of any computer unless it
has some guidance as to which reasoning steps to try first.
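The syllogism pattern above can be sketched as a tiny forward-chaining loop in Python (a minimal illustration with invented data structures, not a general theorem prover):

```python
# Minimal forward chaining over "all X are Y" rules and "a is an X" facts.
# This only illustrates the syllogism pattern; real logic engines are far richer.

facts = {("Socrates", "man")}           # Socrates is a man
rules = [("man", "mortal")]             # all men are mortal

changed = True
while changed:                          # keep applying rules until nothing new follows
    changed = False
    for subject, category in list(facts):
        for premise, conclusion in rules:
            if category == premise and (subject, conclusion) not in facts:
                facts.add((subject, conclusion))
                changed = True

print(("Socrates", "mortal") in facts)  # True: Socrates is mortal
```

The loop derives exactly the conclusion the syllogism licenses, and nothing else; with hundreds of facts and rules, this is also where the "which step to try first" guidance problem mentioned above begins to bite.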
Acting Rationally: The Rational Agent Approach
• An agent is just something that acts. All computer programs do something, but computer
agents are expected to do more.
• Computer agents operate autonomously, perceive their environment, persist over a prolonged
time period, adapt to change, and create and pursue goals.
• A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.
• Making correct inferences (as in the ‘Laws of Thought’ approach) is sometimes a part of being a
rational agent, because one way to act rationally is to reason logically to a conclusion and then act on it.
• There are also ways of acting rationally that cannot be said to involve inference. For example,
recoiling from a hot stove is a reflex action that is usually more successful than a slower action
taken after careful deliberation.
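The reflex idea can be sketched as a simple reflex agent, here using the classic two-square vacuum world as an assumed example; the agent maps each percept directly to an action through condition-action rules, with no deliberation:

```python
# A simple reflex agent: percept -> action via condition-action rules.
# The two-square vacuum world (locations "A" and "B") is a standard toy example.

def reflex_vacuum_agent(location, status):
    """Return an action given the current percept (location, dirt status)."""
    if status == "Dirty":
        return "Suck"                       # reflex: always clean a dirty square
    return "Right" if location == "A" else "Left"   # otherwise move to the other square

print(reflex_vacuum_agent("A", "Dirty"))    # Suck
print(reflex_vacuum_agent("A", "Clean"))    # Right
```

Like recoiling from a hot stove, the agent never reasons about consequences; speed comes from hard-wiring the right response to each percept.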
1. The Gestation of AI (1943 – 1955)
• 1957: Newell and Simon’s early success was followed up with the General Problem Solver (GPS).
GPS was probably the first program to embody the “Thinking Humanly” approach.
• 1958: McCarthy defined the high-level language LISP, which was to become the dominant AI
programming language for the next 30 years.
• For example, in 1957, Herbert Simon predicted that within 10 years a computer would be chess
champion. These predictions came true (or approximately true) within 40 years rather than 10.
Simon’s overconfidence was due to the promising performance of early AI systems on simple
examples.
• In almost all cases, however, these early systems turned out to fail miserably when tried out on wider
selections of problems and on more difficult problems.
• The first kind of difficulty arose because most early programs knew nothing of their subject
matter; they succeeded by means of simple syntactic manipulations.
• The second kind of difficulty was the intractability of many of the problems that AI was
attempting to solve. Most of the early AI programs solved problems by trying out different
combinations of steps until a solution was found.
• The third difficulty arose because of some fundamental limitations on the basic structures being
used to generate intelligent behaviour.
• The picture of problem solving that had arisen during the first decade of AI research was of
a general-purpose search mechanism trying to string together elementary reasoning steps
to find complete solutions.
• Such approaches have been called weak methods because, although general, they do not
scale up to large or difficult problem instances.
• The alternative to weak methods is to use more powerful, domain-specific knowledge that
allows larger reasoning steps and can more easily handle typically occurring cases in
narrow areas of expertise (Emergence of Expert Systems).
• The DENDRAL program, developed in 1969, was an early example of this approach. It was the
first successful knowledge-intensive system: its expertise derived from large numbers of special-
purpose rules.
• This new methodology of expert systems was then applied to other areas of human expertise.
• 1972: MYCIN was developed to diagnose blood infections. With about 450 rules, MYCIN was able to
perform as well as some experts, and considerably better than junior doctors.
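MYCIN attached a certainty factor (CF) to each rule; two positive certainty factors supporting the same conclusion combine as cf1 + cf2 * (1 - cf1). A minimal sketch of that combination (the rule strengths below are invented, not taken from MYCIN):

```python
# MYCIN-style combination of certainty factors for one conclusion.
# Rule strengths are invented for illustration.

def combine(cf1, cf2):
    """Combine two positive certainty factors, MYCIN-style."""
    return cf1 + cf2 * (1 - cf1)

cf = 0.0
for rule_cf in (0.6, 0.4):      # two independent rules supporting the same diagnosis
    cf = combine(cf, rule_cf)

print(round(cf, 2))             # 0.76: more support than either rule alone
```

Note how the combined belief grows with each supporting rule but never reaches 1.0, which matched the way human experts hedge their conclusions.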
• The importance of domain knowledge was also apparent in the area of understanding natural
language.
6. AI becomes an industry (1980–present)
• 1982: The first successful commercial expert system, R1, began operation at the Digital
Equipment Corporation (DEC). The program helped configure orders for new computer systems.
• By 1988, DEC’s AI group had 40 expert systems deployed, with more on the way.
• Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988,
including hundreds of companies building expert systems, vision systems, robots, and software
and hardware specialized for these purposes.
• As the hype around AI increased, researchers feared that the funding would once again dry
up. This fear proved correct, and soon afterwards came the second ‘AI Winter’.
7. The return of Neural Networks (1986–present)
• In the mid-1980s, the Back-Propagation learning algorithm, first discovered in 1969, was
reinvented.
8. AI adopts the scientific method (1987–present)
• Recent years have seen a revolution in both the content and the methodology of work in artificial
intelligence. In terms of methodology, AI has finally come firmly under the scientific method.
• In recent years, approaches based on Hidden Markov Models (HMMs) have come to dominate the
area of Speech Recognition.
• As a result of these developments, so-called Data Mining technology has spawned a vigorous new
industry.
• In 1988, Judea Pearl’s book Probabilistic Reasoning in Intelligent Systems led to a new acceptance
of probability and decision theory in AI.
• The Bayesian Network formalism was invented to allow efficient representation of uncertain
knowledge.
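The kind of uncertain reasoning a Bayesian network supports can be sketched with a two-node example, a disease node and a test-result node that depends on it; the probabilities below are invented for illustration:

```python
# Two-node Bayesian network sketch: Disease -> TestResult.
# All numbers are invented; this just shows inference via Bayes' rule.

p_disease = 0.01                 # prior P(disease)
p_pos_given_disease = 0.9        # P(test positive | disease)
p_pos_given_healthy = 0.05       # P(test positive | no disease)

# Total probability of a positive test
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' rule: P(disease | test positive)
posterior = p_pos_given_disease * p_disease / p_pos
print(round(posterior, 3))       # 0.154: still unlikely, despite the positive test
```

The compactness gain of the formalism comes from storing only the local conditional tables (here, two small ones) rather than a full joint distribution.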
• Similar gentle revolutions occurred in robotics, computer vision, and knowledge representation.
9. The emergence of Intelligent Agents (1995–present)
• One of the most important environments for intelligent agents is the Internet. AI systems have
become common in Web-based applications. Moreover, AI technologies underlie many Internet
tools, such as search engines, recommender systems, and website aggregators.
• 1997: IBM’s Deep Blue beat world chess champion Garry Kasparov, becoming the first computer
to defeat a reigning world chess champion.
• 2002: For the first time, AI entered the home, in the form of Roomba, a robotic vacuum cleaner.
• 2006: AI entered the business world. Companies like Facebook, Twitter, and Netflix started
using AI.
10. The availability of very large data sets (2001–present)
• Throughout the 60-year history of computer science, the emphasis has been on the algorithm
as the main subject of study. But some recent work in AI suggests that for many problems, it
makes more sense to worry about the data and be less picky about what algorithm to apply.
This is true because of the increasing availability of very large data sources.
• For example, in 1995, Yarowsky’s work on word-sense disambiguation: given the use of the
word “plant” in a sentence, does it refer to flora or to a factory? Yarowsky showed that, given a
corpus of unannotated text and just the dictionary definitions of the two senses – “works,
industrial plant” and “flora, plant life” – one can label examples in the corpus.
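Yarowsky's seed idea can be caricatured in a few lines: label each occurrence of "plant" by which sense's dictionary cue words appear nearby (the cue sets and sentences below are invented simplifications, not Yarowsky's actual data):

```python
# Caricature of seed-based word-sense labeling for "plant".
# Cue words and sentences are invented for illustration.

seeds = {
    "industrial": {"works", "industrial", "factory"},
    "flora": {"life", "leaf", "grow"},
}

def label(sentence):
    """Assign a sense to a sentence based on which cue words it contains."""
    words = set(sentence.lower().split())
    for sense, cues in seeds.items():
        if words & cues:                 # any cue word present nearby?
            return sense
    return "unknown"

print(label("the industrial plant closed"))      # industrial
print(label("water the plant so it can grow"))   # flora
```

In the full algorithm these few seed-labeled examples bootstrap a classifier that then labels the rest of the corpus, which is exactly the data-over-algorithm point the bullet above is making.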
• Reports have noticed the surge of new applications and have written that “AI Winter” may be
yielding to a new Spring.
The Foundations of AI
The most important disciplines that contributed ideas, viewpoints, and techniques to AI are:
– Philosophy
– Mathematics
– Economics
– Neuroscience
– Psychology
– Computer Engineering
– Linguistics
Philosophy
Philosophers made AI conceivable by considering the ideas that:
– The mind is in some ways like a machine
The history of philosophy can be organized around the following series of questions:
– Can formal rules be used to draw valid conclusions?
• They also set the groundwork for understanding computation and reasoning about algorithms.
Neuroscience
• How do brains process information?
Psychology
• How do humans and animals think and act?
• Psychologists adopted the idea that humans and animals can be considered information-
processing machines.
Computer Engineering
• How can we build an efficient computer?
Control Theory and Cybernetics
• How can artifacts operate under their own control?
• Control theory deals with designing devices that act optimally on the basis of feedback from the
environment.
Linguistics
• How does language relate to thought?
Applications of AI
• Speech Recognition:
• A traveler calling United Airlines to book a flight can have the
entire conversation guided by an automated speech
recognition and dialog management system.
• Spam fighting:
– Each day, learning algorithms classify over a billion
messages as spam, saving the recipient from having to
waste time deleting them.
• Logistics planning:
– During the Persian Gulf crisis of 1991, U.S. forces
deployed a Dynamic Analysis and Replanning Tool
(DART), to do automated logistics planning and
scheduling for transportation.
• Robotics:
– The iRobot Corporation has sold over two million Roomba robotic
vacuum cleaners for home use.
– The company also deploys the more rugged PackBot to Iraq and
Afghanistan, where it is used to handle hazardous materials, clear
explosives, and identify the location of snipers.
• Machine Translation:
– A computer program automatically translates one language
to another language.
• Astronomy:
– Artificial Intelligence can be very useful for solving complex problems in astronomy.