Module 1 (Part 1)

WHAT IS AI ?

Artificial intelligence, or AI, is the technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.

John McCarthy
- The Father of AI

Definitions of AI are commonly organized along two dimensions:

THINK HUMANLY      THINK RATIONALLY
ACT HUMANLY        ACT RATIONALLY
Acting Humanly: The Turing Test Approach

• The Turing test, proposed by Alan Turing (1950), provides a definition of intelligence.

• A computer passes the test if a human interrogator, after posing some written questions,
cannot tell whether the written responses come from a person or from a computer.

To pass this test (the Turing Test), a computer would need to possess the following capabilities:

– Natural Language Processing (NLP), to enable it to communicate successfully in English

– Knowledge Representation to store what it knows or hears

– Automated Reasoning to use the stored information to answer questions and to draw new
conclusions

– Machine Learning to adapt to new circumstances and to detect and estimate patterns

In the case of the Total Turing Test, the computer also needs:

– Computer Vision to perceive objects

– Robotics to manipulate objects and move about


Thinking Humanly: The Cognitive Modeling Approach

• If we are going to say that a given program thinks like a human, we must have some way of
determining how humans think.

• We need to get inside the actual workings of human minds.

• There are three ways to do this:


➢ Through Introspection—Trying to catch our own thoughts as they go by

➢ Through Psychological Experiments—Observing a person in action

➢ Through Brain Imaging—Observing the brain in action.

Cognition refers to "the mental action or process of acquiring knowledge and understanding through thought, experience, and the
senses"
• Once we have a sufficiently precise theory of the mind, it becomes possible to express the
theory as a computer program.

• If the program’s input–output behavior matches corresponding human behavior, that is


evidence that it thinks humanly.

• Cognitive science brings together computer models from AI and experimental techniques from
psychology to construct precise and testable theories of the human mind.
Thinking Rationally: The “Laws of Thought” Approach
• Aristotle's syllogisms provided patterns for argument structures that always yielded correct
conclusions when given correct premises.

• For example,
“Socrates is a man; all men are mortal; therefore, Socrates is mortal.”
• These laws of thought were supposed to govern the operation of the mind; this resulted in the
field called logic.

• There are two main obstacles to this approach:


– First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation

– Second, there is a big difference between solving a problem “in principle” and solving it in practice. Even
problems with just a few hundred facts can exhaust the computational resources of any computer unless it
has some guidance as to which reasoning steps to try first.
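The syllogism pattern above can be sketched as a tiny forward-chaining program. This is an illustrative sketch, not part of the original notes; the predicate encoding and the `forward_chain` helper are assumptions chosen for clarity.

```python
def forward_chain(facts, rules):
    """Apply implication rules premise(x) -> conclusion(x) until a fixed point."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subj in list(derived):
                if pred == premise and (conclusion, subj) not in derived:
                    derived.add((conclusion, subj))
                    changed = True
    return derived

facts = {("man", "Socrates")}   # Socrates is a man
rules = [("man", "mortal")]     # all men are mortal
print(("mortal", "Socrates") in forward_chain(facts, rules))  # True
```

The second obstacle above shows up directly here: with hundreds of facts and rules, the naive inner loops blow up unless the program is guided toward promising reasoning steps first.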
Acting Rationally: The Rational Agent Approach
• An agent is just something that acts. All computer programs do something, but computer
agents are expected to do more.

• Computer agents operate autonomously, perceive their environment, persist over a prolonged
time period, adapt to change, and create and pursue goals.

• A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.

• Making correct inferences (as in the 'Laws of Thought' approach) is sometimes a part of being a
rational agent, because one way to act rationally is to reason logically to a conclusion.

• There are also ways of acting rationally that cannot be said to involve inference. For example,
recoiling from a hot stove is a reflex action that is usually more successful than a slower action
taken after careful deliberation.
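The idea of acting to achieve the best expected outcome can be sketched as choosing the action with the highest expected utility. This is a minimal sketch; the `utility` table and its numbers are invented for illustration (recoiling from a hot stove scores best, echoing the reflex example above).

```python
def rational_agent(percept, actions, expected_utility):
    """Choose the action that maximizes expected utility for this percept."""
    return max(actions, key=lambda a: expected_utility(percept, a))

def utility(percept, action):
    # Illustrative payoffs, not from the notes: near a hot stove,
    # a fast reflex beats slow deliberation.
    table = {("hot", "recoil"): 10, ("hot", "deliberate"): -5,
             ("cool", "recoil"): 0, ("cool", "deliberate"): 3}
    return table[(percept, action)]

print(rational_agent("hot", ["recoil", "deliberate"], utility))  # recoil
```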
1. The Gestation of AI (1943 – 1955)

• 1943: The first work that is now generally recognized as AI was


done by Warren McCulloch and Walter Pitts. They proposed a
model of Artificial Neurons, in which each neuron is
characterized as being “on” or “off”, with a switch to “on”
occurring in response to stimulation by a sufficient number of
neighboring neurons.

• 1949: Donald Hebb (1949) demonstrated a simple updating


rule for modifying the connection strengths between neurons.
His rule, now called Hebbian Learning, remains an influential
model to this day.
• 1950: Two undergraduate students at Harvard, Marvin Minsky and Dean Edmonds, built the
first neural network computer, the SNARC (Stochastic Neural Analog Reinforcement Calculator).

• 1950: Alan Turing published an article on "Computing Machinery and Intelligence." Therein,
he introduced the Turing Test.
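The McCulloch-Pitts neuron and Hebb's updating rule described above can be sketched in a few lines. The weights, threshold, and learning rate below are illustrative assumptions, not values from the original work.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: switch "on" (1) if total stimulation reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def hebbian_update(weights, inputs, output, rate=0.1):
    """Hebb's rule: strengthen a connection when its input and the output are both active."""
    return [w + rate * x * output for w, x in zip(weights, inputs)]

inputs, weights = [1, 1, 0], [0.5, 0.5, 0.5]
out = mp_neuron(inputs, weights, threshold=1.0)  # 0.5 + 0.5 = 1.0, so the unit fires
weights = hebbian_update(weights, inputs, out)   # only the active connections strengthen
print(out, weights)
```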


2. The Birth of Artificial Intelligence (1956)

• 1956: The term "Artificial Intelligence" was first adopted by
American computer scientist John McCarthy at the
Dartmouth Conference (Dartmouth College thus became the
official birthplace of AI). For the first time, AI was
established as an academic field.

• Logic Theorist (LT), a reasoning program invented by


Newell and Simon. The program was able to prove most
of the theorems in Chapter 2 of Russell and Whitehead’s
Principia Mathematica.
3. Early enthusiasm, Great expectations (1952-1969)
• The early years of AI were full of success.

• 1957: Newell and Simon’s early success was followed up with the General Problem Solver (GPS).
GPS was probably the first program to embody the “Thinking Humanly” approach.

• 1958: McCarthy defined the high-level language LISP, which was to become the dominant AI
programming language for the next 30 years.

• 1958: McCarthy published a paper entitled “Programs with Common


Sense”.
• 1959: Herbert Gelernter constructed the Geometry Theorem Prover,
which was able to prove theorems that many students of
mathematics would find quite tricky.
• 1959: Arthur Samuel developed a Checkers Program (draughts) that
eventually learned to play at a strong amateur level. Along the way,
he disproved the idea that computers can do only what they are
told to: his program quickly learned to play a better game than its
creator.

• 1963: McCarthy started the AI lab at


Stanford, the Stanford AI Lab (SAIL).
• 1966: Joseph Weizenbaum created the first
chatbot, which was named as ELIZA.
4. AI dawn: A dose of reality (1966–1973)
• From the beginning, AI researchers were not shy about making predictions of their coming successes.

• For example, in 1957, Herbert Simon predicted - within 10 years a computer would be chess
champion. These predictions came true (or approximately true) within 40 years rather than 10.
Simon’s overconfidence was due to the promising performance of early AI systems on simple
examples.

• In almost all cases, however, these early systems turned out to fail miserably when tried out on wider
selections of problems and on more difficult problems.
• The first kind of difficulty arose because most early programs knew nothing of their subject
matter; they succeeded by means of simple syntactic manipulations.

• The second kind of difficulty was the intractability of many of the problems that AI was
attempting to solve. Most of the early AI programs solved problems by trying out different
combination of steps until the solution was found.

• The third difficulty arose because of some fundamental limitations on the basic structures being
used to generate intelligent behaviour.

• All these factors led to the period called, AI Winter.

(Note that in 1972, the first intelligent humanoid robot, named WABOT-1, was built in Japan.)
5. Knowledge-based systems: The key to power? (1969–1979)

• The picture of problem solving that had arisen during the first decade of AI research was of
a general-purpose search mechanism trying to string together elementary reasoning steps
to find complete solutions.

• Such approaches have been called weak methods because, although general, they do not
scale up to large or difficult problem instances.

• The alternative to weak methods is to use more powerful, domain-specific knowledge that
allows larger reasoning steps and can more easily handle typically occurring cases in
narrow areas of expertise (Emergence of Expert Systems).
• The DENDRAL program, developed in 1969, was an early example of this approach. It was the
first successful knowledge-intensive system: its expertise derived from large numbers of special-
purpose rules.

• This new methodology of expert systems is then applied to other areas of human expertise.

• The new major effort was in the area of medical diagnosis.

• 1972: MYCIN was developed to diagnose blood infections. With about 450 rules, MYCIN was able to
perform as well as some experts, and considerably better than junior doctors.

• The importance of domain knowledge was also apparent in the area of understanding natural
language.
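The knowledge-intensive approach behind systems like DENDRAL and MYCIN can be sketched as a small set of special-purpose IF-THEN rules fired to a fixed point. The rules and findings below are invented for illustration only and are not medically meaningful.

```python
# Domain knowledge as IF-THEN rules: (set of required findings, conclusion).
RULES = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"gram_negative", "suspect_meningitis"}, "recommend_culture"),
]

def diagnose(findings):
    """Fire every rule whose conditions all hold; repeat until no new conclusions."""
    known = set(findings)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known - set(findings)  # return only the derived conclusions

print(diagnose({"fever", "stiff_neck", "gram_negative"}))
# derives both 'suspect_meningitis' and 'recommend_culture'
```

Note how the second rule fires only after the first has added its conclusion: the larger reasoning steps come from chaining domain-specific rules, not from general-purpose search.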
6. AI becomes an industry (1980–present)
• 1982: The first successful commercial expert system, R1, began operation at the Digital
Equipment Corporation (DEC). The program helped configure orders for new computer systems.

• By 1986, it was saving the company an estimated $40 million a year.

• By 1988, DEC’s AI group had 40 expert systems deployed, with more on the way.

• Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988,
including hundreds of companies building expert systems, vision systems, robots, and software
and hardware specialized for these purposes.

• As the hype regarding AI increased, researchers feared that the funding would once again dry
up. This fear proved to be correct. Soon after that came the second ‘AI Winter’.
7. The return of Neural Networks (1986–present)

• In the mid-1980s, the Back-Propagation learning algorithm, first discovered in 1969, was
reinvented.
8. AI adopts the scientific method (1987–present)
• Recent years have seen a revolution in both the content and the methodology of work in artificial
intelligence. In terms of methodology, AI has finally come firmly under the scientific method.

• In recent years, approaches based on Hidden Markov Models (HMMs) have come to dominate the
area of Speech Recognition.

• As a result of these developments, so-called Data Mining technology has spawned a vigorous new
industry.

• In 1988, Judea Pearl's book Probabilistic Reasoning in Intelligent Systems led to a new acceptance
of probability and decision theory in AI.

• The Bayesian Network formalism was invented to allow efficient representation of uncertain
knowledge.

• Similar gentle revolutions occurred in robotics, computer vision, and knowledge representation.
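The compactness the Bayesian Network formalism buys can be illustrated with the smallest possible network: one cause and one observed effect. The probabilities below are invented numbers; the posterior follows from Bayes' rule.

```python
# Tiny "network": a prior on the cause plus one conditional probability table,
# instead of storing the full joint distribution.
P_disease = 0.01                        # P(D = true), an assumed prior
P_pos_given = {True: 0.9, False: 0.05}  # P(Test = positive | D)

def posterior(prior, likelihood):
    """P(D = true | Test = positive) via Bayes' rule."""
    evidence = likelihood[True] * prior + likelihood[False] * (1 - prior)
    return likelihood[True] * prior / evidence

p = posterior(P_disease, P_pos_given)
print(round(p, 3))  # about 0.154
```

Even with a positive test, the posterior stays low because the prior is small, exactly the kind of uncertain-knowledge reasoning these formalisms were invented to make efficient.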
9. The emergence of Intelligent Agents (1995–present)
• One of the most important environments for intelligent
agents is the Internet. AI systems have become
common in Web-based applications, and AI
technologies underlie many Internet tools, such as
search engines, recommender systems, and website
aggregators.
• 1997: In the year 1997, IBM Deep Blue beats world chess champion,
Garry Kasparov, and became the first computer to beat a world chess
champion.
• 2002: For the first time, AI entered the home in the form of Roomba, a
robotic vacuum cleaner.
• 2006: AI entered the business world. Companies like Facebook, Twitter,
and Netflix also started using AI.
10. The availability of very large data sets (2001–present)
• Throughout the 60-year history of computer science, the emphasis has been on the algorithm
as the main subject of study. But some recent work in AI suggests that for many problems, it
makes more sense to worry about the data and be less picky about what algorithm to apply.
This is true because of the increasing availability of very large data sources.

• For example, consider Yarowsky's 1995 work on word-sense disambiguation: given the use of the
word "plant" in a sentence, does that refer to flora or a factory? Yarowsky showed that, given a
corpus of unannotated text and just the dictionary definitions of the two senses – "works,
industrial plant" and "flora, plant life" – one can label examples in the corpus.

• Another example is the problem of filling holes in a photograph.

• Reports have noted the surge of new applications and suggested that the "AI Winter" may be
yielding to a new spring.
The Foundations of AI
The most important disciplines that contributed ideas, viewpoints, and techniques to AI are:

– Philosophy

– Mathematics

– Economics

– Neuroscience

– Psychology

– Computer Engineering

– Control Theory and Cybernetics

– Linguistics
Philosophy
Philosophers made AI conceivable by considering the ideas that:
– The mind is in some ways like a machine

– The mind operates on knowledge encoded in some internal language

– Thought can be used to choose what actions to take

The history of philosophy can be organized around the following series of questions:
– Can formal rules be used to draw valid conclusions?

– How does the mind arise from a physical brain?

– Where does knowledge come from?

– How does knowledge lead to action?


Mathematics
• Mathematicians provided the tools to manipulate statements of logical certainty, as well as
uncertain, probabilistic statements.

• They also set the groundwork for understanding computation and reasoning about algorithms.

• What are the formal rules to draw valid conclusions?


• What can be computed?
• How do we reason with uncertainty?
Economics
• Economists formulated the problem of making decisions that maximize the expected outcome to
the decision maker.
– How should we make decisions so as to maximize payoff?

– How should we do this when others may not go along?

– How should we do this when the payoff may be far in the future?

Neuroscience
• How does the brain process information?

• Neuroscientists discovered some facts about how


the brain works and the ways in which it is similar
to and different from computers.
Psychology
• How do humans and animals think and act?

• Psychologists adopted the idea that humans and animals can be considered information
processing machines.

Computer Engineering
• How can we build an efficient computer?

• Computer engineers provided the ever-more-


powerful machines that make AI applications
possible.
Control Theory & Cybernetics
• How can artifacts operate under their own control?

• Control theory deals with designing devices that act optimally on the basis of feedback from the
environment.

Linguistics
• How does language relate to thought?

• Modern linguistics and AI were “born” at about


the same time, and grew up together,
intersecting in a hybrid field called Computational
Linguistics or Natural Language Processing.
• Self Driving Cars:
– Autonomous cars, Robotic cars, Robotaxi or Robo-car

– Stanley, the winner of 2005 DARPA Grand Challenge

– CMU’s BOSS won the Urban Challenge

• Speech Recognition:
• A traveler calling United Airlines to book a flight can have the
entire conversation guided by an automated speech
recognition and dialog management system.

• Autonomous Planning and Scheduling:


• A hundred million miles from Earth, NASA's Remote Agent
program became the first on-board autonomous planning
program to control the scheduling of operations for a
spacecraft (e.g., Mars Rovers).
• Game playing:
– IBM’s DEEP BLUE became the first computer program to
defeat the world champion in a chess match when it
bested Garry Kasparov.

• Spam fighting:
– Each day, learning algorithms classify over a billion
messages as spam, saving the recipient from having to
waste time deleting them.

• Logistics planning:
– During the Persian Gulf crisis of 1991, U.S. forces
deployed a Dynamic Analysis and Replanning Tool
(DART), to do automated logistics planning and
scheduling for transportation.
• Robotics:
– The iRobot Corporation has sold over two million Roomba robotic
vacuum cleaners for home use.

– The company also deploys the more rugged PackBot to Iraq and
Afghanistan, where it is used to handle hazardous materials, clear
explosives, and identify the location of snipers.

• Machine Translation:
– A computer program automatically translates text from one
language to another.

• Astronomy:
– Artificial Intelligence can be very useful to solve complex

universe problems. AI technology can be helpful for understanding

the universe such as how it works, origin, etc.


• Agriculture: Agriculture is applying AI in areas such as agricultural robotics, soil and crop
monitoring, and predictive analysis.

• E-commerce: AI is helping shoppers to discover associated products with


recommended size, color, or even brand.

Iris – India’s first AI teacher

• Entertainment: With the help of ML/AI algorithms, services like Netflix or


Amazon show the recommendations for programs or shows.
• Education: AI can automate grading so that the tutor can have more time
to teach. An AI chatbot can communicate with students as a teaching
assistant.

Malar Miss – World’s first Autonomous AI Professor
• Healthcare: Healthcare Industries are applying AI to make a better
and faster diagnosis than humans.

• Finance: The finance industry is implementing automation, chatbot,


adaptive intelligence, algorithm trading, and machine learning into
financial processes

• Data Security: AI can be used to detect software bugs and cyber-attacks
more effectively.

• Social Media: Social media sites such as Facebook, Twitter, and
Snapchat use AI to analyze large amounts of data to identify the latest
trends, hashtags, and the needs of different users.
