
VISHNU INSTITUTE OF TECHNOLOGY :: BHIMAVARAM
DEPARTMENT OF CSE (ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING)

FUNDAMENTALS OF ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

UNIT 1
Topics:
1. Introduction to Artificial Intelligence
2. Foundations of Artificial Intelligence
3. History of Artificial Intelligence
4. The State of the Art
Introduction to Artificial Intelligence
Artificial Intelligence (AI) is one of the most fascinating and universal fields of computer science, and it has great scope in the future. AI aims to make machines work the way humans do. It is one of the booming technologies of computer science, ready to create a new revolution in the world by making intelligent machines. AI is now all around us, working in a variety of subfields ranging from general to specific: self-driving cars, playing chess, proving theorems, composing music, painting, and so on.
AI is a branch of computer science that pursues creating computers or machines as intelligent as human beings. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
Definition: Artificial Intelligence is the study of how to make computers do things which, at the moment, people do better.
According to the father of Artificial Intelligence, John McCarthy, it is "the science and engineering of making intelligent machines, especially intelligent computer programs". Artificial Intelligence is a way of making a computer, a computer-controlled robot, or software think intelligently, in a manner similar to how intelligent humans think.
AI is accomplished by studying how the human brain thinks, and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems. It has gained prominence recently due, in part, to big data: the increase in the speed, size, and variety of data that businesses are now collecting. AI can perform tasks such as identifying patterns in data more efficiently than humans, enabling businesses to gain more insight from their data.
From a business perspective, AI is a set of very powerful tools, and methodologies for using those tools to solve business problems. From a programming perspective, AI includes the study of symbolic programming, problem solving, and search.

Eight well-known definitions of AI can be laid out along two dimensions. The definitions on top are concerned with thought processes and reasoning, whereas the ones on the bottom address behavior. The definitions on the left measure success in terms of fidelity to human performance, whereas the ones on the right measure against an ideal performance measure, called rationality. A system is rational if it does the "right thing," given what it knows.
Historically, all four approaches to AI have been followed, each by different people with different methods. A human-centered approach must be in part an empirical science, involving observations and hypotheses about human behavior. A rationalist approach involves a combination of mathematics and engineering.
1. Acting Humanly : The Turing Test Approach
The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer. We note that programming a computer to pass a rigorously applied test provides plenty to work on. The computer would need to possess the following capabilities:
 Natural Language Processing
 Knowledge Representation
 Automated Reasoning
 Machine Learning
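None of these capabilities is achieved by shallow keyword tricks, which is roughly all that early conversational programs did. The following minimal, ELIZA-style sketch in Python (the rules and replies are invented for illustration) shows the kind of pattern matching that falls far short of the list above:

    import random

    # A minimal ELIZA-style responder: match a keyword, emit a canned reply.
    # The rules below are invented for illustration; passing a real Turing
    # test would require the four capabilities listed above.
    RULES = {
        "mother": "Tell me more about your family.",
        "chess": "Do you enjoy games of strategy?",
        "think": "What makes you believe machines can think?",
    }
    STOCK_REPLIES = ["Please go on.", "I see.", "Why do you say that?"]

    def respond(utterance):
        lowered = utterance.lower()
        for keyword, reply in RULES.items():
            if keyword in lowered:
                return reply
        return random.choice(STOCK_REPLIES)

    print(respond("I think machines will pass the test"))  # keyword "think" fires
    print(respond("The weather is nice"))                  # stock reply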
2. Thinking humanly: The cognitive modelling approach
If we are going to say that a given program thinks like a
human, we must have some way of determining how humans think.
We need to get inside the actual workings of human minds. There are two ways to do this: through introspection (trying to catch our own thoughts as they go by) or through psychological experiments. Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program. If the program's input/output and timing behavior match human behavior, that is evidence that some of the program's mechanisms may also be operating in humans.
3. Thinking Rationally : The laws of Thought Approach
The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes. His famous syllogisms provided patterns for argument structures that always gave correct conclusions given correct premises. For example, "Socrates is a man; all men are mortal; therefore Socrates is mortal." These laws of thought were supposed to govern the operation of the mind, and initiated the field of logic.
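As a rough sketch of how such reasoning can be mechanized, the short Python fragment below applies the single rule "every man is mortal" to a set of facts by simple forward chaining (the fact encoding is an assumption made for illustration):

    # Facts are (predicate, subject) pairs; the one rule says that
    # anything satisfying "man" also satisfies "mortal".
    facts = {("man", "socrates")}
    rules = [("man", "mortal")]  # if X is a man, then X is mortal

    # Forward chaining: keep applying rules until no new facts appear.
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(facts):
                if predicate == premise and (conclusion, subject) not in facts:
                    facts.add((conclusion, subject))
                    changed = True

    print(("mortal", "socrates") in facts)  # True: Socrates is mortal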
4. Acting Rationally : The rational agent approach
Acting rationally means acting so as to achieve one's
goals, given one's beliefs. An agent is just something that
perceives and acts. (This may be an unusual use of the
word, but you will get used to it.) In this approach, AI is
viewed as the study and construction of rational agents.
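As a minimal illustration of something that perceives and acts, here is a sketch of a simple reflex agent for a toy two-square vacuum world (the environment, the percept format, and the rules are assumptions made for illustration):

    # A simple reflex agent: the percept is (location, status); the agent
    # sucks up dirt where it finds it, otherwise moves to the other square.
    def vacuum_agent(percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    print(vacuum_agent(("A", "Dirty")))  # Suck
    print(vacuum_agent(("A", "Clean")))  # Right

In this tiny world, doing the right thing given what the agent perceives is all the rationality there is to achieve.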
Why Artificial Intelligence ?
Before learning about Artificial Intelligence, we should know why AI is important and why we should learn it. Following are some main reasons to learn about AI:
1. With the help of AI, you can create software or devices which can solve real-world problems easily and with accuracy, such as health issues, marketing, traffic issues, etc.
2. With the help of AI, you can create your own personal virtual assistant, such as Cortana, Google Assistant, Siri, etc.
3. With the help of AI, you can build robots which can work in environments where human survival is at risk.
4. AI opens a path for other new technologies, new devices, and new opportunities.
Goals of Artificial Intelligence
Following are the main goals of Artificial Intelligence:
1. Replicate human intelligence
2. Solve knowledge-intensive tasks
3. An intelligent connection of perception and action
4. Building a machine which can perform tasks that require human intelligence, such as:
 Proving a theorem
 Playing chess
 Planning a surgical operation
 Driving a car in traffic
5. Creating a system which can exhibit intelligent behavior, learn new things by itself, demonstrate, explain, and advise its user.

Foundations of Artificial Intelligence


1. Philosophy : AI in philosophy is mainly concerned with the
following :
• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?
Aristotle was the first to formulate a precise set of laws governing
the rational part of the mind. He developed an informal system of
syllogisms for proper reasoning, which in principle allowed one to
generate conclusions mechanically, given initial premises.
2. Mathematics : Mathematical modeling is the activity devoted to
the study of the simulation of physical phenomena by computational
processes. The goal of the simulation is to predict the behavior of
some artifact within its environment. The AI in Mathematics is
mainly concerned with :
 What are the formal rules to draw valid conclusions?
 What can be computed?
 How do we reason with uncertain information?
Philosophers staked out most of the important ideas of AI, but the leap to a formal science required a level of mathematical formalization in three fundamental areas: logic, computation, and probability. The first nontrivial algorithm is thought to be Euclid's algorithm for computing greatest common divisors. The study of algorithms as objects in themselves goes back to al-Khowarazmi, a Persian mathematician of the 9th century, whose writings also introduced Arabic numerals and algebra to Europe.
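Euclid's algorithm is simple enough to state in a few lines; here is a sketch in Python:

    def gcd(a, b):
        """Euclid's algorithm: replace (a, b) with (b, a mod b) until b is 0."""
        while b:
            a, b = b, a % b
        return a

    print(gcd(48, 36))  # 12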
3. Economics :
Most people think of economics as being about money, but
economists will say that they are really studying how people make
choices that lead to preferred outcomes or utility. Decision theory,
which combines probability theory with utility theory, provides a
formal and complete framework for decisions made under
uncertainty, that is, in cases where probabilistic descriptions
appropriately capture the decision-maker’s environment. This is
suitable for “large” economies. For “small” economies, the situation
is much more like a game: the actions of one player can significantly
affect the utility of another. Von Neumann and Morgenstern’s
development of Game Theory included the surprising result that,
for some games, a rational agent should act in a random fashion, or
at least in a way that appears random to the adversaries.
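A minimal sketch of the decision-theoretic idea in Python, choosing the action with the highest expected utility (the actions, probabilities, and utilities are invented toy numbers):

    # Each action maps to a list of (probability, utility) outcomes.
    actions = {
        "launch_product": [(0.3, 100), (0.7, -20)],  # big win or modest loss
        "do_nothing": [(1.0, 0)],
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    best = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best, expected_utility(actions[best]))  # launch_product 16.0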
The field of operations research, which emerged in World War II from efforts in Britain to optimize radar installations, later found civilian applications in complex management decisions. The work of Richard Bellman (1957) formalized a class of sequential decision problems called Markov decision processes.
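To make the idea of a Markov decision process concrete, here is a minimal value-iteration sketch over a toy two-state MDP (the states, transitions, rewards, and discount factor are invented for illustration):

    # transitions[state][action] = list of (probability, next_state, reward).
    transitions = {
        "poor": {"work": [(1.0, "rich", 10)], "rest": [(1.0, "poor", 0)]},
        "rich": {"work": [(1.0, "rich", 5)],
                 "rest": [(0.5, "poor", 0), (0.5, "rich", 5)]},
    }
    gamma = 0.9  # discount factor

    # Bellman updates: V(s) = max over actions of sum p * (reward + gamma * V(s')).
    V = {s: 0.0 for s in transitions}
    for _ in range(100):
        V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                    for outcomes in transitions[s].values())
             for s in transitions}

    print({s: round(v, 1) for s, v in V.items()})  # {'poor': 55.0, 'rich': 50.0}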
4. Neuroscience :
Neuroscience is the study of the nervous system, particularly the brain. The exact way in which the brain enables thought is one of the great mysteries of science. The brain is recognized as the seat of consciousness. The truly amazing conclusion is that a collection of simple cells can lead to thought, action, and consciousness or, in other words, that brains cause minds. Brains and computers perform quite different tasks and have different properties. Moore's Law predicts that the CPU's gate count will equal the brain's neuron count around 2020.
Human brain capacity doubles roughly every 2 to 4 million years. Even though a computer is a million times faster in raw switching speed, the brain ends up being 100,000 times faster at what it does.
5. Psychology :
The view of the brain as an information-processing device,
which is a principal characteristic of cognitive psychology, can be
traced back at least to the works of William James. The development
of computer modeling led to the creation of the field of cognitive
science. The field can be said to have started at a workshop in
September 1956 at MIT. This is just two months after the conference
at which AI itself was “born.”
6. Linguistics :
Modern linguistics and AI were born at about the same time, and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing.
The problem of understanding language soon turned out to be
considerably more complex than it seemed in 1957. Understanding
language requires an understanding of the subject matter and
context, not just an understanding of the structure of sentences.

History of Artificial Intelligence


Artificial Intelligence is not a new word and not a new technology for researchers. The idea is much older than you might imagine; there are even myths of mechanical men in ancient Greek and Egyptian mythology. Following are some milestones in the history of AI, which trace the journey from AI's beginnings to its development to date.
Gestation of Artificial Intelligence (1943-1952)
 Year 1943: The first work which is now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons (see the sketch after this list).
 Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning.
 Year 1950: Alan Turing, an English mathematician and a pioneer of machine learning, published "Computing Machinery and Intelligence" in 1950, in which he proposed a test that checks a machine's ability to exhibit intelligent behavior equivalent to human intelligence, now called the Turing test.
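A minimal sketch of a McCulloch-Pitts-style threshold neuron together with a Hebbian weight update (the learning rate, inputs, and threshold are toy assumptions for illustration):

    # A threshold neuron fires (returns 1) if the weighted input sum reaches
    # the threshold; Hebb's rule strengthens weights whose input was active
    # when the neuron fired.
    def neuron(inputs, weights, threshold):
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    def hebbian_update(weights, inputs, output, lr=0.1):
        return [w + lr * x * output for w, x in zip(weights, inputs)]

    weights = [1.0, 1.0]
    print(neuron([1, 1], weights, threshold=2))  # 1: acts as a logical AND
    print(neuron([1, 0], weights, threshold=2))  # 0

    # One Hebbian step: co-activity strengthens both connections.
    out = neuron([1, 1], weights, threshold=2)
    print(hebbian_update(weights, [1, 1], out))  # [1.1, 1.1]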
The birth of Artificial Intelligence (1952-1956)
 Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program," named the "Logic Theorist". This program proved 38 of 52 mathematics theorems, and found new and more elegant proofs for some of them.
 Year 1956: The term "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at the Dartmouth Conference, and for the first time AI was coined as an academic field. At that time, high-level computer languages such as FORTRAN, LISP, and COBOL were being invented, and enthusiasm for AI was very high.
The golden years-Early enthusiasm (1956-1974)
 Year 1966: Researchers emphasized developing algorithms which can solve mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, named ELIZA.
 Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
The first AI winter (1974-1980): The period between 1974 and 1980 was the first AI winter. An AI winter refers to a period in which computer scientists dealt with a severe shortage of government funding for AI research. During AI winters, public interest in artificial intelligence decreased.
A boom of AI (1980-1987)
 Year 1980: After the AI winter, AI came back with "expert systems". Expert systems were programs that emulate the decision-making ability of a human expert.
 In the year 1980, the first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford University.

The second AI winter (1987-1993)


 The period between 1987 and 1993 was the second AI winter. Again, investors and governments stopped funding AI research because of high costs and inefficient results. Although expert systems such as XCON had initially been very cost-effective, they proved expensive to maintain.
The emergence of intelligent agents (1993-2011)
 Year 1997: In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to beat a reigning world chess champion.
 Year 2002: For the first time, AI entered the home, in the form of Roomba, a vacuum cleaner.
 Year 2006: AI came into the business world. Companies like Facebook, Twitter, and Netflix started using AI.
Deep learning, big data and artificial general intelligence
(2011-present)
 Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
 Year 2012: Google launched an Android app feature, "Google Now", which was able to provide information to the user as predictions.
 Year 2014: The chatbot "Eugene Goostman" won a competition based on the famous Turing test.
 Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well. Google demonstrated an AI program, "Duplex", a virtual assistant that booked a hairdresser appointment over the phone, and the person on the other end did not notice she was talking to a machine.
 AI has now developed to a remarkable level. Concepts such as deep learning, big data, and data science are booming. Companies like Google, Facebook, IBM, and Amazon are working with AI and creating amazing devices. The future of Artificial Intelligence is inspiring and will bring highly intelligent systems.

The State of the Art


1. Autonomous planning and scheduling : A hundred million miles from Earth, NASA's Remote Agent program became the first onboard autonomous planning program to control the scheduling of operations for a spacecraft. Remote Agent generated plans from high-level goals specified from the ground, and it monitored the operation of the spacecraft as the plans were executed, detecting, diagnosing, and recovering from problems as they occurred.
2. Game playing: IBM’s Deep Blue became the first computer
program to defeat the world champion in a chess match when it
bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition
match. Kasparov said that he felt a “new kind of intelligence” across
the board from him. Newsweek magazine described the match as
“The brain’s last stand.” The value of IBM’s stock increased by $18
billion.
3. Autonomous control: The ALVINN computer vision system was trained to steer a car to keep it following a lane. Video cameras transmit road images to ALVINN, which then computes the best direction to steer, based on experience from previous training runs.
4. Diagnosis: Medical diagnosis programs based on probabilistic
analysis have been able to perform at the level of an expert
physician in several areas of medicine. Heckerman (1991) describes
a case where a leading expert on lymph-node pathology scoffs at a
program’s diagnosis of an especially difficult case. The creators of
the program suggest he ask the computer for an explanation of the
diagnosis. The machine points out the major factors influencing its
decision and explains the subtle interaction of several of the
symptoms in this case. Eventually, the expert agrees with the
program.
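A minimal sketch of the probabilistic reasoning behind such diagnosis programs, applying Bayes' rule with invented toy numbers (the prevalence, sensitivity, and false-positive rate are assumptions for illustration):

    def posterior(prior, sensitivity, false_positive_rate):
        """P(disease | positive test) by Bayes' rule."""
        p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
        return sensitivity * prior / p_positive

    # Toy numbers: 1% prevalence, 90% sensitivity, 5% false positives.
    print(round(posterior(0.01, 0.90, 0.05), 3))  # 0.154

Even a highly sensitive test leaves substantial uncertainty when the disease is rare, which is exactly the kind of subtle interaction such programs must explain.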
5. Logistics Planning : During the Persian Gulf crisis of 1991, U.S.
forces deployed a Dynamic Analysis and Re-planning Tool, DART
(Cross and Walker, 1994), to do automated logistics planning and
scheduling for transportation. This involved up to 50,000 vehicles,
cargo, and people at a time, and had to account for starting points,
destinations, routes, and conflict resolution among all parameters.
The AI planning techniques allowed a plan to be generated in hours
that would have taken weeks with older methods. The Defense Advanced Research Projects Agency (DARPA) stated that this single application more than paid back DARPA's 30-year investment in AI.
6. Robotics : Many surgeons now use robot assistants in microsurgery. Computer vision techniques are used to create a three-dimensional model of a patient's internal anatomy, and robotic control is then used to guide the insertion of a hip replacement prosthesis.
7. Language understanding and problem solving : PROVERB is
a computer program that solves crossword puzzles better than most
humans, using constraints on possible word fillers, a large database
of past puzzles, and a variety of information sources including
dictionaries and online databases such as a list of movies and the
actors that appear in them.
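A tiny sketch of the kind of constraint filtering such a solver relies on, matching a slot pattern against a word list (the word list here is invented; PROVERB's real databases and scoring are far richer):

    import re

    def candidates(pattern, words):
        """Return words fitting a crossword pattern; '.' marks unknown letters."""
        slot = re.compile("^" + pattern + "$")
        return [w for w in words if slot.match(w)]

    words = ["mortal", "morsel", "socrates", "turing"]
    print(candidates("mor...", words))  # ['mortal', 'morsel']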
