Artificial Intelligence by Siva and Nambi


AVINASHI, COIMBATORE – 641 654

PAPER PRESENTED ON ARTIFICIAL INTELLIGENCE

AUTHORED BY

NAMBIRAJAN.S
([email protected])

SIVACHANDRAN.R
([email protected])
Humankind has given itself the scientific name homo sapiens--man the wise--because
our mental capacities are so important to our everyday lives and our sense of self. The
field of artificial intelligence, or AI, attempts to understand intelligent entities. Thus, one
reason to study it is to learn more about ourselves. But unlike philosophy and
psychology, which are also concerned with intelligence, AI strives to build intelligent
entities as well as understand them. Another reason to study AI is that these constructed
intelligent entities are interesting and useful in their own right. AI has produced many
significant and impressive products even at this early stage in its development. Although
no one can predict the future in detail, it is clear that computers with human-level
intelligence (or better) would have a huge impact on our everyday lives and on the future
course of civilization.

The most important fields of research in this area are information processing, pattern
recognition, game-playing computers, and applied fields such as medical diagnosis.
Current research in information processing deals with programs that enable a computer to
understand written or spoken information and to produce summaries, answer specific
questions, or redistribute information to users interested in specific areas of this
information. Essential to such programs is the ability of the system to generate
grammatically correct sentences and to establish linkages between words, ideas, and
associations with other ideas. Research has shown that whereas the logic of language
structure—its syntax—submits to programming, the problem of meaning, or semantics,
lies far deeper, in the direction of true AI.
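
As a rough illustration of how syntax submits to programming, the following Python sketch (the grammar, vocabulary, and function names are invented for illustration) generates grammatically well-formed sentences from a tiny context-free grammar while attempting nothing about meaning:

import random

# A tiny context-free grammar: each symbol maps to a list of possible expansions.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["robot"], ["computer"], ["program"]],
    "V":  [["controls"], ["analyses"], ["models"]],
}

def generate(symbol="S"):
    """Expand a grammar symbol into a list of words, choosing rules at random."""
    if symbol not in GRAMMAR:            # a terminal word: emit it unchanged
        return [symbol]
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(generate(part))
    return words

# Every sentence produced is syntactically well formed, whatever it happens to mean.
print(" ".join(generate()))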

In medicine, programs have been developed that analyze the disease symptoms, medical
history, and laboratory test results of a patient, and then suggest a diagnosis to the
physician. The diagnostic program is an example of so-called expert systems—programs
designed to perform tasks in specialized areas as a human would. Expert systems take
computers a step beyond straightforward programming, being based on a technique called
rule-based inference, in which preestablished rule systems are used to process the data.
Despite their sophistication, systems still do not approach the complexity of true
intelligent thought.
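
A minimal sketch of the rule-based inference such expert systems rely on might look like the following Python fragment; the findings, rules, and diagnoses are invented placeholders rather than rules from any real diagnostic system:

# Each rule pairs the findings it requires with the diagnosis it suggests.
RULES = [
    ({"fever", "cough", "chest_pain"}, "possible pneumonia"),
    ({"fever", "rash"},                "possible measles"),
    ({"headache", "stiff_neck"},       "possible meningitis"),
]

def suggest_diagnoses(findings):
    """Return every diagnosis whose preconditions are all present in the findings."""
    findings = set(findings)
    return [diagnosis for conditions, diagnosis in RULES if conditions <= findings]

# The program only suggests; the physician makes the decision.
print(suggest_diagnoses({"fever", "cough", "chest_pain", "fatigue"}))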

AI addresses one of the ultimate puzzles. How is it possible for a slow, tiny brain,
whether biological or electronic, to perceive, understand, predict, and manipulate a world
far larger and more complicated than itself? How do we go about making something with
those properties? These are hard questions, but unlike the search for faster-than-light
travel or an antigravity device, the researcher in AI has solid evidence that the quest is
possible.
 INTRODUCTION
»INTELLIGENCE
»LEARNING
»REASONING
»PROBLEM-SOLVING
»PERCEPTION
»LANGUAGE-UNDERSTANDING

 HISTORY
»THE ERA OF THE COMPUTER
»THE BEGINNINGS OF AI

 RESEARCHES

 APPLICATIONS
»GENERAL APPLICATIONS
»MILITARY APPLICATIONS
»MEDICAL APPLICATIONS

 CONCLUSION

 REFERENCE
Artificial Intelligence (AI) is the area of computer science focusing on creating
machines that can engage in behaviors that humans consider intelligent. The ability
to create intelligent machines has intrigued humans since ancient times, and today
with the advent of the computer and 50 years of research into AI programming
techniques, the dream of smart machines is becoming a reality. Researchers are
creating systems which can mimic human thought, understand speech, beat the best
human chess player, and countless other feats never before possible. Find out how
the military is applying AI logic to its hi-tech systems, and how in the near future
Artificial Intelligence may impact our lives.

In other words, Artificial Intelligence (AI) is usually defined as the science of making
computers do things that require intelligence when done by humans. AI has had some
success in limited, or simplified, domains. However, the five decades since the inception
of AI have brought only very slow progress, and early optimism concerning the
attainment of human-level intelligence has given way to an appreciation of the profound
difficulty of the problem.

American computer scientist John McCarthy coined the term "artificial intelligence" (AI).
In 1959-1960 he developed LISP, a list-oriented computer programming language, which
became the standard language for AI research.

[Figure: A robot plays a song on a keyboard while sitting next to its inventor.]
What is Intelligence?

Quite simple human behaviour can be intelligent, yet quite complex behaviour performed
by insects can be unintelligent. What is the difference? Consider the behaviour of the digger
wasp, Sphex ichneumoneus. When the female wasp brings food to her burrow, she
deposits it on the threshold, goes inside the burrow to check for intruders, and then if the
coast is clear carries in the food. The unintelligent nature of the wasp's behaviour is
revealed if the watching experimenter moves the food a few inches while the wasp is
inside the burrow checking. On emerging, the wasp repeats the whole procedure: she
carries the food to the threshold once again, goes in to look around, and emerges. She can
be made to repeat this cycle of behaviour upwards of forty times in succession.
Intelligence--conspicuously absent in the case of Sphex--is the ability to adapt one's
behaviour to fit new circumstances.
Mainstream thinking in psychology regards human intelligence not as a single ability or
cognitive process but rather as an array of separate components. Research in AI has
focussed chiefly on the following components of intelligence: learning, reasoning,
problem-solving, perception, and language-understanding.

Learning
Learning is distinguished into a number of different forms. The simplest is learning by
trial-and-error. For example, a simple program for solving mate-in-one chess problems
might try out moves at random until one is found that achieves mate. The program
remembers the successful move and next time the computer is given the same problem it
is able to produce the answer immediately. The simple memorising of individual items--
solutions to problems, words of vocabulary, etc.--is known as rote learning.
Rote learning is relatively easy to implement on a computer. More challenging is the
problem of implementing what is called generalisation. Learning that involves
generalisation leaves the learner able to perform better in situations not previously
encountered. A program that learns past tenses of regular English verbs by rote will not
be able to produce the past tense of e.g. "jump" until presented at least once with
"jumped", whereas a program that is able to generalise from examples can learn the "add-
ed" rule, and so form the past tense of "jump" in the absence of any previous encounter
with this verb. Sophisticated modern techniques enable programs to generalise complex
rules from data.
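
As a concrete illustration of the contrast, here is a small Python sketch (the stored verb pairs and function names are assumptions made for illustration): the rote learner can answer only for verbs it has already seen, while the generalising learner applies the induced "add -ed" rule to verbs it has never encountered.

# Rote learner: remembers only the exact pairs it has been shown.
rote_memory = {"walk": "walked", "play": "played"}

def rote_past_tense(verb):
    return rote_memory.get(verb)          # None for any verb not yet memorised

# Generalising learner: has induced the "add -ed" rule from the same examples.
def generalised_past_tense(verb):
    return verb + "ed"

print(rote_past_tense("jump"))            # None -- "jumped" was never presented
print(generalised_past_tense("jump"))     # "jumped" -- the rule covers new verbs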

Reasoning
To reason is to draw inferences appropriate to the situation in hand. Inferences are
classified as either deductive or inductive. An example of the former is "Fred is either in
the museum or the café; he isn't in the café; so he's in the museum", and of the latter
"Previous accidents just like this one have been caused by instrument failure; so probably
this one was caused by instrument failure". The difference between the two is that in the
deductive case, the truth of the premisses guarantees the truth of the conclusion, whereas
in the inductive case, the truth of the premiss lends support to the conclusion that the
accident was caused by instrument failure, but nevertheless further investigation might
reveal that, despite the truth of the premiss, the conclusion is in fact false.
There has been considerable success in programming computers to draw inferences,
especially deductive inferences. However, a program cannot be said to reason simply in
virtue of being able to draw inferences. Reasoning involves drawing inferences that are
relevant to the task or situation in hand. One of the hardest problems confronting AI is
that of giving computers the ability to distinguish the relevant from the irrelevant.
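
Drawing the deductive inference in the Fred example is straightforward to program. The sketch below (the encoding of the facts and rules is an assumption made for illustration) uses simple forward chaining, repeatedly adding any conclusion whose premisses are already among the known facts. Note that the program applies every rule mechanically; judging which inferences are relevant, as the text stresses, is the hard part.

# Rules: if every premiss is among the known facts, the conclusion may be added.
RULES = [
    ({"fred_in_museum_or_cafe", "fred_not_in_cafe"}, "fred_in_museum"),
]

def forward_chain(facts):
    """Repeatedly apply the rules until no new conclusion can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premisses, conclusion in RULES:
            if premisses <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fred_in_museum_or_cafe", "fred_not_in_cafe"}))
# -> the result now includes "fred_in_museum"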

Problem-solving
Problems have the general form: given such-and-such data, find x. A huge variety of
types of problem is addressed in AI. Some examples are: finding winning moves in board
games; identifying people from their photographs; and planning series of movements that
enable a robot to carry out a given task.
Problem-solving methods divide into special-purpose and general-purpose. A special-
purpose method is tailor-made for a particular problem, and often exploits very specific
features of the situation in which the problem is embedded. A general-purpose method is
applicable to a wide range of different problems. One general-purpose technique used in
AI is means-end analysis, which involves the step-by-step reduction of the difference
between the current state and the goal state. The program selects actions from a list of
means--which in the case of, say, a simple robot, might consist of pickup, putdown,
moveforward, moveback, moveleft, and moveright--until the current state is transformed
into the goal state.
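
A minimal sketch of means-end analysis for such a robot, limited here to the movement actions and assuming a simple grid world invented for illustration:

# The robot's state is its (x, y) position; each means changes it by a fixed step.
MEANS = {
    "moveright":   (1, 0),
    "moveleft":    (-1, 0),
    "moveforward": (0, 1),
    "moveback":    (0, -1),
}

def difference(state, goal):
    """How far the current state still is from the goal state."""
    return abs(goal[0] - state[0]) + abs(goal[1] - state[1])

def means_end_analysis(state, goal):
    """Repeatedly pick the means that most reduces the current-goal difference."""
    plan = []
    while state != goal:
        action, (dx, dy) = min(
            MEANS.items(),
            key=lambda item: difference((state[0] + item[1][0],
                                         state[1] + item[1][1]), goal))
        state = (state[0] + dx, state[1] + dy)
        plan.append(action)
    return plan

print(means_end_analysis((0, 0), (2, 1)))
# -> one step-by-step plan, e.g. ['moveright', 'moveright', 'moveforward']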

Perception
In perception the environment is scanned by means of various sense-organs, real or
artificial, and processes internal to the perceiver analyse the scene into objects and their
features and relationships. Analysis is complicated by the fact that one and the same
object may present many different appearances on different occasions, depending on the
angle from which it is viewed, whether or not parts of it are projecting shadows, and so
forth.
At present, artificial perception is sufficiently well advanced to enable a self-controlled
car-like device to drive at moderate speeds on the open road, and a mobile robot to roam
through a suite of busy offices searching for and clearing away empty soda cans. One of
the earliest systems to integrate perception and action was FREDDY, a stationary robot
with a moving TV 'eye' and a pincer 'hand' (constructed at Edinburgh University during
the period 1966-1973 under the direction of Donald Michie). FREDDY was able to
recognise a variety of objects and could be instructed to assemble simple artefacts, such
as a toy car, from a random heap of components.

Language-understanding
A language is a system of signs having meaning by convention. Traffic signs, for
example, form a mini-language, it being a matter of convention that, for example, the
hazard-ahead sign means hazard ahead. This meaning-by-convention that is distinctive of
language is very different from what is called natural meaning, exemplified in statements
like 'Those clouds mean rain' and 'The fall in pressure means the valve is malfunctioning'.
An important characteristic of full-fledged human languages, such as English, which
distinguishes them from, e.g. bird calls and systems of traffic signs, is their productivity.
A productive language is one that is rich enough to enable an unlimited number of
different sentences to be formulated within it.
It is relatively easy to write computer programs that are able, in severely restricted
contexts, to respond in English, seemingly fluently, to questions and statements, for
example the early Parry and Shrdlu programs.
However, neither Parry nor Shrdlu actually understands language. An appropriately
programmed computer can use language without understanding it, in principle even to the
point where the computer's linguistic behaviour is indistinguishable from that of a native
human speaker of the language. What, then, is
involved in genuine understanding, if a computer that uses language indistinguishably
from a native human speaker does not necessarily understand? There is no universally
agreed answer to this difficult question. According to one theory, whether or not one
understands depends not only upon one's behaviour but also upon one's history: in order
to be said to understand one must have learned the language and have been trained to take
one's place in the linguistic community by means of interaction with other language-
users.

[Figure: A robotic hand sensitive enough to hold an egg without breaking it; computer circuits control how tightly it grips.]
Evidence of Artificial Intelligence folklore can be traced back to ancient Egypt, but with
the development of the electronic computer in 1941, the technology finally became
available to create machine intelligence. The term artificial intelligence was first coined
in 1956, at the Dartmouth conference, and since then Artificial Intelligence has expanded
because of the theories and principles developed by its dedicated researchers. Through its
short modern history, advancement in the field of AI has been slower than first
estimated, but progress continues to be made. From its birth 4 decades ago, there have been a
variety of AI programs, and they have impacted other technological advancements.

The Era of the Computer


In 1941 an invention revolutionized every aspect of the storage and processing of
information. That invention, developed in both the US and Germany was the electronic
computer. The first computers required large, separate air-conditioned rooms, and were a
programmer's nightmare, involving the separate configuration of thousands of wires to
even get a program running.
The 1949 innovation, the stored program computer, made the job of entering a program
easier, and advancements in computer theory led to computer science, and eventually
Artificial intelligence. With the invention of an electronic means of processing data, came
a medium that made AI possible.

The Beginnings of AI
Although the computer provided the technology necessary for AI, it was not until the
early 1950's that the link between human intelligence and machines was really observed.
Norbert Wiener was one of the first Americans to make observations on the principle of
feedback theory. The most familiar example of feedback theory is the
thermostat: it controls the temperature of an environment by gathering the actual
temperature of the house, comparing it to the desired temperature, and responding by
turning the heat up or down. What was so important about his research into feedback
loops was that Wiener theorized that all intelligent behavior was the result of feedback
mechanisms, mechanisms that could possibly be simulated by machines. This discovery
influenced much of the early development of AI.
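
The thermostat is itself easy to express as a feedback loop in code. The toy simulation below (the temperatures, rates, and names are illustrative assumptions, not a real controller) compares the measured value to the desired value on every cycle and responds by switching the heat:

def thermostat_step(actual, desired, heater_on):
    """One feedback cycle: compare the measured temperature to the setpoint."""
    if actual < desired - 0.5:        # too cold, so turn the heat up
        heater_on = True
    elif actual > desired + 0.5:      # too warm, so turn the heat down
        heater_on = False
    return heater_on

# Crude simulation: the room warms while the heater runs and cools otherwise.
temperature, heater_on = 16.0, False
for _ in range(20):
    heater_on = thermostat_step(temperature, 20.0, heater_on)
    temperature += 0.8 if heater_on else -0.3
    print(round(temperature, 1), "heater on" if heater_on else "heater off")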
In late 1955, Newell and Simon developed The Logic Theorist, considered by many to be
the first AI program. The program, representing each problem as a tree model, would
attempt to solve it by selecting the branch that would most likely result in the correct
conclusion. The impact that The Logic Theorist made on both the public and the field of AI
has made it a crucial stepping stone in the development of the field.
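
The strategy described, representing a problem as a tree and exploring the most promising branch first, corresponds roughly to what is now called best-first search. The following sketch is a generic best-first search, not a reconstruction of The Logic Theorist; the toy problem, scoring function, and names are invented for illustration.

import heapq

def best_first_search(start, is_goal, successors, score):
    """Always expand the node that currently looks most promising (lowest score)."""
    frontier = [(score(start), start)]
    visited = set()
    while frontier:
        _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node
        if node in visited:
            continue
        visited.add(node)
        for child in successors(node):
            heapq.heappush(frontier, (score(child), child))
    return None

# Toy problem: reach 13 from 1 by adding 1 or doubling, preferring states
# that are numerically closer to the target.
target = 13
print(best_first_search(
    start=1,
    is_goal=lambda n: n == target,
    successors=lambda n: [n + 1, n * 2],
    score=lambda n: abs(target - n)))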
In 1956 John McCarthy regarded as the father of AI, organized a conference to draw the
talent and expertise of others interested in machine intelligence for a month of
brainstorming. He invited them to Vermont for "The Dartmouth summer research project
on artificial intelligence." From that point on, because of McCarthy, the field would be
known as Artificial intelligence. Although not a huge success, (explain) the Dartmouth
conference did bring together the founders in AI, and served to lay the groundwork for
the future of AI research.
The 1980's were not totally good for the AI industry. In 1986-87 the demand for AI
systems decreased, and the industry lost almost half a billion dollars. Companies
such as Teknowledge and Intellicorp together lost more than $6 million, about a third of
their total earnings. The large losses convinced many research leaders to cut back
funding. Another disappointment was the so-called "smart truck" financed by the Defense
Advanced Research Projects Agency. The project's goal was to develop a robot that could
perform many battlefield tasks. In 1989, due to project setbacks and unlikely success, the
Pentagon cut funding for the project.
Despite these discouraging events, AI slowly recovered. New technology was being
developed in Japan. Fuzzy logic, first pioneered in the US, has the unique ability to make
decisions under uncertain conditions. Neural networks were also being reconsidered as
possible ways of achieving artificial intelligence. The 1980's introduced AI to its place in
the corporate marketplace, and showed that the technology had real-life uses, ensuring it
would be a key technology in the 21st century.
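
Fuzzy logic handles uncertain conditions by replacing true/false membership with degrees between 0 and 1 and blending rules weighted by those degrees. A minimal sketch (the membership functions and the fan-speed example are assumptions for illustration, not taken from any real product):

def warm(temp_c):
    """Degree (0..1) to which a temperature counts as 'warm' (peaks at 25 C)."""
    return max(0.0, 1.0 - abs(temp_c - 25.0) / 10.0)

def hot(temp_c):
    """Degree (0..1) to which a temperature counts as 'hot' (full from 40 C up)."""
    return min(1.0, max(0.0, (temp_c - 30.0) / 10.0))

def fan_speed(temp_c):
    """Blend two fuzzy rules: 'if warm, fan at 40%' and 'if hot, fan at 100%'."""
    w, h = warm(temp_c), hot(temp_c)
    if w + h == 0:
        return 0.0
    return (w * 0.4 + h * 1.0) / (w + h)   # average weighted by degree of membership

for t in (10, 20, 33, 40):
    print(t, "C ->", round(fan_speed(t), 2))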

[Figure: One of the first computers, UNIVAC I (1951); the earliest computers were so big that they filled entire rooms.]
Research in AI divides into three categories: "strong" AI, applied AI, and cognitive
simulation or CS.
Strong AI aims to build machines whose overall intellectual ability is indistinguishable
from that of a human being. Joseph Weizenbaum, of the MIT AI Laboratory, has
described the ultimate goal of strong AI as being "nothing less than to build a machine on
the model of man, a robot that is to have its childhood, to learn language as a child does,
to gain its knowledge of the world by sensing the world through its own organs, and
ultimately to contemplate the whole domain of human thought". The term "strong AI",
now in wide use, was introduced for this category of AI research in 1980 by the
philosopher John Searle, of the University of California at Berkeley. Some believe that
work in strong AI will eventually lead to computers whose intelligence greatly exceeds
that of human beings. Edward Fredkin, also of MIT AI Lab, has suggested that such
machines "might keep us as pets". Strong AI has caught the attention of the media, but by
no means all AI researchers view strong AI as worth pursuing. Excessive optimism in the
1950s and 1960s concerning strong AI has given way to an appreciation of the extreme
difficulty of the problem, which is possibly the hardest that science has ever undertaken.

To date, progress has been meagre. Some critics doubt whether research in the next few
decades will produce even a system with the overall intellectual ability of an ant.
Applied AI, also known as advanced information-processing, aims to produce
commercially viable "smart" systems--such as, for example, a security system that is able
to recognise the faces of people who are permitted to enter a particular building. Applied
AI has already enjoyed considerable success. Various applied systems are described in
this article.
General applications
AI currently encompasses a huge variety of subfields, from general-purpose areas such as
perception and logical reasoning, to specific tasks such as playing chess, proving
mathematical theorems, writing poetry, and diagnosing diseases. Often,
scientists in other fields move gradually into artificial intelligence, where they find the
tools and vocabulary to systematize and automate the intellectual tasks on which they
have been working all their lives. Similarly, workers in AI can choose to apply their
methods to any area of human intellectual endeavor. In this sense, it is truly a universal
field.

Military applications
The military put AI-based hardware to the test of war during Desert Storm. AI-based
technologies were used in missile systems, heads-up displays, and other advancements.
AI has also made the transition to the home. With the popularity of AI on personal
computers growing, the interest of the public has also grown. Applications for the Apple
Macintosh and IBM-compatible computers, such as voice and character recognition, have
become available. AI technology has also made steadying camcorders simple using fuzzy
logic. With a greater demand for AI-related technology, new advancements are becoming
available.

Medical applications
In medicine, programs have been developed that analyze the disease symptoms, medical
history, and laboratory test results of a patient, and then suggest a diagnosis to the
physician. The diagnostic program is an example of so-called expert systems—programs
designed to perform tasks in specialized areas as a human would. Expert systems take
computers a step beyond straightforward programming, being based on a technique called
rule-based inference, in which preestablished rule systems are used to process the data.
Despite their sophistication, systems still do not approach the complexity of true
intelligent thought.

Expressive AI is a new interdiscipline of AI-based cultural production combining art
practice and AI research practice. Expressive AI changes the focus from an AI system as a
thing in itself (presumably demonstrating some essential feature of intelligence) to the
communication between author and audience. The technical practice of building the
artifact becomes one of exploring which architectures and techniques best serve as an
inscription device within which the authors can express their message. Expressive AI does
not single out a particular technical tradition as being peculiarly suited to culture
production. Rather, expressive AI is a stance or viewpoint from which all of AI can be
rethought and transformed.

Reference
1. M. Mateas, "Computational Subjectivity in Virtual World Avatars", in Working Notes of the Socially Intelligent Agents Symposium, AAAI Fall Symposium Series (Menlo Park, Calif.: AAAI Press, 1997).
2. W. S. Neal Reilly, Believable Social and Emotional Agents, Ph.D. diss. (School of Computer Science, Carnegie Mellon University, 1996).
3. A. B. Loyall and J. Bates, "Hap: A Reactive, Adaptive Architecture for Agents", Technical Report CMU-CS-91-147 (Department of Computer Science, Carnegie Mellon University, 1991).
4. J. Bates, "Virtual Reality, Art, and Entertainment", Presence: The Journal of Teleoperators and Virtual Environments 1(1): 133-138 (1992).
5. http://physics.sys.edu/courses/modules/mm/ai/ai.html
