Unit I (A)
Chapter 1: Introduction to AI
Concept of AI
Many human mental activities, such as developing computer programs, working out mathematics, engaging in common-sense reasoning, understanding and interpreting language, and even driving an automobile, are said to demand "intelligence". Several computer systems have been built that can perform tasks such as these. There are also specially developed computer systems that can diagnose diseases, solve quadratic equations, and understand human speech and natural language text.
We can say that all such systems possess a certain degree of artificial intelligence.
The central question behind all such activities and systems is "How to think", or rather "How to make a system think". The process of thinking involves steps such as perceiving, understanding, predicting and manipulating a world that is made up of many complex things and situations.
The field of AI attempts not just to understand intelligence but also to build intelligent entities.
Various Definitions of AI
1. AI may be defined as the branch of computer science that is concerned with the automation of
intelligent behaviour. (Luger-1993)
3. The exciting new effort to make computers think ... machines with minds, in the full and literal sense. (Haugeland - 1985)
4. "The automation of activities that we associate with human thinking, activities such as devision
making, problem solving, learning ..." (Bellman-1978)
6. "The art of creating machines that perform functions that require intelligence, when performed by
people". (Kurzweil - 1990)
7. "The study of how to make computers do things at which, at the moment, people are better".
(Rich and Knight - 1991)
9. The study of mental faculties through the use of computational models. (Charniak and McDermott
- 1985)
10. "The study of the computations that make it possible to perceive, reason and act". (Winston -
1992)
12. "Computational intelligence is the study of the design of intelligent agents"."(Poole et al - 1998)
13. "AI is concerned with intelligent behaviour in artifacts". (Nilsson - 1998)
These definitions vary along two main dimensions. The first dimension concerns thought processes and reasoning; the second concerns the behaviour of the machine.
The first seven definitions are based on comparisons to human performance, whereas the remaining definitions measure success against an ideal concept of intelligence, which we call rationality. A system is rational if it does the "right thing" given what it knows. Historically, four approaches have been followed in AI: Acting Humanly, Thinking Humanly, Thinking Rationally and Acting Rationally. Let us consider these four approaches in detail.
1) Acting Humanly
To test intelligence, Alan Turing (1950) proposed a test now known as the Turing test. He suggested a test based on comparison with the most intelligent entity we know - human beings. To pass the test, the computer would need the following capabilities:
a) Natural language processing to enable it to communicate successfully.
b) Knowledge representation to store what it knows or hears.
c) Automated reasoning to make use of the stored information to answer the questions being asked and to draw new conclusions.
d) Machine learning to adapt to new circumstances and to detect and make new predictions by finding patterns.
Turing's test deliberately avoids physical interaction between the interrogator and the computer, because physical simulation of a person is unnecessary for intelligence. The Total Turing Test, however, includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch".
To pass the Total Turing Test the computer will, in addition, need computer vision to perceive objects and robotics to manipulate objects and move about.
2) Thinking Humanly
If we are to say that a given program thinks like a human, we must have some way of determining how humans think. For that, the theory of the human mind needs to be explored. There are two ways to do this: through introspection, i.e. trying to catch our own thoughts as they go by, and through psychological experiments.
If the program's input-output and timing behaviour matches the corresponding human behaviour, that is evidence that some of the program's mechanisms could also be operating in humans. The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the workings of the human mind.
The concept of "Right thinking" was proposed by Aristotle. This idea provided patterns for argument
structures that always yielded correct conclusions when given correct premises.
For example,
"Ram is man",
"Ram is mortal".
These laws of thought were supposed to govern the operation in the mind; their study initiated the
field called logic which can be implemented to create intelligent systems.
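The inference pattern above can be mechanized directly. The following minimal Python sketch is our own illustration (not from the syllabus): it encodes the premise "Ram is a man" as a fact, the premise "all men are mortal" as a rule, and applies the rule until nothing new can be derived.

# Minimal forward-chaining sketch of Aristotle's syllogism (illustrative only).
facts = {("Man", "Ram")}                  # premise: Ram is a man
rules = [(("Man",), "Mortal")]            # premise: all men are mortal

changed = True
while changed:                            # apply rules until no new fact is derived
    changed = False
    for premises, conclusion in rules:
        for predicate, subject in list(facts):
            if predicate in premises and (conclusion, subject) not in facts:
                facts.add((conclusion, subject))
                changed = True

print(("Mortal", "Ram") in facts)         # True: Ram is mortal

A real logic-based system (for example one written in PROLOG, mentioned later in this chapter) generalizes exactly this idea of deriving new expressions from old ones.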
4) Acting Rationally
An agent (from the Latin agere, to do) is something that acts. Computer agents, however, are expected to have other attributes that distinguish them from mere "programs": they need to operate under autonomous control, perceive their environment, persist over a prolonged time period, adapt to change and be capable of taking on new goals. A rational agent is expected to act so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
The laws of thought emphasize correct inference, which should be incorporated in a rational agent.
Foundation of AI
Now we discuss the various disciplines that contributed ideas, viewpoints and techniques to AI.
Philosophy provides a base for AI through theories of the relationship between the physical brain and the mental mind, and rules for drawing valid conclusions. It also addresses where knowledge originates and how knowledge leads to action.
Mathematics gives AI a strong base by providing concrete, formal rules for drawing valid conclusions, various methods of computation and techniques to deal with uncertain information.
Economics supports AI with methods for making decisions so as to maximize payoff, including decisions made under uncertain circumstances.
Neuroscience supplies information about how the brain processes information, which helps AI to develop data processing theories.
Psychology provides strong concepts of how humans and animals think and act, which helps AI in developing models of thinking and action.
After taking brief look at various disciplines that contribute towards AI, now let us look at the
concept of strong and weak AI which also gives basic foundation for developing automated systems.
1. Strong AI
This concept was put forward by John Searle in 1980 in his article, "Minds, Brains and Programs".
Strong AI provides theories for developing some form of computer-based AI that can truly reason and solve problems; a strong form of AI is said to be sentient or self-aware. Two forms are usually distinguished:
Human-like AI - in which the computer program thinks and reasons much like a human mind.
Non-human-like AI - in which the computer program develops a totally non-human sentience and a non-human way of thinking and reasoning.
2. Weak AI
Weak artificial intelligence research deals with the creation of some form of computer-based AI that cannot truly reason and solve problems in general. Such systems can reason and solve problems only in a limited domain; such a machine would, in some ways, act as if it were intelligent, but it would not possess true intelligence.
There are several fields of weak AI, one of which is natural language. Much of the work in this field
has been done with computer simulations of intelligence based on predefined sets of rules. Very
little progress has been made in strong AI. Depending on how one defines one's goals, a moderate
amount of progress has been made in weak AI.
History of AI
The early work that is now generally recognized as AI was done in the period 1943 to 1955. The first formal AI ideas were put forward by Warren McCulloch and Walter Pitts (1943). Their work drew on three sources: first, knowledge of the basic physiology and function of neurons in the brain; second, a formal analysis of propositional logic; and third, Turing's theory of computation.
Later, Donald Hebb (1949) demonstrated a simple updating rule for modifying the connection strengths between neurons. His rule, now called Hebbian learning, remains a highly influential model in AI.
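The text does not state the rule explicitly; in modern notation it is commonly summarized (our addition, not from the original) as strengthening a connection in proportion to the joint activity of the two neurons it links:

Δw_ij = η · a_i · a_j

where w_ij is the connection strength between neurons i and j, a_i and a_j are their activation levels, and η is a small learning-rate constant.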
There was a great deal of other early work that can be recognized as AI, but it was Alan Turing who first articulated a complete vision of AI, in his 1950 article "Computing Machinery and Intelligence".
The birth year of AI as a field is 1956, when John McCarthy organized a workshop on automata theory, neural nets and the study of intelligence; the researchers who presented their papers there came away with a new field of computer science called AI.
From 1952 to 1969 a large amount of work was done with great success.
Newell and Simon presented the General Problem Solver (GPS). Within the limited class of puzzles it could handle, it turned out that the order in which the program considered subgoals and possible actions was similar to that in which humans approached the same problems. GPS was probably the first program to embody the "thinking humanly" approach.
Herbert Gelernter (1959) constructed the Geometry Theorem Prover, which was capable of proving quite tricky mathematical theorems.
At MIT, in 1958, John McCarthy made a major contribution to the AI field: the development of the high-level language LISP, which became the dominant AI programming language.
In 1958 McCarthy also published a paper entitled Programs with Common Sense, in which he described the Advice Taker, a hypothetical program that can be seen as the first complete AI system. Like the Logic Theorist and the Geometry Theorem Prover, McCarthy's program was designed to use knowledge to search for solutions to problems.
The program was also designed so that it could accept new axioms in the normal course of
operation, thereby allowing it to achieve competence in new areas without being reprogrammed.
The Advice Taker thus embodied the central principles of knowledge representation and reasoning.
Early work building on the neural networks of McCulloch and Pitts also flourished. The work of Winograd and Cowan (1963) showed how a large number of elements could collectively represent an individual concept, with a corresponding increase in robustness and parallelism. Hebb's learning methods were enhanced by Bernie Widrow (Widrow and Hoff, 1960; Widrow, 1962), who called his networks adalines, and by Frank Rosenblatt (1962) with his perceptrons. Rosenblatt proved the perceptron convergence theorem, showing that his learning algorithm could adjust the connection strengths of a perceptron to match any input data, provided such a match existed.
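The learning algorithm behind that theorem is very short. The following Python sketch is our own illustration (the data set, learning rate and epoch count are assumptions, not from the text): it adjusts the weights only on mistakes and, because the AND function below is linearly separable, the convergence theorem guarantees it stops making errors.

# Illustrative perceptron learning rule (not from the original text).
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # inputs
y = np.array([0, 0, 0, 1])                               # AND targets (linearly separable)

w = np.zeros(2)       # connection strengths
b = 0.0               # bias
eta = 0.1             # learning rate

for _ in range(20):                                      # a few passes suffice here
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        # Update only on mistakes; converges whenever a separating boundary exists.
        w += eta * (target - pred) * xi
        b += eta * (target - pred)

print(w, b)           # weights and bias that realize the AND function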
In 1965, Weizenbaum's ELIZA program appeared to conduct a serious conversation on any topic, essentially by borrowing and manipulating the sentences typed to it by a human. None of the programs developed so far had deep domain knowledge; their general-purpose search mechanisms were called "weak" methods. Researchers realized that more knowledge was necessary for more complicated, larger reasoning tasks.
The DENDRAL program, developed by Buchanan in 1969, was based on these principles. It was a unique program that effectively used domain-specific knowledge in problem solving. In the mid-1970s MYCIN was developed to diagnose blood infections; it used expert knowledge to diagnose illnesses and prescribe treatments. MYCIN is also known as the first program to address the problem of reasoning with uncertain or incomplete information.
Within a very short time a number of knowledge representation languages were developed, such as predicate calculus, semantic networks, frames and objects. Some of them are based on mathematical logic, such as PROLOG. Although PROLOG goes back to 1972, it did not attract widespread attention until a more efficient version was introduced in 1979.
As researchers put forward real, useful AI systems, AI grew into a big industry.
In 1981 the Japanese announced the Fifth Generation project, a 10-year plan to build intelligent computers running PROLOG. The US responded by forming the Microelectronics and Computer Technology Corporation (MCC) for research in AI.
Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988. Soon afterwards, however, the industry suffered a huge setback, as many companies failed to deliver on extravagant promises.
In the late 1970s psychologists carried out further research on neural networks, and this work continued into the 1980s.
In the 1990s AI emerged as a science; in terms of methodology, AI finally came firmly under the scientific method. In speech recognition, for example, approaches based on Hidden Markov Models (HMMs) came to dominate the area. This approach rests on two aspects: a rigorous mathematical theory, and models generated by a process of training on a large corpus of real speech data.
Judea Pearl's (1988) Probabilistic Reasoning in Intelligent Systems led to a new acceptance of probability theory in AI. The Bayesian network formalism was invented to represent uncertain knowledge along with support for reasoning over it.
Judea Pearl, Eric Horvitz and David Heckerman, in 1986, promoted the idea of normative expert systems that act rationally according to the laws of decision theory. Similar but slower revolutions have occurred in robotics, computer vision and knowledge representation.
In 1987 a complete agent architecture called SOAR was worked out by Allen Newell, John Laird and Paul Rosenbloom. Many such agents were developed to work in the biggest environment of all, the Internet. AI systems have become so common in web-based applications that the "-bot" suffix has entered everyday language.
AI technologies underlie many Internet tools, such as search engines, recommender systems and website aggregators.
While developing complete agents, it was realized that previously isolated subfields of AI may need to be reorganized when their results are to be tied together.
Today it is widely appreciated that sensory systems (vision, sonar, speech recognition, etc.) cannot deliver perfectly reliable information about the environment; hence reasoning and planning systems must be able to handle uncertainty. AI has also been drawn into much closer contact with other fields, such as control theory and economics, that also deal with agents.
NASA's Remote Agent program became the first on-board autonomous planning program to control the scheduling of operations for a spacecraft. Such remote agents can detect, diagnose and recover from problems as they occur.
Game Playing
A computer chess program by IBM named Deep Blue defeated world chess champion Garry Kasparov in an exhibition match in 1997. Game-playing programs of this kind can be developed using AI techniques.
Autonomous Control
The ALVINN computer vision system was trained to steer a car so as to keep it in its lane. It travelled 2850 miles, with the system in control 98 % of the time and a human taking over only 2 % of the time. AI can provide more theories for developing such systems.
Diagnosis
Heckerman (1991) describes a case where a leading expert on lymph node pathology scoffed at a program's diagnosis of a difficult case. The machine was able to explain its diagnosis: it pointed out the major factors influencing its decision and explained the interaction of several of the symptoms in this case. If such diagnostic programs are developed using AI, highly accurate diagnoses can be made.
Planning
In 1991, during the Persian Gulf crisis, U.S. forces deployed a Dynamic Analysis and Replanning Tool (DART) for automated logistics planning and scheduling of transportation.
Robotics
Systems for complex and critical tasks can be developed using AI techniques. For example, surgeons can use robot assistants in microsurgery that generate a 3D view of the patient's internal anatomy.
Language Understanding and Problem Solving
Programs have also been built that solve word problems such as crossword puzzles. Such a program can make use of constraints on possible word fillers, a large database of past puzzles, and a variety of information sources including dictionaries and online databases, such as a list of movies and the actors that appear in them.
AI is not magic or science fiction; rather, it builds systems grounded in science, engineering and mathematics.
Recent progress in understanding the theoretical basis for intelligence has gone hand in hand with improvements in the capabilities of real systems. The subfields of AI have become more integrated, and AI has found common ground with other disciplines.
Human Vs Machine
AU: May-12
1) Machines do not have life, as they are mechanical. Humans, on the other hand, are made of flesh and blood; life for humans is not mechanical.
2) Humans have feelings and emotions and can express them. Machines have no feelings and emotions; they just work as per the details fed into their mechanical brain.
3) Humans have the capability to understand situations and behave accordingly; machines do not have this capability.
4) While humans behave as per their consciousness, machines just perform as they are taught.
5) Humans perform activities as per their own intelligence; machines have only an artificial intelligence.
6) The brain is a massively parallel machine; machines are modular and serial.
7) Processing speed is not fixed in the brain; a machine has a fixed speed specification.
8) Unlike in a machine, processing and memory management are performed by the same components in the brain.
9) Brains have bodies, and the brain is much, much bigger than any current machine.
Examples of Expert Systems
Some well-known expert systems and the tasks they perform are listed below.
2. DENDRAL - Advised the user on how to interpret the output from a mass spectrograph.
3. CENTAUR, INTERNIST, PUFF, CASNET - Are all medical expert systems for various purposes.
8. PROSPECTOR - Interpreted geological data as potential evidence for mineral deposits. (Duda, Hart, 1976)
9. NAVEX - Monitored radar data and estimated the velocity and position of the space shuttle.
(Marsh, 1984)
10. R1/XCON - Configured VAX computer systems on the basis of customers' needs. (McDermott, 1980)
11. COOKER ADVISER - Provides repair advice with respect to canned soup sterilizing machines.
(Texas Instruments, 1986)
12. VENTILATOR MANAGEMENT ASSISTANT - Scrutinised the data from hospital breathing-support machines and provided accounts of the patient's condition. (Fagan, 1978)
13. MYCIN - Diagnosed blood infections of the sort that might be contracted in hospital.
14. CROP ADVISOR - Developed by ICI to advise cereal grain farmers on appropriate fertilizers and
pesticides for their farms.
15. OPTIMUM - AIV - is a planner used by the European Space Agency to help in the assembly,
integration and verification of spacecraft.
AI Problems
Much of the early work in AI focused on formal tasks, such as game playing and theorem proving. Chess playing is one example; the Logic Theorist was an early attempt to prove mathematical theorems. Game playing and theorem proving share the property that people who do them well are considered to be displaying intelligence.
At first it appeared that computers could perform these tasks well simply by exploring a large number of solution paths rapidly and then selecting the best one. This assumption turned out to be false, because no computer is fast enough to overcome the combinatorial explosion generated by most problems.
AI also focuses on the sort of problem solving we do every day, for instance when deciding how to get to work in the morning; this is often called commonsense reasoning. In investigating this sort of reasoning, Newell, Shaw and Simon built the General Problem Solver (GPS), which they applied to several commonsense tasks as well as to performing symbolic manipulations of logical expressions. However, no attempt was made to create a program with a large amount of knowledge about a particular problem domain; only quite simple tasks were selected.
As AI research progressed and techniques for handling larger amounts of world knowledge were developed, problem solving moved into specialized domains such as medical diagnosis and chemical analysis.
Perception (vision and speech) is another area of AI problems, as are natural language understanding and problem solving in specialized domains. The problem of understanding spoken language is a perceptual problem and is hard to solve because it is more analog than digital in nature. Many people can perform one or more specialized tasks in which carefully acquired expertise is necessary. Examples of such tasks include engineering design, scientific discovery, medical diagnosis and financial planning. Programs that can solve problems in these domains also fall under the aegis of Artificial Intelligence.
AI tasks can be grouped into three broad classes:
1. Mundane tasks - Perception (vision, speech), Natural language (understanding, generation, translation), Commonsense reasoning, Robot control
2. Formal tasks - Games (Chess, etc.), Mathematics (Geometry, Logic, Integral calculus, etc.)
3. Expert tasks - Engineering (Design, Fault finding, Manufacturing planning), Scientific analysis, Medical diagnosis, Financial analysis
A person who knows how to perform tasks from several of the categories shown in the above list learns the necessary skills in a standard order. First, perceptual, linguistic and commonsense skills are learned; later, expert skills such as engineering, medicine or finance are acquired. The earlier skills appeared easier, and thus more amenable to computerized duplication, than the later, more specialized ones. For this reason much of the initial AI work was concentrated in those early areas.
The problem areas where AI is now flourishing most as a practical discipline are primarily the domains that require only specialized expertise, without the assistance of commonsense knowledge. Expert systems (AI programs) now take on day-to-day tasks, solving part, or perhaps all, of practical, significant problems that previously required high human expertise.
When one is building such an expert system, the following questions need to be considered before one can progress further: What are the underlying assumptions about intelligence? What kinds of techniques will be useful for solving AI problems? At what level of detail, if at all, are we trying to model human intelligence? How will we know when we have succeeded in building an intelligent program?
Underlying Assumption
A physical symbol system consists of a set of entities called symbols which are patterns that can
occur as components of another entity called an expression. At an instant the system will contain a
collection of these symbol structures. In addition the system also contains a collection of processes
that operate on expressions to produce other expressions; processes of creation, modification,
reproduction and destruction. A physical symbol system is a machine that produces through time an evolving collection of symbol structures. Newell and Simon's physical symbol system hypothesis states that a physical symbol system has the necessary and sufficient means for general intelligent action. Examples of physical symbol systems include:
• Formal logic: The symbols are words like "and", "or", "not", "for all x" and so on. The expressions
are statements in formal logic which can be true or false. The processes are the rules of logical
deduction.
• Algebra: The symbols are "+", "×", "x", "y", "1", "2", "3", etc. The expressions are equations. The processes are the rules of algebra that allow you to manipulate a mathematical expression and retain its truth.
• A digital computer: The symbols are zeros and ones of computer memory, the processes are the
operations of the CPU that change memory.
• Chess: The symbols are the pieces, the processes are the legal chess moves, and the expressions are the positions of all the pieces on the board.
The physical symbol system hypothesis claims that two further examples are also physical symbol systems: intelligent human thought and a running artificial intelligence program. In human thought the symbols are encoded in our brains, the expressions are thoughts, and the processes are the mental operations of thinking. In a running artificial intelligence program the symbols are data, the expressions are more data, and the processes are the programs that manipulate the data.
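To make the symbols / expressions / processes vocabulary concrete, here is a minimal Python sketch of our own (the arithmetic example is an assumption, not from the text): symbols are distinguishable tokens, an expression is a structure built from symbols, and a process turns one expression into another.

# Illustrative physical symbol system: symbols, expressions and processes.
PLUS, ONE, TWO = "+", "1", "2"          # symbols: distinguishable tokens

expression = (PLUS, ONE, TWO)           # an expression: a structure of symbols (1 + 2)

def evaluate(expr):
    # a process: operates on an expression to produce another expression
    op, a, b = expr
    if op == PLUS:
        return str(int(a) + int(b))     # produces the new symbol structure "3"
    return expr

print(evaluate(expression))             # "3"

In a real AI program the same pattern holds at a much larger scale: the data structures are the expressions and the program's procedures are the processes.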
The importance of the physical symbol system hypothesis is twofold. It is a significant theory of the nature of human intelligence, and it forms the basis of the belief that it is possible to build programs that can perform intelligent tasks currently performed by people.
What is an AI Technique?
Intelligence requires knowledge, but knowledge possesses some less desirable properties:
1. It is voluminous.
2. It is difficult to characterize accurately.
3. It is constantly changing.
4. It differs from data by being organised in a way that corresponds to its application.
An AI technique is a method that exploits knowledge represented so that it captures generalizations: situations that share properties are grouped together rather than being given separate representations. The knowledge can be understood by the people who must provide it, although for many programs the bulk of the data may come automatically, for example from readings.
In many AI domains people must supply the knowledge to programs in a form they understand and in a form that is acceptable to the program. Knowledge should be easily modifiable to correct errors and reflect changes in real conditions. Knowledge can be useful even if it is incomplete or inaccurate, and it can help overcome its own sheer bulk by narrowing the range of possibilities that must ordinarily be considered. Important AI techniques include the following:
• Search - Provides a way of solving problems for which no more direct approach is available (a minimal sketch of search follows this list).
• Use of knowledge - Provides a way of solving complex problems by exploiting the structures of the
objects that are involved.
• Abstraction - Provides a way of separating important features and variations from the many
unimportant ones that would otherwise overwhelm any process.
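As promised above, the following breadth-first search sketch is our own illustration of "search" as an AI technique; the small state graph and the state names are assumptions made for the example.

# Illustrative breadth-first search over a tiny hand-made state graph.
from collections import deque

graph = {
    "start": ["a", "b"],
    "a": ["goal"],
    "b": ["a"],
    "goal": [],
}

def bfs(start, goal):
    frontier = deque([[start]])              # paths waiting to be extended
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path                      # first path found is a shortest one
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                              # no route to the goal

print(bfs("start", "goal"))                  # ['start', 'a', 'goal']

Real AI systems combine such search with the other two techniques: knowledge to prune the possibilities, and abstraction to ignore irrelevant detail.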
Before starting to do something, it is a good idea to decide exactly what one is trying to do. One should ask the following questions for self-analysis:
• What is our goal in trying to produce programs that do the intelligent tasks that people do?
• Are we trying to produce programs that do the tasks the same way people do them? Or are we trying to produce programs that simply do the tasks in whatever way appears easiest?
Efforts to build programs that perform tasks the way people do can be divided into two classes. The first class consists of attempts to solve problems that do not really fit our definition of AI, i.e. problems that a computer could easily solve. The second class of attempts to model human performance consists of programs that do things falling more clearly within our definition of AI tasks; they do things that are not trivial for the computer.
There are several reasons for modeling human performance at these kinds of tasks:
• To test psychological theories of human performance; e.g. the PARRY program was written for this reason, exploiting a model of human paranoid behaviour to simulate the conversational behaviour of a paranoid person.
• To enable computers to understand human reasoning; for example, for a computer to be able to read a newspaper story and then answer a question such as "Why did Ravana lose the game?"
• To enable people to understand computer reasoning. In many cases people are reluctant to rely on the output of a computer unless they can understand how the machine arrived at its result.
• To exploit the knowledge of the people who perform such tasks best, by asking them how they proceed.
One of the most important questions to answer in any scientific or engineering research project is "How will we know if we have succeeded?". In AI we therefore have to ask ourselves: how will we know if we have constructed a machine that is intelligent? This question is as hard as the unanswerable question "What is intelligence?"
To measure progress we use the method proposed by Alan Turing, known as the Turing test, to determine whether a machine can think. To conduct this test we need two people and the machine to be evaluated. One person acts as the interrogator and is in a separate room from the computer and the other person. The interrogator can ask questions of either the person or the computer by typing questions and receiving typed responses. However, the interrogator knows them only as A and B and aims to determine which is the person and which is the machine. The goal of the machine is to fool the interrogator into believing that it is the person. If the machine succeeds at this, then we conclude that the machine can think.
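The setup can be pictured as a small program. The sketch below is purely illustrative (the canned answers and the random assignment are our assumptions); it only shows the structure of the test - the interrogator sees two anonymous respondents and must decide, from typed answers alone, which one is the machine.

# Illustrative structure of the Turing test (imitation game).
import random

def human_respond(question):
    return "I think the answer depends on the context."

def machine_respond(question):
    return "I think the answer depends on the context."   # tries to appear human

# Respondents are labelled A and B at random; the interrogator sees only labels.
pair = [human_respond, machine_respond]
random.shuffle(pair)
respondents = {"A": pair[0], "B": pair[1]}

question = "What do you enjoy doing on a rainy day?"
answers = {label: ask(question) for label, ask in respondents.items()}
print(answers)

# A real interrogator would base this guess on the answers, not on chance.
guess = random.choice(["A", "B"])
print("Interrogator guesses the machine is:", guess)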
Looking back over this brief tour of AI's history and related work, it can be seen that the goal of AI is to construct working programs that solve problems and are useful for the well-being of humans.
A major issue in AI is acquiring a large enough amount of data and processed knowledge to deal with realistic problems, not just toy problems. Once the amount of knowledge grows, it becomes harder to access the appropriate pieces when they are required.
A good programming language is required to process the knowledge involved in AI problems. LISP has been the most commonly used language for AI programming. In general, AI programs are easiest to build using languages designed to support symbolic rather than primarily numeric computation.
As an industry, AI is still a bud yet to bloom. In this syllabus we are going to study some of the basic but major topics related to AI.
The AI
AI can be termed as "The science and engineering of making intelligent machines, especially
intelligent computer programs".
Intelligence distinguishes humans from everything else in the world: the ability to understand and apply knowledge, and to improve skills, played a significant role in human evolution.
Artificial Intelligence (AI) is the simulation of human intelligence by machines. It is the means by which machines demonstrate certain aspects of human intelligence such as learning, reasoning and self-correction. Since its inception, AI has shown unprecedented growth; Sophia the AI robot is a quintessential example. The future of artificial intelligence is unclear, but going by the progress AI has been making, it is clear that AI will impact every sphere of human life.
It is highly possible that areas in which human-level general intelligence is not needed will reach maturity and produce reliable, high-quality products within the next decade. Efforts to improve quality and expand the boundaries of text and video understanding systems, and to give home robots greater reliability and overall utility, will lead to common-sense systems linking learning and action together across all these modalities.
Special systems for the acquisition and organization of scientific knowledge, and for working with complex hypotheses, are likely to greatly affect molecular biology, systems biology and medicine. We should also expect similar influence in the social sciences and policy making, especially given the massive growth of machine-readable data on human activities and the need for machines that understand human values if they are to be reliable and useful. Public and private sources of knowledge (systems that know and draw conclusions about the real world, and not just store data) will become part of society.
Where earlier AI developers tried to create a machine that could perform tasks independently, the situation has now changed: the goal put before artificial intelligence is to help a person in various matters. Under this modern approach, artificial intelligence is beginning to simplify and improve various processes; for example, in Western countries robots conduct primary medical diagnostics on the basis of a dialogue with the patient and his test results.
Another promising sphere is the prediction and even some manipulation of human behaviour in
advertising systems.
One can expect an increase in the quality of the work of search engines and machine translation. This
will be possible due to the fact that the computer will begin to understand and analyze the meaning
of the text.
Cyber Security
In the near future, applying AI in cyber security will help curb hackers. The incidence of cybercrime is an issue that has been escalating through the years; it costs enterprises both in brand image and in material cost. Credit card fraud is one of the most prevalent cybercrimes. Despite existing detection techniques, they still prove ineffective in curbing hackers. Newer AI techniques such as recurrent neural networks can detect fraud in its initial stages. Such a fraud detection system would be able to scan thousands of transactions instantly and predict or classify them into buckets.
Face Recognition
The launch of the iPhone X with its face recognition feature was a step towards the AI future. In the coming years, phone users may routinely unlock their phones by looking into the front camera. Authenticating personal content is not the only use of facial recognition: governments and security forces make use of this feature to track down criminals and identify citizens. In the future, facial recognition may go beyond physical structure to emotional analysis; for example, it might become possible to detect whether a person is stressed or angry, or to read any required emotional aspect.
Data Analysis
AI will benefit business in the field of data analysis. AI can perceive patterns in data that humans cannot, which enables businesses to target the right customers for their product. An example
of this is the partnership between IBM and Fluid. Fluid, a digital retail company uses Watson - an AI
application created by IBM for insightful product recommendation to its customers.
Transport
AI-guided transport will no longer be just a fantasy but a reality. Self-driving cars have already entered the market; however, a driver is still required at the wheel for safety purposes. With Google, Uber and General Motors trying to establish themselves at the top of this market, it will not be long before driverless vehicles become routine. Machine learning will be crucial in ensuring that these automated vehicles operate smoothly, efficiently and error-free.
Various Jobs
Applications like robotic process automation use machine learning to automate rule-based tasks. This will help people focus on the critical aspects of their job while leaving the routine aspects to machines. Automation can range from data entry to complete process automation. The reach of AI is also expected to cover jobs that are risky or health-hazardous, such as bomb defusal and welding, which would benefit mankind.
Emotion Bots
Technology has advanced in terms of emotional quotient. Virtual assistants such as Cortana and Alexa show the extent to which AI comprehends human language. They are able to understand meaning from context and can make intelligent judgments. Considering all this, emotional bots might well become a reality in the future.
The application of AI in sales and marketing is going to be a certainty, considering that marketing professionals leave no stone unturned to benefit their business. AI can increase the efficiency of a sales and marketing organization; the focus will be on improving conversion rates and sales. Personalised advertising, and knowledge of customers and their behaviour gleaned through facial recognition, can generate more revenue.
AI is the latest buzzword in tech. All the major business giants will make use of AI in the future to improve their productivity, so AI is going to permeate every job sector. It can create new career paths in the fields of machine learning, data mining and analysis, AI software development, program management and testing. Sectors and roles likely to be affected include:
• Private companies
• Public organizations
• Education
• The arts
• Healthcare facilities
• The military.
• Algorithm specialists.
• Military and aviation electricians working with flight simulators, drones, and armaments.
Review Questions
7. Explain the nature and scope of AI. Why are game playing problems considered AI problems?
9. Define the term "Artificial Intelligence". Explain how AI techniques improve real-world problem
solving.
10. What is the significance of the "Turing Test" in AI? Explain how it is performed. (Refer section
Ans.: AI is a branch of computer science that deals with the automation of intelligent behaviour. AI gives a basis for developing human-like programs which can be useful for solving real-life problems and thereby become useful to mankind.
Ans.: A machine that looks like a human being and performs various complex acts of a human being. It can do tasks efficiently and repeatedly without fault. It works on the basis of a program fed to it; it can have previously stored knowledge, and it can gather more knowledge from the environment through its sensors. It acts with the help of actuators.
University Question with Answer
May 2012
Q.1 Is AI a science or is it engineering? Or neither, or both? Explain. (Refer section 1.4) [8]