Artificial Intelligence
AI is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior.
AI is a branch of computer science which deals with helping machines find solutions to complex problems in a more human-like fashion. This generally involves borrowing characteristics from human intelligence and applying them as algorithms in a computer-friendly way.
Brief History of Artificial Intelligence
The development of AI can be understood under the following headings:
1. The gestation of artificial intelligence (1943-1955)
The first work on AI was done by Warren McCulloch and Walter Pitts (1943). They drew
on three sources: knowledge of the basic physiology and function of neurons in the brain;
a formal analysis of propositional logic due to Russell and Whitehead; and Turing's theory
of computation. They showed, for example, that any computable function could be
computed by some network of connected neurons, and that all the logical connectives
(and, or, not, etc.) could be implemented by simple net structures. McCulloch and Pitts
also suggested that suitably defined networks could learn.
In 1950, Alan Turing published the article "Computing Machinery and Intelligence", in which
he introduced the Turing Test, machine learning, genetic algorithms, and reinforcement
learning.
In 1949, Donald Hebb demonstrated a simple updating rule (now known as Hebbian
learning) for modifying the connection strengths between neurons.
In 1950, two Princeton graduate students, Marvin Minsky and Dean Edmonds, built the first
neural network computer, which was named "SNARC".
2. The birth of artificial intelligence (1956)
In 1956, McCarthy, Minsky, Claude Shannon, and Rochester organized a two-month
workshop at Dartmouth College in Hanover, New Hampshire, which brought together
U.S. researchers interested in automata theory, neural networks, and the study of
intelligence. The objective of the workshop was to find out how to make machines use
language, form abstractions and concepts, solve kinds of problems now reserved for
humans, and improve themselves. There were altogether 10 attendees.
John McCarthy first coined the term "artificial intelligence" at the Dartmouth
conference. He defined AI as the science and engineering of making intelligent
machines.
3. Early enthusiasm, great expectations (1952-1969)
Newell and Simon developed the General Problem Solver (GPS). This program could
imitate human problem-solving protocols, but could only handle a limited class of
puzzles.
In 1952, Arthur Samuel wrote a series of programs for checkers (draughts) that
eventually learned to play at a strong amateur level.
In MIT AI Lab Memo No. 1 (1958), McCarthy defined the high-level AI programming
language Lisp. In the same year, McCarthy published a paper entitled "Programs
with Common Sense" in which he described the Advice Taker, a hypothetical program
that can be seen as the first complete AI system.
In 1959, Herbert Gelernter developed the Geometry Theorem Prover, which could prove
theorems using explicitly represented axioms.
In 1963, McCarthy started the AI lab at Stanford.
In 1965, J. A. Robinson discovered the resolution method, a complete theorem-proving
algorithm for first-order logic.
In 1963, James Slagle developed the SAINT program, which was able to solve closed-form
calculus integration problems typical of first-year college courses.
In 1967, Daniel Bobrow developed the STUDENT program, which solved algebra story
problems, such as the following:
If the number of customers Rabindra gets is twice the square of 10% of the number of
advertisements he runs, and the number of advertisements he runs is 50, what is the
number of customers Rabindra gets?
Tom Evans' ANALOGY program (1968) solved geometry analogy problems that appear
in IQ tests.
Hebb's learning methods were enhanced by Bernie Widrow (Widrow and Hoff, 1960;
Widrow, 1962), who called his networks "adalines", and by Frank Rosenblatt (1962)
with his "perceptrons".
4. A dose of reality (1966-1973)
In 1957, Simon made concrete predictions that within 10 years a computer would be a chess
champion and a significant mathematical theorem would be proved by machine. These
predictions came true within 40 years rather than 10. There were several reasons for this delay:
The first reason was that most early programs knew nothing of their subject matter; they
succeeded by means of simple syntactic manipulations. This led to problems in
early machine translation efforts.
The second reason was the intractability of many of the problems that AI was attempting
to solve. Most of the early AI programs solved problems by trying out different
combinations of steps until the solution was found. This strategy worked initially but later
failed in many situations.
The third reason was that there were some fundamental limitations on the basic structure
being used to generate intelligent behavior.
5. Knowledge-based systems: The key to power? (1969-1979)
In this period, more powerful, domain-specific knowledge was used that allowed larger
reasoning steps.
The DENDRAL program (Buchanan et al., 1969) was developed at Stanford and was the
first successful knowledge-intensive system: its expertise derived from a large number of
special-purpose rules. The program was used to solve the problem of inferring molecular
structure from the information provided by a mass spectrometer.
Feigenbaum, Buchanan, and Edward Shortliffe developed MYCIN to diagnose blood
infections. With about 450 rules, MYCIN was able to perform as well as some experts.
Winograd developed the SHRDLU system for understanding natural language; it was
designed specifically for one limited area, the blocks world.
At Yale, Roger Schank and his students built a series of programs that all had the task of
understanding natural language. The emphasis was less on language in itself and more on
the problems of representing and reasoning with the knowledge required for language
understanding.
A large number of different representation and reasoning languages were developed.
Some were based on logic, such as the Prolog language (popular in Europe) and PLANNER
(popular in the United States). Others, following Minsky's idea of frames (1975), adopted
a more structured approach.
6. AI becomes an industry (1980 - present)
The first successful commercial expert system, R1, began operation at the Digital
Equipment Corporation (McDermott, 1982). The program helped configure orders for
new computer systems; by 1986, it was saving the company an estimated $40 million a
year.
Nearly every major U.S. corporation had its own AI group and was either using or
investigating expert systems.
In 1981, Japan announced the "Fifth Generation" project to build intelligent computers
running Prolog. In response, the United States formed the Microelectronics and
Computer Technology Corporation (MCC) as a research consortium.
Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in
1988, including hundreds of companies building expert systems, vision systems, robots,
and software and hardware specialized for these purposes.
7. The return of neural networks (1986 - present)
In the mid-1980s, the back-propagation learning algorithm was reinvented by several
different groups. The algorithm was applied to many learning problems in computer
science and psychology.
Modern neural network research has bifurcated into two fields, one concerned with
creating effective network architectures and algorithms and understanding their
mathematical properties, the other concerned with careful modeling of the empirical
properties of actual neurons and ensembles of neurons.
8. AI adopts the scientific method (1987 - present)
AI has finally come firmly under the scientific method. It is now possible to replicate experiments
by using shared repositories of test data and code.
Approaches based on hidden Markov models (HMMs) have come to dominate the field
of speech recognition. HMMs are based on a rigorous mathematical theory and are
generated by a process of training on a large corpus of real speech data.
Machine translation follows the same course as speech recognition.
Neural networks also fit this trend. As neural networks have started using improved
methodology and theoretical frameworks, "data mining" technology has spawned a
vigorous new industry.
The "Bayesian network" formalism was invented to allow efficient representation of, and
rigorous reasoning with, uncertain knowledge.
Similar gentle revolutions have occurred in robotics, computer vision, and knowledge
representation.
9. The emergence of intelligent agents (1995 - present)
One of the most important environments for intelligent agents is the Internet. AI systems
have become more common in Web-based applications. Moreover, AI technologies
underlie many Internet tools such as search engines, recommender systems, and Web
site aggregators.
The first conference on "Artificial General Intelligence (AGI)" was held in 2008. AGI looks
for a universal algorithm for learning and acting in any environment.
10. The availability of very large data sets (2001- present)
More emphasis is now placed on data than on algorithms, driven by the increasing availability
of very large data sources: for example, trillions of words of English and billions of images
from the Web, and billions of base pairs of genomic sequences.
Today, many thousands of AI applications are deeply embedded in the infrastructure of
every industry.
Consider the following scenario. There are two rooms, A and B. One of the rooms contains a
computer, and the other contains a human. The interrogator is outside and does not
know which room has the computer. He can ask questions through a teletype and receives
answers from both rooms A and B. The interrogator needs to identify whether the human is in
room A or in room B. To pass the Turing test, the computer has to fool the interrogator into
believing that it is human. To do so, the computer would need capabilities such as the following:
Automated reasoning: To use the stored information to answer questions and to draw new
conclusions.
Machine learning: To adapt to new circumstances and to detect and extrapolate patterns.
Computational intelligence is the study of the design of intelligent agents. This model is
also known as the rational agent approach. A rational agent is one that acts so as to achieve the
best outcome or, when there is uncertainty, the best expected outcome.
The rational agent approach is more general than the laws-of-thought approach, which emphasizes
correct inference. Making correct inferences is sometimes part of being a rational agent,
because one way to act rationally is to reason logically to the conclusion that a given action
will achieve one's goals and then to act on that conclusion. However, there are also ways of
acting rationally that do not involve inference, e.g., a reflex action.
We are a privileged generation to live in this era full of technological advancements. Gone are
the days when almost everything was done manually; now we live in a time when a lot of
work is taken over by machines, software, and various automatic processes. In this regard,
artificial intelligence has a special place among all the advancements made today. AI is nothing but the
science of computers and machines developing intelligence like that of humans. With this technology,
machines can carry out some of the simplest to the most complex tasks that humans need to do regularly.
Artificial intelligence applications help get work done faster and with accurate results.
Error-free and efficient work is the main motive behind artificial intelligence.
A concise answer to what AI can do today is difficult because there are so many activities in so
many subfields. There are numerous real-world applications of AI, which signifies how important
AI is in today's world. The following applications illustrate the importance of artificial
intelligence in today's world:
a. Game playing
b. Speech recognition
c. Natural Language Processing
d. Computer Vision
e. Expert Systems
f. Consumer marketing
g. Intrusion Detection
h. Autonomous planning and scheduling
i. Medical diagnosis
j. Media and entertainment.
k. Scientific research
l. Finance
Knowledge is more general, which means it may be applied to situations for which we have not
been explicitly programmed.
Importance of Knowledge
A sufficient amount of knowledge leads to intelligence: if we have enough knowledge, then
we can achieve intelligence. Knowledge plays a major role in building intelligent systems. Not
only that, knowledge is very important even in our daily lives. It makes us superior and gives us
wisdom.
Learning
In simple terms, learning refers to the cognitive process of acquiring skill or knowledge.
It is the making of a useful change in our minds. Learning can simply be understood as the process or
phase of gaining knowledge or skills. It means constructing or modifying the
representation of what is being experienced.
According to Herbert Simon (1983), "Learning is the phenomenon of knowledge acquisition in
the absence of explicit programming". Learning denotes changes in a system that are adaptive in
the sense that they enable the system to do the same task, or tasks drawn from the same
population, more efficiently and more effectively the next time.
Learning basically involves three factors, which are presented below:
Changes:
Learning changes the learner. For machine learning, the problem is determining the nature of
these changes and how best to represent them.
Generalization:
Learning leads to generalization: performance must improve not only on the same task but
also on similar tasks.
Improvement:
Learning leads to improvement. Machine learning must address the possibility that changes
may degrade performance and find ways to prevent this.
Intelligent Agents
Agents
An agent is anything that can be viewed as perceiving its environment through sensors and acting
upon that environment through actuators.
Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other
body parts for actuators.
Robotic agent: cameras and infrared range finders for sensors; various motors for
actuators.
Agents and Environment
The term "percept" refers to the agent's perceptual input at any given instant, that is, location
and state of the environment. An agent's percept sequence is the complete history of everything
the agent has ever perceived.
The agent function maps from percept histories to actions:
[f: P* → A]
The agent program runs on the physical architecture to produce f:
Agent = architecture + program
e.g., the vacuum-cleaner world:

Percept sequence    Action
[A, clean]          Right
[A, dirty]          Suck
[B, clean]          Left
[B, dirty]          Suck
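
The agent function above can also be written down directly as a lookup, as in the following minimal Python sketch (not part of the original text); the location names "A"/"B", the status strings "clean"/"dirty", and the choice to key the table on the current percept alone are assumptions made for illustration.

    # Sketch of the vacuum-cleaner agent function f: P* -> A as a table lookup.
    # Each percept is a (location, status) pair; the encodings below are
    # illustrative assumptions, not prescribed by the text.
    ACTION_TABLE = {
        ("A", "clean"): "Right",
        ("A", "dirty"): "Suck",
        ("B", "clean"): "Left",
        ("B", "dirty"): "Suck",
    }

    def vacuum_agent(percept):
        """Return the action that the table assigns to the current percept."""
        return ACTION_TABLE[percept]

    print(vacuum_agent(("A", "dirty")))  # prints: Suck

Keying the table on the full percept sequence instead of the current percept would give the table-driven agent discussed under agent types below.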
Rational Agents
For each possible percept sequence, a rational agent should select an action that is expected to
maximize its performance measure, based on the evidence provided by the percept sequence
and whatever built-in knowledge the agent has. A performance measure is an objective criterion
for the success of an agent's behaviour. E.g., the performance measure of a vacuum-cleaner agent
could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity
consumed, the amount of noise generated, etc.
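
As a rough illustration (the weighting below is an assumption chosen for this sketch, not something specified in the text), such criteria can be combined into a single score that a rational agent tries to maximize in expectation:

    # Illustrative-only performance measure for the vacuum-cleaner agent,
    # combining the criteria listed above; the weights are assumptions.
    def performance_measure(dirt_cleaned, time_taken, electricity_used, noise_made):
        return dirt_cleaned - 0.5 * time_taken - 0.2 * electricity_used - 0.1 * noise_made

    # A rational agent selects actions expected to maximize this score.
    print(performance_measure(dirt_cleaned=10, time_taken=8, electricity_used=5, noise_made=3))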
Rationality is distinct from omniscience. An omniscient agent knows the actual outcome of its
actions and can act accordingly, but omniscience is impossible in reality. Rationality, by contrast,
maximizes expected performance.
Agents can act to modify future percepts to obtain useful information (information gathering,
exploration). An agent is autonomous if its behaviour is determined by its own percepts and
experiences (with the ability to learn and adapt) without depending solely on built-in knowledge.
In our discussion of the rationality of the simple vacuum-cleaner agent, we had to specify the
performance measure, the environment, and the agent's actuators and sensors. We group all of
these together under the heading of the task environment.
Before we design an intelligent agent, we must specify its task environment (PEAS).
Performance measure P
Environment E
Actuators A
Sensors S
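
As a small sketch (an assumption-laden illustration, not taken from the original text), a PEAS specification can be recorded as a simple data structure; the concrete entries below are one plausible filling-in for the vacuum-cleaner agent.

    # Sketch: a PEAS task-environment description held as a data structure.
    # The vacuum-cleaner entries are illustrative assumptions.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PEAS:
        performance_measure: List[str]
        environment: List[str]
        actuators: List[str]
        sensors: List[str]

    vacuum_peas = PEAS(
        performance_measure=["dirt cleaned up", "time taken", "electricity consumed"],
        environment=["squares A and B", "dirt"],
        actuators=["move left", "move right", "suck"],
        sensors=["location sensor", "dirt sensor"],
    )
    print(vacuum_peas)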
Agent Types
There are five basic types of agents, in order of increasing generality.
1. Table-driven agents
These agents use a percept-sequence/action table held in memory to find the next action. They
are implemented by a large lookup table.
2. Simple reflex agents
These agents select actions on the basis of condition-action rules, implemented with an appropriate
production system. They are stateless devices which do not have a memory of past world states;
a minimal sketch follows below.
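
The following Python sketch (the rule encodings and percept format are assumptions for illustration, not from the original text) shows a simple reflex agent for the vacuum-cleaner world: it matches only the current percept against condition-action rules and keeps no memory of past percepts.

    # Sketch of a simple reflex agent with condition-action rules.
    # Rules are (condition, action) pairs checked in order; the percept
    # encoding (location, status) is an assumption for illustration.
    RULES = [
        (lambda location, status: status == "dirty", "Suck"),
        (lambda location, status: location == "A", "Right"),
        (lambda location, status: location == "B", "Left"),
    ]

    def simple_reflex_agent(percept):
        """Choose an action from the current percept only; no state is kept."""
        location, status = percept
        for condition, action in RULES:
            if condition(location, status):
                return action

    print(simple_reflex_agent(("B", "dirty")))  # prints: Suck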