
A TECHNICAL REPORT ON:

“ARTIFICIAL INTELLIGENCE”

Prepared by: G. BHAGYARAJ [17835A0324]

To: Mrs. Renuka

GURU NANAK INSTITUTE OF TECHNOLOGY
Table of contents:

 1 History
 2 Problems
o 2.1 Deduction, reasoning, problem solving

o 2.2 Knowledge representation

o 2.3 Planning

o 2.4 Learning

o 2.5 Natural language processing

o 2.6 Motion and manipulation

o 2.7 Perception

o 2.8 Social intelligence

o 2.9 Creativity

o 2.10 General intelligence

 3 Approaches
o 3.1 Cybernetics and brain simulation

o 3.2 Symbolic

o 3.3 Sub-symbolic

o 3.4 Statistical

o 3.5 Integrating the approaches

 4 Tools
o 4.1 Search and optimization

o 4.2 Logic

o 4.3 Probabilistic methods for uncertain reasoning

o 4.4 Classifiers and statistical learning methods


o 4.5 Neural networks

o 4.6 Control theory

o 4.7 Languages

 5 Evaluating progress
 6 Applications
o 6.1 Competitions and prizes

o 6.2 Platforms

 Conclusion
 Bibliography

PREFACE
This report presents information on a forthcoming and very interesting technology, “Artificial
Intelligence” (AI). We have tried to explore the full breadth of the field, which encompasses
logic, probability, and continuous mathematics; perception, reasoning, learning, and action; and
everything from microelectronic devices to robotic planetary explorers. The report also goes into
some depth on how human behavior can be studied and then implemented on machines, so that a
machine would be able to think in ways that resemble a human brain. Information about
machines in relation to human beings is presented throughout this report.

The main unifying theme is the idea of an intelligent agent. We define AI as the study of agents
that receive percepts from the environment and perform actions. Each such agent implements a
function that maps percept sequences to actions, and we cover different ways to represent these
functions, such as reactive agents, real-time planners, and decision-theoretic systems. We explain
the role of learning as extending the reach of the designer into unknown environments, and we
show how that role constrains agent design, favoring explicit knowledge representation and
reasoning. We treat robotics and vision not as independently defined problems, but as occurring
in the service of achieving goals. We stress the importance of the task environment in
determining the appropriate agent design.
Our primary aim is to convey the ideas that have emerged over the past fifty years of AI research
and the past two millennia of related work. We have tried to avoid excessive formality in the
presentation of these ideas while retaining precision.

ACKNOWLEDGEMENTS

It is important to give credit to those who have helped us in this work, so we take immense
pleasure in thanking Mrs. Renuka, our beloved Communication & Presentation
Skills lecturer and guide, for having permitted us to carry out this report on the topic
“Artificial Intelligence”. We wish to express our deep sense of gratitude for her able guidance
and useful suggestions, which helped us complete the report on time and to a high standard.

Needless to say, we received a great deal of help from books on LISP, the popular AI
programming language created by John McCarthy, and from a number of popular websites on
Artificial Intelligence, which were a good source of knowledge for us in preparing the report. We
would also like to thank all the students who helped us gather information and provided valuable
assistance in writing the report.

Finally, yet importantly, we would like to express our heartfelt thanks to the beloved
parents of all the students in this group for their blessings toward the successful completion of this
report.

ABSTRACT

Artificial intelligence (AI) is the intelligence of machines and the branch of computer science
that aims to create it. This report defines the field as "the study and design of intelligent agents", where
an intelligent agent is a system that perceives its environment and takes actions that maximize its
chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and
engineering of making intelligent machines."

The field was founded on the claim that a central property of humans, intelligence (the
sapience of Homo sapiens), can be so precisely described that it can be simulated by a
machine. This raises philosophical issues about the nature of the mind and the limits of scientific
hubris, issues which have been addressed by myth, fiction and philosophy since antiquity.
Artificial intelligence has been the subject of optimism, but has also suffered setbacks and,
today, has become an essential part of the technology industry, providing the heavy lifting for
many of the most difficult problems in computer science.

AI research is highly technical and specialized, deeply divided into subfields that often fail to
communicate with each other. Subfields have grown up around particular institutions, the
work of individual researchers, the solution of specific problems, longstanding differences of
opinion about how AI should be done and the application of widely differing tools. The
central problems of AI include such traits as reasoning, knowledge, planning, learning,
communication, perception and the ability to move and manipulate objects. General
intelligence (or "strong AI") is still a long-term goal of (some) research.

Introduction
Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the
golden robots of Hephaestus and Pygmalion's Galatea. Human likenesses believed to have
intelligence were built in every major civilization: animated statues were seen in Egypt and
Greece and humanoid automatons were built by Yan Shi, Hero of Alexandria, Al-Jazari and
Wolfgang von Kempelen. It was also widely believed that artificial beings had been created by
Jābir ibn Hayyān, Judah Loew and Paracelsus. By the 19th and 20th centuries, artificial beings
had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's
R.U.R. (Rossum's Universal Robots). Pamela McCorduck argues that all of these are examples of
an ancient urge, as she describes it, "to forge the gods". Stories of these creatures and their fates
discuss many of the same hopes, fears and ethical concerns that are presented by artificial
intelligence.

Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since
antiquity. The study of logic led directly to the invention of the programmable digital electronic
computer, based on the work of mathematician Alan Turing and others. Turing's theory of
computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could
simulate any conceivable act of mathematical deduction. This, along with recent discoveries in
neurology, information theory and cybernetics, inspired a small group of researchers to begin to
seriously consider the possibility of building an electronic brain.
In the early 1980s, AI research was revived by the commercial success of expert systems, a form
of AI program that simulated the knowledge and analytical skills of one or more human experts.
By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth
generation computer project inspired the U.S. and British governments to restore funding for
academic research in the field. However, beginning with the collapse of the Lisp Machine
market in 1987, AI once again fell into disrepute, and a second, longer-lasting AI winter began.

In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind
the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many
other areas throughout the technology industry. The success was due to several factors: the
incredible power of computers today (see Moore's law), a greater emphasis on solving specific
subproblems, the creation of new ties between AI and other fields working on similar problems,
and above all a new commitment by researchers to solid mathematical methods and rigorous
scientific standards.

Chapter.1: Problems
The general problem of simulating (or creating) intelligence has been broken down into a
number of specific sub-problems. These consist of particular traits or capabilities that researchers
would like an intelligent system to display. The traits described below have received the most
attention.

Deduction, reasoning, problem solving

Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans
were often assumed to use when they solve puzzles, play board games or make logical
deductions. By the late 1980s and '90s, AI research had also developed highly successful
methods for dealing with uncertain or incomplete information, employing concepts from
probability and economics.

For difficult problems, most of these algorithms can require enormous computational resources;
most experience a "combinatorial explosion": the amount of memory or computer time required
becomes astronomical when the problem goes beyond a certain size. The search for more
efficient problem-solving algorithms is a high priority for AI research.
Human beings solve most of their problems using fast, intuitive judgments rather than the
conscious, step-by-step deduction that early AI research was able to model. AI has made some
progress at imitating this kind of "sub-symbolic" problem solving: embodied agent approaches
emphasize the importance of sensorimotor skills to higher reasoning; neural net research
attempts to simulate the structures inside human and animal brains that give rise to this skill.

Knowledge representation

Knowledge representation and knowledge engineering are central to AI research. Many of the
problems machines are expected to solve will require extensive knowledge about the world.
Among the things that AI needs to represent are: objects, properties, categories and relations
between objects; situations, events, states and time; causes and effects; knowledge about
knowledge (what we know about what other people know); and many other, less well researched
domains. A complete representation of "what exists" is an ontology (borrowing a word from
traditional philosophy), of which the most general are called upper ontologies.

Among the most difficult problems in knowledge representation are:

Default reasoning and the qualification problem

Many of the things people know take the form of "working assumptions." For example, if
a bird comes up in conversation, people typically picture an animal that is fist sized,
sings, and flies. None of these things are true about all birds. John McCarthy identified
this problem in 1969 as the qualification problem: for any commonsense rule that AI
researchers care to represent, there tend to be a huge number of exceptions. Almost
nothing is simply true or false in the way that abstract logic requires. AI research has
explored a number of solutions to this problem.
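
A small illustrative sketch of this kind of default reasoning is given below; the categories,
properties and exceptions are our own toy examples, chosen to mirror the bird discussion above
rather than any particular AI system.

    # A toy default-reasoning lookup: category defaults with per-individual exceptions.
    # Everything here (categories, properties, exceptions) is illustrative, not a standard library.

    DEFAULTS = {
        "bird": {"flies": True, "sings": True, "size": "fist-sized"},
    }

    EXCEPTIONS = {
        "penguin": {"flies": False, "sings": False},   # penguins are birds that do not fly
        "ostrich": {"flies": False, "size": "large"},
    }

    def property_of(kind, category, prop):
        """Return a property value, letting exceptions override category defaults."""
        value = DEFAULTS.get(category, {}).get(prop)
        value = EXCEPTIONS.get(kind, {}).get(prop, value)
        return value

    print(property_of("sparrow", "bird", "flies"))   # True  (no exception recorded)
    print(property_of("penguin", "bird", "flies"))   # False (exception overrides the default)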

Planning

Intelligent agents must be able to set goals and achieve them. They need a way to visualize the
future (they must have a representation of the state of the world and be able to make predictions
about how their actions will change it) and be able to make choices that maximize the utility (or
"value") of the available choices.

In classical planning problems, the agent can assume that it is the only thing acting on the world
and it can be certain what the consequences of its actions may be. However, if this is not true, it
must periodically check if the world matches its predictions and it must change its plan as this
becomes necessary, requiring the agent to reason under uncertainty.
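
To make the classical planning setting above concrete, the following sketch searches breadth-first
through sequences of actions for one that reaches a goal state. The states, actions and goal are
invented toy values, not drawn from any specific planner.

    from collections import deque

    # Toy classical planner: breadth-first search over action sequences.
    # States are sets of facts; the actions and goal are illustrative placeholders.

    ACTIONS = {
        "pick_up_key": lambda s: s | {"has_key"},
        "unlock_door": lambda s: s | {"door_unlocked"} if "has_key" in s else s,
        "open_door":   lambda s: s | {"door_open"} if "door_unlocked" in s else s,
    }

    def plan(start, goal):
        """Return a list of action names that transforms start into a goal-satisfying state."""
        frontier = deque([(frozenset(start), [])])
        seen = {frozenset(start)}
        while frontier:
            state, steps = frontier.popleft()
            if goal <= state:                      # every goal fact holds in this state
                return steps
            for name, effect in ACTIONS.items():
                nxt = frozenset(effect(set(state)))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
        return None

    print(plan(set(), {"door_open"}))
    # ['pick_up_key', 'unlock_door', 'open_door']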

Multi-agent planning uses the cooperation and competition of many agents to achieve a given
goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.

Learning

Machine learning has been central to AI research from the beginning. Unsupervised learning is
the ability to find patterns in a stream of input. Supervised learning includes both classification
and numerical regression. Classification is used to determine what category something belongs
in, after seeing a number of examples of things from several categories. Regression takes a set of
numerical input/output examples and attempts to discover a continuous function that would
generate the outputs from the inputs. In reinforcement learning the agent is rewarded for good
responses and punished for bad ones. These can be analyzed in terms of decision theory, using
concepts like utility. The mathematical analysis of machine learning algorithms and their
performance is a branch of theoretical computer science known as computational learning
theory.
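
The regression setting described above can be illustrated in a few lines of code: given numerical
input/output examples, fit a straight line by ordinary least squares. The data points below are
invented for illustration.

    # Least-squares fit of y = a*x + b to invented example data (supervised regression).

    xs = [1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [2.1, 4.0, 6.2, 7.9, 10.1]        # roughly y = 2x

    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n

    # Closed-form slope and intercept for simple linear regression.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x

    print(f"learned function: y = {a:.2f} * x + {b:.2f}")
    print("prediction for x = 6:", a * 6 + b)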

Natural language processing

Natural language processing gives machines the ability to read and understand the languages that
humans speak. Many researchers hope that a sufficiently powerful natural language processing
system would be able to acquire knowledge on its own, by reading the existing text available
over the internet. Some straightforward applications of natural language processing include
information retrieval (or text mining) and machine translation.

Motion and manipulation

ASIMO uses sensors and intelligent algorithms to avoid obstacles and navigate stairs.

The field of robotics is closely related to AI. Intelligence is required for robots to be able to
handle such tasks as object manipulation and navigation, with sub-problems of localization
(knowing where you are), mapping (learning what is around you) and motion planning (figuring
out how to get there).

Perception

Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar
and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze
visual input. A few selected subproblems are speech recognition, facial recognition and object
recognition.

Social intelligence
Kismet, a robot with rudimentary social skills

Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict
the actions of others, by understanding their motives and emotional states. (This involves
elements of game theory, decision theory, as well as the ability to model human emotions and the
perceptual skills to detect emotions.) Second, for good human-computer interaction, an intelligent
machine needs to display emotions. At the very least it must appear polite and sensitive to
the humans it interacts with. At best, it should have normal emotions itself.

Creativity

TOPIO, a robot that can play table tennis, developed by TOSY.


A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological
perspective) and practically (via specific implementations of systems that generate outputs that
can be considered creative). A related area of computational research is Artificial Intuition and
Artificial Imagination.

General intelligence
Most researchers hope that their work will eventually be incorporated into a machine with
general intelligence (known as strong AI), combining all the skills above and exceeding
human abilities at most or all of them. A few believe that anthropomorphic features like
artificial consciousness or an artificial brain may be required for such a project.

Many of the problems above are considered AI-complete: to solve one problem, you must solve
them all. For example, even a straightforward, specific task like machine translation requires that
the machine follow the author's argument (reason), know what is being talked about
(knowledge), and faithfully reproduce the author's intention (social intelligence). Machine
translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well
as humans can do it.

Chapter.2: Approaches
There is no established unifying theory or paradigm that guides AI research. Researchers
disagree about many issues. A few of the most long standing questions that have remained
unanswered are these: should artificial intelligence simulate natural intelligence, by studying
psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to
aeronautical engineering? Can intelligent behavior be described using simple, elegant principles
(such as logic or optimization)? Or does it necessarily require solving a large number of
completely unrelated problems? Can intelligence be reproduced using high-level symbols,
similar to words and ideas? Or does it require "sub-symbolic" processing?

Cybernetics and brain simulation

There is no consensus on how closely the brain should be simulated.

In the 1940s and 1950s, a number of researchers explored the connection between neurology,
information theory, and cybernetics. Some of them built machines that used electronic networks
to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins
Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton
University and the Ratio Club in England. By 1960, this approach was largely abandoned,
although elements of it would be revived in the 1980s.
Symbolic

When access to digital computers became possible in the middle 1950s, AI research began to
explore the possibility that human intelligence could be reduced to symbol manipulation. The
research was centered in three institutions: CMU, Stanford and MIT, and each one developed its
own style of research. John Haugeland named these approaches to AI "good old fashioned AI" or
"GOFAI".

Cognitive simulation

Economist Herbert Simon and Allen Newell studied human problem solving skills and
attempted to formalize them, and their work laid the foundations of the field of artificial
intelligence, as well as cognitive science, operations research and management science.
Their research team used the results of psychological experiments to develop programs
that simulated the techniques that people used to solve problems. This tradition, centered
at Carnegie Mellon University, would eventually culminate in the development of the
Soar architecture in the mid-1980s.

Logic based

Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate
human thought, but should instead try to find the essence of abstract reasoning and
problem solving, regardless of whether people used the same algorithms. His laboratory
at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems,
including knowledge representation, planning and learning. Logic was also the focus of the
work at the University of Edinburgh and elsewhere in Europe, which led to the
development of the programming language Prolog and the science of logic programming.

"Anti-logic" or "scruffy"

Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that solving
difficult problems in vision and natural language processing required ad-hoc solutions –
they argued that there was no simple and general principle (like logic) that would capture
all the aspects of intelligent behavior. Roger Schank described their "anti-logic"
approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford).
Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy"
AI, since they must be built by hand, one complicated concept at a time.

Knowledge based
When computers with large memories became available around 1970, researchers from
all three traditions began to build knowledge into AI applications. This "knowledge
revolution" led to the development and deployment of expert systems (introduced by
Edward Feigenbaum), the first truly successful form of AI software. The knowledge
revolution was also driven by the realization that enormous amounts of knowledge would
be required by many simple AI applications.

Sub-symbolic

During the 1960s, symbolic approaches had achieved great success at simulating high-level
thinking in small demonstration programs. Approaches based on cybernetics or neural networks
were abandoned or pushed into the background. By the 1980s, however, progress in symbolic AI
seemed to stall and many believed that symbolic systems would never be able to imitate all the
processes of human cognition, especially perception, robotics, learning and pattern recognition.
A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.

Bottom-up, embodied, situated, behavior-based or nouvelle AI

Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic
AI and focused on the basic engineering problems that would allow robots to move and
survive. Their work revived the non-symbolic viewpoint of the early cybernetics
researchers of the 50s and reintroduced the use of control theory in AI. This coincided
with the development of the embodied mind thesis in the related field of cognitive
science: the idea that aspects of the body (such as movement, perception and
visualization) are required for higher intelligence.

Computational Intelligence

Interest in neural networks and "connectionism" was revived by David Rumelhart and
others in the middle 1980s. These and other sub-symbolic approaches, such as fuzzy
systems and evolutionary computation, are now studied collectively by the emerging
discipline of computational intelligence.

Statistical

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific
subproblems. These tools are truly scientific, in the sense that their results are both measurable
and verifiable, and they have been responsible for many of AI's recent successes. The shared
mathematical language has also permitted a high level of collaboration with more established
fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig
describe this movement as nothing less than a "revolution" and "the victory of the neats."

Integrating the approaches


Intelligent agent paradigm
An intelligent agent is a system that perceives its environment and takes actions that
maximize its chances of success. The simplest intelligent agents are programs that solve
specific problems. The most complicated intelligent agents are rational, thinking humans.
The paradigm gives researchers license to study isolated problems and find solutions that
are both verifiable and useful, without agreeing on one single approach. An agent that
solves a specific problem can use any approach that works — some agents are symbolic
and logical, some are sub-symbolic neural networks and others may use new approaches.
The paradigm also gives researchers a common language to communicate with other
fields—such as decision theory and economics—that also use concepts of abstract agents.
The intelligent agent paradigm became widely accepted during the 1990s.
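
A minimal sketch of this agent abstraction is shown below: an agent maps percepts to actions
through some function. The thermostat-style percepts and actions are purely illustrative
assumptions, not a system from the literature.

    # Minimal agent interface: an agent chooses an action from the percepts it receives.
    # The "room temperature" percepts and actions below are illustrative only.

    class SimpleReflexAgent:
        """Chooses an action from the latest percept alone (no internal world model)."""

        def __init__(self):
            self.percepts = []          # remembered percept sequence

        def act(self, percept):
            self.percepts.append(percept)
            temperature = percept["temperature"]
            if temperature < 18:
                return "turn_heater_on"
            if temperature > 24:
                return "turn_heater_off"
            return "do_nothing"

    agent = SimpleReflexAgent()
    for reading in (15, 21, 27):
        print(reading, "->", agent.act({"temperature": reading}))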

Agent architectures and cognitive architectures

Researchers have designed systems to build intelligent systems out of interacting
intelligent agents in a multi-agent system. A system with both symbolic and sub-symbolic
components is a hybrid intelligent system, and the study of such systems is artificial
intelligence systems integration. A hierarchical control system provides a bridge between
sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest
levels, where relaxed time constraints permit planning and world modelling. Rodney
Brooks' subsumption architecture was an early proposal for such a hierarchical system.

Chapter.3: Tools
In the course of 50 years of research, AI has developed a large number of tools to solve the most
difficult problems in computer science. A few of the most general of these methods are discussed
below.

Search and optimization

Many problems in AI can be solved in theory by intelligently searching through many possible
solutions: Reasoning can be reduced to performing a search. For example, logical proof can be
viewed as searching for a path that leads from premises to conclusions, where each step is the
application of an inference rule. Planning algorithms search through trees of goals and subgoals,
attempting to find a path to a target goal, a process called means-ends analysis. Robotics
algorithms for moving limbs and grasping objects use local searches in configuration space.
Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches are rarely sufficient for most real world problems: the search space
(the number of places to search) quickly grows to astronomical numbers. The result is a search
that is too slow or never completes. The solution, for many problems, is to use "heuristics" or
"rules of thumb" that eliminate choices that are unlikely to lead to the goal (called "pruning the
search tree"). Heuristics supply the program with a "best guess" for what path the solution lies
on.
A very different kind of search came to prominence in the 1990s, based on the mathematical
theory of optimization. For many problems, it is possible to begin the search with some form of a
guess and then refine the guess incrementally until no more refinements can be made. These
algorithms can be visualized as blind hill climbing: we begin the search at a random point on the
landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top.
Other optimization algorithms are simulated annealing, beam search and random optimization.
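
The hill-climbing idea just described can be sketched in a few lines: start from a random guess
and keep taking small steps that improve the objective until no step helps. The one-dimensional
objective below is an arbitrary example with a single peak.

    import random

    # Blind hill climbing on an arbitrary one-dimensional objective (illustrative only).

    def objective(x):
        return -(x - 3.0) ** 2 + 9.0      # a single peak at x = 3

    def hill_climb(steps=10_000, step_size=0.01):
        x = random.uniform(-10, 10)       # start the search at a random point
        for _ in range(steps):
            candidates = (x + step_size, x - step_size)
            best = max(candidates, key=objective)
            if objective(best) <= objective(x):
                break                     # no uphill move left: a local maximum was reached
            x = best
        return x

    print("found peak near x =", round(hill_climb(), 2))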

Evolutionary computation uses a form of optimization search. For example, such methods may begin with
a population of organisms (the guesses) and then allow them to mutate and recombine, selecting
only the fittest to survive each generation (refining the guesses). Forms of evolutionary
computation include swarm intelligence algorithms (such as ant colony or particle swarm
optimization) and evolutionary algorithms (such as genetic algorithms and genetic
programming).

Logic

Logic is used for knowledge representation and problem solving, but it can be applied to other
problems as well. For example, the satplan algorithm uses logic for planning and inductive logic
programming is a method for learning.

Several different forms of logic are used in AI research. Propositional or sentential logic is the
logic of statements which can be true or false. First-order logic also allows the use of quantifiers
and predicates, and can express facts about objects, their properties, and their relations with each
other. Fuzzy logic is a version of first-order logic which allows the truth of a statement to be
represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems
can be used for uncertain reasoning and have been widely used in modern industrial and
consumer product control systems. Subjective logic models uncertainty in a different and more
explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief +
uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from
probabilistic statements that an agent makes with high confidence. Default logics, non-
monotonic logics and circumscription are forms of logic designed to help with default reasoning
and the qualification problem. Several extensions of logic have been designed to handle specific
domains of knowledge, such as: description logics; situation calculus, event calculus and fluent
calculus (for representing events and time); causal calculus; belief calculus; and modal logics.
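
As a concrete illustration of logic used for knowledge representation and problem solving, the
sketch below performs forward chaining over propositional Horn clauses: whenever all premises
of a rule are known facts, its conclusion is added. The rules and facts are toy examples of our own.

    # Forward chaining over propositional Horn clauses (toy knowledge base).

    RULES = [
        ({"has_feathers", "lays_eggs"}, "is_bird"),
        ({"is_bird", "can_fly"}, "can_migrate"),
    ]

    def forward_chain(facts):
        """Repeatedly apply rules whose premises all hold, until nothing new follows."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"has_feathers", "lays_eggs", "can_fly"}))
    # {'has_feathers', 'lays_eggs', 'can_fly', 'is_bird', 'can_migrate'}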

Probabilistic methods for uncertain reasoning

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the
agent to operate with incomplete or uncertain information. AI researchers have devised a number
of powerful tools to solve these problems using methods from probability theory and economics.

Bayesian networks are a very general tool that can be used for a large number of problems:
reasoning (using the Bayesian inference algorithm), learning (using the expectation-
maximization algorithm), planning (using decision networks) and perception (using dynamic
Bayesian networks). Probabilistic algorithms can also be used for filtering, prediction, smoothing
and finding explanations for streams of data, helping perception systems to analyze processes
that occur over time (e.g., hidden Markov models or Kalman filters).
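
A minimal sketch of the kind of Bayesian update underlying these tools is shown below: Bayes'
rule combines a prior belief with the likelihood of an observation. The disease-test numbers are
invented purely for illustration.

    # Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E), with invented example numbers.

    prior_disease    = 0.01     # P(H): prior probability of the condition
    p_pos_if_disease = 0.95     # P(E | H): test sensitivity
    p_pos_if_healthy = 0.05     # P(E | not H): false-positive rate

    p_positive = (p_pos_if_disease * prior_disease
                  + p_pos_if_healthy * (1 - prior_disease))      # P(E), total probability

    posterior = p_pos_if_disease * prior_disease / p_positive    # P(H | E)
    print(f"P(disease | positive test) = {posterior:.3f}")       # about 0.161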

A key concept from the science of economics is "utility": a measure of how valuable something
is to an intelligent agent. Precise mathematical tools have been developed that analyze how an
agent can make choices and plan, using decision theory, decision analysis, information value
theory. These tools include models such as Markov decision processes, dynamic decision
networks, game theory and mechanism design.
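
The notion of utility can likewise be sketched briefly: under uncertainty, a rational agent chooses
the action with the highest expected utility. The actions, outcome probabilities and utility values
below are invented placeholders.

    # Choosing the action with maximum expected utility (toy numbers).

    ACTIONS = {
        "take_umbrella":  [(0.3, 6), (0.7, 4)],    # (probability of outcome, utility)
        "leave_umbrella": [(0.3, -10), (0.7, 8)],  # gets soaked if it rains
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    best = max(ACTIONS, key=lambda a: expected_utility(ACTIONS[a]))
    for action, outcomes in ACTIONS.items():
        print(action, "->", expected_utility(outcomes))
    print("best action:", best)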

Classifiers and statistical learning methods


The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond")
and controllers ("if shiny then pick up"). Controllers do, however, also classify conditions before
inferring actions, and therefore classification forms a central part of many AI systems. Classifiers
are functions that use pattern matching to determine a closest match. They can be tuned
according to examples, making them very attractive for use in AI. These examples are known as
observations or patterns. In supervised learning, each pattern belongs to a certain predefined
class. A class can be seen as a decision that has to be made. All the observations combined with
their class labels are known as a data set. When a new observation is received, that observation is
classified based on previous experience.

A classifier can be trained in various ways; there are many statistical and machine learning
approaches. The most widely used classifiers are the neural network, kernel methods such as the
support vector machine, k-nearest neighbor algorithm, Gaussian mixture model, naive Bayes
classifier, and decision tree. The performance of these classifiers has been compared over a
wide range of tasks. Classifier performance depends greatly on the characteristics of the data to
be classified. There is no single classifier that works best on all given problems; this is also
referred to as the "no free lunch" theorem. Determining a suitable classifier for a given problem
is still more an art than a science.
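
As one concrete instance of a classifier that uses pattern matching to find a closest match, here is
a small k-nearest-neighbour sketch; the two-dimensional observations and class labels are
invented for illustration.

    from collections import Counter
    import math

    # k-nearest-neighbour classification over invented 2-D observations.

    TRAINING_SET = [
        ((1.0, 1.2), "cat"), ((0.8, 1.0), "cat"), ((1.1, 0.9), "cat"),
        ((3.0, 3.2), "dog"), ((3.3, 2.9), "dog"), ((2.8, 3.1), "dog"),
    ]

    def classify(observation, k=3):
        """Label a new observation by majority vote among its k closest training examples."""
        by_distance = sorted(TRAINING_SET,
                             key=lambda item: math.dist(item[0], observation))
        votes = Counter(label for _, label in by_distance[:k])
        return votes.most_common(1)[0][0]

    print(classify((1.0, 1.1)))   # expected: 'cat'
    print(classify((3.1, 3.0)))   # expected: 'dog'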

Neural networks
A neural network is an interconnected group of nodes, akin to the vast network of neurons in the
human brain.

The study of artificial neural networks began in the decade before the field of AI research was
founded, in the work of Walter Pitts and Warren McCulloch. Other important early researchers
were Frank Rosenblatt, who invented the perceptron and Paul Werbos who developed the
backpropagation algorithm.

The main categories of networks are acyclic or feedforward neural networks (where the signal
passes in only one direction) and recurrent neural networks (which allow feedback). Among the
most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis
networks. Among recurrent networks, the most famous is the Hopfield net, a form of attractor
network, which was first described by John Hopfield in 1982. Neural networks can be applied to
the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian
learning and competitive learning.
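
A minimal sketch of the perceptron mentioned above follows: a single neuron with adjustable
weights, trained by nudging the weights whenever it misclassifies an example. The tiny AND-gate
dataset is only an illustration.

    # A single perceptron trained with the classic perceptron learning rule.
    # The AND-gate truth table serves as a tiny illustrative dataset.

    DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    weights = [0.0, 0.0]
    bias = 0.0
    learning_rate = 0.1

    def predict(x):
        activation = sum(w * xi for w, xi in zip(weights, x)) + bias
        return 1 if activation > 0 else 0

    for _ in range(20):                          # a few passes over the data
        for x, target in DATA:
            error = target - predict(x)          # -1, 0 or +1
            weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
            bias += learning_rate * error

    print([predict(x) for x, _ in DATA])         # expected: [0, 0, 0, 1]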

Jeff Hawkins argues that research in neural networks has stalled because it has failed to model
the essential properties of the neocortex, and has suggested a model (Hierarchical Temporal
Memory) that is loosely based on neurological research.

Control theory

Control theory, the grandchild of cybernetics, has many important applications, especially in
robotics.

Languages

AI researchers have developed several specialized languages for AI research, including Lisp and
Prolog.

Chapter.4: Evaluating progress


In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent, now
known as the Turing test. This procedure allows almost all the major problems of artificial
intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.

Artificial intelligence can also be evaluated on specific problems such as small problems in
chemistry, hand-writing recognition and game-playing. Such tests have been termed subject
matter expert Turing tests. Smaller problems provide more achievable goals and there are an
ever-increasing number of positive results.

The broad classes of outcome for an AI test are:

 Optimal: it is not possible to perform better
 Strong super-human: performs better than all humans
 Super-human: performs better than most humans
 Sub-human: performs worse than most humans

For example, performance at draughts is optimal, performance at chess is super-human and
nearing strong super-human, and performance at many everyday tasks performed by humans is
sub-human.

A quite different approach measures machine intelligence through tests which are developed
from mathematical definitions of intelligence. Examples of these kinds of tests began in the late
nineties, with intelligence tests devised using notions from Kolmogorov complexity and data
compression. Two major advantages of mathematical definitions are their applicability to
nonhuman intelligences and their absence of a requirement for human testers.

Chapter.5: Applications
Artificial intelligence techniques are pervasive and are too numerous to list. Frequently, when a
technique reaches mainstream use, it is no longer considered artificial intelligence; this
phenomenon is described as the AI effect.

Competitions and prizes

There are a number of competitions and prizes to promote research in artificial intelligence. The
main areas promoted are: general machine intelligence, conversational behavior, data-mining,
driverless cars, robot soccer and games.
Platforms

A platform (or "computing platform") is defined as "some sort of hardware architecture or
software framework (including application frameworks), that allows software to run." As Rodney
Brooks pointed out many years ago, it is not just the artificial intelligence software that defines
the AI features of the platform, but rather the actual platform itself that affects the AI that results,
i.e., we need to be working out AI problems on real world platforms rather than in isolation.

A wide variety of platforms has allowed different aspects of AI to develop, ranging from expert
systems (albeit PC-based, but still entire real-world systems) to various robot platforms such as
the widely available Roomba with its open interface.

Conclusion
AI is a common topic in both science fiction and projections about the future of technology and
society. The existence of an artificial intelligence that rivals human intelligence raises difficult
ethical issues, and the potential power of the technology inspires both hopes and fears.

In fiction, AI has appeared fulfilling many roles, including a servant (R2D2 in Star Wars), a law
enforcer (K.I.T.T. in Knight Rider), a comrade (Lt. Commander Data in Star Trek: The Next
Generation), a conqueror/overlord (The Matrix), a dictator (With Folded Hands), an assassin
(Terminator), a sentient race (Battlestar Galactica/Transformers), an extension to human abilities
(Ghost in the Shell) and the savior of the human race (R. Daneel Olivaw in the Foundation
Series).
Mary Shelley's Frankenstein considers a key issue in the ethics of artificial intelligence: if a
machine can be created that has intelligence, could it also feel? If it can feel, does it have the
same rights as a human? The idea also appears in modern science fiction, including the films I,
Robot, Blade Runner and A.I. Artificial Intelligence, in which humanoid machines have the
ability to feel human emotions. This issue, now known as "robot rights", is currently being
considered by, for example, California's Institute for the Future, although many critics believe
that the discussion is premature.

The impact of AI on society is a serious area of study for futurists. Academic sources have
considered such consequences as a decreased demand for human labor, the enhancement of
human ability or experience, and a need for redefinition of human identity and basic values.
Andrew Kennedy, in his musing on the evolution of the human personality, considered that
artificial intelligences or 'new minds' are likely to have severe personality disorders, and
identified four particular types that are likely to arise: the autistic, the collector, the ecstatic, and
the victim. He suggests that they will need humans because of our superior understanding of
personality and the role of the unconscious.

Several futurists argue that artificial intelligence will transcend the limits of progress. Ray
Kurzweil has used Moore's law (which describes the relentless exponential improvement in
digital technology) to calculate that desktop computers will have the same processing power as
human brains by the year 2029. He also predicts that by 2045 artificial intelligence will reach a
point where it is able to improve itself at a rate that far exceeds anything conceivable in the past,
a scenario that science fiction writer Vernor Vinge named the "technological singularity".

Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have
predicted that humans and machines will merge in the future into cyborgs that are more capable
and powerful than either. This idea, called transhumanism, which has roots in Aldous Huxley and
Robert Ettinger, has been illustrated in fiction as well, for example in the manga Ghost in the
Shell and the science-fiction series Dune.

Edward Fredkin argues that "artificial intelligence is the next stage in evolution," an idea first
proposed by Samuel Butler's "Darwin among the Machines" (1863), and expanded upon by
George Dyson in his book of the same name in 1998.

Bibliography
 Luger, George; Stubblefield, William (2004). Artificial Intelligence: Structures and
Strategies for Complex Problem Solving (5th ed.). The Benjamin/Cummings
Publishing Company, Inc. ISBN 0-8053-4780-1. http://www.cs.unm.edu/~luger/ai-final/tocfull.html
 Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann
Publishers. ISBN 978-1-55860-467-4.
 Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach
(2nd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-790395-2.
http://aima.cs.berkeley.edu/
 Poole, David; Mackworth, Alan; Goebel, Randy (1998). Computational Intelligence:
A Logical Approach. New York: Oxford University Press. ISBN 0-19-510270-3.
http://www.cs.ubc.ca/spider/poole/ci.html
 Winston, Patrick Henry (1984). Artificial Intelligence. Reading, Massachusetts:
Addison-Wesley. ISBN 0-201-08259-4.
