Lecture Notes
ARTIFICIAL INTELLIGENCE
COMP 241
Chapter 1: Introduction to AI
This chapter contains an overview and introduction to artificial intelligence. This technology is now considered one of the most important in computer science, and it will be one of the tools used to build the future of all sciences. The following points give some general and basic information about artificial intelligence:
Definitions of AI
There are many definitions of AI with broadly the same meaning; here are some of the major ones:
The best-known definition of AI was written by John McCarthy (considered the father of AI). McCarthy poses the question "What is artificial intelligence?" and answers it as follows:
“It is the science and engineering of making intelligent machines, especially intelligent
computer programs. It is related to the similar task of using computers to understand human
intelligence, but AI does not have to confine itself to methods that are biologically observable”.
1. Artificial Intelligence is a branch of computer science that has four important features:
Thinking capability
Reasoning
Behavior
Performance
2. Artificial intelligence (AI) is a technology and a branch of computer science that studies and develops intelligent machines and software.
3. It is the part of computer science that tries to simulate human intelligence using a computer.
Characteristics of intelligent behavior
There is really no single exact definition of intelligence, but here are some of the major behaviors associated with it:
John McCarthy answered the question "What is intelligence?" as follows:
"Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds
and degrees of intelligence occur in people, many animals and some machines".
The Goals of AI
The advantages of AI
Saving computer storage space, by running and storing small programs.
Acting Humanly: The Turing Test Approach
The Turing Test was designed by Alan Turing in 1950 to provide a satisfactory operational definition of intelligence.
Turing suggested that a computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer. We note that programming a computer to pass the test provides plenty to work on.
The Turing test is a test of a machine's ability to demonstrate intelligence. A human judge
engages in a natural language conversation with one human and one machine, each of which tries
to appear human. All participants are separated from one another. If the judge cannot reliably tell
the machine from the human, the machine is said to have passed the test. In order to test the
machine's intelligence rather than its ability to render words into audio, the conversation is
limited to a text-only channel such as a computer keyboard and screen.
Applications Areas of AI
1. General Problem Solving
There are famous classic problems such as missionaries and cannibals (M-C), the 8-puzzle, 4-queens, mazes, and the Tower of Hanoi. These problems are called general problems and are usually considered a field in which to apply AI. Solving these types of problems requires some degree of intelligence from a human, so by the same standard, a computer program or algorithm that solves them must be intelligent.
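As a minimal illustration (not part of the original notes), the Python sketch below solves the missionaries-and-cannibals puzzle with breadth-first search over its state space; the state encoding and the list of boat moves are assumptions chosen for the example.

```python
# Breadth-first search for the missionaries-and-cannibals puzzle.
from collections import deque

def successors(state):
    """Yield legal states reachable by one boat trip.
    A state is (missionaries_left, cannibals_left, boat_on_left)."""
    m, c, boat = state
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # people carried by the boat
    for dm, dc in moves:
        if boat:
            nm, nc, nboat = m - dm, c - dc, 0
        else:
            nm, nc, nboat = m + dm, c + dc, 1
        if not (0 <= nm <= 3 and 0 <= nc <= 3):
            continue                                    # counts must stay in range
        # on each bank, missionaries (if any) must not be outnumbered
        if (nm and nc > nm) or ((3 - nm) and (3 - nc) > (3 - nm)):
            continue
        yield (nm, nc, nboat)

def bfs(start=(3, 3, 1), goal=(0, 0, 0)):
    """Breadth-first search: returns the shortest sequence of states."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs())   # prints the shortest solution (11 crossings)
```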
2. Expert Systems
An expert system is an intelligent software program that uses a knowledge base and tries to solve problems just like a human expert. So, an expert system is a program that simulates the activities and actions of a human expert in his domain of expertise.
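The sketch below is a toy illustration of this idea: a hypothetical knowledge base of if-then rules and a forward-chaining loop that fires them. The rules and fact names are invented for the example and are not from any real expert system.

```python
# A tiny rule-based "expert system": rules are (conditions, conclusion) pairs.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}, RULES))
# -> includes 'flu_suspected' and 'see_doctor'
```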
3. Robotics
An intelligent robot is a moving device that can take suitable action depending on what is happening in the surrounding environment. It uses different sensors to read from the environment.
4. Natural Language Processing (NLP)
Natural Language Processing (NLP) is the computerized approach to analyzing text, based on both a set of theories and a set of technologies. Being a very active area of research and development, it has no single agreed-upon definition that would satisfy everyone, but there are some aspects that would be part of any knowledgeable person's definition.
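As a small, purely illustrative sketch of one of the most basic NLP steps, the following code tokenizes a sentence and counts word frequencies; the example text is arbitrary.

```python
# Tokenization and word-frequency counting, a first step in many NLP pipelines.
import re
from collections import Counter

def tokenize(text):
    """Lower-case the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

text = "The cat sat on the mat and the cat slept."
print(Counter(tokenize(text)).most_common(2))   # -> [('the', 3), ('cat', 2)]
```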
5. Games Playing
Computer games depend heavily on AI techniques. These games let the player immerse himself in a virtual world and play against the computer, or against another human under the control and supervision of the computer. The most famous game that represents the intelligence of the computer is chess. The IBM Deep Blue supercomputer that defeated World Chess Champion Garry Kasparov in 1997 employed 480 custom chess chips; the computer won the six-game match against Kasparov.
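The sketch below illustrates minimax search, the core idea behind classical game-playing programs (it is not Deep Blue's actual implementation); the hand-written game tree and its leaf values are assumptions made for the example.

```python
# Minimax on a tiny game tree: leaves are position values, inner nodes are lists
# of children. The maximizing player assumes the opponent plays to minimize.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):          # leaf: the position's value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

game_tree = [[3, 5], [2, 9], [0, 7]]            # three moves, each answered by the opponent
print(minimax(game_tree, True))                 # -> 3
```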
6. Machine Learning
These systems create new knowledge by finding previously unknown patterns in data. In contrast to knowledge representation approaches, which model the problem-solving structure of human experts, machine learning systems derive solutions by "learning" patterns in data, with little or no intervention by an expert.
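As a toy illustration of learning from data rather than from expert-written rules, the sketch below fits a nearest-mean classifier to a few labelled numbers; the data and labels are invented for the example.

```python
# A program that "learns" a decision rule directly from labelled examples.
def train(examples):
    """examples: list of (value, label). Learn the mean value of each class."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(value, means):
    """Predict the class whose learned mean is closest to the value."""
    return min(means, key=lambda label: abs(value - means[label]))

data = [(1.0, "small"), (1.2, "small"), (3.9, "large"), (4.3, "large")]
means = train(data)
print(classify(2.0, means))   # -> 'small'
```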
7. Artificial Neural Networks
There are three major models of artificial neural networks, all of which try to simulate the organization of the biological (natural) model of human nerve cells and their mechanism of work. Among these models are:
1- Mathematical model.
2- Chemical model.
The mathematical model simulates the functions of the original neural network using mathematical operations, so it is capable of dealing with many problems just like the human brain.
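The sketch below illustrates the mathematical model at its smallest scale: a single artificial neuron computing a weighted sum of its inputs followed by a sigmoid activation. The weights, bias, and inputs are arbitrary example values.

```python
# A single artificial neuron: weighted sum of inputs, then a sigmoid activation.
import math

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))    # sigmoid squashes the output into (0, 1)

print(neuron([1.0, 0.5], weights=[0.8, -0.4], bias=0.1))   # -> about 0.67
```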
8. Intelligent Agent
An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators.
A robot agent might have cameras and infrared range finders for sensors and various motors for actuators.
A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
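As a minimal sketch of the perceive-act loop that defines an agent, the following hypothetical thermostat agent maps a sensor percept (the room temperature) to an actuator command; the class and its thresholds are invented for illustration.

```python
# A simple reflex agent: one percept in, one action out.
class ThermostatAgent:
    def __init__(self, target=21.0):
        self.target = target

    def perceive_and_act(self, temperature):
        """Map a percept (room temperature) to an action for the actuator."""
        if temperature < self.target - 0.5:
            return "heater_on"
        if temperature > self.target + 0.5:
            return "heater_off"
        return "no_change"

agent = ThermostatAgent()
for reading in [19.0, 21.2, 23.0]:        # a stream of sensor percepts
    print(reading, "->", agent.perceive_and_act(reading))
```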
9. Pattern Recognition
1- Biometrics: identifying a person by his face image, voice, fingerprint, iris image, ear image, or DNA.
2- Optical Character Recognition (OCR): recognizing handwritten or printed text and translating it into its equivalent digital text.
3- Data Mining: the process of discovering new correlations, patterns, and trends by sifting through large amounts of data stored in repositories, using pattern recognition technologies as well as statistical and mathematical techniques.
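The sketch below shows nearest-neighbour template matching, one of the simplest pattern-recognition techniques used in tasks such as OCR and biometrics; the feature vectors and labels are made up for the example.

```python
# Nearest-neighbour matching: label a sample by its closest stored template.
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def recognise(sample, templates):
    """Return the label of the stored template closest to the sample."""
    return min(templates, key=lambda label: distance(sample, templates[label]))

# Feature vectors for two classes (purely illustrative numbers).
templates = {"zero": [0.9, 0.1, 0.8], "one": [0.1, 0.9, 0.2]}
print(recognise([0.8, 0.2, 0.7], templates))   # -> 'zero'
```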
Categories of AI:
Q: Can artificial intelligence ever match the human mind in every aspect? Can a computer
be “aware” like we are?
This is a fascinating but difficult question. Researchers in the field of artificial intelligence (AI)
have given many different answers to this question over the years. I will summarize some of the
disagreements and encourage you to read more and develop your own views. My summary is
based on Stuart Russell and Peter Norvig’s book, “Artificial Intelligence: A Modern Approach,”
which you can consult yourself for further reading.
Before getting to the issue of building intelligent computers, it is worth mentioning the related
work in biology and biological engineering from the last few decades. In particular, you can
think of cloning technology as a different way of creating “artificial” human life in a laboratory.
This technology has already been successfully applied to clone plants and some animals, though
the attempt to clone humans is still ongoing. It is likely that scientific advances in the next few
decades will make this a real possibility. In some ways, this is not artificial intelligence, since we
would be creating an actual human being, but it is still a way of creating consciousness in a
laboratory as opposed to the usual way (through sexual reproduction).
This said, when people talk about artificial intelligence, they are usually referring to computers
and robots. Unfortunately, it has been difficult for researchers to agree on exactly what “artificial
intelligence” should mean, and to agree on what researchers in the field of AI should be trying to
build. There have been four major categories:
(A) Systems that think like humans. This approach is sometimes called the “cognitive
modeling” approach and defines AI as automating activities associated with human thinking,
such as decision-making, general problem solving, learning, and so on. A major difficulty with
this approach is that in order to be able to say that a computer is thinking like a human, we need
to first figure out how humans think, which is in itself a very hard and still unsolved problem.
This requires conducting psychological experiments and trying to develop a biological and
cognitive theory of how the mind works. Once that theory is good enough, we can use the theory
to design a computer program that matches human behavior.
One example of a system that was trying to do this was the “General Problem Solver” that Allen
Newell and Herbert Simon designed in 1961. They were more interested in having the computer
go through the same reasoning process as human subjects solving the same problems; they were
less concerned with having the program solve the problems correctly. In the early days of AI,
people were often confused about this distinction: they would argue that because a program
solved a problem correctly, the program was a good model of human performance. The issue is
that, of course, humans often don’t solve problems correctly, and a program that really mimics
human behavior should make the same mistakes. These days, people would separate the two
claims, and the field of cognitive science is more concerned with building models of human
thought and behavior, while researchers in artificial intelligence now focus more on getting
programs to solve difficult problems correctly.
This highlights a split between older research in AI (1950s-1970s) and more modern research
(1970s-present) and brings us to the next two approaches. Roughly, the two definitions above
measure success in terms of how close the program is to human behavior, while the two below
will measure success against an ideal concept of intelligence, which is usually called rationality.
Basically, a system is rational if it “does the right thing” based on the information it has access to
at the time.
(B) Systems that think rationally. The attempt to codify “thinking the right way” goes back to
the Greek philosopher Aristotle’s work on formal logic. He suggested some patterns for
arguments that always yielded the correct conclusions when given correct premises. For
example, “Socrates is a man; all men are mortal; therefore, Socrates is mortal.” Originally, these
laws of thought were intended to be the ones that governed the operation of the mind. Even by
1965, there were programs that could, in principle, solve any solvable problem that could be
written in logical notation. Through the 1970s, there was a strong movement within AI called
the logicist school that tried to improve on such programs to try to create intelligent systems.
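As a minimal sketch of mechanized "right thinking", the code below encodes the Socrates syllogism from the text as a rule and derives the conclusion by simple forward chaining; the fact and rule representation is an assumption chosen for illustration, not any particular logicist system.

```python
# Facts are (predicate, subject) pairs; a rule maps premise predicates to a conclusion.
facts = {("man", "Socrates")}
rules = [(("man",), "mortal")]     # "all men are mortal"

def infer(facts, rules):
    """Apply each rule to every matching fact until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate in premises and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(infer(facts, rules))   # -> includes ('mortal', 'Socrates')
```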
There were two problems with this approach. First, it is hard to take informal knowledge and
state it in formal logical notation, particularly when there is some uncertainty about the
information. Second, these programs were often very slow, and solving even small problems
proved impractical with these methods. This brings us to the most common modern view.
(C) Systems that act like humans. This approach is closest to the intuitive definition of AI
that most people have. The Turing Test, proposed by the famous computer scientist Alan Turing
in 1950, was designed to provide a satisfying operational definition of intelligence. By
“operational definition,” I mean that instead of proposing a long and likely controversial list of
conditions needed for intelligence, Turing suggested a simple test that could be applied to a
system to determine whether or not the system was intelligent. The test works the following way:
a human being poses some questions to the computer and then gets some written responses back.
If the human being cannot tell whether the answers were written by a person or a computer, then the
computer passes and is considered to be “intelligent.”
In order to pass the Turing test, a computer would need to have a number of capabilities, such as
the ability to communicate successfully in English or other natural language, to effectively store
what it knows and reads, to perform reasoning to use the information it is given to answer
questions and draw new conclusions, and to learn from conversational patterns over time.
Researchers have not spent much time trying to build systems that actually pass the Turing test,
but trying to solve these underlying problems has been a major part of the research effort in AI.
(D) Systems that act rationally. This attempts to define artificial intelligence to be concerned
with the design and construction of rational agents, which are computer systems that act to
achieve the best outcome, or when there is uncertainty involved, the best expected outcome.
Agents are assumed to have additional features that separate them from normal programs, such
as operating by themselves, perceiving and reacting to their environment, and adapting to
change.
One major advantage of this approach is that it is much easier to develop scientifically. The
standard of rationality is very clearly defined and very general, while we still do not have a good
understanding of human thought and behavior. Moreover, it is more useful to build systems that
can do the right thing in a given situation, and solve problems better than humans might, rather
than trying to design something that makes all the same mistakes humans do.
Because this definition is so general, however, it also includes many kinds of systems that most
people do not think of as real AI. For example, a spam filter is considered a rational agent
because it adapts over time depending on the email you receive. Its goal is to behave rationally
by correctly classifying every email as spam or not spam. Similarly, some researchers are
working on cars that can drive themselves or helicopters that can fly themselves. A robotic car is
certainly not a replacement for the human mind, but it does need to be able to learn, adapt, and
exert control intelligently in the context of driving. This kind of work is considered to be part of
modern AI.
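As a toy sketch of such an adaptive agent, the code below implements a very simple spam filter that counts words from messages the user has labelled and classifies new mail accordingly; the messages and the counting scheme are invented for the example (a real filter would use a probabilistic model).

```python
# A spam filter as a simple adaptive agent: it updates itself from labelled mail.
from collections import Counter

class SpamFilter:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def learn(self, message, is_spam):
        """Update word counts from a message the user has labelled."""
        target = self.spam_words if is_spam else self.ham_words
        target.update(message.lower().split())

    def classify(self, message):
        """Score a new message by which class its words were seen in more often."""
        words = message.lower().split()
        spam_score = sum(self.spam_words[w] for w in words)
        ham_score = sum(self.ham_words[w] for w in words)
        return "spam" if spam_score > ham_score else "not spam"

f = SpamFilter()
f.learn("win money now", is_spam=True)
f.learn("meeting agenda for monday", is_spam=False)
print(f.classify("win free money"))   # -> 'spam'
```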
To summarize, your first question has been approached in different ways and even redefined
over the years. Human beings are themselves agents of a certain kind that do certain things well
(e.g. commonsense reasoning or communicating) and other things poorly (e.g. carrying out
complex calculations quickly). Computers are different kinds of agents that, in general, appear to
have complementary capabilities (e.g. they can be very logical and calculate very quickly but
they cannot as yet engage in satisfactory commonsense reasoning). Whether computers will be
able to match humans where they perform well remains an open question. While there is no a
priori reason why such a goal is not realizable, my own view is that we will get progressively
closer to it without ever quite reaching it.
Your second question was about whether a computer could be “aware” in the same way that
humans are, if it could have a mind in exactly the same sense human beings have minds. In its
modern form, AI is no longer trying to accomplish this goal, but there are still ongoing
philosophical discussions about whether the goal is even possible.
The disagreement hinges, in part, on the difference between “strong AI” and “weak AI” — on
the difference between simulating a mind and actually having a mind. Proponents of weak AI say
that even an accurate simulation is only a model of the mind, while those of strong AI argue that
the correct simulation really is a mind, that the appropriately programmed computer with the
right inputs and outputs would have a mind in exactly the same sense human beings have minds.
This is a more philosophical argument because it has nothing to do with how intelligently a
computer can act, only with whether we can really call that having a “mind” in the full sense of
the word.
It would be too difficult to summarize the full argument here, but much of the debate centers on
a famous and controversial thought experiment of the philosopher John Searle called
the Chinese Room Argument. The Chinese Room is an argument against the possibility of
strong AI, of true artificial intelligence. I would suggest reading Searle’s original paper, called
“Minds, Brains, and Programs”, or the following web page, which also explains the many
responses to it over the years: http://plato.stanford.edu/entries/chinese-room/. Needless to say,
the argument is still unresolved, but future advances in the fields of cognitive science,
neuroscience, and artificial intelligence may shed more light on the debate in the years to come.
In closing, I will point out that most of our tools and technologies – telephones, airplanes,
calculators, and many other examples – are usually designed to extend our capabilities, not
replicate them. Computers are also ultimately best viewed in this way, although the philosophical
questions surrounding them are undoubtedly well worth asking.
History of AI: The early history of AI (before 1965):
The first work on AI was developed by Warren McCulloch and Walter Pitts in 1943.
Karel Capek's play "R.U.R." (Rossum's Universal Robots) produced in 1921 (London opening,
1923). - First use of the word 'robot' in English.
Emil Post proves that production systems are a general computational mechanism (1943). See
Ch.2 of Rule Based Expert Systems for the uses of production systems in AI.
A.M. Turing published "Computing Machinery and Intelligence" (1950). - Introduction of
Turing Test as a way of operationalizing a test of intelligent behavior.
Claude Shannon published detailed analysis of chess playing as search in "Programming a
computer to play chess" (1950).
Isaac Asimov published his three laws of robotics (1950).
Arthur Samuel (IBM) wrote the first game-playing program, for checkers, to achieve sufficient
skill to challenge a world champion. Samuel's machine learning programs were responsible for
the high performance of the checkers player. (1952-1962)
In 1956 John McCarthy coined the term "artificial intelligence" as the topic of the
Dartmouth Conference, the first conference devoted to the subject.
In 1957 the General Problem Solver (GPS) demonstrated by Newell, Shaw & Simon.
The 60's period and beyond (1960 onward)
In 1967: the first expert system program, Dendral (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford), was demonstrated; it interpreted mass spectra of organic chemical compounds. It was the first successful knowledge-based program for scientific reasoning.
In 1969: Roger Schank (Stanford) defined conceptual dependency model for natural
language understanding. Later developed (in PhD dissertations at Yale) for use in story
understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding
memory by Janet Kolodner.
In 1975: Marvin Minsky published his widely-read and influential article on Frames as a
representation of knowledge, in which many ideas about schemas and semantic links are
brought together.
In 1979: Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated
the CHI system for automatic programming.
In 1979: The Stanford Cart, built by Hans Moravec, becomes the first computer-
controlled, autonomous vehicle when it successfully traverses a chair-filled room and
circumnavigates the Stanford AI Lab.
In 1979: Drew McDermott & Jon Doyle at MIT, and John McCarthy at Stanford begin
publishing work on non-monotonic logics and formal aspects of truth maintenance.
In 1980: Lee Erman, Rick Hayes-Roth, Victor Lesser and Raj Reddy published the first
description of the blackboard model, as the framework for the HEARSAY-II speech
understanding system.
In 1984: Neural Networks become widely used with the Backpropagation algorithm (first
described by Werbos).
In 1987: Marvin Minsky publishes The Society of Mind, a theoretical description of the
mind as a collection of cooperating agents.
In the 90's period, there were major advances in all areas of AI, with significant demonstrations in machine
learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling,
uncertain reasoning, data mining, natural language understanding and translation, vision,
virtual reality, games, and other topics.
In 1997: The Deep Blue chess program beats the current world chess champion, Garry
Kasparov, in a widely followed match.
In 1997: First official RoboCup soccer match featuring table-top matches with 40 teams of interacting robots and over 5000 spectators.
In the late 90's: Web crawlers and other AI-based information extraction programs become essential in the widespread use of the world-wide web.
In the late 90's: Demonstration of an Intelligent Room and Emotional Agents at MIT's AI
Lab. Initiation of work on the Oxygen Architecture, which connects mobile and stationary
computers in an adaptive network.
Interactive robot pets (a.k.a. "smart toys") become commercially available, realizing the vision of 18th-century novelty toy makers.
Cynthia Breazeal at MIT publishes her dissertation on Sociable Machines, describing KISMET, a robot with a face that expresses emotions. The Nomad robot explores remote regions of Antarctica looking for meteorite samples. Biometrics come into commercial use.
The Future of AI
There is much research in the field of AI; some of it is considered strange and exciting, while other work is applicable and acceptable. The following are some of the desired future applications of AI:
1- Machine-rights societies.
2- Human-machine hybrids (i.e. machines including biological human parts, or humans with electronic and mechanical parts).
Exercises:
References:
Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach, Second Edition.