#1 Lecture
Introduction
Humans call themselves wise because of their intelligence. For thousands of years,
humans have tried to understand how they think, perceive, understand, predict, and
manipulate a large and complicated world. The field of artificial intelligence (AI) is an
attempt not just to understand intelligence but also to build intelligent entities.
AI is one of the relatively new fields in science and engineering. Work started soon after
World War II, and the name itself was coined in 1956. AI currently encompasses a huge
variety of subfields, ranging from the general to the specific, such as playing chess, proving
mathematical theorems, writing poetry, driving a car on a crowded street, and diagnosing
disease. AI is relevant to any intellectual task; it is truly a universal field.
Figure 1 presents eight definitions of AI. These definitions are laid out along two
dimensions. The definitions on top are concerned with thought processes and reasoning,
whereas the ones on the bottom address behavior. The definitions on the left measure
success in terms of fidelity to human performance, whereas the ones on the right
measure against an ideal performance measure, called rationality. A system is rational if
it does the “right thing” given what it knows.
Page |1
Al-Mustaqbal University Computer Engineering Department
College of Engineering & Technology Artificial Intelligence
These six disciplines compose most of AI. Yet AI researchers have devoted little
effort to passing the Turing Test, believing that it is more important to study the
underlying principles of intelligence than to duplicate an exemplar. For example,
the quest for “artificial flight” succeeded when researchers stopped imitating
birds and started using wind tunnels and learning about aerodynamics.
concerned with comparing the trace of its reasoning steps to traces of human
subjects solving the same problems. The interdisciplinary field of cognitive
science brings together computer models from AI and experimental techniques
from psychology to construct precise and testable theories of the human mind.
3. Thinking rationally: The “laws of thought” approach
The Greek philosopher Aristotle was one of the first to attempt to codify “right
thinking”, that is, irrefutable reasoning processes. His syllogisms provided patterns
for argument structures that always yielded correct conclusions when given
correct premises. For example, “Socrates is a man; all men are mortal; therefore,
Socrates is mortal.” These laws of thought were supposed to govern the operation
of the mind; their study initiated the field called logic.
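The syllogism above can be carried out mechanically. The sketch below is an illustration added here, not part of the lecture material: it encodes the premise “Socrates is a man” as a fact and “all men are mortal” as a rule, then derives the conclusion by repeatedly applying rules until no new facts appear.

```python
# Toy forward-chaining inference over (predicate, subject) facts.
facts = {("man", "Socrates")}          # premise: Socrates is a man
rules = [("man", "mortal")]            # premise: all men are mortal

# Apply every rule to every matching fact until nothing new is derived.
changed = True
while changed:
    changed = False
    for condition, conclusion in rules:
        for predicate, subject in list(facts):
            if predicate == condition and (conclusion, subject) not in facts:
                facts.add((conclusion, subject))
                changed = True

print(("mortal", "Socrates") in facts)  # the derived conclusion
```

Running it derives ("mortal", "Socrates"), i.e. “Socrates is mortal”, purely by pattern application, which is exactly the mechanical character Aristotle's syllogisms were meant to have.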
Logicians in the 19th century developed a precise notation for statements about
all kinds of objects in the world and the relations among them. By 1965,
programs existed that could, in principle, solve any solvable problem described in
logical notation. The so-called logicist tradition within artificial intelligence
hopes to build on such programs to create intelligent systems.
There are two main obstacles to this approach. First, it is not easy to take
informal knowledge and state it in the formal terms required by logical notation,
particularly when the knowledge is less than 100% certain. Second, there is a big
difference between solving a problem “in principle” and solving it in practice.
Even problems with just a few hundred facts can exhaust the computational
resources of any computer unless it has some guidance as to which reasoning
steps to try first.
4. Acting rationally: The rational agent approach
An agent is just something that acts (agent comes from the Latin agere, to do).
Of course, all computer programs do something, but computer agents are
expected to do more: operate autonomously, perceive their environment, persist
over a prolonged time period, adapt to change, and create and pursue goals. A
rational agent is one that acts so as to achieve the best outcome or, when there
is uncertainty, the best expected outcome.
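As a concrete illustration of the agent idea, here is a minimal sketch using a two-square vacuum world; the environment, square names, and actions are illustrative choices made here, not details given in the lecture. The agent perceives its location and whether that square is dirty, and acts so as to bring about a fully clean world.

```python
def vacuum_agent(percept):
    """Map a percept (location, status) to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # Square is clean: move toward the other square.
    return "Right" if location == "A" else "Left"

# Simulate a tiny two-square environment where both squares start dirty.
world = {"A": "Dirty", "B": "Dirty"}
location = "A"
for _ in range(4):
    action = vacuum_agent((location, world[location]))
    if action == "Suck":
        world[location] = "Clean"
    elif action == "Right":
        location = "B"
    elif action == "Left":
        location = "A"

print(world)  # both squares end up clean
```

The agent is rational for this environment in the sense given above: for every percept it can receive, its action moves the world toward the best achievable outcome (all squares clean).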
H. Linguistics
How does language relate to thought?
The early years of AI were full of successes, in a limited way. Given the primitive
computers and programming tools of the time, and the fact that only a few
years earlier computers were seen as things that could do arithmetic and no
more, it was astonishing whenever a computer did anything remotely clever.
An example of an early AI project is the General Problem Solver (GPS).
GPS was probably the first program to embody the “thinking humanly”
approach. A second example is the famous physical symbol system
hypothesis of Newell and Simon, which states that “a physical symbol system has the necessary
and sufficient means for general intelligent action”. What they meant is that any
system (human or machine) exhibiting intelligence must operate by
manipulating data structures composed of symbols. Another example is the
IBM geometry theorem prover, which was able to prove theorems that many
students of mathematics would find quite difficult, among many other examples.
D. A dose of reality (1966-1973)
AI researchers were not shy about making predictions of their coming
successes. They made a concrete prediction: that within 10 years a computer
would be chess champion, and a significant mathematical theorem would be
proved by machine. These predictions came true within 40 years rather than
10. Early AI systems faced some difficult problems. The first kind of
difficulty arose because the earliest programs knew nothing of their subject matter;
they succeeded by means of simple syntactic manipulation. In other words, the
absence of background knowledge was the first difficulty. The second kind of
difficulty was the need for faster hardware and larger memories.
E. Knowledge-based systems: the key to power? (1969-1979)
The picture of problem solving that had arisen during the first decade of AI
research was of a general-purpose search mechanism trying to string together
elementary reasoning steps to find complete solutions. Such approaches have
been called weak methods because, although general, they do not scale up to
large or difficult problem instances. The alternative to weak methods is to use
more powerful, domain-specific knowledge that allows larger reasoning steps
and can more easily handle typically occurring cases in narrow areas of
expertise (expert systems).
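To make the contrast with weak methods concrete, the toy sketch below encodes a few domain-specific if-then rules directly, in the spirit of an expert system. The medical domain, the rules, and the symptoms are invented for illustration; they are not material from the lecture.

```python
# Each rule pairs a set of conditions with a conclusion:
# "if all of these symptoms hold, then conclude this."
RULES = [
    ({"fever", "cough"}, "flu"),
    ({"sneezing", "itchy eyes"}, "allergy"),
    ({"fever", "stiff neck"}, "see a doctor immediately"),
]

def diagnose(symptoms):
    """Return the conclusion of every rule whose conditions all hold."""
    symptoms = set(symptoms)
    return [conclusion for conditions, conclusion in RULES
            if conditions <= symptoms]

print(diagnose({"fever", "cough"}))  # matches the first rule only
```

Instead of searching through elementary reasoning steps, each rule takes one large, expert-sized step from observations to a conclusion, which is why such systems work well in narrow domains but do not generalize beyond their rule base.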
F. AI becomes an industry (1980 - present)
The first successful commercial expert system, R1, began operation at the
Digital Equipment Corporation. The program helped configure orders for new
computer systems; by 1986, it was saving the company an estimated $40
million a year. By 1988, the Digital Equipment Corporation group had 40 expert
systems deployed, with more on the way, saving an estimated $10 million a
year. Nearly every major U.S. corporation had its own AI group and was either
using or investigating expert systems.
G. The return of neural networks (1986-present)
In the mid-1980s, at least four different groups reinvented the back-
propagation learning algorithm first discovered in 1969. The algorithm was applied
to many learning problems in computer science and psychology, and the
widespread dissemination of the results in the collection Parallel Distributed
Processing caused great excitement.
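A minimal sketch of the back-propagation algorithm is shown below, training a tiny two-layer network on the XOR function with NumPy. The task, the layer sizes, and the learning rate are illustrative choices made here, not details from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic task that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units, one output unit.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through each layer.
    d_out = (out - y) * out * (1 - out)    # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # gradient at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel().tolist())
```

The key idea that the mid-1980s groups rediscovered is the backward pass: the error at the output is propagated through the chain rule to assign a gradient to every weight, which makes multi-layer networks trainable at all.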
H. AI adopts the scientific method (1987-present)
Recent years have seen a revolution in both the content and methodology of
work in artificial intelligence. It is now more common to build on existing
theories than to propose brand-new ones, to base claims on rigorous theorems
or hard experimental evidence rather than on intuition, and to show relevance
to real-world applications rather than toy examples.
I. The emergence of intelligent agents (1995-present)
Perhaps encouraged by the progress in solving subproblems of AI,
researchers have also started to look at the “whole agent” problem. One of
the most important environments for intelligent agents is the Internet. AI
systems have become so common in Web-based applications that the “-bot”
suffix has entered everyday language. Moreover, AI technologies underlie
many Internet tools, such as search engines, recommender systems, and Web
site aggregators.
J. The availability of very large data sets (2001-present)
Throughout the 60-year history of computer science, the emphasis has been
on the algorithm as the main subject of study. But some recent work in AI
suggests that for many problems, it makes more sense to worry about the data
and be less picky about what algorithm to apply. This is true because of
the increasing availability of very large data sources: for example, trillions of
words of English text and billions of images from the Web.
4. The state of the art
What can AI do today? A concise answer is difficult because there are so many
activities in so many subfields. Here we sample a few applications:
Robotic vehicles
Speech recognition
Autonomous planning and scheduling
Game playing
Spam fighting
Logistics planning
Robotics
Machine Translation