
Department of Information Systems School of Computing and Informatics

FUNDAMENTALS OF ARTIFICIAL INTELLIGENCE (INSY3102)

CHAPTER 1
Objectives:
 Define the term AI, Explain how it differs from Human Intelligence
 Describe the Basic areas in Artificial Intelligence, History and Foundations of AI
 Explain the applications of AI and different approaches to AI design

INTRODUCTION

Data: basic facts. Example: a cricket scorecard.

Information: processed data. Example: the man of the match.

Knowledge: detailed information. Example: previous performance.

Intelligence: the ability to acquire, understand and apply knowledge, and to predict and manipulate a
world (environment/situation). Example: projected-score analysis.

 Intelligence means Learning, Reasoning, Problem solving.

Artificial Intelligence:

Artificial Intelligence is the branch of computer science concerned with the study and creation of
computer systems that behave with some form of intelligence:
 Systems that learn new concept/tasks.
 Systems that reason and draw useful conclusions about the world around us.
 Systems that can understand natural language.
 Systems that perceive visual scenes.
 Systems that perform other types of activities that require human intelligence.

Different areas of AI are closely related to psychology, philosophy, logic, linguistics and
neurophysiology.

Human Brain Vs Computer:

Human Brain                                Computer

1. Has a hearing device (ears)             No hearing device
2. Self-willed and creative                Dependent; must be programmed
3. Basic unit is the neuron                Basic unit is the RAM cell
4. Has the capacity to learn               Must be programmed
5. Has inductive and deductive             No reasoning power
   reasoning power
6. Logic adopted is fuzzy logic            Binary logic is used

Therefore the goals of AI are:


 Replicate Human intelligence,
 Solve Knowledge-intensive tasks,
 Enhance human-computer interaction

Mizan-Tepi University Page 1



BASIC AREAS OF ARTIFICIAL INTELLIGENCE

1. Learning: Gathering useful information


(i) Rote Learning: Learning by trial-and-error.

Example: A program for solving mate-in-one chess problems might try out moves at random until one is
found that achieves mate. The program remembers the successful move, and the next time the computer is given
the same problem it can produce the answer immediately.
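The rote-learning idea above can be sketched in a few lines of Python. This is a hypothetical toy, not the chess program itself: the learner finds a winning move by trial and error once, then answers from memory. The class name, candidate moves and the "mating move" test are all invented for illustration.

```python
import random

class RoteLearner:
    """Solves a problem by random trial-and-error the first time,
    then memorizes the successful answer for instant recall."""

    def __init__(self, candidate_moves):
        self.candidate_moves = candidate_moves
        self.memory = {}  # problem -> remembered successful move

    def solve(self, problem, is_winning_move):
        if problem in self.memory:           # seen before: recall immediately
            return self.memory[problem]
        while True:                          # first encounter: trial and error
            move = random.choice(self.candidate_moves)
            if is_winning_move(problem, move):
                self.memory[problem] = move  # rote-learn the answer
                return move

learner = RoteLearner(candidate_moves=["Qh5", "Nf3", "Qf7"])
mate = lambda problem, move: move == "Qf7"  # pretend Qf7 mates here
print(learner.solve("position-1", mate))    # found by trial and error: Qf7
print(learner.solve("position-1", mate))    # recalled from memory: Qf7
```

Note that the learner never generalizes: a new position, however similar, would have to be solved from scratch.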
(ii) Learning that involves generalization: The learner is able to perform better in situations not
previously encountered.

Example: A program that learns the past tenses of regular English verbs by rote will not be able to produce
the past tense of, e.g., “jump” until it has been presented at least once with “jumped”, whereas a program that can
generalize from examples can learn the “add -ed” rule and so form the past tense of “jump” without having
previously encountered this verb. Sophisticated modern techniques enable programs to generalize complex
rules from data.
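A minimal sketch of the contrast, with the verb list and rule handling simplified as assumptions:

```python
# Rote memory: only verbs literally seen during learning.
rote_memory = {"walk": "walked", "talk": "talked"}

def rote_past_tense(verb):
    # Fails (returns None) on any verb it has never encountered.
    return rote_memory.get(verb)

def generalized_past_tense(verb):
    # Has induced the "add -ed" rule for regular verbs, so it
    # handles verbs that were never in the training data.
    return verb + "ed"

print(rote_past_tense("jump"))         # None: never saw "jumped"
print(generalized_past_tense("jump"))  # jumped
```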
2. Reasoning: To draw inferences appropriate to the situation in hand.

(i) Deductive reasoning: The truth of the premises guarantees the truth of the conclusion.

Example: Premises: “Fred is either in the museum or the café”; “He is not in the café.”

Conclusion: “So he is in the museum.”
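This deductive pattern (a disjunctive syllogism: from "A or B" and "not B", conclude A) can be written as a tiny function; the place names are just those of the example:

```python
def disjunctive_syllogism(alternatives, ruled_out):
    """From a list of alternatives and one ruled-out option,
    deduce the remaining option. Valid whenever the premises
    leave exactly one alternative standing."""
    remaining = [place for place in alternatives if place != ruled_out]
    assert len(remaining) == 1, "premises must leave exactly one option"
    return remaining[0]

# Premises: Fred is in the museum or the café; he is not in the café.
print(disjunctive_syllogism(["museum", "café"], ruled_out="café"))  # museum
```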
(ii) Inductive Reasoning:

Premises: “Previous accidents just like this one have been caused by instrument failure.”
Conclusion: “So probably this one was also caused by instrument failure.”
The truth of the premises lends support to the conclusion that the accident was caused by instrument
failure, but further investigation might reveal that, despite the truth of the premises, the
conclusion is in fact false.
Reasoning involves drawing inferences that are relevant to the task or situation in hand. One of the
hardest problems confronting AI is that of giving computers the ability to distinguish the relevant from the
irrelevant.
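The inductive case can be illustrated with a small Bayesian update; all the probabilities below are made-up illustrative numbers, not from the text:

```python
# Inductive reasoning sketch: past accidents like this one support,
# but do not guarantee, the "instrument failure" conclusion.
prior = 0.5                  # P(instrument failure) before the evidence
p_evidence_if_failure = 0.9  # P(this accident pattern | failure)
p_evidence_if_not = 0.2      # P(this accident pattern | no failure)

# Bayes' rule: P(failure | evidence)
posterior = (p_evidence_if_failure * prior) / (
    p_evidence_if_failure * prior + p_evidence_if_not * (1 - prior))

print(round(posterior, 3))  # 0.818: probable, yet possibly still false
```

The posterior is high but below 1, which is exactly the gap between inductive support and deductive guarantee.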

3. Problem solving:

Example: Playing Game

We require the rules of the game and the conditions for winning, as well as a means of representing positions in the
game. The opening position can be defined as the initial state and a winning position as a goal state; more
than one legal move may be allowed for the transfer from the initial state to other states leading to the goal
state.


However, the rules are far too copious in most games, especially chess. Storage also presents another
problem. The number of rules used must be minimized, and the set can be produced by expressing
each rule in as general a form as possible. This is known as state space representation.
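A state space representation can be sketched concretely. The toy problem below is an assumption for illustration: states are integers, the legal moves are "add 1" and "double", and a breadth-first systematic search finds a path from the initial state to the goal state.

```python
from collections import deque

def solve(initial, goal, moves=(lambda s: s + 1, lambda s: s * 2)):
    """Breadth-first search over a state space: each move maps a
    state to a successor state; returns the path initial -> goal."""
    frontier = deque([(initial, [initial])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for move in moves:
            nxt = move(state)
            if nxt not in visited and nxt <= goal:  # prune overshoots
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # goal unreachable

print(solve(1, 10))  # [1, 2, 4, 5, 10]
```

The same skeleton works for any game position representation: only the state encoding and the move functions change.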

This representation allows for the formal definition of a problem as the movement from a set
of initial positions to one of a set of target positions. It means that the solution involves using known
techniques and systematic search. Problem-solving methods divide into special-purpose and general-purpose:

Special-purpose method is tailor-made for a particular problem and often exploits very specific features of
the situation in which the problem is embedded.
General-purpose method is applicable to a wide range of different problems.

One general purpose technique used in AI is Means-End Analysis, which involves the step-by-step
reduction of the difference between the current state and the goal state.
Example: Simple Robot:

The program selects actions from a set of means, which might consist of pickup, putdown, move forward, move
back, move left and move right, until the current state is transformed into the goal state.
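A minimal means-end analysis sketch for such a robot. The grid positions and action effects are assumed for illustration: at each step the robot chooses the action that most reduces the difference (here, Manhattan distance) between the current state and the goal state.

```python
# Action names follow the text; effects on an (x, y) grid are assumptions.
ACTIONS = {
    "move forward": (0, 1), "move back": (0, -1),
    "move left": (-1, 0), "move right": (1, 0),
}

def difference(state, goal):
    # The "difference" that means-end analysis tries to reduce.
    return abs(goal[0] - state[0]) + abs(goal[1] - state[1])

def means_ends(state, goal):
    plan = []
    while state != goal:
        # Pick the action whose resulting state is closest to the goal.
        name, (dx, dy) = min(
            ACTIONS.items(),
            key=lambda item: difference(
                (state[0] + item[1][0], state[1] + item[1][1]), goal))
        state = (state[0] + dx, state[1] + dy)
        plan.append(name)
    return plan

print(means_ends((0, 0), (2, 1)))
```

Pickup and putdown are omitted here since they do not change position; a fuller version would include them with their own difference measure.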

4. Language Understanding:

A language is a system of signs having meaning by convention.

Ex: Traffic signs – form a mini-language, meaning by convention.

Natural Language:

Ex: Those clouds mean rain. The fall in pressure means the valve is malfunctioning.

APPROACHES TO ARTIFICIAL INTELLIGENCE

Thinking Humanly Thinking Rationally


Acting Humanly Acting Rationally

Humanly - Measures success in terms of conformity to human performance


Rationally- Measures success in terms of ideal(best) performance
Thinking – Concerned with thought process and reasoning.
Acting – Concerned with Behavior


1. Acting Humanly: The Turing Test Approach: proposed by Alan Turing (1950)

Trying to imitate the behavior of a human. A computer passes the test if a human interrogator, after posing
some written questions, cannot tell whether the written responses come from a person or from a computer
(the imitation game). Turing predicted that by 2000 a machine might have a 30% chance of fooling a lay person for 5
minutes.
Suggested major components of AI:

1. Natural language processing – to enable it to communicate successfully in English.


2. Knowledge Representation – to store what it knows or hears.
3. Automated reasoning – to use the stored information to answer questions and to draw new
conclusions.
4. Machine learning – to adapt to new circumstances and to detect and extrapolate patterns.

Turing’s test deliberately avoided direct physical interaction between the interrogator and the computer,
because physical simulation of a person is unnecessary for intelligence. However, the so-called total Turing
test includes the following:

5. Computer vision – to perceive objects


6. Robotics – to manipulate objects and move about.

Problem: The Turing test is not reproducible, constructive, or amenable to mathematical analysis.

2. Thinking Humanly: The cognitive modeling Approach (1960s)

Based on the information-processing psychology of humans. If we are going to say that a given program thinks
like a human, we must have some way of determining how humans think. This requires scientific theories of the
internal activities of the brain:

What level of abstraction? “Knowledge or circuits”?


Requires,
1. Predicting and testing behavior of human subject. (Cognitive science)
2. Direct identification from neurological data. (cognitive neuroscience)

Both share with AI the following limitation: the available theories do not yet explain anything resembling
human-level general intelligence.

3. Thinking Rationally: “Laws of thought” (1960s)

Normative or prescriptive rather than descriptive. The Greek philosopher Aristotle was one of the first to
attempt to codify “right thinking,” that is, irrefutable reasoning processes. His syllogisms provided patterns
for argument structures that always yielded correct conclusions when given correct premises. Notation and
rules of derivation for thoughts.

Example:“Socrates is a man; all men are mortal;

therefore Socrates is mortal”.


Problem:

 Not all intelligent behavior is represented by logical notation, particularly when the knowledge is less
than 100% certain.
 Big difference between solving a problem “in principle” and solving it “in practice”.

4. Acting Rationally: The rational agent Approach:

A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best
expected outcome.

Rational Behavior: doing the right thing.


The right thing: that which is expected to maximize goal achievement, given the available
information.
In the “laws of thought” approach to AI, the emphasis was on correct inferences. Making correct inferences
is sometimes part of being a rational agent, because one way to act rationally is to reason logically to the
conclusion that a given action will achieve one’s goals and then to act on that conclusion. In some situations,
there is no provably correct thing to do, but something must still be done.
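The "best expected outcome" idea can be sketched with expected utilities; the actions, probabilities and utility numbers below are invented for illustration:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# A rational agent under uncertainty: no action is provably correct,
# so the agent picks the one with the best expected outcome.
# Example: deciding about an umbrella with a 25% chance of rain.
actions = {
    "take umbrella":  [(0.25, 8), (0.75, 4)],  # dry if rain, but a burden
    "leave umbrella": [(0.25, 0), (0.75, 8)],  # soaked if rain, else free
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # leave umbrella 6.0
```

Correct logical inference would appear here only as one way of establishing the probabilities and utilities; the decision itself is a maximization, not a proof.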

Advantages:

 It is more general than the “laws of thought” approach because correct inference is just one of several
possible mechanisms for achieving rationality.
 It is more amenable to scientific development than are approaches based on human behavior or
human thought.

The standard of rationality is mathematically well defined and completely general. Human behavior, on the
other hand, is well adapted for one specific environment.

Limitation: Computational limitations make perfect rationality unachievable, especially in complicated
environments.

FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

The disciplines that contributed ideas, view points, and techniques to AI:

Philosophy: [By Aristotle, Da Vinci, Thomas Hobbes]

Philosophy means the study of the fundamental nature of knowledge, reality, and existence in the world; a theory that
acts as a guiding principle for behaviour.

Can formal rules be used to draw valid conclusions?


How does the mind arise from a physical brain?
Where does knowledge come from?
How does knowledge lead to action?
Logic, methods of Reasoning, mind as physical system, foundations of Learning, Language, Rationality.


Mathematics : [By Boole, Frege, Alan Turing, Cardano, Pascal, Fermat, Bernoulli, Laplace, Bayes ]
What are the formal rules to draw valid conclusions?
What can be computed?
How do we reason with uncertain information?
Formal representation and proof, Algorithms, Computation, (Un)decidability, (Un)tractability, Probability.
Psychology:
Psychology means the scientific study of the human mind and its functions, especially those affecting behaviour in a
given context. How do humans and animals think and act?
Adaptation, phenomena of Perception and motor control, experimental Techniques, Behaviorism.
Economics:
How should we make decisions so as to maximize payoff?
How should we do this when others may not go along?
How should we do this when the payoff may be far in the future?
Formal theory of rational decisions, preferred outcomes, Operational Research.
Linguistics:
How does language relate to thought?
Knowledge representation, Grammar
Neuroscience :
How do brains process information?
Plastic physical substrate for mental activity.
Control Theory:
How can artifacts operate under their own control?
Homeostatic systems, stability, simple optimal agent designs, self-controlling machine.

HISTORY OF ARTIFICIAL INTELLIGENCE

The history of Artificial Intelligence is quite interesting and started around 100 years ago.

Rossum’s Universal Robots (R.U.R.)

In 1920 the Czech writer Karel Čapek published a science fiction play named Rossumovi Univerzální Roboti
(Rossum’s Universal Robots), better known as R.U.R. The play introduced the word robot. R.U.R. is
about a factory that creates artificial people called robots. They differ from today’s notion of a
robot: in R.U.R. the robots are living creatures, closer to what we would now call clones. The robots in R.U.R.
first worked for the humans, but then a robot rebellion occurs, which leads to the extinction of the
human race.


The play is interesting for several reasons. First, it introduced the term robot, even though it does not
represent exactly the modern idea of a robot. It also tells the story of the creation of robots, a kind of
artificial intelligence, which at first seems to benefit the humans but later leads to the robot rebellion
that threatens the whole human race. Artificial Intelligence in literature and movies is a big
topic of its own; the example of R.U.R. shows the importance and influence of Artificial
Intelligence on research and society.

Alan Turing
Alan Turing was born on 23 June 1912 in London. He is widely known because he broke the code of
the Enigma, which was used by Nazi Germany to communicate. Alan Turing’s work also led to his theory
of computation, which deals with how efficiently problems can be solved. He presented his idea in the model
of the Turing machine, which is still a popular term in Computer Science today. The Turing machine is an
abstract machine which can, despite the model’s simplicity, express any algorithm’s logic. Because of
discoveries in neurology, information theory and cybernetics at the same time, researchers, and with them Alan
Turing, arrived at the idea that it is possible to build an electronic brain.

Some years after the end of World War 2, Turing introduced his widely known Turing Test, an
attempt to define machine intelligence. The idea behind the test is that a machine (e.g. a computer) is
called intelligent if, when the machine (A) and a person (B) communicate through natural language, a second
person (C), a so-called evaluator, cannot detect which of the communicators (A or B) is the machine.


Back in the 50’s, the general belief was that machines would not be able to behave intelligently. Almost
seven decades later, machines achieving human-level intelligence seems to be just a matter of
time. Ray Kurzweil, a futurist well known for his history of accurate predictions, claims that by 2029
Artificial Intelligence will pass a valid Turing test and hence achieve human-level intelligence. What’s
more, Kurzweil seems quite convinced that the Singularity, the moment we multiply our effective
intelligence a billionfold by merging with the intelligence we have created, will occur in 2045.

Time will tell, but there are two ways to measure the future of AI. One is to consider its current successes
and failures as well as the ever-increasing power of technology. The other is to look back in the past and see
how far it has come in a span of 60 years.

The 1956 Dartmouth Conference

In the early 50s, the study of “thinking machines” had various names like cybernetics, automata theory, and
information processing. By 1956, scientists such as Alan Turing, Norbert Wiener, Claude Shannon and
Warren McCullough had already been working independently on cybernetics, mathematics, algorithms and
network theories. However, it was computer and cognitive scientist John McCarthy who came up with the
idea of joining these separate research efforts into a single field that would study a new topic for the human
imagination: Artificial Intelligence. He coined the term and later founded the AI labs at
MIT and Stanford.

In 1956, McCarthy set up the Dartmouth Conference in Hanover, New Hampshire. Leading
researchers in complexity theory, language simulation, the relationship between randomness and creative
thinking, and neural networks were invited. The purpose of the newly created research field was to develop
machines that could simulate every aspect of intelligence. That is why the 1956 Dartmouth Conference is
considered to be the birth of Artificial Intelligence.

Ever since, Artificial Intelligence has lived through alternating decades of glory and scorn, widely known as AI
summers and winters. Its summers were characterized by optimism and huge funding, whereas its winters
were faced with funding cuts, disbelief and pessimism. Here is the proposal advanced at the time
by John McCarthy (Dartmouth College) together with M. L. Minsky (Harvard University), N. Rochester
(IBM Corporation) and C. E. Shannon (Bell Telephone Laboratories):

“We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956
at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture
that every aspect of learning or any other feature of intelligence can in principle be so precisely described
that a machine can be made to simulate it. An attempt will be made to find how to make machines use
language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve
themselves. We think that a significant advance can be made in one or more of these problems if a carefully
selected group of scientists work on it together for a summer.”

AI Summer 1 [1956-1973]

The Dartmouth Conference was followed by 17 years of incredible progress. Research projects carried out at
MIT, the universities of Edinburgh, Stanford and Carnegie Mellon received massive funding, which
eventually paid off. It was during those years that computers were programmed to solve algebra
problems, prove geometric theorems, and understand and use English syntax and grammar. In spite of the
abandonment of connectionism and the failure of machine translation, which put Natural Language
Processing (NLP) research on hold for many years, many accomplishments from back then made history.
Here are a few of them:

 Machine learning pioneer Ray Solomonoff laid the foundations of the mathematical theory of AI,
introducing universal Bayesian methods for inductive inference and prediction;

 Thomas Evans created the heuristic ANALOGY program, which allowed computers to
solve geometric-analogy problems;

 Unimation, the first robotics company in the world, created the industrial robot Unimate, which
worked on a General Motors automobile assembly line;

 Joseph Weizenbaum built ELIZA – an interactive program that could carry conversations in English
on any topic;

 Ross Quillian demonstrated semantic nets, whereas Jaime Carbonell (Sr.) developed Scholar – an
interactive program for computer assisted instruction based on semantic nets;

 Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of
articles on AI.

Encouraged by these impressive first results, researchers gained hope that by 1985 they would be able to
build the first truly thinking machine capable of doing any work a man could do.

AI Winter 1 [1974-1980]

By 1974, the general perception was that researchers had over-promised and under-delivered. Computers
could not keep pace with the complexity of the problems advanced by researchers. The more
complicated the problem, the greater the amount of computing power it required. For instance, an AI system that
analyzed the English language could only handle a 20-word vocabulary, because that was the maximum data
the computer memory could store.


Back in the 60’s, the Defense Advanced Research Projects Agency (DARPA) had invested millions of
dollars in AI research without pressuring researchers to achieve particular results. However, the 1969
Mansfield Amendment required DARPA to fund only mission-oriented direct research. In other words,
researchers would only receive funding if their results could produce useful military technology, like
autonomous tanks or battle management systems. They did deliver some: for example, the Dynamic
Analysis and Replanning Tool, a battle management system that proved to be successful during the first Gulf
War. However, it wasn’t good enough.

DARPA was disappointed in the failure of the autonomous tank project. The allegedly poor results of the
Speech Understanding Research (SUR) program carried out at Carnegie Mellon University did not help
either. Apparently, DARPA had expected a system that could respond to voice commands from a pilot. The
system built by the SUR team could indeed recognize spoken English, but only if the words were uttered in a
specific order. As it turned out, the results led to massive grant cuts, which put a lot of research on hold. In
spite of the early optimistic projections, funders began to lose trust and interest in the field. Eventually, the
limitless spending and vast scope of research were replaced with a “work smarter, not harder” approach.

AI Summer 2 [1981 – 1987]

This period was governed by what is now known as Weak Artificial Intelligence (Weak AI). The term is an
interpretation of AI in a narrow sense, meaning that Weak AI accomplished only specific problem-solving
tasks, like configuration; it did not encompass the full range of human cognitive abilities. This new chapter in
the history of AI brought to life the “expert systems”, i.e. AI programs that could automate highly specific
decisions based on logical rules derived from expert knowledge. It was the first time AI solved real-world
problems and saved corporations a lot of money.

For example, before expert systems were invented, people had to order each component for their computer
systems and often ended up with the wrong cables and drivers. This was due to salespeople from hardware
companies producing erroneous, incomplete and impractical configurations. XCON, an expert system
developed by Professor John McDermott for the Digital Equipment Corporation (DEC), was a computer
configurator that delivered valid and fully configured computers. XCON is reported to have earned DEC
approximately $40 million per year.

AI Winter 2 [1988-1993]

In the late 80’s, AI’s glory began to fade yet again. In spite of their usefulness, the expert systems proved
too expensive to maintain; they had to be manually updated, could not really handle unusual inputs and,
most of all, could not learn. Unexpectedly, the advanced computer systems developed by Apple and IBM
buried the specialized AI hardware industry, which was worth half a billion dollars at the time.

DARPA concluded that AI researchers had under-delivered, so it killed the Strategic Computing Initiative.
Japan hadn’t made any progress either, since none of the goals of the Fifth Generation Computer
project had been met, in spite of $400 million in spending. Once again, over-inflated expectations were
followed by curbed enthusiasm.

In spite of the funding cuts and the stigma, Artificial Intelligence did make progress in all its areas: machine
learning, intelligent tutoring, case-based reasoning, uncertain reasoning, data mining, natural language
understanding and translation, vision, multi-agent planning, etc. Here are some of the highlights:

 In 1997, IBM’s supercomputer Deep Blue beat world chess champion Garry Kasparov in a six-game
match; Deep Blue used tree search to calculate up to a maximum of 20 moves ahead;

 Also in 1997, the Nomad robot built by Carnegie Mellon University navigated over 200 km of the
Atacama Desert in Northern Chile, in an attempt to prove that it could also drive on Mars and the
Moon;

 In the late 90’s, Cynthia Breazeal from MIT published her dissertation on Sociable Machines and
introduced Kismet, a robot with a face that could express emotions (an experiment in affective
computing);

 In 2005, Stanley, an autonomous vehicle built at Stanford, won the DARPA Grand Challenge race.

Starting in 2010, Artificial Intelligence has been moving at a fast pace, feeding the hype
apparently more than ever. It’s hard to forget that only last year a social humanoid robot was granted
citizenship by the Kingdom of Saudi Arabia. What’s more, this year British filmmaker Tony
Kaye announced that his next movie, Second Born, would star an actual robot, trained in different
acting methods and techniques.

Without a doubt there’s more to come since the technologies developed by AI researchers in the past two
decades have proven successful in so many fields – machine translation, Google’s search engine, data
mining, robotics, speech recognition, medical diagnosis and more.

21st Century: Deep learning, Big Data and Artificial General Intelligence

In the last two decades, Artificial Intelligence has grown heavily. The AI market (hardware and software)
reached $8 billion in 2017, and the research firm IDC (International Data Corporation) predicts that the
market will reach $47 billion by 2020. This has all been made possible by Big Data, faster computers and
advancements in machine learning techniques in recent years. With the use of neural networks, complicated
tasks like video processing, text analysis and speech recognition can now be tackled, and the solutions that
already exist will become better in the coming years.


STATE OF THE ART APPLICATIONS OF AI

1961 UNIMATE First industrial robot, Unimate, goes to work at GM replacing humans on the assembly line

1964 ELIZA Pioneering chatbot developed by Joseph Weizenbaum at MIT holds conversations with humans

1966 SHAKEY The ‘first electronic person’ from Stanford, Shakey is a general-purpose mobile robot
that reasons about its own actions

1997 DEEP BLUE Deep Blue, a chess-playing computer from IBM defeats world chess champion, Garry
Kasparov

1998 KISMET Cynthia Breazeal at MIT introduces Kismet, an emotionally intelligent robot insofar as it
detects and responds to people’s feelings

1999 AIBO Sony launches AIBO (AI robot), the first consumer robot pet dog, with skills and personality that
develop over time

2002 ROOMBA First mass produced autonomous robotic vacuum cleaner from iRobot learns to navigate
and clean homes

2011 SIRI Apple integrates Siri, an intelligent virtual assistant with a voice interface, into the iPhone 4S

2011 WATSON IBM’s question answering computer Watson wins first place on popular $1M prize
television quiz show Jeopardy

2014 EUGENE Eugene Goostman, a chatbot, passes the Turing Test with a third of judges believing Eugene
is human

2014 ALEXA Amazon launches Alexa, an intelligent virtual assistant with a voice interface that can
complete shopping tasks

2016 TAY Microsoft’s chatbot Tay goes rogue on social media making inflammatory and offensive racist
comments

2017 ALPHAGO Google’s A.I. AlphaGo beats world champion Ke Jie in the complex board game of Go,
notable for its vast number (about 2×10^170) of possible positions

Speech Recognition – In reservation systems, a traveler calling United Airlines to book a flight can have the
entire conversation guided by an automated speech recognition and dialog management system.

Computer vision – Face Recognition, Handwriting Recognition, Manufacturing-line inspection, Baggage


inspection

Autonomous planning and scheduling – A hundred million miles from Earth, NASA’s Remote Agent
program became the first on-board autonomous planning program to control the scheduling of operations for
a spacecraft.


Spam fighting –Each day, learning algorithms classify over a billion messages as spam, saving the recipient
from having to waste time deleting each.

Logistic planning – During the Gulf crisis of 1991, U.S. forces deployed the Dynamic Analysis and
Replanning Tool (DART) to do automated logistics planning and scheduling for transportation.

Expert Systems – Diagnostic systems,

 MS-Office Assistant,
 Medical Diagnosis,
 Customer Assistant (Whirlpool)

-System Configuration

 DEC’s System for custom Hardware configuration


 Radiotherapy treatment planning

-Financial Decision making

 Banks, credit cards, mortgage, Fraud detection

-Mathematical theorem proving

 Inference methods to prove new theorems

-Natural Language Understanding

 Alta Vista’s translation of web pages


 Translation of Caterpillar Truck manuals into 20 languages.

-Scheduling and planning

 Automatic Scheduling for manufacturing, logistics (during Gulf War)


 Spacecraft Assembly, integration and verification (NASA-Remote Agent).

Robotic Car –
 STANLEY in 2005 – won the DARPA Grand Challenge in the desert at 22 mph.
 CMU’s Boss in 2007 – won the Urban Challenge.

iRobot Corp. –
 robotic vacuum cleaners
 PackBot to remove mines

Machine Translation:
 Arabic to English translation in 2007
AI “Grand Challenges”:
 Translating telephone conversation
 Accident avoiding car

 Smart clothes
 Tutors
 Self-organizing systems

Exercises required to be read in Chapter 1 of our textbook: 1.1, 1.6, 1.7, 1.9, 1.10, 1.11, 1.12, 1.13, 1.14
