Introduction to Artificial Intelligence (AI) and Soft Computing
- Definition 1 : “The art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990)
- Definition 2 : “The study of how to make computers do things at which, at the moment, people are better.” (Rich and Knight, 1991)
To judge whether a system can act like a human, Alan Turing designed a test known as the Turing test.
As shown in Fig. 1.2.1, in the Turing test a computer needs to interact with a human interrogator by answering his questions in written format. The computer passes the test if the human interrogator cannot identify whether the written responses came from a person or a computer. The Turing test remains relevant even after 60 years of research.
Fig. 1.2.1 : The Turing test (a human interrogator communicates in writing with a human and a computer)
For this test, the computer would need to possess the following capabilities:
1. Natural Language Processing (NLP) : This unit enables the computer to interpret the English language and communicate successfully.
2. Knowledge Representation : This unit is used to store the knowledge gathered by the system through input devices.
3. Automated Reasoning : This unit analyzes the knowledge stored in the system and makes new inferences to answer questions.
4. Machine Learning : This unit learns new knowledge by taking current input from the environment and adapts to new circumstances, thereby enhancing the knowledge base of the system.
To pass the total Turing test, the computer will also need computer vision, which is required to perceive objects in the environment, and robotics, to manipulate those objects.
Fig. 1.2.2 lists all the capabilities a computer needs in order to exhibit artificial intelligence. The six disciplines mentioned above implement most of artificial intelligence.
— Definition 1 : “The exciting new effort to make computers think ... machines with minds, in the full and literal sense.” (Haugeland, 1985)
— Definition 2 : “The automation of activities that we associate with human thinking, activities such as decision making, problem solving, learning ...” (Bellman, 1978)
— Cognitive science : It is an interdisciplinary field which combines computer models from Artificial Intelligence with techniques from psychology in order to construct precise and testable theories of the working of the human mind.
— In order to make machines think like humans, we need to first understand how humans think. Research has shown that there are three ways in which the human thinking pattern can be captured :
1. Introspection, through which humans can catch their own thoughts as they go by;
2. Psychological experiments, by observing a person in action;
3. Brain imaging, by observing the brain in action.
— By capturing the human thinking pattern, it can be implemented in a computer system as a program, and if the program's input-output behaviour matches that of humans, then it can be claimed that the system can operate like humans.
— Definition 1 : “The study of mental faculties through the use of computational models.” (Charniak and McDermott, 1985)
— Definition 2 : “The study of the computations that make it possible to perceive, reason, and act.”
— The laws of thought are supposed to govern the operation of the mind, and their study initiated the field called logic. Logic provides precise notations to express facts of the real world. It also covers reasoning and “right thinking”, that is, an irrefutable thinking process. Computer programs based on those logic notations were developed to create intelligent systems.
1.2.4 Acting Rationally : The Rational Agent Approach
— Definition 1 : “Computational Intelligence is the study of the design of intelligent agents.” (Poole et al., 1998)
— Definition 2 : “AI ... is concerned with intelligent behaviour in artifacts.” (Nilsson, 1998)
Rational Agent
— Agents perceive their environment through sensors over a prolonged time period, adapt to change, and create and pursue goals, taking actions through actuators to achieve those goals. A rational agent is the one that does the “right” things and acts rationally so as to achieve the best outcome even when there is uncertainty in knowledge.
— The two approaches, namely thinking humanly and thinking rationally, are based on the reasoning expected from intelligent systems, while the other two, acting humanly and acting rationally, are based on the intelligent behaviour expected from them.
— Weak AI is AI that specializes in one area; it is not a general-purpose intelligence. An intelligent agent built to solve a particular problem or to perform a specific task is termed narrow intelligence or weak AI. For example, it took years of AI development to be able to beat a chess grandmaster, and since then humans have not been able to beat the machines at chess. But that is all such a system can do, and it does it extremely well.
— Strong AI or general AI refers to intelligence demonstrated by machines in performing any intellectual task that a human can perform. Developing strong AI is much harder than developing weak AI. Using artificial general intelligence, machines can demonstrate human abilities like reasoning, planning, problem solving, comprehending complex ideas, and learning from their own experience. Many companies and corporations are working on developing general intelligence, but they are yet to achieve it.
— As defined by the leading AI thinker Nick Bostrom, “Superintelligence is an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Superintelligence ranges from a machine which is just a little smarter than a human to a machine that is a trillion times smarter. Artificial superintelligence is the ultimate power of AI.
1.4 Components of AI
AI is a vast field for research and it has applications in almost all possible domains. Keeping this in mind, the components of AI can be identified as follows (refer Fig. 1.4.1) :
1. Perception
2. Knowledge representation
3. Learning
4. Reasoning
5. Problem solving
6. Natural language processing
Fig. 1.4.1 : Components of AI (perception, knowledge representation, learning, reasoning, problem solving, linguistic intelligence)
1. Perception
In order to work in the environment, intelligent agents need to scan the environment and the various objects in it. An agent scans the environment using various sensors, such as a camera, a temperature sensor, etc. This is called perception. After capturing various scenes, the perceiver analyses the different objects in them and extracts their features and the relationships among them.
2. Knowledge representation
The information obtained from the environment through sensors may not be in the format required by the system. Hence, it needs to be represented in standard formats for further processing, such as learning various patterns, deducing inferences, or comparing with past objects. There are various knowledge representation techniques, such as propositional logic and first-order logic.
3. Learning
Learning is a very essential part of AI and it happens in various forms. The simplest form of learning is by trial and error. In this form the program remembers the action that has given the desired output, discards the other trial actions, and thus learns by itself; this is also called unsupervised learning. In rote learning, the program simply remembers problem-solution pairs or individual items. In another case, solutions to a few of the problems are given as input to the system, on the basis of which the system or program needs to generate solutions for new problems; this is known as supervised learning.
4. Reasoning
Reasoning is also called logic, or generating inferences from a given set of facts. Reasoning is carried out based on strict rules of validity to perform a specified task. Reasoning can be of two types, deductive or inductive. In deductive reasoning the truth of the premises guarantees the truth of the conclusion, while in inductive reasoning the truth of the premises supports the conclusion but does not guarantee it. In programming logic, generally deductive inferences are used. Reasoning involves drawing inferences that are relevant to the given problem or situation (a small rule-based sketch follows at the end of this list).
5. Problem-solving
AI addresses a huge variety of problems, for example finding winning moves in board games, planning actions in order to achieve a defined task, identifying various objects in given images, etc. Depending on the type of problem, there is a variety of problem-solving strategies in AI. Problem-solving methods are mainly divided into general-purpose methods and special-purpose methods. General-purpose methods are applicable to a wide range of problems, while special-purpose methods are customized to solve particular types of problems.
6. Natural language processing
Natural Language Processing involves machines or robots understanding and processing the language that humans speak, and inferring knowledge from the speech input. It also involves active participation from the machine in the form of dialogue, i.e. NLP aims at text or verbal output from the machine or robot. The input and output of an NLP system can be speech and written text respectively.
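To make the deductive style of reasoning described in point 4 above concrete, the following is a minimal Python sketch (not from the text) of forward chaining over if-then rules; the facts and rule names are invented purely for illustration.

# A minimal sketch of deductive reasoning (forward chaining over if-then rules).
# The facts and rules below are illustrative assumptions, not from the text.
facts = {"it_is_raining"}
rules = [
    ({"it_is_raining"}, "ground_is_wet"),          # if it is raining then the ground is wet
    ({"ground_is_wet"}, "driving_needs_caution"),  # if the ground is wet then drive cautiously
]
changed = True
while changed:                      # keep applying rules until no new fact is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # deductive step: premises guarantee the conclusion
            changed = True
print(facts)   # {'it_is_raining', 'ground_is_wet', 'driving_needs_caution'}

Here the truth of the premises guarantees each derived conclusion, which is exactly the deductive case described above.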
Comparison of Computational Intelligence (CI) and Artificial Intelligence (AI) :
1. Computational Intelligence is the study of the design of intelligent agents. | Artificial Intelligence is the study of making machines which can do things which, at present, humans do better.
2. CI involves numbers and computations. | AI involves designs and symbolic knowledge representations.
3. CI constructs the system starting from bottom-level computations, hence it follows a bottom-up approach. | AI analyses the overall structure of an intelligent system, following a top-down approach.
4. CI concentrates on low-level cognitive function implementation. | AI concentrates on high-level cognitive structure design.
Meanwhile we will see the ideas, viewpoints and techniques which Artificial Intelligence has inherited from other disciplines. They can be given as follows :
1. Philosophy : Theories of reasoning and learning have emerged, along with the viewpoint that the mind is constituted by the operation of a physical system.
2. Mathematics : Formal theories of logic, probability, decision making and computation have emerged.
3. Psychology : Psychology has provided the tools to investigate the human mind and a scientific language in which to express the resulting theories.
4. Computer science : The tools which can make artificial intelligence a reality have emerged.
Medical
— AI has applications in the fields of cardiology (ECG), neurology (MRI), embryology (sonography), complex operations on internal organs, etc. It can also be used in organizing bed schedules, managing staff rotations, and storing and retrieving patient information. Many expert systems are able to predict the disease and can provide medical prescriptions.
4. Military
— Training simulators can be used in military applications. Also, in areas which humans cannot reach or in life-risking conditions, robots can very well be used to do the required jobs. When decisions have to be made quickly, taking into account an enormous amount of information, and when lives are at stake, artificial intelligence can provide crucial assistance. From developing intricate flight plans to implementing complex supply systems or creating training simulation exercises, AI is a natural partner in the modern military.
— The latest generation of robots is well equipped with performance advances, a growing integration of vision, and an enlarging capability to transform manufacturing.
— Intelligent planners are available with AI systems; they can process large datasets and consider all the constraints to design plans satisfying all of them.
7. Voice technology
Voice recognition has improved a lot with AI. Systems are designed to take voice inputs, which is very useful for people with handicaps. Also, scientists are developing intelligent machines to emulate the activities of a skillful musician. Composition, performance, sound processing and music theory are some of the major areas of research.
8. Heavy industry
Huge machines involve risk in operating and maintaining them. Robots are better replacements for human operators; these robots are safe and efficient. Robots have proven to be more effective than humans in jobs of a repetitive nature, where humans may fail due to lack of continuous attention or laziness.
2. Robotics : One more major application of AI is in robotics. A robot is an active agent whose environment is the physical world. Robots can be used in manufacturing and handling material, in the medical field, in the military, etc., for automating manual work.
3. Neural networks : Another application of AI is neural networks. A neural network is a system that works like a human brain/nervous system. It can be useful for stock market analysis, character recognition, image compression, security, face recognition, handwriting recognition, Optical Character Recognition (OCR), etc.
4. Fuzzy logic : Apart from these, AI systems are developed with the help of fuzzy logic. Fuzzy logic can be useful in reasoning for a problem by making approximations rather than using fixed and exact values. You must have seen systems like ACs, fridges and washing machines which are based on fuzzy logic (they call it “we sense technology!”).
1. Deep learning
— Convolutional Neural Networks, enabling the concept of deep learning, are the topmost area of focus in Artificial Intelligence in today's era. Many problems and application areas of AI, like natural language and text processing, speech recognition, computer vision, information retrieval, and multimodal information processing, are empowered by multi-task deep learning.
2. Machine learning
— The goal of machine learning is to program computers to use example data or past experience to solve a given problem. Many successful applications of machine learning include systems that analyze past sales data to predict customer behaviour, optimize robot behaviour so that a task can be completed using minimum resources, and extract knowledge from bioinformatics data.
3. AI replacing workers
— In industries where there are safety hazards, robots are doing a good job, and human resources are being replaced by robots rapidly. People are worried to see that the white-collar jobs of data processing are being done exceedingly well by intelligent programs. A study from The National Academy of Sciences brought together technologists, economists and social scientists to figure out what is going to happen.
5. Emotional AI
— Emotional AI, where AI can detect human emotions, is another upcoming and important area of research. Computers' ability to understand speech will lead to an almost seamless interaction between humans and computers. With increasingly accurate cameras, voice and facial recognition, computers are better able to detect our emotional state. Researchers are exploring how this new knowledge can be used in education, to treat depression, to accurately predict medical diagnoses, and to improve customer service and online shopping.
— Using AI, customers' buying patterns and behavioural patterns can be studied, and systems can be built that predict a purchase or help the customer figure out the perfect item. AI can be used to find out what will make the customer happy or unhappy. For example, if a customer shopping online likes a dress pattern but needs darker shades and thicker material, the computer understands the need and brings out a new set of perfectly matching clothing.
7. Ethical AI
— With all the evolution happening in technology in every walk of life, ethics must be considered at the forefront of research. For example, in the case of a driverless car, if while driving a decision has to be made between hitting a cat or a lady, both at an unavoidable distance in front of the car, that is an ethical decision. In such cases, how the program should decide who is more valuable is a question. These are not problems to be solved only by computer engineers or research scientists, but someone has to come up with an answer.
— An agent is something that perceives its environment through sensors and acts upon that environment through effectors or actuators. Fig. 1.9.1 shows an agent and its environment.
— Take a simple example of a human agent. It has five senses : eyes, ears, nose, skin and tongue. These senses, which sense the environment, are called sensors. Sensors collect percepts or inputs from the environment and pass them to the processing unit.
— Actuators or effectors are the organs or tools using which the agent acts upon the environment. Once a sensor senses the environment, it gives this information to the nervous system, which takes the appropriate action with the help of actuators.
— In case of human agents we have hands and legs as actuators or effectors.
Fig. 1.9.1 : Agent and Environment
Fig. 1.9.3 : Sensors and actuators in human and robotic agents (a robotic agent uses various motors as actuators)
— The agent function is the description of what all functionalities the agent is supposed to perform. The agent function provides a mapping between percept sequences and the desired actions. It can be represented as [f : P* → A].
— An agent program is a computer program that implements the agent function in a suitable language. Agent programs need to be installed on a device in order to run the device accordingly. That device must have some form of sensors to sense the environment and actuators to act upon it. Hence an agent is a combination of the architecture (hardware) and the program (software) :
Agent = Architecture + Program
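As a hedged illustration of the mapping f : P* → A and of an agent program built on it, the following Python sketch keeps the percept history and returns an action; the percept and action names (“dirt”, “suck”, “move”) are assumptions, not taken from the text.

# A minimal sketch of an agent program implementing f : P* -> A.
# The percepts and actions ("dirt", "suck", "move") are illustrative assumptions.
class SimpleAgent:
    def __init__(self):
        self.percept_sequence = []        # P*: the full history of percepts

    def agent_function(self, percept):
        """Map the percept sequence seen so far to an action (A)."""
        self.percept_sequence.append(percept)
        if percept == "dirt":
            return "suck"
        return "move"

agent = SimpleAgent()
for p in ["clean", "dirt", "clean"]:
    print(p, "->", agent.agent_function(p))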
— Take a simple example of a vacuum cleaner agent. You might have seen the vacuum cleaner agent in “WALL-E” (an animated movie). Let's understand how to represent the percepts (inputs) and actions (outputs) used in the case of a vacuum cleaner agent.
Fig. 1.9.4 : Vacuum cleaner agent
— As shown in Fig. 1.9.4, there are two blocks A and B having some dirt. The vacuum cleaner agent is supposed to sense the dirt and collect it, thereby making the room clean. In order to do that, the agent must have a camera to see the dirt and a mechanism to move forward, backward, left and right to reach the dirt. It should also be able to absorb the dirt. Based on the percepts, actions will be performed, for example : move left, move right, absorb, no operation.
— Hence the sensors for the vacuum cleaner agent can be a camera and a dirt sensor, and the actuators can be a motor to make it move and an absorption mechanism. Its percept-to-action mapping can be represented as in the sketch below :
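The original percept-action table is not reproduced here; the following is a minimal, hedged sketch of one way that mapping could be written down, assuming the standard two-location (A, B) vacuum world.

# A minimal sketch of the vacuum cleaner agent's percept -> action mapping,
# assuming the standard two-location (A, B) vacuum world.
ACTION_TABLE = {
    ("A", "dirty"): "absorb",       # dirt under the agent: clean it
    ("A", "clean"): "move_right",   # block A clean: go and check block B
    ("B", "dirty"): "absorb",
    ("B", "clean"): "move_left",
}

def vacuum_agent(location, status):
    """Return the action for the current percept (location, status)."""
    return ACTION_TABLE[(location, status)]

print(vacuum_agent("A", "dirty"))   # absorb
print(vacuum_agent("B", "clean"))   # move_left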
— Various definitions exist for an agent. Let's see a few of them.
— IBM states that agents are software entities that carry out some set of operations on behalf of a user or another program.
— FIPA : The Foundation for Intelligent Physical Agents (FIPA) states that an agent is a computational process that implements the autonomous functionality of an application.
— Another definition is given as “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon the environment through effectors”.
Fig. 1.9.5 : Interactive intelligent agents (e.g. solving experiments and assignments, washing clothes, cleaning service)
— The definition by Russell and Norvig, and by F. Mills and R. Stufflebeam, says that “An agent is anything that is capable of acting upon information it perceives. An intelligent agent is an agent capable of making decisions about how it acts based on experience”.
— From the above definitions we can understand that an agent is (as per Terziyan, 1993) :
o Goal-oriented   o Creative
o Adaptive        o Mobile
o Social          o Self-configurable
o Responsive
o Goal-Oriented
— An intelligent agent is the one which can take input from the environment through its sensors and act upon the environment through its actuators. Its actions are always directed towards achieving a goal.
(Figure : the input percept is matched against a database of input-action pairs, and the corresponding action is given as output.)
- They say that for an intelligent agent to meet its design objectives, being flexible means three things : reactiveness, pro-activeness and social ability.
1. Reactiveness : It means giving a reaction to a situation within a stipulated time frame. An agent can perceive the environment and respond to the situation in a particular time frame. In the case of reactiveness, reacting within the situation's time frame is the most important thing. You can understand this with the example where, if an agent takes too much time to take its hand away from a hot pan, the agent's hand will be burnt.
2. Pro-activeness : It is controlling a situation rather than just responding to it. Intelligent agents show goal-directed behaviour by taking the initiative. For example, if you are playing chess then winning the game is the main objective. So here we try to control the situation rather than just responding to individual moves, which means that capturing or losing any of the 16 pieces is not important in itself; whether that action helps to checkmate your opponent is more important.
3. Social ability : Intelligent agents can interact with other agents (and also humans). Take the automatic car driver example, where the agent might have to interact with other agents or human beings while driving the car.
- Following are a few more features of an intelligent agent :
o Self-learning : An intelligent agent changes its behaviour based on its previous experience. This agent keeps updating its knowledge base all the time.
o Movable/Mobile : An intelligent agent can move from one machine to another while performing actions.
o Self-governing : An intelligent agent has control over its own actions.
— Rationality depends on four main criteria : first is the performance measure, which defines the criterion of success for an agent; second is the agent's prior knowledge of the environment; third is the actions performed by the agent; and the last one is the agent's percept sequence to date.
— The performance measure is one of the major criteria for measuring the success of an agent's performance. Take the vacuum-cleaner agent's example : the performance measure of a vacuum-cleaner agent can depend upon various factors like its dirt-cleaning ability, the time taken to clean that dirt, the consumption of electricity, etc.
— For every percept sequence a built-in knowledge base is updated, which is very useful for decision making, because it stores the consequences of performing some particular action. If the consequences direct the agent to the desired goal state, then we get a good performance measure factor; else, if the consequences do not lead to the desired goal state, we get a poor performance measure factor.
Fig. 1.10.1
— For example, see Fig. 1.10.1. If the agent hurts its finger while using a nail and hammer, then while using them the next time the agent will be more careful, and the probability of not getting hurt will increase. In short, the agent will be able to use the hammer and nail more efficiently.
— A rational agent can be defined as an agent which makes use of its percept sequence, experience and knowledge to maximize the performance measure for every probable action. It selects the most feasible action, which will lead to the expected results optimally.
1.11 Nature of Environment and PEAS Properties of Task Environment

1.11.1 Environment Types / Nature of Environment
— Environments are called partially observable when the sensors cannot provide errorless information at any given time for every internal state, as the environment is not seen completely at any point of time.
— There can also be unobservable environments, where the agent's sensors fail to provide information about internal states.
— For example, in the case of an automated car driver system, the automated car cannot predict what the other drivers are thinking while driving their cars. Only because of the sensors' information-gathering expertise is it possible for the automated car driver to take its actions.
— In the case of checkers we have a multi-agent environment, where an agent might be unable to predict the action of the other player. In such cases, if we have a partially observable environment, then the environment is considered to be stochastic.
— If the environment is deterministic except for the actions of other agents, then the environment is strategic. That is, in the case of a game like chess, the next state of the environment does not depend only upon the current action of the agent but is also influenced by the strategy developed by both opponents for future moves.
— We have one more type of environment in this category : when the environment is not fully observable or is non-deterministic, it is called an uncertain environment.
— An episodic task environment is one where the agent's experience is divided into atomic incidents or episodes. The current incident is different from the previous incident, and there is no dependency between the current and the previous incident. In each incident the agent receives an input from the environment and then performs a corresponding action.
— Generally, classification tasks are considered episodic. Consider the example of a pick and place robot agent which is used to detect defective parts on the conveyor belt of an assembly line. Here, every time the agent makes a decision based on the current part, and there is no dependency between the current and previous decisions.
— For example, in checkers a previous move can affect all the following moves. A sequential environment can also be understood with the help of an automatic car driving example, where the current decision can affect the next decisions : if the agent is applying the brakes, then it has to press the clutch and shift to a lower gear as the next consequent actions.
(Figure : classification of task environments — observable : fully / partially; agents : single / multi-agent (cooperative / competitive); deterministic / stochastic / strategic.)
— PEAS : PEAS stands for Performance measure, Environment, Actuators, and Sensors. It is the short form used for the performance issues grouped under the task environment.
— You might have seen driverless / self-driving car videos from Audi, Volvo, Mercedes, etc. To develop such driverless cars we need to first define the PEAS parameters.
— Performance measure : It is the objective function used to judge the performance of the agent. For example, in the case of a pick and place robot, the number of correct parts in a bin can be the performance measure.
— Environment : It is the real environment where the agent needs to deliberate actions.
— Actuators : These are the tools, equipment or organs using which the agent performs actions in the environment. These work as the output of the agent.
— Sensors : These are the tools, equipment or organs using which the agent captures the state of the environment. These work as the input to the agent.
1. Performance measure
(i) Safety : The automated system should be able to drive the car safely without crashing anywhere.
(ii) Optimum speed : The automated system should be able to maintain the optimal speed depending upon the surroundings.
(iii) Comfortable journey : The automated system should be able to give a comfortable journey to the end user, i.e. depending upon the road it should ensure the comfort of the end user.
(iv) Maximize profits : The automated system should provide good mileage on various roads, and the amount of energy consumed to automate the system should not be very high; such features ensure that the user benefits from the automated features of the system and help in maximizing profits.
2. Environment
(i) Roads : The automated car driver should be able to drive on any kind of road, ranging from city roads to highways.
(ii) Traffic conditions : You will find different sets of traffic conditions for different types of roads. The automated system should be able to drive efficiently in all types of traffic conditions. Sometimes traffic conditions are caused by pedestrians, animals, etc.
(iii) Clients : Automated cars are created depending on the client's environment. For example, in some countries you will see left-hand drive and in some countries there is right-hand drive. Every country/state can have different weather conditions. Depending upon such constraints the automated car driver should be designed.
3. Actuators
(i) Steering wheel, which can be used to direct the car in the desired direction (i.e. right/left).
(ii) Accelerator, gear, etc., which can be useful to increase or decrease the speed of the car.
(iv) Light signals and horn, which can be very useful as indicators for an automated car.
4. Sensors : To take input from the environment in the car driving example, cameras, a sonar system, a speedometer, GPS, etc. can be used.
(ii) Environment : Conveyor belt used for handling parts, containers used to keep parts, and the parts themselves.
(iii) Actuators : Arm with tooltips, to pick and drop parts from one place to another.
(iv) Sensors : Camera to scan the position from where a part should be picked, and joint angle sensors which are used to sense obstacles and move to the appropriate place.
a. Healthy patient : The system should make use of sterilized instruments to ensure the safety (health) of the patient.
b. Minimize costs and lawsuits : The automated system's results should not be very costly, otherwise the overall expenses of the patient may increase; also, the medical diagnosis system should be legal.
(iv) Actuators : Keyboard and mouse, which are useful for entering symptoms, findings and the patient's answers to given questions; a scanner to scan reports; a camera to take pictures of patients.
(ii) Environment: Team players, opponent team players, playing ground, goal net.
1.12 Structure of Agents / Types of Agents
— Depending upon the degree of intelligence and the ability to achieve the goal, agents are categorized into five basic types :
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agents
— A few possible input sequences and outputs for the vacuum cleaner world with 2 locations are considered for simplicity.
Table 1.12.1 : Percept sequences and corresponding actions for the two-location (A, B) vacuum cleaner world
— In the case of the above-mentioned vacuum agent only one sensor is used, and that is a dirt sensor. This dirt sensor can detect if there is dirt or not, so the possible inputs are 'dirt' and 'clean'.
— Also, the agent will have to maintain a database of actions, which will help it decide what output should be given by the agent. The database will contain conditions like : if there is dirt on the floor, suck; otherwise find out if there is dirt to the left or right, and keep following these instructions until the entire assigned area is cleaned.
(Figure : Model-based reflex agents — sensors take the input / percept, the agent decides what action should be taken, and effectors produce the output / action.)
— The knowledge about “how the world is changing” is called a model of the world. An agent which uses such a model while working is called a “model-based agent”.
— Consider a simple example of an automated car driver system. Here, the world keeps changing all the time. You must have taken a wrong turn while driving on some day of your life; the same thing applies to an agent. Suppose some car “X” is overtaking our automated driver agent “A”; then the speed and the direction in which “X” and “A” are moving their steering wheels are important. Take a scenario where the agent missed a signboard as it was overtaking another car : the world around that agent will be different in that case.
(Figure : Goal-based agents — sensors take the input / percept, the agent decides what action to take based on its goals, and effectors produce the output / action.)
Fig. 1.12.5 : Utility-based agents (sensors take the input / percept; the agent asks what the current state of the world is, what the state will be if action A is performed, and how useful that state is; effectors produce the output / action)
— A utility function is used to map a state to a measure of the utility of that state. We can define a measure for determining how advantageous a particular state is for an agent; to obtain this measure, the utility function is used.
— The term utility is used to depict how “happy” the agent is. To obtain a generalized performance measure, various world states are compared according to exactly how happy they would make the agent.
— Take one example : you might have used Google Maps to find a route which can take you from a source location to your destination in the least possible time. The same logic is followed by a utility-based automatic car driving agent. The goal of such an agent is to reach the given location safely, within the least possible time, and while saving fuel. So this car driving agent will check the possible routes and the traffic conditions on these routes, and will select the route which can take the car to the destination in the least possible time, safely, and without consuming much fuel (a small sketch of such a utility comparison follows).
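As a hedged sketch of how such a utility-based choice could be coded, the following compares candidate routes with an assumed utility function; the routes, weights and numbers are purely illustrative.

# A minimal sketch of a utility-based choice among routes.
# The routes, weights and numbers are illustrative assumptions, not from the text.
routes = [
    {"name": "highway", "time_min": 40, "safety": 0.9, "fuel_l": 5.0},
    {"name": "city",    "time_min": 55, "safety": 0.8, "fuel_l": 4.0},
]

def utility(route):
    """Higher is better: reward safety, penalize time and fuel (weights are assumed)."""
    return 10 * route["safety"] - 0.1 * route["time_min"] - 0.5 * route["fuel_l"]

best = max(routes, key=utility)
print(best["name"], round(utility(best), 2))   # highway 2.5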
Fig. 1.12.6 : Learning agents (performance standard, critic feedback, learning element, learning goals, problem generator, sensors and effectors)
— Why do you give mock tests ? When you get fewer marks for some question, you come to know that you have made some mistake in your answer. Then you learn the correct answer, and when you get that same question in further examinations, you write the correct answer and avoid the mistakes which were made in the mock test. This same concept is followed by the learning agent.
— A learning-based agent is advantageous in many cases, because with its basic knowledge it can initially operate in an unknown environment, and then it can gain knowledge from the environment based on a few parameters and perform actions to give better results.
— Following are the components of a learning agent (a minimal control-loop sketch follows this list) :
1. Critic
2. Learning element
3. Performance element
4. Problem generator
1. Critic : It is the one which compares the sensors' input, specifying the effect of the agent's action on the environment, with the performance standards, and generates feedback for the learning element.
2. Learning element : This component is responsible for learning from the difference between the performance standards and the feedback from the critic. According to the current percept, it is supposed to understand the expected behaviour and enhance its standards.
3. Performance element : Based on the current percept received from the sensors and the input obtained from the learning element, the performance element is responsible for choosing the action to act upon the external environment.
4. Problem generator : Based on the new goals learnt by the learning agent, the problem generator suggests new or alternate actions which will lead to new and instructive understanding.
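The following is a minimal, hedged Python sketch of how the four components above can be wired together; all class names, method names and the numeric performance standard are assumptions made for illustration.

# A minimal sketch of the learning-agent control loop described above.
# All names and the numeric "performance standard" are assumptions.
class LearningAgent:
    def __init__(self, performance_standard):
        self.standard = performance_standard    # externally fixed performance standard
        self.rules = {}                         # knowledge the learning element updates

    def critic(self, percept):
        """Compare the observed effect of the last action with the standard."""
        return percept["score"] - self.standard          # feedback signal

    def learning_element(self, percept, feedback):
        """Adjust the knowledge used by the performance element."""
        if feedback < 0:
            self.rules[percept["state"]] = "try_alternative"

    def performance_element(self, percept):
        """Choose an action from the current percept and the learnt rules."""
        return self.rules.get(percept["state"], "default_action")

    def problem_generator(self):
        """Suggest an exploratory action that may lead to new knowledge."""
        return "explore"

agent = LearningAgent(performance_standard=0.8)
percept = {"state": "s1", "score": 0.5}
agent.learning_element(percept, agent.critic(percept))
print(agent.performance_element(percept))   # try_alternative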
— There are three main requirements of any intelligent system :
1. They must possess human-like expertise within a specific domain.
2. They should be able to adapt and learn to do better in a changing environment.
3. They should be capable of making decisions and taking actions accordingly.
— Unlike hard computing, soft computing techniques are tolerant of the imprecision, uncertainty, partial truth and approximation that are present in real-world problems. Examples of soft computing techniques are neural networks, fuzzy logic, genetic algorithms, etc.
1. Hard computing is a conventional type of computing that requires a precisely stated analytic model. | Soft computing techniques are tolerant of imprecision, approximation and uncertainty.
2. Hard computing requires programs to be written. | Soft computing techniques are model-free; they can evolve their own models and programs.
3. Hard computing is deterministic and uses two-valued logic. | Soft computing is stochastic and uses multi-valued logic such as fuzzy logic.
4. Hard computing needs exact data to solve a particular problem. | Soft computing can deal with incomplete, uncertain and noisy data.
5. Hard computing techniques perform sequential computation. | Soft computing allows parallel computation, e.g. neural networks.
6. The solution or output of hard computing is precise. | Soft computing can generate an approximate output or solution.
7. Hard computing is based on crisp logic, binary logic and numerical analysis. | Soft computing is based on neural networks, fuzzy logic, evolutionary computation, etc.
8. Hard computing techniques are not fault tolerant; conventional programs and algorithms are built in such a way that errors have serious consequences unless enough redundancy is added into the system. | Soft computing techniques are fault tolerant due to their redundancy, adaptability and reduced-precision characteristics.
1.15 Various Types of Soft Computing Techniques
— Soft computing is the fusion of different techniques that were designed to model and enable solutions to complex real-world problems which cannot be modelled, or are too difficult to model, mathematically.
— These problems result from the fact that our world seems to be imprecise, uncertain and difficult to categorize.
— The soft computing techniques are capable of handling the uncertainty, impreciseness and vagueness present in real-world data.
— Most soft computing techniques are based on biologically inspired methodologies such as the human nervous system, genetics, evolution, ants' behaviour, etc.
— An ANN learns by examples, the way humans learn by their experiences.
— An ANN can be designed and configured for a specific application such as data classification, pattern recognition, data clustering, etc.
— The parallel organization of a neural network permits solutions to problems where multiple constraints must be satisfied simultaneously.
— Because of its parallel nature, when an element of the neural network fails, the network can continue without any problem.
— Neural networks have been successfully applied to a broad spectrum of data-intensive applications. A few of them are listed below.
(a) Forecasting
Neural networks can be used very effectively in forecasting exchange rates, predicting stock values, inflation and cash forecasting, forecasting weather conditions, etc. Researchers have shown that the forecasting accuracy of NN systems tends to exceed that of the linear regression model.
(b) Image recognition
A well-known application using image recognition is the Optical Character Recognition (OCR) software available with the standard scanning tools for the home computer. ScanSoft has had great success in combining NNs with a rule-based system for correctly recognizing both characters and words, to achieve a high level of accuracy.
(c) Customer relationship management
— Customer Relationship Management (CRM) requires key information to be derived from the raw data collected for each individual customer. This can be achieved by building models using historical data.
— Many companies are now using neural technology to help in their day-to-day business processes. They are doing this to achieve better performance, greater insight, faster development and increased productivity.
— By using neural networks for data mining in the databases, patterns, however complex, can be identified for the different types of customers, thus giving valuable customer information to the company.
— Also, NNs can be useful for important tasks related to CRM, such as forecasting call centre loading, demand and sales levels, monitoring and analyzing the market, validating, completing and enhancing databases, and clustering and profiling the client base.
— One example is the airline reservation system AMT, which could predict sales of tickets in relation to destination, time of year and ticket price.
— They can model nonlinear functions of arbitrary complexity.
— They are good for noisy environments.
— They can solve multimodal, non-differentiable, non-continuous or even NP-complete problems.
— They are useful when the search space is very large and there are a large number of parameters involved.
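As a hedged illustration of how a genetic algorithm explores such a large search space, the following Python sketch evolves 20-bit strings toward a toy objective (maximizing the number of ones); the population size, selection scheme and mutation rate are illustrative choices, not prescriptions.

# A minimal genetic algorithm sketch: evolve 20-bit strings to maximize the
# number of ones. The objective, population size and rates are toy assumptions.
import random

def fitness(bits):
    return sum(bits)

def crossover(a, b):
    point = random.randrange(1, len(a))          # single-point crossover
    return a[:point] + b[point:]

def mutate(bits, rate=0.02):
    return [1 - b if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                     # selection: keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

print(fitness(max(population, key=fitness)))      # close to 20 after 50 generations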
Applications of Genetic Algorithms
1. Automotive design : Genetic algorithms can be used to design composite materials and aerodynamic shapes for race cars, to provide faster, lighter, more fuel-efficient and safer vehicles for all the things we use vehicles for.
2. Engineering design : GAs are most commonly used to optimize the structural and operational design of buildings, factories, machines, etc. GAs are used for optimizing the design of robot gripping arms, satellite booms, building trusses, turbines, flywheels or any other computer-aided engineering design application.
3. Robotics : GAs have found applications that span the range of architectures for intelligent robotics. GAs can be used to design entirely new types of robots that can perform multiple tasks and have more general application.
Q. 14 What are various agent environments ? Give PEAS representation for an agent.
Q. 16 Explain various types of intelligent agents, state limitations of each and how it is overcome in other type of agent.
Problem Solving
- State space search : Searching in a given space of states pertaining to a problem under consideration is called a state space search.
- Path : A path is a sequence of states connected by a sequence of actions, in a given state space.
1. Initial state : The initial state is the one in which the agent starts.
2. Actions : It is the set of actions that can be executed or are applicable in all possible states, together with a description of what each action does; the formal name for this is the transition model.
3. Successor function : It is a function that returns a state on executing an action on the current state.
4. Goal test : It is a test to determine whether the current state is a goal state. In some problems the goal test can be carried out just by comparing the current state with the defined goal state; this is called an explicit goal test. Whereas in some problems the goal state cannot be defined explicitly but needs to be generated by carrying out some computations; this is called an implicit goal test. For example, in the Tic-Tac-Toe game, making a diagonal, vertical or horizontal combination declares the winning state, which can be compared explicitly; but in the case of the game of chess, the goal state cannot be predefined; it is a scenario called “checkmate”, which has to be evaluated implicitly.
5. Path cost : It is simply the cost associated with each step taken to reach the goal state. To determine the cost to reach each state, there is a cost function, which is chosen by the problem solving agent.
- Problem solution : A well-defined problem consists of a specification of the initial state, goal test, successor function, and path cost. It can be represented as a data structure and used to implement a program which can search for the goal state. A solution to a problem is a sequence of actions chosen by the problem solving agent that leads from the initial state to a goal state. Solution quality is measured by the path cost function.
- Optimal solution : An optimal solution is the solution with the least path cost among all solutions.
- A general sequence followed by a simple problem solving agent is : first it formulates the problem with the goal to be achieved, then it searches for a sequence of actions that would solve the problem, and then it executes the actions one at a time.
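A minimal, hedged Python sketch of how these components (initial state, actions, successor function, goal test, path cost) can be written down, using the two-location vacuum world as the assumed example.

# A minimal sketch of a problem formulation, using the two-location vacuum world.
# State = (agent_location, status_of_A, status_of_B); all names are assumptions.
class VacuumProblem:
    def __init__(self):
        self.initial_state = ("A", "dirty", "dirty")

    def actions(self, state):
        return ["left", "right", "absorb"]

    def successor(self, state, action):
        loc, a, b = state
        if action == "absorb":
            return (loc, "clean", b) if loc == "A" else (loc, a, "clean")
        return ("A" if action == "left" else "B", a, b)

    def goal_test(self, state):
        return state[1] == "clean" and state[2] == "clean"

    def path_cost(self, cost_so_far, state, action, next_state):
        return cost_so_far + 1          # every step costs 1

problem = VacuumProblem()
s = problem.successor(problem.initial_state, "absorb")
print(s, problem.goal_test(s))          # ('A', 'clean', 'dirty') False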
3. Vacuum-cleaner problem
States : In the vacuum cleaner problem, a state can be represented as [<block>, clean] or [<block>, dirty]. The agent can be in one of the two blocks, each of which can be either clean or dirty. Hence there are 8 possible states in the vacuum cleaner world.
1. Initial state : Any state can be considered as the initial state, for example [A, dirty].
2. Actions : The possible actions are left, right, absorb and idle.
3. Successor function : (Figure : the state space of the two-location vacuum cleaner world, showing the transitions produced by the actions left, right and absorb.)
Airline travel problem
1. Initial state : This is specified by the user's query, stating the initial location, date and time.
2. Actions : Take any flight from the current location, select seat and class, leaving after the current time, and leaving enough time for within-airport transfer if needed.
3. Successor function : After taking the action, i.e. selecting the flight, location, date and time, the next location, date and time reached is denoted by the successor function. The location reached is considered as the current location and the flight's arrival time as the current time.
4. Path cost : In this case the path cost is a function of monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards and so on.
2.3 Measuring Performance of Problem Solving Algorithm / Agent
— There is a variety of problem solving methods and algorithms available in AI. Before studying any of those algorithms in detail, let's consider the criteria used to judge their efficiency. The performance of all these algorithms can be evaluated on the basis of the following factors.
1. Completeness : If the algorithm is able to produce a solution whenever one exists, then it satisfies the completeness criterion.
2. Optimality : If the solution produced is the minimum-cost solution, the algorithm is said to be optimal.
3. Time complexity : It depends on the time taken to generate the solution; it is measured as the number of nodes generated during the search.
4. Space complexity : The memory required to store the generated nodes while performing the search.
— The complexity of algorithms is expressed in terms of three quantities, as follows :
1. b : Called the branching factor, representing the maximum number of successors a node can have in the search tree.
2. d : Stands for the depth of the shallowest goal node.
3. m : It is the maximum depth of any path in the search tree.
— The term “uninformed” means these techniques have only the information about the start state and the goal state, along with the problem definition.
— These techniques can generate successor states and can distinguish a goal state from a non-goal state.
— All these search techniques are distinguished by the order in which nodes are expanded.
— The uninformed search techniques are also called “blind search”.
2.6 Depth First Search (DFS)
2.6.1 Concept
- In depth-first search, the search tree is expanded depth-wise, i.e. the deepest node in the current branch of the search tree is expanded. When a leaf node is reached, the search backtracks to the previous node. The progress of the search is illustrated in Fig. 2.6.1.
- The explored nodes are shown in light gray. Explored nodes with no descendants in the fringe are removed from memory. Nodes at depth three have no successors, and M is the only goal node.
Process
Fig. 2.6.1 : Working of Depth first search on a binary tree
2.6.2 Implementation 2
DFS uses a LIFO fringe, i.e. a stack. The most recently generated node, which is at the top of the fringe, is chosen first for expansion. As a node is expanded, it is dropped from the fringe and its successors are added. When there are no more successors to add to the fringe, the search “backtracks” to the next deepest node that is still unexplored. DFS can be implemented in two ways, recursive and non-recursive. Following are the algorithms for the same.
2.6.3 Algorithm
(a) Non-recursive implementation of DFS
1. Push the root node onto a stack.
2. While (stack is not empty)
   (a) Pop a node from the stack;
       (i) if the node is a goal node then return success;
       (ii) push all children of the node onto the stack;
3. Return failure.
(b) Recursive implementation of DFS : DFS(node)
1. If the node is a goal, return success;
2. For each child c of the node
   (a) if DFS(c) is successful,
       (i) return success;
3. Return failure;
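A hedged, runnable Python version of the non-recursive (stack-based) algorithm above; the dictionary-based tree and the goal node 'M' are illustrative assumptions.

# A runnable sketch of the non-recursive (stack-based) DFS above.
# The tree below and the goal node 'M' are illustrative assumptions.
def dfs(tree, root, goal):
    stack = [root]                      # LIFO fringe
    while stack:
        node = stack.pop()              # most recently generated node first
        if node == goal:
            return True                 # success
        # push children (reversed so the left child is expanded first)
        stack.extend(reversed(tree.get(node, [])))
    return False                        # failure

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "M"]}
print(dfs(tree, "A", "M"))              # True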
2.7 Breadth First Search (BFS)
2.7.1 Concept
- As the name suggests, in the breadth-first search technique the tree is expanded breadth-wise.
- The root node is expanded first, then all the successors of the root node are expanded, then their successors, and so
on.
— In turn, all the nodes at a particular depth in the search tree are expanded first and then the search will proceed for
the next level node expansion.
- Thus, the shallowest unexpanded node will be chosen for expansion. The search process of BFS is illustrated in
Fig. 2.7.1.
2.7.2 Process
Fig. 2.7.1 : Working of Breadth first search on a binary tree
2.7.3 Implementation
— In BFS we use a FIFO queue for the fringe, because of which the newly inserted nodes in the fringe will automatically be placed after their parents.
— Thus, the children nodes, which are deeper than their parents, go to the back of the queue, and old nodes, which are
shallower, get expanded first. Following is the algorithm for the same.
2.7.4 Algorithm
1. Put the root node on a queue
2. while (queue is not empty)
(a) remove a node from the queue
(i) if (node is a goal node) return success;
(ii) put all children of node onto the queue;
3. return failure;
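A hedged, runnable Python counterpart of the FIFO-queue algorithm above, reusing the same assumed tree shape as in the DFS sketch.

# A runnable sketch of the FIFO-queue BFS above; the tree and goal are assumptions.
from collections import deque

def bfs(tree, root, goal):
    queue = deque([root])               # FIFO fringe
    while queue:
        node = queue.popleft()          # shallowest unexpanded node first
        if node == goal:
            return True                 # success
        queue.extend(tree.get(node, []))
    return False                        # failure

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "M"]}
print(bfs(tree, "A", "M"))              # True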
2.7.5 Performance Evaluation
— Completeness : It is complete, provided the shallowest goal node is at some finite depth.
— Optimality : It is optimal, as it always finds the shallowest solution (provided all step costs are identical).
— Time complexity : O(b^d), the number of nodes generated.
— Space complexity : O(b^d), the total number of nodes kept in memory.
2.8 Uniform Cost Search
— Uniform cost search can be achieved by implementing the fringe as a priority queue ordered by path cost. The algorithm shown below is almost the same as BFS, except for the use of a priority queue and the addition of an extra check in case a shorter path to any node is discovered.
— The algorithm keeps track of the nodes inserted into the fringe for exploration by using a data structure consisting of a priority queue and a hash table.
— The priority queue used here contains the total cost from the root to the node. Uniform cost search gives the minimum path cost the maximum priority. The algorithm using this priority queue is the following.
2.8.3 Algorithm
— Insert the root node into the priority queue.
— Repeat : dequeue the node with the minimum total cost.
— If the dequeued node is the goal node, return it and exit.
— Else, insert all the children of the dequeued node into the priority queue, with their total costs as priority.
— The algorithm returns the best-cost path which is encountered first and will never go on to explore other possible paths. The solution path is optimal in terms of cost.
— As the priority queue is maintained on the basis of the total path cost of a node, the algorithm never expands a node which has a cost greater than the cost of the shortest path in the tree.
— The nodes in the priority queue have almost the same costs at a given time, and thus the name “Uniform Cost Search”.
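A hedged, runnable Python sketch of uniform cost search using a priority queue (heapq) and a hash table of best known costs; the weighted graph, start and goal are illustrative assumptions.

# A runnable sketch of uniform cost search using a priority queue (heapq).
# The weighted graph, start and goal below are illustrative assumptions.
import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start)]                 # (total path cost, node)
    best_cost = {start: 0}                  # hash table of cheapest known costs
    while frontier:
        cost, node = heapq.heappop(frontier)    # minimum-cost node first
        if node == goal:
            return cost
        for child, step in graph.get(node, []):
            new_cost = cost + step
            if child not in best_cost or new_cost < best_cost[child]:
                best_cost[child] = new_cost     # shorter path found
                heapq.heappush(frontier, (new_cost, child))
    return None                             # failure

graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)]}
print(uniform_cost_search(graph, "A", "D"))     # 3 (A -> B -> C -> D)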
2.9 Depth Limited Search (DLS)
— The nodes at the specified depth limit are treated as if they have no successors. The depth limit solves the infinite-path problem.
— But as the search is carried out only up to a certain depth in the search tree, it introduces the problem of incompleteness.
— Depth-first search can be viewed as a special case of depth-limited search with the depth limit equal to the depth of the tree. The process of DLS is depicted in Fig. 2.9.1.
2.9.2 Process
If the depth limit is fixed to 2, DLS carries out depth-first search till the second level in the search tree.
Fig. 2.9.1 : DLS working with a depth limit
2.9.3 Implementation
— As in the case of DFS, in DLS we can use the same LIFO fringe (stack). Additionally, the level of each node needs to be calculated to check whether it is within the specified depth limit.
— Depth-limited search can terminate with two conditions : failure, meaning that no solution exists; or cutoff, meaning that no solution exists within the given depth limit.
2.9.4 Algorithm
1. Check whether the current node is a goal node.
   If yes : return success.
   If not : do nothing and continue.
2. Check whether the current node is within the specified search depth.
   If not : do nothing (treat the node as having no successors).
   If yes : expand the node and apply the same steps recursively to its children.
3. If the goal is not found within the depth limit, return cutoff / failure.
2.10 Depth First Iterative Deepening (DFID)
Fig. 2.10.1 : Search process in DFID
— Fig. 2.10.1 shows four iterations of DFID on a binary search tree, with depth limits 0 to 3, where the solution is found on the fourth iteration.
2.10.2 Process
Fig. 2.10.1 illustrates the iterative deepening search algorithm, which repeatedly applies depth limited search with increasing limits. It terminates when a solution is found, or when the depth limited search returns failure, meaning that no solution exists.
2.10.3 Implementation
It has exactly the same implementation as that of DLS. Additionally, iterations are required to increment the depth
limit by one in every recursive call of DLS.
2.10.4 Algorithm
— Initialize the depth limit to zero.
— Repeat until the goal node is found.
DFID()
{
    limit = 0;
    found = false;
    while (not found)
    {
        found = DLS(root, limit);
        limit = limit + 1;
    }
}
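A hedged, runnable Python sketch of DFID built on a recursive depth limited search; the tree, the goal node and the return conventions are assumptions made for illustration.

# A runnable sketch of DFID: repeatedly apply depth limited search (DLS)
# with an increasing limit. The tree and goal node are illustrative assumptions.
def dls(tree, node, goal, limit):
    if node == goal:
        return True
    if limit == 0:
        return False                    # cutoff: treat node as having no successors
    return any(dls(tree, child, goal, limit - 1) for child in tree.get(node, []))

def dfid(tree, root, goal, max_limit=20):
    for limit in range(max_limit + 1):  # limit = 0, 1, 2, ...
        if dls(tree, root, goal, limit):
            return limit                # depth at which the goal was found
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "M"]}
print(dfid(tree, "A", "M"))             # 2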
2.10.6 Performance Evaluation
- Optimality : It is optimal when the path cost is a non-decreasing function of the depth of the node.
- Time complexity : .
o Do you think in DFID there is a lot of wastage of time and memory in regenerating the same set of nodes again
and again ?
o It may appear to be a waste of memory and time, but it is not so. The reason is that, in a search tree with almost the same branching factor at each level, most of the nodes are in the bottom level, and these are explored very few times as compared to those on the upper levels.
o The nodes on the bottom level, that is level 'd', are generated only once, those on the next-to-bottom level are generated twice, and so on, up to the children of the root, which are generated d times. Hence the time complexity is O(b^d).
- Space complexity : Memory requirements of DFID are modest, i.e. O(bd).
2.11.1 Concept
In bidirectional search, two simultaneous searches are run. One search starts from the initial state, called forward
search and the other starts from the goal state, called backward search. The search process terminates when the searches
meet at a common node of the search tree. Fig. 2.11.1 shows the general search process in bidirectional search.
2.11.2 Process
2.11.3 Implementation
- In bidirectional search, instead of checking for the goal node, one needs to check whether the fringes of the two searches intersect; as soon as they do, a solution has been found.
— When each node is generated or selected for expansion, the check can be done. It can be implemented: with a hash
table, to guarantee constant time.
- For example, consider a problem which has a solution at depth d = 6. If we run breadth first search in each direction, then in the worst case the two searches meet when they have generated all of the nodes at depth 3.
- If b = 10, this requires a total of 2,220 node generations, as compared with 1,111,110 for a standard breadth-first search.
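The two-frontier scheme with a fringe-intersection check can be sketched in Python as follows; the undirected graph and node names below are illustrative assumptions.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Two simultaneous breadth-first searches; stop when the fringes meet."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}   # also act as visited sets
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        node = frontier.popleft()
        for nb in graph.get(node, []):
            if nb not in parents:
                parents[nb] = node
                if nb in other_parents:        # common node: the searches meet
                    return nb
                frontier.append(nb)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b)
        if meet is None:
            meet = expand(frontier_b, parents_b, parents_f)
        if meet is not None:
            path, n = [], meet                 # stitch the two half-paths at `meet`
            while n is not None:
                path.append(n); n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n); n = parents_b[n]
            return path
    return None

graph = {'S': ['A', 'B'], 'A': ['S', 'C'], 'B': ['S', 'D'],
         'C': ['A', 'G'], 'D': ['B'], 'G': ['C']}
print(bidirectional_search(graph, 'S', 'G'))   # ['S', 'A', 'C', 'G']
```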
- Completeness : Yes, if the branching factor b is finite and both directions use breadth first search.
- Optimality : Yes, if all step costs are identical and both directions use breadth first search.
- Time complexity : Time complexity of bidirectional search using breadth-first searches in both directions is O(b^(d/2)).
- Space complexity : As at least one of the two fringes needs to be kept in memory to check for the common node, the space complexity is O(b^(d/2)).
2.12 Comparison of Uninformed Search Techniques

Table 2.12.1 depicts the comparison of all uninformed search techniques on the basis of their performance evaluation. As we know, the algorithms are evaluated on four criteria viz. completeness, optimality, time complexity and space complexity.

The notations used are as follows :
- b : Branching factor
Table 2.12.1 : Comparison of tree-search strategies on the basis of performance evaluation
Sr. No. | Uninformed search techniques | Informed search techniques
1. | These methods use the search tree, start node and goal node as input for starting the search. | These methods have additional information about the search tree nodes, along with the start and goal node.
2. | They use only the information from the problem definition. | They incorporate an additional measure of the potential of a specific state to reach the goal.
3. | Sometimes these methods use past explorations, e.g. the cost of the path generated so far. | In all these methods the potential of a state (node) to reach a goal is measured through a heuristic function.
4. | All uninformed techniques are based on the pattern of exploration of nodes in the search tree. | All informed search techniques depend on the evaluated value of each node generated by the heuristic function.
5. | In real time problems uninformed search techniques can be costly with respect to time and space. | In real time problems informed search techniques are cost effective with respect to time and space.
6. | Comparatively more nodes will be explored in these methods. | As compared to uninformed techniques, fewer nodes are explored in this case.
Sr. No. | BFS | DFS
1. | BFS stands for "Breadth First Search". | DFS stands for "Depth First Search".
2. | BFS traverses the tree level wise, i.e. each node nearer to the root will be visited first. The nodes are explored left to right. | DFS traverses the tree depth wise, i.e. nodes in a particular branch are visited till the leaf node and then the search continues branch by branch from left to right in the tree.
3. | Breadth First Search is implemented using a queue, which is a FIFO list. | Depth First Search is implemented using a stack, which is a LIFO list.
4. | This is a single step algorithm, wherein the visited vertices are removed from the queue and then displayed at once. | This is a two step algorithm. In the first stage, the visited vertices are pushed onto the stack and later, when there is no vertex further to visit, those are popped out.
5. | BFS requires more memory compared to DFS. | DFS requires less memory compared to BFS.
2.13 Informed Search Techniques

- Informed searching techniques are a further extension of the basic uninformed search techniques. The main idea is to generate additional information about the search state space using the knowledge of the problem domain, so that the search becomes more intelligent and efficient. An evaluation function is developed for each state, which quantifies the desirability of expanding that state in order to reach the goal.
- All the strategies use this evaluation function in order to select the next state under consideration, hence the name "Informed Search". These techniques are very much efficient with respect to time and space requirements as compared to uninformed search techniques.
2.14 Heuristic Function
- A heuristic function at a given node in the search process gives a good estimate of how promising that node is. It evaluates individual states and determines how promising a state is for reaching the goal. Heuristic functions are the means of adding additional knowledge of the problem states to the search algorithm. Fig. 2.14.1 gives the general representation of a heuristic function.

Fig. 2.14.1 : General representation of a heuristic function

- The representation may be the approximate cost of the path from the node to the goal node, or the number of hops required to reach the goal.
2.14.1 Example of 8-puzzle Problem
Start state :        Goal state :
7  2  4              _  1  2
5  _  6              3  4  5
8  3  1              6  7  8
Fig. 2.14.2 : A scenario of 8-puzzle problem
- Two simple heuristic functions are :
o h1 = the number of misplaced tiles. This is also known as the Hamming Distance. In the Fig. 2.14.2 example, the start state has h1 = 8. Clearly, h1 is an admissible heuristic because any tile that is out of place will have to be moved at least once. Quite logical, isn't it?
o h2 = the sum of the distances of the tiles from their goal positions. Because tiles cannot be moved diagonally, the distance counted is the sum of the horizontal and vertical distances. This is also known as the Manhattan Distance. In Fig. 2.14.2, the start state has h2 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18. Clearly, h2 is also an admissible heuristic, because any move can, at best, move one tile one step closer to the goal.
- As expected, neither heuristic overestimates the true number of moves required to solve the puzzle, which is 26. Additionally, it is easy to see from the definitions of the heuristic functions that, for any given state, h2 will always be greater than or equal to h1. Thus, we can say that h2 dominates h1.
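The two heuristics can be computed directly from a board representation. The following Python sketch, which assumes the board is stored as a tuple with 0 for the blank, reproduces the values 8 and 18 quoted above.

```python
GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)          # 0 denotes the blank tile

def h1_misplaced(state, goal=GOAL):
    """Hamming distance: number of tiles (not counting the blank) out of place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2_manhattan(state, goal=GOAL):
    """Manhattan distance: sum of horizontal + vertical distances of each tile."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

start = (7, 2, 4,          # start state of Fig. 2.14.2 (blank shown as 0)
         5, 0, 6,
         8, 3, 1)
print(h1_misplaced(start))   # 8
print(h2_manhattan(start))   # 18
```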
Fig. 2.14.3 : Blocks world (start and goal states)

- Fig. 2.14.3 depicts a blocks world problem, where the A, B, C, D letter bricks are piled up on one another and are required to be arranged as shown in the goal state, by moving one brick at a time. As shown, the goal state with the particular arrangement of blocks needs to be attained from the given start state. Now it's time to scratch your head and define a heuristic function that will distinguish the start state from the goal state. Confused??
- Let's design a function which assigns +1 for a brick at the right position and -1 for one which is at a wrong position. Consider Fig. 2.14.4.
Fig. 2.14.4 : Definition of heuristic function "h1"

Fig. 2.14.5 : State evaluations using heuristic function "h1"
- Fig. 2.14.5 shows the heuristic values generated by heuristic function "h1" for various states in the state space. Please observe that this heuristic generates the same value for different states.
- Due to this kind of heuristic the search may end up in limitless iterations, as the state showing the most promising heuristic value may not hold true, or the search may end up finding an undesirable goal state, as the state evaluation may lead the search in a wrong direction in the search tree.
- Let's have another heuristic design for the same problem. Fig. 2.14.6 depicts a new heuristic function "h2" definition, in which each brick with a correct support structure is given +1 for every brick in its support structure, and each brick not having a correct support structure is given -1 for every brick in the wrong support structure.

Fig. 2.14.6 : Definition of heuristic function "h2"

- As we observe in Fig. 2.14.7, the same states are considered again as in Fig. 2.14.5, but this time using h2, and each one of the states is assigned a unique value generated according to heuristic function h2.
- Observing this example one can easily understand that, in the second part of the example, the search will be carried out smoothly as each unique state gets a unique value assigned to it.
- This example makes it clear that the design of the heuristic plays a vital role in the search process, as the whole search is carried out by considering the heuristic values as the basis for selecting the next state to be explored.
- The state having the most promising value for reaching the goal state will be the first candidate for exploration; this continues till we find the goal state.
(b) Global heuristic :
- For each block that has the wrong support structure : -1 to every block in the support structure.
- This leads to a discussion of a better heuristic function definition.
- Is there any particular way of defining a heuristic function that will guarantee a better performance in the search process??
  1. It should generate a unique value for each unique state in the search space.
  2. The values should be a logical indicator of the profitability of the state in order to reach the goal state.
  3. It may not guarantee to find the best solution, but almost always it should find a very good solution.
  4. It should reduce the search time, specifically for hard problems like the travelling salesman problem where the time required is exponential.
The main objective of a heuristic is to produce a solution in a reasonable time frame that is good enough for solving
the problem, as it’s an extra task added to the basic search process.
The solution produced by using heuristic may not be the best of all the actual solutions to this problem, or it may
simply approximate the exact solution. But it is still valuable because finding the solution does not require a
prohibitively long time. So we are investing some amount of time in generating heuristic values for each state in.
search space but reducing the total time involved in actual searching process.
Do we require to design heuristic for every problem in real world? There is a trade-off criterion for deciding whether
to use a heuristic for solving a given problem. It is as follows.
o Optimality : Does the problem require us to find the optimal solution, if there exist multiple solutions for the same?
o Completeness : In case of multiple existing solution of a problem, is there a need to find all of them? As many.
heuristics are only meant to find one solution.
o Accuracy and precision : Can the heuristic guarantee to find the solution within the precision limits? Is the error
bar on the solution unreasonably large?
o Execution time : Is it going to affect the time required to find the solution? Some heuristics converge faster than.
others. Whereas, some are only marginally quicker than classic methods.
In many Al problems, it is often hard to measure precisely the goodness of a particular solution. But still it is important
to keep performance question in mind while designing algorithm. For real world problems, it is often useful to
introduce heuristics based on relatively unstructured knowledge. It is impossible to define this knowledge in such a
way that mathematical analysis can be performed. 3
2.15 Best First Search
2.15.1 Concept

- In depth first search, a single path is followed at a time and gets expanded, but all competing branches are not; and breadth first search never gets trapped on dead ends. If we combine these properties of both DFS and BFS, it would be "follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one". This is what Best First search is..!!
- Best-first search is a search algorithm which explores the search tree by expanding the most promising node chosen according to the heuristic value of nodes. Judea Pearl described best-first search as estimating the promise of node n by a "heuristic evaluation function f(n) which, in general, may depend on the description of n, the description of the goal, the information gathered by the search up to that point, and, most important, on any extra knowledge about the problem domain".
- Efficient selection of the current best candidate for extension is typically implemented using a priority queue.
- Fig. 2.15.1 depicts the search process of Best first search on an example search tree. The values noted below the nodes are the estimated heuristic values of the nodes.
Fig. 2.15.1 : Best first search tree expansion scenario
1. OPEN initially contains only the start node, evaluated by the heuristic function; CLOSED is empty.
2. Remove the best node from OPEN, call it n, and add it to CLOSED.
3. If n is the goal state, backtrack the path to n through the recorded parents and return the path.
4. Create n's successors.
5. For each successor do :
   a. If it is not in CLOSED and it is not in OPEN : evaluate it, add it to OPEN, and record its parent.
   b. Otherwise (it has been generated before) : if this new path is better than the previous one, change its recorded parent, and
      i. If it is not in OPEN, add it to OPEN.
      ii. Otherwise, adjust its priority in OPEN using this new evaluation.
6. Go to step 2.
- This Best First Search algorithm simply terminates when no path is found. An actual implementation would of course require special handling of this case.
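A simplified Python sketch of the OPEN/CLOSED scheme above (it omits the re-parenting refinement of step 5(b)); the graph and heuristic estimates are illustrative assumptions.

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedily expand the node with the lowest heuristic value h(n)."""
    open_list = [(h[start], start)]            # OPEN: priority queue keyed on h(n)
    parents = {start: None}
    closed = set()
    while open_list:
        _, node = heapq.heappop(open_list)
        if node == goal:                       # backtrack through recorded parents
            path = []
            while node is not None:
                path.append(node); node = parents[node]
            return path[::-1]
        if node in closed:
            continue
        closed.add(node)
        for succ in graph.get(node, []):
            if succ not in closed and succ not in parents:
                parents[succ] = node           # record parent on first evaluation
                heapq.heappush(open_list, (h[succ], succ))
    return None                                # no path found

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'G'], 'D': ['G']}
h = {'A': 6, 'B': 5, 'C': 4, 'D': 2, 'G': 0}
print(best_first_search(graph, h, 'A', 'G'))   # ['A', 'C', 'G']
```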
Completeness : Not complete, may follow infinite path if heuristic rates each state on such a path as the best option.
Most reasonable heuristics will not cause this problem however.
Optimality : Not optimal; may not produce optimal solution always.
Time Complexity : Worst case time complexity is still O(b^m), where m is the maximum depth of the search tree.
Space Complexity : Since it must maintain a queue of all unexpanded states, the space complexity is also O(b^m).
A greedy algorithm is an algorithm that follows the heuristic of making the locally optimal choice at each stage with
the hope of finding a global optimum.
When Best First Search uses a heuristic that leads to goal node, so that nodes which seems to be more promising are
expanded first. This particular type of search is called greedy best-first search.
In greedy best first search algorithm, first successor of the parent is expanded. For the successor node, check the
following :
1. If the successor node's heuristic is better than its parent, the successor is set at the front of the queue, with the
parent reinserted directly behind it, and the loop restarts.
2. Else, the successor is inserted into the queue, in a location determined by its heuristic value. The procedure will
evaluate the remaining successors, if any of the parent.
In many cases, greedy best first search may not always produce an optimal solution, but the solution will be locally
optimal, as it will be generated in comparatively less amount of time. In mathematical optimization, greedy algorithms:
solve combinatorial problems.
For example, consider the travelling salesman problem, which is of high computational complexity and which works well with a greedy strategy as follows. Refer to Fig. 2.15.2. The values written on the links are the straight line distances between the nodes. The aim is to visit all the cities A through F with the shortest distance travelled.
Let us apply a greedy strategy for this problem with the heuristic, "At each stage visit an unvisited city nearest to the current city". Simple logic... isn't it? This heuristic need not find the best solution, but it terminates in a reasonable number of steps, whereas finding an optimal solution typically requires unreasonably many steps. Let's verify.
Fig. 2.15.2 : Travelling Salesmen Problem example
- Being a greedy algorithm, it will always make the locally optimal choice. Hence it will select node C first, as it is found to be the nearest unvisited node from node A, and then the path generated will be A → C → D → B → E → F with the total cost = 10 + 18 + 5 + 25 + 15 = 73. By observing the graph one can find the optimal path and the optimal distance the salesman needs to travel. It turns out to be A → B → D → E → F → C, where the cost comes out to be 68.
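A small Python sketch of the "visit the nearest unvisited city" strategy; the distance table below is an illustrative assumption, not the values of Fig. 2.15.2.

```python
def greedy_tsp(dist, start):
    """Nearest-neighbour heuristic: at each stage visit the unvisited city
    nearest to the current city. `dist` is a symmetric distance table."""
    cities = set(dist)
    tour, total = [start], 0
    current = start
    while len(tour) < len(cities):
        # Locally optimal choice: the cheapest next city from here.
        nxt = min((c for c in cities if c not in tour), key=lambda c: dist[current][c])
        total += dist[current][nxt]
        tour.append(nxt)
        current = nxt
    return tour, total

d = {'A': {'B': 2, 'C': 9, 'D': 10, 'E': 7},
     'B': {'A': 2, 'C': 6, 'D': 4, 'E': 3},
     'C': {'A': 9, 'B': 6, 'D': 8, 'E': 5},
     'D': {'A': 10, 'B': 4, 'C': 8, 'E': 6},
     'E': {'A': 7, 'B': 3, 'C': 5, 'D': 6}}
print(greedy_tsp(d, 'A'))   # (['A', 'B', 'E', 'C', 'D'], 18)
```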
2. Optimality : It’s not optimal; as it goes on selecting a single path and never checks for other possibilities.
3. Time Complexity : O(b^m), but a good heuristic can give dramatic improvement.
2.16 A* Search
MU - May 13, Dec. 13, May 14, Dec. 14, May 15
Q. A* is optimally efficient.
Q. A* algorithm with example.
2.16.1 Concept
- A* (pronounced "A star") (Hart, 1972) search method is a combination of branch and bound and best first search, combined with the dynamic programming principle.
— It’s a variation of Best First search where the evaluation of a state or a node not only depends on the heuristic value of
the node but also considers its distance from the start state. It’s the most widely known form of best-first search.
A* algorithm is also called as OR graph / tree search algorithm.
— In A* search, the value of a node n, represented as f(n) is a combination of g(n), which is the cost of cheapest path to
reach to the node from the root node, and h(n), which Is the cost of cheapest path to reach from the node to the goal
node. Hence f(n) = g(n) + h(n).
- As the heuristic can provide only the estimated cost from the node to the goal, we can represent h(n) as h*(n); similarly, g*(n) can represent the approximation of g(n), which is the distance from the root node observed by A*, and the algorithm A* will have
  f*(n) = g*(n) + h*(n)
— As we observe the difference between the A* and Best first search is that; in Best first search only the heuristic ’
estimation of h(n) is considered while A* counts for both, the distance travelled till a particular node and the
estimation of the distance that needs to be travelled further to reach the goal node; it always finds the cheapest solution.
— Areasonable thing to try first is the node with the lowest value of g*(n) +h*(n). It turns out that this strategy is more
than just reasonable, provided that the heuristic function h*(n) satisfies certain conditions which are discussed further —
in the chapter. A* search is both complete and optimal.
2.16.2 Implementation
i. Remove the node with the lowest value of f from OPEN to CLOSED and call it as a Best_Node.
ii If Best_Node = Goal state then Found = true
iii. else
{
   a. Call the matched node OLD and add it to the list of Best_Node's successors.
   b. Ignore the Succ node and change the parent of OLD, if required :
      - If g(Succ) < g(OLD) then make the parent of OLD to be Best_Node and change the values of g and f for OLD.
      - If g(Succ) >= g(OLD) then ignore.
}
i. Call the matched node OLD and add it to the list of Best_Node's successors.
ii. Ignore the Succ node and change the parent of OLD, if required :
   - If g(Succ) < g(OLD) then make the parent of OLD to be Best_Node and change the values of g and f for OLD.
   - Propagate the change to OLD's children using depth first search.
   - If g(Succ) >= g(OLD) then do nothing.
}
i. Add it to the list of Best_Node's successors.
ii. Compute f(Succ) = g(Succ) + h(Succ).
iii. Put Succ on the OPEN list with its f value.
} /* for loop */
} /* else if */
} /* End while */
If Found = true then report the best path, else report failure.
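A compact Python sketch corresponding to f(n) = g(n) + h(n); it tracks the best g value seen per node instead of the explicit OLD/Succ bookkeeping above. The graph, step costs and heuristic values are illustrative assumptions.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]    # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g                         # cheapest solution if h is admissible
        if g > best_g.get(node, float('inf')):
            continue                               # a cheaper path to node already exists
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2                  # record/refresh the best g value
                heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None

graph = {'S': [('A', 1), ('B', 4)],
         'A': [('B', 2), ('C', 5), ('G', 12)],
         'B': [('C', 2)],
         'C': [('G', 3)]}
h = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}       # assumed admissible estimates
print(a_star(graph, h, 'S', 'G'))   # (['S', 'A', 'B', 'C', 'G'], 8)
```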
A. Underestimation

- f = g + h. Here h is underestimated.

Example :

Fig. 2.16.1 : Underestimation of h (A is expanded to B (1+3), C (1+4) and D (1+5); B to E (2+3); E to F (3+3); E and F are 3 moves away from the goal)

- If we consider the cost of all arcs to be 1, A is expanded to B, C and D, and the 'f' value for each node is computed. B is chosen and expanded to E. We notice that f(E) = f(C) = 5. Suppose we resolve in favor of E, the path currently being expanded. E is expanded to F. Expansion of F is stopped as f(F) = 6, so we will now expand node C.
- Hence by underestimating h(B), we have wasted some effort but eventually discovered that B was farther away than we thought. Then we go back and try another path, and will find the optimal path.
B. Overestimation
Here h is overestimated, that is, the value generated for each node is greater than the actual number of steps required to reach the goal node.
Example :
Fig. 2.16.2 : Overestimation of h (A is expanded to B, C (1+4) and D (1+5); B to E (2+2); E to F (3+1); F to G (4+0))
- As shown in the example, A is expanded to B, C and D. Now B is expanded to E, E to F and F to G, giving a solution path of length 4. Consider a scenario where there is a direct path from D to G giving a solution path of length 2. This path will never be found because of overestimating h(D).
Thus, some other worse solution might be found without ever expanding D. So by overestimating h, one cannot
guarantee to find the cheaper path solution.
2.16.5 Admissibility of A*
MU - Dec. 12, May 14
— Asearch algorithm is admissible, if for any graph, it always terminates in an optimal path from initial state to goal
state, if path exists. A heuristic is admissible if it never overestimates the actual cost from current state to goal state.
Alternatively, we can say that A* always terminates with the optimal path in case h(n) is an admissible heuristic function.
- A heuristic h(n) is admissible if, for every node n, h(n) <= h*(n), where h*(n) is the true cost to reach the goal state from n. An admissible heuristic never overestimates the cost to reach the goal. Admissible heuristics are by nature optimistic, because they think the cost of solving the problem is less than it actually is.
- An obvious example of an admissible heuristic is the straight line distance. Straight line distance is admissible because the shortest path between any two points is a straight line, so the straight line cannot overestimate the actual road distance.
o Theorem : If h(n) is admissible, tree search using A* is optimal.
o Proof : Optimality of A* with an admissible heuristic.
— Suppose some suboptimal goal G, has been generated and is in the fringe, Let n be an unexpanded node in the fringe
such that nis on a shortest path to an optimal goal G. ‘
- Since G2 is suboptimal and h(G2) = 0 (G2 is a goal node), f(G2) = g(G2) > g(G) = f(G). Further, as h is admissible, f(n) = g(n) + h(n) <= g(G) = f(G).
- Hence f(G2) > f(n), and A* will never select G2 for expansion.
2.16.6 Monotonicity
2.16.7 Properties of A*
2.16.8 Example : 8-Puzzle Problem using A*

Fig. 2.16.4 : Solution of 8-puzzle using A* (search tree with f values for each generated state)

- The choice of evaluation function critically determines search results. Consider the evaluation function
  f(X) = g(X) + h(X)
  where h(X) = the number of tiles not in their goal position in a given state X, and
  g(X) = depth of node X in the search tree.
- For the initial node, f(initial_node) = 4.
Open : 3(8 + 11), 6(10 + 10)
Closed : S(15), 4(12 + 4), 5(10 + 6), 1(14 + 3), 2(10 + 7)

Open : 6(10 + 10)
Closed : S(15), 4(12 + 4), 5(10 + 6), 1(14 + 3), 2(10 + 7), 3(8 + 11)
2.16.9 Comparison among Best First Search, A* Search and Greedy Best First Search

Algorithm | Greedy Best First Search | A* search | Best First Search
Completeness | Not complete | Complete | Not complete
Fig. 2.16.5 : Process of SMA* with memory size of 3 nodes
2.17 Local Search Algorithms and Optimization Problems

2.17.1 Hill Climbing
(MU - May 16)
- In depth-first search, the test function will merely accept or reject a solution. But in hill climbing the test function is provided with a heuristic function which gives an estimate of how close a given state is to the goal state.
- In hill climbing, each state is provided with the additional information needed to find the solution, i.e. the heuristic value. The algorithm is memory efficient since it does not maintain the complete search tree. Rather, it looks only at the current state and the immediate level of states.
- For example, if you want to find a mall from your current location, there are n possible paths with different distances to reach the mall. The heuristic function will just give you the distance of each path which reaches the mall, so that it becomes very simple and time efficient for you to reach the mall.
- Hill climbing attempts to iteratively improve the current state by means of an evaluation function. "Consider all the possible states laid out on the surface of a landscape. The height of any point on the landscape corresponds to the evaluation function of the state at that point" (Russell and Norvig, 2003). Fig. 2.17.1 depicts the typical hill climbing scenario, where multiple paths are available to reach the hill top from ground level.

Fig. 2.17.1 : Hill climbing scenario (multiple paths from the start point to the goal (hill top))
- Hill climbing always attempts to make changes that improve the current state. In other words, hill climbing can only advance if there is a higher point in the adjacent landscape.
- Hill climbing is a type of local search technique. It is relatively simple to implement. In many cases where the state space is of moderate size, hill climbing works even better than many advanced techniques.
- For example, when hill climbing is applied to the travelling salesman problem, it initially produces a random combination of solutions having all the cities visited. Then it selects the better route by switching the order in which all the cities are visited at minimum cost.
- There are two variations of hill climbing, as discussed below.
(ii) If it is better than the current state, then make it the new current state.
(iii) If it is not better than the current state, then continue the loop; go to Step 2.
- As we study the algorithm, we observe that in every pass the first node / state that is better than the current state is considered for further exploration. This strategy may not guarantee the most optimal solution to the problem, but it may save upon the execution time.
Algorithm

1. Evaluate the initial state; if it is a goal state, return and quit, otherwise make it the current state.
2. Loop until a solution is found or a complete iteration produces no change to the current state :
   a. SUCC = a state such that any possible successor of the current state will be better than SUCC.
   b. For each operator that applies to the current state, evaluate the new state :
      (i) If it is the goal, then return and quit.
      (ii) If it is better than SUCC, then set SUCC to this state.
   c. If SUCC is better than the current state, set the current state to SUCC.
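A minimal Python sketch of the steepest-ascent loop above; the one-dimensional objective and the +/-1 neighbourhood are toy assumptions used only to show the control flow.

```python
import random

def steepest_ascent_hill_climbing(evaluate, neighbours, start):
    """Keep moving to the best successor until no successor improves on the
    current state; the loop may stop at a local maximum."""
    current = start
    while True:
        succs = neighbours(current)
        if not succs:
            return current
        best = max(succs, key=evaluate)            # best among all the successors
        if evaluate(best) <= evaluate(current):
            return current                         # no improvement possible: stop
        current = best

# Toy example: maximise f(x) = -(x - 7)**2 over integers, moving by +/- 1.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
print(steepest_ascent_hill_climbing(f, step, start=random.randint(0, 20)))  # 7
```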
- As we compare simple hill climbing with steepest ascent, we find that there is a tradeoff between the time requirement and the accuracy or optimality of the solution.
- In the case of the simple hill climbing technique, as we go for the first better successor, time is saved because not all the successors are evaluated; but it may lead to more nodes and branches getting explored, and in turn the solution found may not be the optimal one.
- While in the case of the steepest ascent hill climbing technique, as the best among all the successors is selected for further expansion every time, it involves more time in evaluating all the successors at the earlier stages, but the solution found will almost always be the optimal one, as only the states leading to the hill top are explored. This also makes it clear that the evaluation function, i.e. the heuristic function definition, plays a vital role in deciding the performance of the algorithm.
2.17.1(C) Limitations of Hill Climbing
(MU - May 13, May 14, Dec. 14)
Q. Explain the impact of incorrect design of the heuristic function on the hill climbing techniques.
- Now let's see what can be the impact of an incorrect design of the heuristic function on the hill climbing strategy. Sometimes the algorithm may lead to a position which is not a solution: a state that is going closer to the hill top and is at a height from other parts of the hill, but from which there is no move possible that will lead to a better place on the hill, i.e. no further better states. This will happen if we have reached one of the following three states.
1. Local Maximum : A "local maximum" is a location in the hill, a state better than all its neighbors, but with no next better state which can be chosen for further expansion. In the search tree, it is a state better than all its neighbors but worse than the actual hill top. A local maximum sometimes occurs within sight of a solution. In such cases they are called "foothills".
Fig. 2.17.2 : Local Maximum
- In the search tree, a local maximum can be seen as follows :

Fig. 2.17.5
3. Ridge : A "ridge" is an area in the hill such that it is higher than the surrounding areas, but there is no further uphill path from the ridge. In the search tree it is the situation where all successors are either of the same value or lesser; it is a ridge condition. A suitable successor cannot be found in a simple move.
Fig. 2.17.7
- Fig. 2.17.8 depicts all the different situations together in hill climbing.

Fig. 2.17.8 : State space landscape (objective function vs. state space) showing the global maximum, local maxima and plateau
2.17.2 Simulated Annealing

- The idea is to use simulated annealing to search for feasible solutions and converge to an optimal solution. In order to achieve that, at the beginning of the process, some downhill moves may be made. These moves are made purposely, to do enough exploration of the whole space early on, so that the final solution is relatively insensitive to the starting state. It reduces the chances of getting caught at a local maximum, or plateau, or a ridge.
Algorithm

1. Evaluate the initial state.
2. Loop until a solution is found or there are no new operators left to be applied :
   - Set T according to an annealing schedule.
   - Select and apply a new operator.
   - Evaluate the new state :
     o If it is the goal, quit.
     o ΔE = Val(current state) - Val(new state)
     o If ΔE < 0, make it the new current state.
     o Else make it the new current state with probability e^(-ΔE/kT).
- We observe in the algorithm that, if the next state is better than the current one, it readily accepts it as the new current state. But in case the next state does not have the desirable value, even then it accepts that state with some probability, e^(-ΔE/kT), where ΔE is the positive change in the energy level, T is the temperature and k is Boltzmann's constant.
- The probability of accepting a large uphill move is lower than that of a small one, and it also falls as the temperature decreases. Hence uphill moves are more likely at the beginning of the annealing process, when the temperature is high. As the cooling process starts, the temperature comes down and, in turn, the uphill moves reduce. Downhill moves are allowed any time in the whole process. In this way, comparatively very small upward moves are allowed till finally the process converges to a local minimum configuration, i.e. the desired low point destination in the valley.
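A minimal Python sketch of the annealing loop above. Boltzmann's constant k is folded into the temperature, and the geometric cooling schedule, parameter values and toy objective are assumptions for illustration.

```python
import math
import random

def simulated_annealing(value, neighbour, start, t0=10.0, cooling=0.95, steps=500):
    """Accept better moves always; accept worse moves with probability
    e^(-delta_E / T), where T follows a simple geometric cooling schedule."""
    current, T = start, t0
    for _ in range(steps):
        nxt = neighbour(current)
        delta_e = value(current) - value(nxt)      # positive if the move is worse
        if delta_e <= 0 or random.random() < math.exp(-delta_e / T):
            current = nxt                          # sometimes take a "downhill" move
        T = max(T * cooling, 1e-6)                 # cool down: uphill moves get rarer
    return current

# Toy example: maximise a bumpy one-dimensional function on integers 0..99.
value = lambda x: math.sin(x / 5.0) * 10 + math.cos(x / 13.0) * 5
neighbour = lambda x: min(99, max(0, x + random.choice([-1, 1])))
best = simulated_annealing(value, neighbour, start=random.randint(0, 99))
print(best, round(value(best), 2))                 # a near-maximal state (run-dependent)
```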
2.17.2(A) Comparing Simulated Annealing with Hill Climbing

- A hill climbing algorithm never makes "downhill" moves toward states with a lower value; because of that it can get stuck on a local maximum, and it can be incomplete.
- The hill climbing procedure chooses the best state from those available, or at least one better than the current state, for further expansion. Unlike hill climbing, simulated annealing chooses a random move from the neighbourhood. If the successor state turns out to be better than the current state, then simulated annealing will accept it for further expansion. If the successor state is worse, then it will be accepted based on some probability.
2.17.3 Local Beam Search
- In all the variations of hill climbing till now, we have considered only one node getting selected at a time for the further search process. These algorithms are memory efficient in that sense. But when an unfruitful branch gets explored even for some amount of time, it is a complete waste of time and memory. Also, the solution produced may not be the optimal one.
- In parallel local beam search, the parallel threads communicate with each other, hence useful information is passed among the parallel search threads.
- In turn, the states that generate the best successors say to the others, "Come over here, the grass is greener!" The algorithm quickly terminates exploration of unfruitful branches and moves its resources to where the path seems most promising. In stochastic beam search the maintained successor states are chosen with a probability based on their goodness.
{
    Sort OPEN list;
    Select top W elements from OPEN list, put them in W_OPEN list and empty OPEN list;
    while (W_OPEN is not empty)
    {
        Get NODE from W_OPEN;
        Find SUCCs of NODE, if any, with their estimated costs;
    } // end inner while
}
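A Python sketch of the beam idea above: keep the best W states, pool all their successors, and select the next beam from that pool. As a small convenience the current states are also kept in the pool so the best state found is never lost; the toy objective is an assumption.

```python
import heapq
import random

def local_beam_search(evaluate, neighbours, initial_states, beam_width, iterations=200):
    """Keep only the best `beam_width` states at each step; the successors of
    every state in the beam compete together for a place in the next beam."""
    beam = list(initial_states)
    for _ in range(iterations):
        pool = set(beam)                      # keep current states so the best is never lost
        for state in beam:
            pool.update(neighbours(state))    # successors of the whole beam share one pool
        beam = heapq.nlargest(beam_width, pool, key=evaluate)
    return max(beam, key=evaluate)

# Toy example: maximise f(x) = -(x - 42)**2 over integers with beam width 3.
f = lambda x: -(x - 42) ** 2
step = lambda x: [x - 1, x + 1]
starts = [random.randint(0, 100) for _ in range(3)]
print(local_beam_search(f, step, starts, beam_width=3))   # 42
```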
o It smooths the weight changes and suppresses cross-stitching across the error valley.
o When all weight changes are in the same direction, the momentum amplifies the learning rate, causing a faster convergence.
- Sequential or random presentation : the epoch is the fundamental unit for training, and the length of the training is often measured in terms of epochs. During a training epoch with revision after a particular example, the examples can be presented in the same sequential order, or the examples could be presented in a different random order for each epoch. The random presentation usually yields better results.
:
The randomness has advantages and disadvantages :
o Advantages : It gives the algorithm some stochastic search properties. The weight state tends to jitter around its equilibrium, and may occasionally visit nearby points. Thus it may escape trapping in suboptimal weight configurations. The on-line learning may have a better chance of finding a global minimum than the true gradient descent technique.
Disadvantages : The weight vector never settles to a stable configuration. Having found a good minimum it may
then continue to wander around it.
Random initial state - unlike many other learning systems, the neural network begin in a random state. The network
weights are initialized to some choice of random numbers with a range typically between -0.5 and 0.5 (The inputs are
usually normalized to numbers between 0 and 1 ). Even with identical learning conditions, the random initial weights
can lead to results that differ from one training session to another.
The training sessions may be repeated till getting the best results.
2.17.4 Genetic Algorithm (GA)

- GAs are adaptive heuristic search algorithms based on the evolutionary ideas of natural selection and genetics. As such they represent an intelligent exploitation of a random search used to solve optimization problems. Although randomized, GAs are by no means random; instead they exploit historical information to direct the search for better performance within the search space. The basic techniques of GAs are designed to simulate processes in natural systems necessary for evolution, especially those following the principles of "survival of the fittest" laid down by Charles Darwin.
- Genetic algorithms are implemented as a computer simulation in which a population of abstract representations (called chromosomes, or the genotype or the genome) of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem evolves toward better solutions. The solutions are traditionally represented as binary strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and occurs in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness), and modified to form a new population. The new population is then used in the next iteration of the algorithm.
2.17.4(A) Terminologies of GA
- Gene
Gene is the smallest unit in a genetic algorithm. The gene represents the smallest unit of information in the problem domain and can be thought of as the basic building block for a possible solution. If the problem context were, for example, the creation of a well-balanced investment portfolio, a gene might represent the number of shares of a particular security to purchase.
-— Chromosome
Chromosome is.a series of genes that represent the components of one possible solution to the problem. The
chromosome is represented in computer memory as a bit string of binary digits that can be “decoded” by the genetic |
algorithm to determine how good a particular chromosome’s-gene pool solution is for a given problem. The decoding
process simply informs the genetic algorithm what the various genes within the chromosome represent.
- Encoding
Encoding of chromosomes is one of the first problems to address when starting to solve a problem with GA. Encoding depends on the type of the problem. There are various types of encoding techniques like binary encoding, permutation encoding, value encoding, etc.
— Population
A population is.a pool of individuals (chromosomes) that will be sampled for selection and evaluation. The
performance of each individual will be computed and a new population will be reproduced using standard genetic
operators.
— Reproduction
Reproduction is the process of creating new individuals called off-springs from: the parents population. This new
population will be evaluated again to select the desired results. Reproduction is done basically using two genetic
operators : crossover and mutation. However, the genetic operators used can vary from model to model, there are a
few standard or canonical operators : crossover and recombination of genetic material contained in different parent —
chromosomes, random mutation of data in individual chromosomes, and domain specific operations, such as
migration of genes.
—. Selection
In this process, chromosomes are selected from the population to be the parents for the crossover. The problem is
how to select these chromosomes. According to Darwin’s evolution theory the best ones should survive and create ©
new offspring. There are many methods how to select the best chromosomes, for example roulette wheel selection,
Boltzman selection, tournament selection, rank selection, steady state selection, etc.
— Crossover
Crossover involves the exchange of gene information between two. selected chromosomes. The purpose of the
crossover operation is to allow the genetic algorithm to create new chromosomes that shares positive characteristics el
while simultaneously reducing the prevalence of negative characteristics in an otherwise reasonably fit solution. Types .”
of crossover techniques include single-point crossover, two-point crossover, uniform crossover, mathematical
crossover, tree crossover, etc. ©
- Mutation

Mutation is another refinement step that randomly changes the value of a gene from its current setting to a completely different one. The majority of mutations formed by this process are, as is often the case in nature, less fit than more so. Occasionally, however, a highly superior beneficial mutation will occur. Mutation provides the genetic algorithm with the opportunity to create chromosomes and information genes that can explore previously uncharted areas of the solution space, thus increasing the chances of finding a better solution. There are various types of mutation techniques like bit inversion, order changing, value changing (for value encoding), etc.
2.17.4(C) The Basic Genetic Algorithm

1. [Start] Generate a random population of n chromosomes (suitable solutions for the problem).
2. [Fitness] Evaluate the fitness f(x) of each chromosome x in the population.
3. [New population] Create a new population by repeating the following steps until the new population is complete :
4. [Selection] Select two parent chromosomes from the population according to their fitness (the better the fitness, the bigger the chance to be selected).
5. [Crossover] With a crossover probability, cross over the parents to form a new offspring (children). If no crossover was performed, the offspring is an exact copy of the parents.
6. [Mutation] With a mutation probability, mutate the new offspring at each locus (position in the chromosome).
7. [Accepting] Place the new offspring in the new population.
8. [Replace] Use the newly generated population for a further run of the algorithm.
9. [Test] If the end condition is satisfied, stop, and return the best solution in the current population.
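The steps above can be sketched in Python using roulette-wheel selection, single-point crossover and bit-inversion mutation on a toy "maximise the number of 1s" problem; all parameter values are illustrative assumptions.

```python
import random

GENES = 20                                       # chromosome length (bit string)

def fitness(chrom):
    return sum(chrom)                            # toy objective: count of 1-bits

def select(population):
    """Roulette-wheel selection: better fitness -> bigger chance of being picked."""
    total = sum(fitness(c) for c in population)
    pick, running = random.uniform(0, total), 0
    for c in population:
        running += fitness(c)
        if running >= pick:
            return c
    return population[-1]

def crossover(p1, p2, p_cross=0.8):
    if random.random() < p_cross:
        point = random.randint(1, GENES - 1)     # single-point crossover
        return p1[:point] + p2[point:]
    return p1[:]                                 # no crossover: copy of a parent

def mutate(chrom, p_mut=0.01):
    return [1 - g if random.random() < p_mut else g for g in chrom]   # bit inversion

def genetic_algorithm(pop_size=30, generations=100):
    population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(pop_size)]
    for _ in range(generations):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(pop_size)]  # new population replaces the old one
    return max(population, key=fitness)

best = genetic_algorithm()
print(best, fitness(best))                       # a chromosome close to all 1s
```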
a. Parent chromosomes : Chromosome A and Chromosome B.
b. Child chromosome : the child chromosome is generated after arithmetic crossover, i.e. adding the bits of both parent chromosomes :
   Chromosome C : 9 0 9 9 8 7 8 3 7
c. After applying the mutation for value encoding, i.e. adding or subtracting a small value to selected values (e.g. subtracting 1 from the 3rd and 4th value) :
   Chromosome C : 9 0 8 8 8 7 8 3 7
- It can be observed that the child produced is much better than both parents.
There can be two types of environments in case of multi-agent : Competitive and cooperative.
1. Competitive environment
- In this type of environment, every agent makes an effort to win the game by defeating or by creating superiority over other agents who are also trying to win the game.
— Chess is an example of a competitive environment.
2. - Cooperative environment
— — Inthis type of environment all the agents jointly perform activities in order to achieve same goal.
— Car driving agent is an example of a cooperative environment.
Two player : When there are two opponents/agents playing the game, it is called a 2-player game. To increase the difficulty of the game, intelligence is added to the agents in AI games. Note that, in case of AI games, we must have at least two players (i.e. single player games don't come under the AI games category).
Multi-agent : When there are two or more opponents/agents playing the game, it is called a multi-agent environment, where every agent affects the actions of the other agents. Example : Monopoly.

Non-cooperative environment : When the surrounding environment is not helpful for winning the game, it is called non-cooperative or competitive. Example : Card games.
2.18.2(C) Positive Sum Game

- It is also called a cooperative game. Here, all players have the same goal and they contribute together to play the game. For example, educational games.
- A negative sum game, on the other hand, is also called a competitive game; here, at the most, everybody loses. A real world example of a war suits this category.
2.19 Relevant Aspects of Al Game ; 4
To understand game playing, we will first take look at all appropriate aspects of a game which give overview of the
stages in a game play. See Fig. 2.19.1.
- Accessible environments : Games with accessible environments have all the necessary information handy. For
example : Chess.
|
— Search : Also there are games which require search functionality which illustrates how players have-to search through
possible game positions to play a game. For example : minesweeper, battleships.
— Unpredictable opponent : In Al games opponents can be unpredictable, this introduces uncertainty in game play and
thus game-playing has to deal with contingency/ probability problems. For example : Scrabble.
Fig. 2.19.1 : Relevant aspects of AI games (accessible environments, search, unpredictable opponent)
— Fig. 2.20.1shows examples of two main varieties of problems faced in artificial intelligence games. First type is “Toy
Problems” and the other type is “Real World Problems”.
"Real World
B-queen/n-queen “(NP hard)
Vacuum World - oe Robot Navigation
- Game play follows some strategies in order to mathematically analyse the game and generate possible outcomes. A two player strategy table can be seen in Table 2.20.1.
Table 2.20.1 : Two Player Strategy table
On the basis of how many times Player | or Player Il is winning the game, following strategies
can be discussed.
Equalizing Strategy : A strategy that produces the same average winnings no matter what the
opponent does is called
an equalizing strategy.
Optimal Strategy : If Player I has a procedure that guarantees him at least A(x, y) amount on the average, and Player II has a procedure that keeps her average loss to at most A(x, y), then A(x, y) is called the value of the game, and the procedure each uses to ensure this return is called an optimal strategy or a minimax strategy.
Pure Strategies and Mixed Strategies : It is useful to make a distinction between a pure strategy
and a mixed strategy.
We refer to elements of X or Y as pure strategies. The more complex entity that chooses
among the pure strategies at
random in various proportions is called a mixed strategy.-
(a) Deterministic

- It is a fully observable environment. When there are two agents playing the game alternately and the final results of the game are equal and opposite, then the game is called deterministic.
- Take the example of tic-tac-toe, where two players play the game alternately and when one player wins the game, the other player loses the game.
(b) Probabilistic
- Probabilistic is also called the non-deterministic type. It is the opposite of deterministic games; here you can have multiple players and you cannot determine the next action of a player.
- You can only predict the probability of the next action. To understand the probabilistic type, take the example of card games.
- Another way of classifying games can be based on exact/perfect information or on inexact/approximate information. Now, let us understand these terms.
1. Exact/perfect information : Games in which all the actions are known to every player are called games of exact or perfect information. For example tic-tac-toe or board games like chess and checkers.
2. Inexact/approximate information : Games in which all the actions are not known to the other player (or actions are unpredictable) are called games of inexact or approximate information. In this type of game, a player's next action depends upon who played last, who won the last hand, etc. For example card games like hearts.
- Consider the following games and see how they are classified into the various types of games we have learnt in the above sections, based on the parameters :

Fig. 2.20.2 : (a) Chess board  (b) Deep Blue vs. Garry Kasparov, final position of game 1
Fig. 2.20.3 : Chess game tree
2.20.2(B) Checkers

- Checkers comes under the deterministic and exact/perfect information category. This game is a two person game where both players can see the board positions, so there is no secrecy, and players play one after the other in an order.
- In the 1990's a computer program named Chinook (checkers is also called draughts) was developed which defeated the human world champion Marion Tinsley.

Fig. 2.20.4 : Checkers board
We saw game tree of chess and checkers in earlier section. |
Game tree is defined as a directed graph with nodes and edge. Here nodes indicate positions in a game and edges |
indicate next actions.
- Let us try to understand what a game tree is with the help of the Tic-Tac-Toe example. Tic-Tac-Toe is a 2-player game; it is deterministic and every player plays when his/her turn comes. The game tree has a Root Node which gives the starting position of the game board, which is a blank 3 x 3 grid.
- Say in the given example player 1 takes the 'X' sign. Then MAX(X) indicates that it is player 1's turn. (Remember that the initial move is always indicated with MAX.) It also indicates the board for the best single next move.
- If a node is labelled as MIN, then it means that it is the opponent's turn (i.e. player 2's turn) and the board for the best single next move is shown.
- Possible moves are represented with the help of lines. The last level shows the terminal board positions, which illustrate the end of a game instance. Here, we get zero as the sum of all the payoffs. The terminal state gives the winning board position. Utility indicates the payoff points gained by the player (-1, 0, +1).
- In a similar way we can draw a game tree for any artificial intelligence based game.
Fig. : Partial game tree for Tic-Tac-Toe showing alternating MAX (X) and MIN (O) levels
Terminal state : Indicates that all instances of the game are over.
Utility : It displays a number which indicates if the game was won or lost, or if it was a draw.
- From the Tic-Tac-Toe game's example, you can understand that for a 3 x 3 grid, two player game, where the game tree is relatively small (it has 9! terminal nodes), we still cannot draw it completely on one single page. Imagine how difficult it would be to create a game tree for multi-player games or for games with a bigger grid size.
- Many games have a huge search space complexity. Games have limitations over the time and the amount of memory space they can consume. Finding an optimal solution is not feasible most of the time, so there is a need for approximation.
- Therefore there is a need for an algorithm which will reduce the tree size and eventually help in reducing the processing time and in saving memory space of the machine.
- One method is pruning, where only the required parts of the tree, which improve the quality of the output, are kept and the remaining parts are removed.
- Another method is the heuristic method (it makes use of an evaluation function); it does not require exhaustive search. This method depends upon readily available information which can be used to control problem solving.
Fig. 2.21.1 : Current board position (O's turn)
Next action : 'X'
Fig. 2.21.2

Next action : 'O'
Fig. 2.21.5
Step 4 : With the max (of the min) utility value (payoff value), select the action at the root node using the minimax decision.
Fig. 2.21.6 : Utility values of the terminal board positions

Fig. 2.21.7 : Best move selected at the root
(In case of Steps 2 and 3 we are assuming that the opponent will play perfectly, as per our expectation.)
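The minimax decision illustrated in the steps above can be written as a short recursive Python function; the two-ply game tree and utility values below are illustrative assumptions, not the Tic-Tac-Toe tree.

```python
def minimax(node, is_max, game_tree, utility):
    """Return the minimax value of `node`. Leaves carry utility (payoff) values;
    MAX and MIN levels alternate down the game tree."""
    children = game_tree.get(node, [])
    if not children:                       # terminal state: return its utility
        return utility[node]
    values = [minimax(c, not is_max, game_tree, utility) for c in children]
    return max(values) if is_max else min(values)

# Illustrative two-ply game tree (root is MAX, its children are MIN nodes).
game_tree = {'A': ['B', 'C', 'D'],
             'B': ['b1', 'b2', 'b3'],
             'C': ['c1', 'c2', 'c3'],
             'D': ['d1', 'd2', 'd3']}
utility = {'b1': 3, 'b2': 12, 'b3': 8,
           'c1': 2, 'c2': 4,  'c3': 6,
           'd1': 14, 'd2': 5, 'd3': 2}
print(minimax('A', True, game_tree, utility))   # 3 (the best value MAX can force)
```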
2.21.2 Properties of Minimax Algorithm
2.22 Alpha-Beta (α-β) Pruning

- Pruning literally means cutting off. In game search it resembles clipping a branch in the search tree which is probably not so fruitful.
- At any choice point along the path for MAX, α is considered as the value of the best possible choice found so far, i.e. the highest value. For each node "X", if "X" is worse, i.e. has a lesser value than α, then MAX will avoid it. Similarly we can define the β value for MIN.
making process need not consider each and every
a-B pruning is an extension to minimax algorithm where, decision
node of the game tree.
in making the search
are considered in decision making. Pruning helps
Only the important nodes for quality output
more efficient.
the result remaining parts of
which contribute in improving the quality of
Pruning keeps only those parts of the tree
the tree are removed.
- Consider the following game tree with alternating MAX and MIN levels :
Fig. 2.22.1
Fig. : α-β pruning trace on the game tree of Fig. 2.22.1, with the MAX and MIN nodes annotated by [α, β] intervals such as [-∞, +∞], [-∞, 13], [-∞, 7] and [2, 2] as the tree is traversed
- So in this example we have pruned 2 β and 0 α branches. As the tree is very small, you may not appreciate the effect of branch pruning; but when we consider any real game tree, pruning creates a significant impact on the search as far as time and space are concerned.
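A Python sketch of minimax with α-β pruning; the example tree and utilities are illustrative assumptions (the same values as in the minimax sketch), and on this tree the last two leaves under node C are pruned.

```python
import math

def alphabeta(node, is_max, tree, utility, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning: stop exploring a branch as soon as
    it cannot influence the final decision (alpha >= beta)."""
    children = tree.get(node, [])
    if not children:
        return utility[node]
    if is_max:
        value = -math.inf
        for c in children:
            value = max(value, alphabeta(c, False, tree, utility, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:              # beta cut: MIN will never allow this branch
                break
        return value
    else:
        value = math.inf
        for c in children:
            value = min(value, alphabeta(c, True, tree, utility, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:              # alpha cut: MAX already has a better option
                break
        return value

tree = {'A': ['B', 'C', 'D'], 'B': ['b1', 'b2', 'b3'],
        'C': ['c1', 'c2', 'c3'], 'D': ['d1', 'd2', 'd3']}
utility = {'b1': 3, 'b2': 12, 'b3': 8, 'c1': 2, 'c2': 4, 'c3': 6,
           'd1': 14, 'd2': 5, 'd3': 2}
print(alphabeta('A', True, tree, utility))   # 3, same as plain minimax
```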
Ex. : Apply α-β pruning on the game tree given in Fig. P. 2.22.1.
Fig. P. 2.22.1 : Game Tree
No. of β - cuts = 3

Ex. : Apply α-β pruning on the game tree given in Fig. P. 2.22.2.
Fig. P. 2.22.2
Soln. :
No. of β - cuts = 2

Fig. : A min-max-min game tree with leaf values 10, 5, 7, 11, 12, 8, 5, 12, 11, 9, 8, 7
Review Questions

Q. 2   How are the drawbacks of DFS overcome by DLS and DFID?
Q. 3   Compare and contrast all the un-informed search techniques.
Q. 4   Write a short note on bidirectional search.
Q. 5   Write a note on BFS and Uniform cost search.
Q. 6   Compare and contrast DFS, DLS and DFID.
Q. 7   What are the various informed search techniques? Explain A* with example.
Q. 8   Compare Best First Search and A* with an example.
Q. 9   Write the algorithm of steepest ascent hill climbing and compare it with simple hill climbing.
Q. 10  What are the limitations of hill climbing? How can we solve them?
Q. 11  Write the algorithm for Best first search and specify its properties.
Q. 12  What is the difference between best first and greedy best first search? Explain with example.
Q. 13  What is a heuristic function? What are the qualities of a good heuristic?
Q. 14  Write a short note on simulated annealing and local beam search.
Q. 15  Compare and contrast Simulated annealing with Hill climbing.
Q. 16  How does the definition of the heuristic affect the search process? Explain with a suitable example.
Q. 17  Write a short note on the behavior of A* in case of underestimating and overestimating heuristics.
Q. 18  Discuss admissibility of A* in case of optimality. When should we choose SMA* given options?
Q. 19  Explain the SMA* algorithm with example.
Q. 20 Write short notes on :
  (a) Game types
  (b) Zero-sum game
  (c) Relevant aspects of AI games
  (d) Features of AI games
Q. 22 Give the α-β pruning algorithm with an example and its properties; also explain why it is called α-β pruning.
Q. 24 Apply alpha-beta pruning on the example given in Fig. Q. 24, considering the first node as MAX.
Fig. Q. 24
Knowledge, Reasoning and Planning

Unit III

3.2 First Order Logic : Syntax and Semantics, Knowledge Engineering in FOL, Inference in FOL : Unification, Forward Chaining, Backward Chaining and Resolution
3.3 Planning Agent, Types of Planning : Partial Order, Hierarchical Order, Conditional Order
- Understanding theoretical or practical aspects of a subject is called as knowledge. We can gain knowledge through
experience acquired based on the facts, information, etc. about the subject.
- After gaining knowledge about some subject we can apply that knowledge to derive conclusions about various
problems related to that subject based on some reasoning.
- We have studied various types of agents in chapter 1. In this chapter we are going to see what is “knowledge based
agent”, with a very interesting game example.
- Weare also going to study how do they store knowledge, how do they infer next level of knowledge from the existing
set. In turn, we are studying various knowledge representation and inference methods in this chapter.
Domain-independent algorithms
1. Knowledge level:
specific content.
— Knowledge level is a base level of an agent, which consists of domain-
f ‘ormation about the surrounding environment in which they are working, it does
— In this level agent has facts/in
lementation.
not consider the actual imp
2. Implementation level: .
the data
domain independent algorithms. At this level, agents can recognize
— Implementation level consists of
structures used in knowledge
base and algorithms which use them. For example, propositional logic and
t logic and resolution in this chapter)
resolution. (We will be learning abou
. e choosing any action,
- Knowl
Knowl edge based
edge bas agents are crucial to use in partially observable environments Befor
knowledge along with the current inputs
: from the environment
F
base agents make use of the existing
ledge e based
ki nowledg
Scanned by CamScanner
WF _Alasc (MU-Sem.7-Comp) Knowledge, Reasoning and Plannin
As we have learnt that knowledge base is a set of representations of facts/information about the surroUundin,
environment (real world). Every single representation in the set is called as a sentence and sentences a,
which is
expresses with the help of formal representation language. We can say that sentence is a statement
on language, ?
set of words that express some truth about the real world with the help of knowledge representati
Declarative approach of building an agent makes use of TELL and ASK mechanism. |
o TELL the agent, about surrounding environment (what it needs to know in order to perform some action
TELL mechanism is similar to taking input for a system.
© . Then the agent can ASK itself what action should be carried out to get desired output. ASK mechanism|
similar to producing output for a system. However, ASK mechanism makes use of the knowledge base t
‘ decide what it should do.
TELL and ASK mechanism involve inference. When you run ASK function, the answer is generated with the help,
; knowledge base, based on the knowledge which was added with TELL function previously.
o TELL(K) :Is a function that adds knowledge K to the knowledge base.
© _ASK(K) :Is a function that queries the agent about the truth of K.
An agent carries out following operations: First, it TELLs the knowledge base about facts/information it perceive
with the help of sensors. Then, it ASKs the knowledge base what action should be carried out based on the inpy
it has received. Lastly, it performs the selected action with the help of effectors.
Knowledge based agents can be implemented at three levels namely, knowledge level, logical level an
implementation level.
Implementation level
See ae oS
Knowledge level :
It is the most abstract level of agent implementation. The knowledge level describes agent by saying what
knows. That is what knowledge the agent has as the initial knowledge.
Basic data structures and procedures to access that knowledge are defined in his level. Initial knowledge
knowledge base is called as background knowledge.
Agents at the knowledge level can be viewed as an agent for which one only need to specify what the aget
knows and what its goals are in order to specify its behaviour, regardless of how
it is to be implemented.
For example :A taxi driving agent might know that the Golden Gate Bridge connects San Francisco with
the Mati
county.
Logical level :
At the logical level, the knowledge is encoded into sentences. This level uses some formal language to represef
the knowledge the agent has. The two types of representations we have are propositional logic and first order¢
predicate logic. :
Both these representation techniques are discus
sed in detail in the further sections.
For example: Links(Golden Gate Bridge, San Francisco, Marin County) “tte
Imptementation level :
In implementation level, the physical
representation of | Ogical level sentences is done. This level also describ
data structures used in knowledge base and algorithms t
hat used for data manipulation: : £4 Dene
Scanned by CamScanner
i 74
AlaSC. (MU Som Comp) :
3-3 Knowledge, Reasoning and Planning
- Forexa mpl
: Links(G e
olden
Sr Sn
ee el a Gate Brid
“Rin etion KB mney Be, San Francisco , Marin Co unty)
~ Agent (percept) retu ts ani action! | Sarge eens
“static: KB, a knowledge base
_ , @ counter, initially 0,
indicating time
oe Tew (KB. MAKE — Penceyy
“SENTENCE (perce
pt, t)) .
ee action ASk (KB, MAKE-AcTi
; on-Quey ty(t))
LL(KB, MAKE-ACTION “SENTENC
pers
E (actio n,t))
pe ted
rem
returns action
RST
© WUMPUS is a monster who lives in one of the rooms of the cave. WUMPUS eats the player (agent) if player
(agent) comes in the same room. Fig. 3.2.1 shows that room (3, 1) where
WUMPUS js Staying.
0 Player (agent) starts from any random position in cave and has to explore the c ave, We are sta
rting from (1, 1)
position.
There are various sprites in the game like pit, stench, breeze
, gold, and arrow. Every sprite has some feature, Let'
Understan this s
d one-by-one :
9 Few rooms have bottomless pits, which trap the player (agent)
if he comes to that room. You can see in the
Fig. 3.2.1 that room (1,3), (3,3) and (4,4) have bottomless pit. Note that
even WUMPUS can fall into a pit.
© Stench experienced In a room, which has a WUMPUS in its neighb
ourhood room, See the Fig. 3.2.1, here room
(2,1), (3,2) and (4,1) have Stench.
° Breeze js experienced in a room, which has a pit In Its
nelghbourhood room, Fig. 3.2.1 shows that room (1,2),
(1,4), (2,3), (3,2), (3,4) and (4,3) consists of Breeze,
° Player (Agent) has arrows and he can shoot these arrows in straight
line to kill WUMPUS.
“— ‘.
Scanned by CamScanner
3-4
WF _Algsc (MU-Sem. 7-Comp) ws that room
ists of gold , this roo m glitters. Fig.3.2.1 sho
° One of the rooms cons ch are; Bump and scream. A bump jig}
ept two types of percepts whi
s player (agent) can acc
Apart from above feature: a sad scream ¢
walks into a wall. While
generated if player (agent )
is killed.
‘sO s
DAA AAEMA Breeze
AStenchws tt
NY C
TT a
cBreeze" yA i
I< xan
DPAAAAM Breeze
ree
3 Ve “Stench COS
Nyy
7
Gold)
DAAABAMA yu »
AStenchw
2 wStenchys cBreeze"
% san yu
Start
1 2 3 4
— Anagent receives percepts while exploring the rooms of cave. Every percepts can be represented with the help of five
element list, which is {stench, breeze, glitter, bump, scream]. Here player (agent) cannot perceive its own location.
— If the player (agent) gets percept as [Stench, Breeze, None, None, None]. Then it means that there is a stench anda
breeze, but no glitter, na bump, and no scream in the WUMPUS world at that position in the game. .
— Let's take a look at the actions which can be performed by the player(agent) in WUMPUS World :
‘9 Move: To move in forward direction,
o Turn: To turn right by 90 degrees or left by 90 degrees,
oO Grab: To pick up gold if it is in the same room as the player(agent),
o Shoot : To Shoot an arrow in a straight line in the direction faced
by the player (agent).
- These actions are repeated till the player (agent) kills the
WUMPUS or if the player (agent) is killed. If the WUMP
US is
killed then it is a winning condition, else if the player(agent
) is killed then itis a losing condition and the game is over.
Scanned by CamScanner
ward and punishment Points are assigned t ~ Knowledge, Reasoning and Planning
0 a player (A °
as follows: Bent) based on the actions it performs. Points can be given
o 100 points are awarded ifplayer (ag
ent ) come
wae S Out of the cave with the gold.
o 1 pointis taken away for every act
onta ken
o 10 points are taken away Y j if the
arrow Is used
4, Performance measure
- +100 for grabbing the gold and coming back to the starting position
- -—200if the player(agent) is killed. .
— -1 per action,
2. Environment
Empty Rooms.
(a ss um in g 4 robotic agent)
4. Effectors
left, right
— Motor to move
gold
~ Robot arm to grab the
arrow
m to shoot the g characteristics
+ .:
Robot mechan!
~ is
owin
The WUMPUS world agent 2 has f°pe! terministic; Episodic .
.
ate g. _ Single agen
1. Fully observable Discr'
4. ‘St ati 5. ,
ete
Static Pu
Scanned by CamScanner
SF Al&SC (MU-Sem.7-Comp) ___ 3-6 _ Knowledge, Reasoning and Planni
B -—Breeze
, -| OK - Safe square
' P - Pit
24 2.2 2.3 2.4
S$ - Stench
V - Visited
11 1.2 1.3 1.4 W -Wumpus
OK OK
— The knowledge base initially contains only the rules (facts) of the WUMPUS world environment.
Step 1: Initially the player(agent) is in the room (1,1). See Fig. 3.2.2(a).
The first percept received by the player is [none, none, none, none, none]. (remember percept consists of
[stench, breeze, glitter, bump, scream])
Al - t
41 4.2 4.3 4.4 ae
B - Breeze
- OK - Safe square
2.1 2.2 2.3 2.4
P - Pit
- Asroom (1,1) is visited you can see “V” mark in that room. The player receives following percept :[none, breeze, none
none, none]. ;
- _ As breeze percept is received room (1,2) is marked with “B” and it can be predicted that there is a bottomless pit!
the neighboring room. ‘ ‘ ’
— You can see that room (1,3) and room (2,2) is marked with “P?”. So raom (1,3) and (2,2) is not safe to move in Th
player should return to room (1,1)and try to find other, safe room to move to.
Scanned by CamScanner
W_AI&SC (MU-Sem. 7-Comp)
3-7 Knowledge, Reasoning and Planning
—<——
figure, Step 3:
ee
41 =
42 43 44
Ls
3. 1 =
3.2 33 34
i 22 |ag 2A
14 -
teem
Vv
12 —
B
13
[|
OK OK
Fig. 3.2.2(c) WUMPUS world with player moving back to room (1,1) and then moves to other safe room (2,1).
As seer: in Fig.3.2.2(c). Player in now in room (2,1), where it receives a percept as follows : [stench, none, none, none,
none] which means that there is a WUMPUS in neighboring room (i.e. either room (2,2) or (3,1) has WUMPUS).
NSists of
As we did not get breeze percept in this room, we can understand that room (2,2) cannot have any pit and from step 2
~ we can understand that room (2,2) cannot have WUMPUS because room (1,2) did not show stench percept.
Thus room(2,2) is safe to move in.
Step4: Player receives [none, none, none, none, none] percept when it comes to room (2,2). From Fig. 3.2.2(d) you
can understand that room (2,3) and room (3,2) are safe #0 move in.
41 42 4.3 4.4
that room (3,1), of this game is to grab the gold and go back to the startin
g
the pla yer gra bs the gold first. As +the aim
gold. So,
3 in. Thus WUMPUS.
being killed by the
position, without
= ‘ ~~ WF Tettnoateays
Publications
Scanned by CamScanner
Knowledge, Reasoning and Plann,
WF algsc (MU-Sem. 7-Comp)
41 4.2 4.3 4.4
P?
Step 6: As can be seen in Fig. 3.2.2(f). We will go from room (2,2) to room (2,1) and from room (2,1) to room (1,1).
Thus we won the WUMPUS World game!!!
41 4.2 4.3 4.4
Ww?
P?
3.1 3.2 3.3 3.4
WwW? W?
P? P?
2.1 2.2 2.3 24
A
vy AFA A
OK V
Yh 1.2 134. [14
V B
OK OK V
Fig.3.2.2(f) ; WUMPUS world with player moving back
to room (1,1) with gold
3.3 Logic
Logic can be called as reasoning which is carrled out oriti
sar eview based on strict rule of validity to perform a
specified task, a
In case of intelligent systems we say that any of logic's particu
lar form cannot bind logical representation and
_, feasoning, they are independent of any
particular form of logic.
Make a note that logic Is beneficlal only If the
knowledge Is represented in small extent and
~ Fepresented
in large quantity the logic when knowledge
Is not considered valuable.
Fig. 3.3.1 depicts that sentences
are physical confi gurations of an agent, also It is shows
This means that reasoning is a proc that sentences need sentence:
ess of forming new physical configurations
Wi TechK nowledge
ontliatians
from old ones.
Scanned by CamScanner
—~—_=—=_——=_{_{_ {ee eeeerrreteete
ee |
___ Representation 2 J
Real world Bfn- - <
8 B
1
Probabilistic logic
e using Rules
3.4 Representation of Knowledg
Scanned by CamScanner
Al&SC (MU-Sem. 7-Comp) _ Knowledge, Reasoning and Plannin,
(a) Logical representation
— The logical representations are mostly concerned with truth of statements regarding the world.
These statements are f
most generally represented using statements like TRUE or FALSE.
.
— Logic is successfully used to define ways to infer new sentences from the existing ones. There
are certain logics that
are used for the representation of information, and range in terms of their expressiveness.
There are logic that are 4
more expressive and are more useful in translation of sentences from natural languages into the logical ones.
There
are several logics that are widely used :-
1. Propositional logic : These are restricted kinds that make use of propositions (sentences that are either
true or
false but not both) which can be either true or false. Proposition logic is also known as propositio
nal calculus;
sentential calculus or boolean algebra.
All propositions are either true or false, For example: .
(i) Leavesare green (ii) Violets are blue.
2. First Order Predicate Logic : These are much more expressive and make use of variables, constants, predicates,
functions and quantifiers along with the connectives explained already in previous section.
3. Higher Order PredicateLogic : Higher order predicate logic is distinguished from first order predicate logic by
using additional quantifiers and stronger semantics.
4. Fuzzy Logic : These indicate the existence of in between TRUE and FALSE or fuzziness in all logics.
5. Other Logic : These include multiple valued logic, modal logics and temporal logics.
One of the widest used methods to represent knowledge is to use production rules, it is also known as IF-THEN rules.
Syntax:
Example :
Scanned by CamScanner
AlgsC MU-Sem. 7-Comp) ' nin
-
a Knowledge, Reasoning and FIST
mantic networks
( <)
qhese represent knowledge in the form of gra
: Phical
,
;
since graphs are easy to be
ar
stored inside programs as
e networks
they are concisely represented by nodes and dges.
.
a semantic network basically comprises and
of nodesthat are named and represent concepts,
jabelled links representing; relations between conc pts. Nodes re S.
ror example, the semantic network in Fig, 3.4.1. expresses th e knowle
Sates acta
dge to represe nt the following data :
Tomis a cat.
8
Tom is a Mammal.
Gc
Fish is an Animal.
G0
Fig. 3.4.1
has a finite,
network, introduced by John Sowa,
Graph : It is a rec ent scheme used fo r semantic
Conceptual previous
- cep ts or conceptual relations. It differs from the
bipart ite graph. The no des represent either con cat color is grey can
connected, Brothers or
labell e d arcs . For exa mpl e: Ram, Laxman an d Bharat are
method that it does not use
be represented as shown.
Fig. 3.4.2
(d) ) F Frame representation Minsky in 1975 They are mostly used when the task becomes quite complex
7 :
irement of
This concept was introduced by Marvin jon. More structured the system becomes more would be the requ of
es that consists of a collection
and needs more structured representatio | Generall y frames are record like structur
using frames which would prove beneficial. t values.
names and values (subfields) called as facets. Facets can have names
Slots or attributes and the corresponding _
he slots have rson Ram,
Slots can be of any size and type
: Tsho i n In the Fig . 3.4.2 for a pe Ww TechKnowledgé
or numbers too. A simple frame is wn Pudlications
Scanned by CamScanner
Al&SC (MU-Sem. 7-Comp) satire wodse:. Reasoning and Plannif,
Oo (Ram)
( PROFESSION (VALUE professor)
0
(AGE(VALUE 50))
0
(WIFE(VALUE sita))
o
CITY(VALUE banaras))
QQ
(STATE(VALUE mh))
O09
(ZIP(VALUE400615))
o
3.4.1 Ontology
Ontology is study about what kind of things or entities exist in the universe. In Al, ontology is the specification of
conceptualizations, used to help programs and humans to share knowledge about a particular domain. In turn,
ontology is a set of concepts, like entity, relationships among entities, events that are expressed in a uniform way in
order to create a vocabulary for information exchange.
An ontology should also enable a person to verify what a symbol means. That is, given a concept, they want to be able
_ to find the symbol, and, given the symbol, they want to be able to determine what it means.
— Typically, it specifies what types of individuals will be modelled, specifies what properties will be used,
and gives some
axioms that restrict the use of that vocabulary. Ontologies are usually written independently
of a particular
application and often involve a community to agree on the meanings of symbols
~ For example: Consider a map showing hotels, railway station, buildings, schools, hospitals
in a particular locality. In
this map the symbols used to indicate these entities are enough to describe them.
Hence the community who knows
the meaning of these symbols can easily recognize it. Hence that becomes
ontology of that map. In this ontology, it
may define a building as human-constructed artifacts.
It may give some restriction on the size of buildings so that shoeboxes cannot be
buildings or that cities cannot be
buildings. It may also state that a building cannot be at two geographically
dispersed locations at the same time.
Scanned by CamScanner
MU-Sem.7-Comp) a
Asscl smitefico thenrshisetbenish
;
3-13 Knowledge,
oo
Reasoning and Planning
ifA isa , e.
Tabl
able 3.5.1: Bic can be seen in the Table 3.5.1. .
Connectives used in Propositional logic
A !
_ _ AB Conjunction
ee
eee _
Vv Or 7
- - AvB Disjunction
———
= = — = _
a Not
- “A Negation
- To define logical connectives truth tables are used. Truth table 3.5.2 shows five logical
connectives.
Table 3.5.2
Bo AB
False false false False true true true
False true false True true true false
- . Jake an example, where A a B, i.e. Find the value of A A B where A is true and B is false. Third row of the Table 3.5.2
shows this condition, now see third row of the third column where, A A B shows result as false. Similarly other logical
connectives can be mapped in the truth table.
35.2 Semantics
~ World is set of facts which we want to represent to form propositional logic. In order to represent these facts
Propositional symbols can be used where each propositional symbol’s interpretation can be mapped to the real world
feature,
~ Semantics of a sentence is meaning
of a sentence. Semantics determine the interpretation of a sentence. For
in following manner:
example :You can define semantics of each propositional symbol
d;
Ameans “It is hot”
2 B means “It is humid”, etc.
Sentence , idered true when its interpretation in the real world is true. Every sentence results from a finite
S considered tr
Number of y f the rules. For example, if A and B are sentences then (A B), (A v B), (B > A) and (A <> B) are
sages of the . (
UF Techknowiedys
Pudticati
.
ons —
Scanned by CamScanner
Al&SC (MU-Sem. 7-Comp)
- If truth values of all symbols in a sentence are given then it can be evaluated for determining its truth value (ie, we.
can say if it is true or false). 3
3.5.3 What is Propositional Logic ?
- AABand BAA should have same meaning but in natural language words and sentences may have different Meanings
Say for an example,
1. — Radha started feeling feverish and Radha went to the doctor.
2. Radha went to the doctor and Radha started feeling feverish.
— Here, sentence 1 and sentence 2 have different meanings.
— In artificial intelligence propositional logic is a relationship betwee
n the truth value of one statement to that of the
truth value of other statement.
Scanned by CamScanner
7-Comp)
3.$C (Mu-Sem.
ion —
y and Contradict :
qautolod ions. For example
means valid sentence. It t isi a sentence which is true for all the interpretat
autolo
“it is hot or It is not hot” ions. For
i nterpretat
-
ces of the
“N= Cc’.
o wi ng th e se t of senten
can be create by logically foll
says that new sentence
jn short inference rule
se.
knowledge ba
Table 3.5.3 : Inference Rules
Modus Ponens
Substitution
Chain rule
AND introduction
Transposition
ented as : KB |-Q.
as: KB |= Qa nd Derivati on is repres
ented
Entailment is repres
of inference rules :
There are two types
1, Sound inference
2. Complete inference
cols
dge base ” using given set of proto
Sound inference s de ri ved from the knowle
_
TechKnowledge
Pusticarions
Scanned by CamScanner
Al&SC(MU-Sem.7-Comp)
In general,
For atomic sentences pj, p;', and q, where there is a substitution © such that «
SUBST (@,p) = SUBST (©, p;) for alli,
Example :
A : itis rainy.
Modus Tollens
When Bis known to be false, and if there is a rule “if A, then B,” it is valid to conclude that A is also false.
2. Complete inference
— Complete inference is converse of soundness. Completeness property of inference says that, if “X is entailed by
knowledge base” then “X can be derived from the knowledge base” using the inference protocols.
— Completeness property can be represented as: “ If KB |= Q then KB |- Q”.
Clauses are generally written as sets of literals. Horn clause is also called as horn sentence. In a horn clause a
conjunction of 0 or more symbols is to the left of “>” and 0 or 1 symbols to the right. See following formula :
A,AA,AA; «.. A,B, where n>=0 and m is in range{0,1}
There can be following special cases in horn clause in the above mentioned formula :
o Forn=0 and m=1: A (This condition shows that assert A is true)
o Forn>0 and m=0:AA Bo (This constraint shows that both A and B cannot be true)
o Forn=0and m=0: (This condition shows empty clause)
Conjunctive normal form is a conjunction of clauses and by its set of clauses it is determined up to equivalence. For a.
horn clause conjunctive normal form can be used where, each sentence is a disjunction of literals with at most one
non-negative literal as shown in the following formula : —-A,v3A,V—A; ... VAA,VB
This can also be represented as : (A > B)= (—A vB)
Horn sentences can be used in first order logic. Reasoning processes is simpler with horn clauses. Satisfiability ofa
propositional knowledge base is NP complete. (Satisfiability means the process of finding values for symbols which will
make it true).
For restricting knowledge base to horn sentences, satisfiability is in A. Due to this reason, first order logic horn ©
sentences are the basis for prolog and data log languages.
Let's take one example which gives entailment for horn formulas. —
Find out if following horn formula is satisfiable?
(true—X) A (XAY—Z) A (Z |W) A (ZAW-> false) A (true Y)
Scanned by CamScanner
reer
eer
;
mp)
AlgSC (MU-Sem. 7-Co nnit
: :
3-1
Knowledge, Reasoning and Pla
rom the above equY,ation,
we entail if that
E true s : if the que TY atom j et Equation shows that there are clauses which state
-
assign X and Y to true y
tne w ane r Sowe mican alue (i.e. . true 3X AY).
that all premis e. After that we
Then we can say pre es of XAY—»z a orm ation we can assign Z to tru
re true , bas ed on this inf
vansee all premises of ZW are true, so truwee can assissign W tot rue,
of ZAW —false are horn
As now all premises ry atom is false. Therefore, the
fro .
Mm this we can entail that the que
,
aimee
.
iable.
fo mula is not satisf
45.7 propositional Theorem Proving
A
sequenc e of sentences form a “Proof”, se mis e or it can be a sen ten ce d erived from earlier
the inf entence can be pre to prove is called as a query OF
@ goal.
sentences in the proof based on ere nce rul e. Wh at ev er we wan t
query/goal is the last sentence of the theorem in the proof OT.
Take Example of the “weather problem” which we have seen ab ove.
5) “It’s raining”
te n ce derived from 4 and
6. | RN Modusponens(4,5)(sen
ositional Logic
35.8 Advantages of Prop language.
||
ple kn ow le dge representation
Propositional log ic is a sim based problems.
artificial intelligence
- |
olving some
a nd ef fi ci en t technique for s r Logic (FOL), etc
.
- Itis suffic ient
hi gh er log ics like First O rde
the foundation for
~ Propositional logic forms decidable.
and reas' oning is
~ Propositional logic is NP complete
d by PL.
can be illustra te
~The process of inference
ositional Logic
45.9 Disadvantages of Prop ence Pr oblems.
rt if icial intellig
express complex a t WUMPUS hunter problem.
Propositional logic is cannot
~
ma ll wo rl ds , think abou
for even $5 problems, it can be very
impr actical complex artificial intelligence
~
Propositional logic can be i c to ex pr es s
try to ma ke use Of P ropositional log .
~ Even if we .
Wordy and lengthy. janguaBe
because :
l e : If there are entiti
es like : Priya,
t a t i o n ’. Fo r e x a m p
PLis a weak knowledge repre se n
d entity is “indivi du al
if the use
With PL it is hard to ide nt TechKnowledge
ify
-© Publications
Scanned by CamScanner
AI&SC (MU-Sem. 7-Comp) 3-18 pen itoige, Reasoning anid Plann
© PL cannot directly represent properties of individual entities or relations between individual eNtitieg Fat
example, Pooja is tall.
© PL cannot express specialization, generalizations, or patterns, etc. For example: All rectangles have 4 Sides.
Scanned by CamScanner
aigsc (MU-Sem. ..
‘y’ Reasoning and Planning
uantifier Knowledge,
universal Q
pronounced as “for all” and it is applicable
to all the Variable
wyx A” means A is true for every replacem s in the predicate
ent of x,
Example: “Every Gorilla iIs Black”
can be repr
esented as
“/x (Gorilla(x) > Black(x)) :
ality : term; == termais true under a given interpretation if and only if tennene termyrefer tot the same obje
1. | PLeannot represent small worlds like vacuum cleaner | FOL can very well represent small worlds’ problems.
world.
= 2
Ps2 | PLis a weak knowledge representation
. language — | F OL is a strong way of representing language.
3. | Propositi I La e uses propositions in which FOL uses predicated which involve constants, variables,
Positiona nguag ions, relations.
|__| the complete sentence is denoted by a symbol. Functi
«| PLcanne erties of individual FOL can directly represent properties of individual entities
-| gp cannot directly represent oh idual entities. e.g. | or relations between individual entities using individual
“ntities or relations between indivi predicates using functions. E.g. Short(Meera)
| Meera is short. . ; hat nace
3 ly A fla eralizations, or | FOL can express specialization, generalizations, or
"| PL cannot express specialization,gen patterns, etc. Using relations.
Patterns, etc
:a . f sides(rectangle,
si 4
&B. Allrectangles have 4 sides. E.g. NO_Of_ wee : )
6 icherens FOL is a higher level logic. .
: isa
tationj i
level logic. esent complex .
FOL can represent complex statements.
'S Not sufficiently expressive to repr
| Statements.
ne er :
oo“ — Wines
Scanned by CamScanner
‘ 3-20 Knowled ge, Reasoning and Plann
W TachKnowledga
Fublications
Scanned by CamScanner
—_—4, issi were sold to Nono by Col
Allthe missiles 3.2 1 Knowledge: Peascniniy ani Plann
Missile is a Weapon. Nel West, ,
5.
g, Colonel West is American,
we have to prove that West is a criminal,
et’s see how to represent these facts by FOL.
4, Itisa crime for an American to sell we
ponst
American(x) A Weapon(y) A sell (x y 0 the enemy nations.
)A enemy(z, Amer
2. Country Nono is an enemy of America, ica) => Criminal
(x)
Enemy (Nono, America)
3, Nono has some missiles,
oO Owns (Nono, x)
o ~~ Missile(x)
4, Allthe missiles were sold to Nong
by Colonel West
Missile(x)A owns(Nono, x)=> Sell(
West, x, Nono)
5. Missile is a weapon.
Missile(x)=> weapon(x)
6. Colonel West is American.
American (West)
rican Wea
ning
Fig. 3.8.2 : Proof by forward chai
Scanned by CamScanner
Al&SC (MU-Sem. 7-Comp) 3-22 Knowledge, Reasoning and Plann
For example, If while Boing out one has taken umbrella. Then based on this decision it can be guessed that it
is Fainin
Here, “taking umbrella” is a decision based on which the data is generated that
“it's raining”. This process is backwarg
chaining.“Backward chaining” is called as a decision-driven or goal-driven inference technique.
j 4
Let us understand how the same example used in forward chaining can be solved using backward chaining.
Scanned by CamScanner |
——
—
——= =: ————————
_z—
: Knowledge, Reasoning and Planning
2
90" ch Conservative/Cautious
vacticl if Number of possible final answ i torial explosion creates an ‘infinite
set of known alternatives is Sidiatie reasonable or a | Combina
number of possible right answers. —|
aoe Diagnostic — —
? Tiptio ~
pen and debugging application Planning, monitoring, control and
ApP
- - Interpretation applicatio 7
reasoning qopo™ i reasoning
~ Botto m rees e ”
Type of Search — Depth-first search= ” Breadth-first search
who determine | Consequents determine search
reach Antecedents determine search
ex.381: Using predicate logic find the course of Anish’s liking for the followin :
(i) Anish only likes easy courses. .
(i) | Computer courses are hard.
(iii) All electronics courses are easy
(iv) DSP is an electronics course.
Soln. :
Step1: Converting given facts to FOL
h, x)
(i) Wx: course (x) A easy (x) — likes (Anis
(x)
(ii) Wx: course (x) A computes (x) —> hard
(x) — easy (x)
(iii) Vx : course (x) A electronics
nae (x)
course (x)
x / DSP
electronics (x)
wil veo (DSP) course (x)
| x18
| x / DSP
electronics (DSP)
course (DSP)
True
True
True
Fig. P- 3.8.1
rse.
Hence proved that Anish likes DSP cou
WF TechMnawledga
PuBli¢akiotes
Scanned by CamScanner
BF Al&SC (MU-Sem. 7-Comp) 3-24 ___ Knowledge, Reasoning and Planning -
1. Identify the task: This step is analogues to PEAS process while designing an agent. While identifying task, the
knowledge. engineer must define the scope of knowledgebase and the range of questions that can be answered
through the database. He also need to specify the type of facts that will be available for each specific problem
instance. ‘
2. Assemble the relevant knowledge: Assembling the relevant knowledge of that particular domain is called the process
- of knowledge acquisition. In this the knowledge engineer needs to extract the domain knowledge either by himself
provided he is the domain expert or needs to work with the real experts of the domain. In this process knowledge
engineer learns how the domain actually works and can determine the scope of the knowledgebase as per the
identified tasks.
3. Defining vocabulary: Defining a complete vocabulary including predicates, functions and constants is a very important
step of knowledge engineering. This process transforms the domain level concepts to logic level symbols. It should be
exhaustive and precise. This vocabulary is called as ontology of the domain. And once the ontology is defined, it
means, the existence of the domain is defined. That is, what kind of things exist in the domain is been decided.
4. Encoding of general knowledge about the domain: In this step the knowledge engineer defines axioms
for all the
vocabulary terms by define meaning of each term. This enables expert to cross check the vocabulary
and the contents.
if he finds any misinterpretations or gaps, it can be fixed at this point by redoing step 3.
5. Encode the problem: In this step, the specific problem instance is encoded using the
defined ontology. This step will
be very easy if the ontology is defined properly. Encoding means writing atomic
sentences about problem instances
which are already part of ontology. It can be analogues to input data for a computer
program.
6. Query the Knowledgebase: Once all the above steps are done, all input
for the system is set and now is a time to
generate some output from the system. So, in order to get some
interested facts inferred from the provided
knowledge, we can query the knowledgebase. The inference procedure will operate
on the axioms and facts to derive
the new inferences. This lessens the task of a programmer to write apptication
specific programs.
7. Debug the knowledgebase: This is the step in which one can prove or check the
toughness of the knowledgebase. In |
the sense, if the inference procedure is able to give appropriate answers to
all the queries asked or it stops in between
because of the incomplete axioms; will be easily identifi
ed by debugging process. If one observes the reasoni
stopping in between or some of the queries could not be answere ng chain
d then, it is an indication of a missing or a weak —
axioms. Then the corrective measures can be taken by repeating
the required steps and system can be claimed
have a complete and precise knowledgebase. to
We TethKnowledge - : ” : ~
‘ Fupiications
Scanned by CamScanner
put, ae
on “inl - atoms Out
King(Ram)
Brave(Ram)
We get an ‘x’ where, ‘x’ is a king and ‘x’ is brave (Then x is noble) then ideally what we want is O= {substitution set}
ie. O= {x/ Ram}
3.10.2 Lifting
The process of encapsulating inference rule is called as Generalized Modus Ponens.
Generalized Modus Ponens
For atomic sentences pi, p', and q, where there isa substitution @ such that
SUBST (©,p)) = SUBST (©, pi) for all ;
oo os PL q)
P, iP, 1 Py Py (DA BAA PaAee
SUBST (Oo, =
+ one implication,
Ne Premises = N atomic sentences
the conclusion we seek.
“plying SUBST(8 , q) produces
Ps King(Ram) pie Bravely)
Te
Ww, PschKnowledge
ations
Scanned by CamScanner
-3- 26 -___ , Reasoning and Pig,
edge,e, Reason
KnowlKnowledg
Pp,= King(x) P= Brave(x)
(i) Wx Vy: person (x) A policy (y) A buys (x, y) > smart (x)
(ii) Wx, Vy: person (x) A policy (y) A expensive (y) 3~ buys (x,
y)
(iii) Wx: person (x) A~ insured (x)
TechKnowtedg
G7 Techinewtedgie
Scanned by CamScanner:
mee? Knowleage, Neaouiiey —_
3B
jon is! a valid inference rulule. Resolution prod
resolution ich ic implied by two clauses containining
. : uces an ew clause which is implied by
t a r y .
liter TI
als. : vered by Alan Robins on in the nid 1960' s .
compleme n ary his resol ution rule was disco .
0
;
we have see n that a lititeral isi an atomic symbol or a Negation of the atomic symbol (i.e.A, ~A)-
ce
resolution is the only interference rule you need, jIn order to bui means that every senten
» i
produced by a procedure will; be “true” ) and com CUE eat aR
“rue ” sentence can be
produced
P ene
plete (complet "tri!
ss means every
bya procedure) theorem proof maker.
Take an example where we are given that :
o Aclause X containing the literal : Z
Given:
with =P VO
Replace P 30
Elimination of implication i.e. Eliminat eall‘’:
) with =P A=Q and so on.
Distribute negations:Replace ~-P with P, (PV Q
TechKi on
=
ee
¥ Pub en
;
Scanned by CamScanner
es ESC IMU-Som.7-Comp) Knowledge , Reasoning and P| lanning
3. Eliminate existential quantifiers by replacing with Skolem constants or Skolemfunctions:
e.g. V XA Y(PA(X,Y) v(P2(X,¥)) VX (P4(X,f(X)) V_ (P2(X,F(X)))
4 Rename variables to avoid duplicate quantifiers.
Soln. ::
FOL: A (Bc)
Normalizing the given statement.
(i) A (B>CAc>B)
(ii) (A—(B—>C)) A (A (C— B))
Converting to CNF.
Applying Rule, a8 = ~avB
~Av (~BvC)A~Av (~C vB)
1. H=> Win(X)
2. T= Loose(Y)
3. =H=> T
3, {H,T}
{—Loose(Y), Win(X)}
Scanned by CamScanner
_ Knowledge, Reasoning and Plentirs
¢ aise (MU-Sem.7EOMP)
—
twin
6 ih Wwin(X)}
(From 2 and 4)
1. {T, Win)
— (From 1 and 3)
5 win(X)}
tease (From 6 and 7)
9. tee (From 5 and 8)
411.4 Example
the same exam learn how to write proofs for resolution.
let stake ple of forward and backward chaining to
step 1:
o Owns (Nono, x)
o Missile(x)
Colonel West.
4. Allthe missiles were sold to Nono by
x, Nono)
Missile(x)A owns(Nono, x)=> Sell(West,
5. Missile is a weapon.
Missile(x)=> weapon(x)
American (West)
Step
2;
V Criminal (x)
“Missile(x) V weapon(x)
.
=.
Ameri C‘an : Tech Knowledge
Publications
(West)
Scanned by CamScanner
meen SC (MU-Sem.7-Comp)
Step 3:
z/Nono
et
~ weapon (y) v owns (Nono, y) Weapon (x) v ~ Missile (x)
\
Hence our assumption was » Wrong. Hence
proved that West iis criminal.
Ex. 3.11.2: Consider following statements :
(a) Ravi Likes all kind of food.
(b) Apple and Chicken are food
(c) Anything anyone eats and is not killed
is food.
(d) Ajay eats peanuts and still alive.
(e) Rita eats everything that Ajay eats.
Prove that Ravi Likes Peanuts using
resolution. What food does Rita
Soln. : eat?
Scanned by CamScanner _
GC
anning
oning and Pl
(U-Sem. 7-Gomp) : Knowledge, e, Reas
F food (Chicken) °
my sg xv yreats (x, y) A~ killed (x) —> food (y)
CNF
Step 3: Converting FOLs to
food (y)
(d) ~eats (x, y) ¥ killed (x) v
(y)
killed (x) V food
~ eats (x, ¥) ¥
~food (Peanuts) " y/Peanuts
tou .
alive (Ajay) |
a
e
~alive e
s out assumption Is
wr on g. He nc e pr ov ed that “Ravi likes
Peanuts”.
|
As the result of thi s re so lu ti on is NIL, it mean
Wy pometent
|
|
|
Scanned by CamScanner
~ Knowledge, Reasoning and Planni
eats(Ajay, x)
x PPoanutiy
True True
Hence the answer is Rita eats peanuts.
Ex. 3.11.3: Using a predicate logic convert the following sentences to predicates and prove that the statement “Ram did
not jump” is false.
(a) Ram went to temple.
(b) . The way to temple is, walk till post box and take left or right road.
(c) The left road has a ditch. .
(d) Way to cross the ditch is to jump
(e) A logis across the right road.
(f) | One needs to jump across the log to go ahead.
Soln.:
~jump (Ram)
(b,) > x: At (x, temple) ——sAt (x, PostBox) a take left (x)
Scanned by CamScanner
7-Comp)
a7 _Al&SC (MU-Sem. _ Knowledge, Reasoning and Planning
(c) ~take left (x) v cross (x ditch)
ie
te
\
~ At (x, temple) At (Ram, temple)
g
Hence proved.
_
Ex. 3.11.4: Consider following statements.
1. Rimiis hungry.
2. If Rimi is hungry she barks.
Raja is angry.
3. _ If Rimi is barking then
CNF form.
pr edicate logic. Convert them into
Explain statements in
ry using resolution.
Prove that Raja is ang
Soln. :
to FOL.
Step 1: Converting given facts
1. Hungry (Rimi)
2. Hungry (Rini) — barks (Rimi)
3. Barks (Rimi) > angry (Raja)
to CNF.
Step2: Converting FOL statements
1. Hungry (Rimi)
(Rimi)
2. ~ hungry (Rimi) v barks
3. ~barkes (Rimi) V angry (Raja)
Step3: Negate the stmt to be proved
T.P.T. Angry (Raja)
YechKnowledge
Negation : ~ Angry (Raja)
Publications —
Scanned by CamScanner
AI&SC (MU-Sem. 7-Comp) Knowledge, Reasoning and Plannin.
~ Barks (Rimi)
Barks (Rimi)
ry (Rimi)
v~ hung
a
~ hungry (Rimi)
hungry
a (Rimi)
9
Me
This shows that our assumption is Wrong. Hence proved that Raja is Angry.
Ex. 3.11.5: Consider following facts.
1. If maid stole the jewellery then butler was not guilty.
2. Either maid stole the jewellery or she milked the cow.
3. If maid milked the cow then butler got the cream.
4. Therefor if butler was guilty then he got the cream.
Prove the conclusion (step 4) is valid using resolution.
Soln. :
Step 1 :Converting given facts to FOL.
To prove that
Scanned by CamScanner |
35C (MU-Sem. 7-Comp) =. 3
Al&S
sa Knowledge, Reasoning and Planning
git aiproof by resolution Avgot_cream (butter)
guilty (butler)
~ milk (maid, cow)
Y Bot_cream (butler)
4
Hence proved.
TUEUISS GRP
Soln. :
Step 1: Converting axioms to FOL.
Proof by resolution
Se : WienPublicatipnas —
Scanned by CamScanner
_Al&SC (MU-Sem. 7-Comp) et
Knowledge, Reasoning and Plannin
~ smile (x3)
~ happy (x,) v
smile (R,)
X3 |X
~ happy (x,)
~ graduating (x)
V happy(x)
x, | x
~ graduating (x) graduating | (x,) .
x |x,
b
Hence our assumption is wrong.
Hence proved.
3.12 Planning
Ge TechKnowledya
Scanned by CamScanner
¥ Al&SC (MU-Sem. 7-Comp) _
- Aim ofan agent is to find 3-37 Knowledge, Reasoning and Planning
. J
the Proper se uen H :
—<—= a as ST
an efficient solution. Guence of actions which will lead from starting state to goal state and produce
Fig. 3.12.2 depicts a general diagrammatic representation of a planning agent that interacts with environment with its
sensors and effectors/actuators. When a task comes to this agent it has to decide the sequence of actions to be taken
and then accordingly execute these actions.
Sensors
af
“~)
Environment
Effectors
C-E)-S
nt
Fig. 3.12.2 : Planning age
mning problem
BN erage? torrmuat
information Is avallable while formulating a planning problem and what results
tin
We have seen in above section pia ble here that, states of an agent correspond to the probable surrounding
are expected. Also it Is understanc gent are specified based on logical formalization.
Is ofan a W TechKnowledge
environments while the actions and goa Publications
Scanned by CamScanner
_ 3-38
Al&SG (MU-Sem. 7-Comp)__ in chapter 1. Whic! h sh“hows that, to achi
eve any Boal an
various types of intellige nt age nts ow it will affect the UPCOMing a
Also we have learnt about its act ion s ”
be t he effect of
agent has to answer few questions like “what will reasoning about it s
future actions, states.
be able to provide a proper
actions”, etc. This illustrates that an agent must
of surrounding environments, etc. ofactions
ste p, he /s he has to follow sequence
in one
Consider simple Tic-Tac-Toe game. A Player cannot win a game
er old steps and has to imagine the probable future -
to win the game. While taking every next step he/she has to consid time he/ she should also consider the
move and at the same !
actions of an opponent and accordingly make the next
consequences of his/her actions. ~
’ A classical planning has the following assumptions about the task envir'
‘onment:
environment.
o Fully Observable :Agent can observe the current state of the
.
© Deterministic: Agent can determine the consequences of its actions
Finite: There are finite set of actions which can be carried o ut by the agent at every
state in order to achieve the
o
goal.
© Static : Events are steady. External event which cannot be handled by agent is not considered.
of time.
© Discrete :Events of the agent are distinct from starting step to the ending(goal) state in terms
So, basically a planning problem finds the sequence of actions to accomplish the goal based on the above
assumptions.
Also goal can be specified as a union of sub-goals.
Take example of ping pong game where, points are assigned to opponent player when a player fails to return the ball
within the rules of ping-pong game. There can a best 3 of 5 matches where, to win a match you have to win 3 games
and in every game you have to win with a minimum margin of 2 points.
Generally problem solving and planning methodologies can solve similar type of problems. Main difference between
problem solving and planning is that planning is a more open process and agents follow logic-based
representation.
Planning is supposed to be more powerful than problem solving because of these two reasons
|
Planning agent has situations (i.e. states), goals (i.e. target end conditions) and operations (i.e. actions performed) .All
these parameters are decomposed into sets of sentences and fi urther i ” a
in sets words dependi the need of the |
system. . Pp ding on
Planning agents can deal with situations/states more efficiently because of its explicit reasoning capability
communicate with the world. Agents can reflect on their targets and we can minimize the compl P t Y
also it caf
wg
.
problem by independently planning for sub-goals of an age plexity of the plann
nt. Agents have informati
actions and the important point is that it i
can predict the eff ect of actions by inspecting the
ithe op
opeer
ratat
ions. re
Planning is a logical representation, based on situation, goals and operations, of
problem solvi
’ olving.
Planning = Problem solving + Logical representati
on
Ww TechKnowledge ~
Poblications ” ’
° : ~
Scanned by CamScanner
tg algsc (MU-Sem. 7-Comp)
3-39
at super Knowledge, Reasoning and Planning
take example of a grocery shopping rksi
et,st» er
su SUPPose You want to buy milk, brea an eg
then your initial state will be — “at home” a 1. i d d g from supermarket,
nd g a will be ~ “get milk, bread an
d ege”.
and that branch i fact
ing or can be enormous dependin upon the se
of actions, for e.g. Watch TV, read book etc » aVailable at th ; g t
at .
go
: io Kear; point of time.
attend lecture
change channel
increase/decrease.
eos Volume
on calculus/ plan
ng situati
Planni l anning.
Conditiona pl e TechKnowledge
Oo
Publications
Scanned by CamScanner
WF Alas (MU-Sem.7-Comp) ___Knowledge,Reasoning
and Plann
o Planning with graphs.
o Planning with propositional logic.
o —- Planning reactive.
- Out of these major approaches we will be learning about following approaches in detail :
© Planning with state space search
Partial ordered planning and
0
Conditional planning.
0
— Planning graph is a special data structure which is used to get better accuracy. It is a directed graph : and is usefultp
Also
accomplish improved heuristic estimates. Any of the search technique can make use of planning graphs.
GRAPHPLAN can be used to extract a solution directly.
- Planning graphs work only for propositional problems without variables. You have learnt Gantt charts. Similarly,jn
case of planning graphs there are series of levels which match to time ladder in the plan. Every level has set of literals
and a set of actions. Level 0 is the initial state of planning graph.
Set of literals
e All Literals which are true at that point of time
Depends. on executed actions at previous time step:
Set of actions
e Ail actions which meet prerequisite
at that point of time
Jepends upon which literals are true
Example
= Init(Have(Apple))
— Goal(Have(Apple) AAte(Apple))
Action(Eat(Apple), PRECOND: Have(Apple)
— EFFECT: -Have(Apple) AAte(Apple))
Action(Cut(Apple), PRECOND: - Have(Apple)
— EFFECT: Have(Apple))
Ly : Ao Ly Ay 35
Cut (Apple)
Have (Apple) ————---}—- Hava (Apple) Have (Apple)
— Have (Apple) — Have (Apple)
Ate (Apple)
- Ate (Apple).
Ate (Apple) Ate (Apple)
Scanned by CamScanner
E———=—————
yr
OO—COCLLz———————————————
Conflicts
exclusion between
links. lite rals which can n ot occur together (as a effect of selection action) are represented by m utual
o°
L1 de ines iultiple stat es and the mutual j k s are th e constr ain ts h that d aetin
exclusision lin fine
i th i sseto states .
| ‘ |
oO
i
Continue until two
_ consecutive levels are the same or contain the same amount of literals.
omrer
NO Ee ae tion of the a
effect of — fone literal is the nega
One action cancels out the
-—
another action. literal OR
- one of the effects of action is negation — |f each possible action pair that could
0 preconditions of other action. achieve the literal is mutually exclusive.
Scanned by CamScanner
. :
:
eo
If it finds some input and output locations nearer on state space grid for example in case of printing task then the
probability of performing that task will increase.
But to do this it should be aware of its own current location, the locations of people who are assigning tasks and the
locations of the required devices.
State space search is unfavourable for solving real-world problems because, it requires complete description of every
searched state, also search should be carried out locally.
Scanned by CamScanner
AlaSC (MU-Sem. 7-Comp) Planning
Knowledge, Reasoning and can be a
=
oe
. Because of this it
— Drawback of these types is that it does
ether two statesnot ar explic i ly
plicit specify eae “What holds in every state”
difficult to determine wh i e same.
Complete World
Rule set
jug
(xy) — (4,y) fill the 4- gallon
1.
If x<4
jug
2. (xy) (x,3) fill the 3-gallon .
the 3-gall t
s until
neneteons
Wy eaiiic
=3 an d x2 0 ju
If xty>
OF punarl
$
an
Scanned by CamScanner
Knowledge, Reasoning and Plan
(x+y,0) pour all the water from the 3 -gallon jug into.
9. ~ (x,y) ——
12. (2,y) ——»(0,x) empty the 2 gallon in the 4 gallon on the ground.
0 0
0 3 2
3 0 9
3 3 2
4 2 7
0 2 Sor 12
2 0 Sor 11 ©
(0, 0)
(4, 3) (1, 3)
4,0 43) 0) 1)
40)
—7\™.
0,3) (0,0) 1)
—{/\~
(4,3) (4) OR
Scanned by CamScanner
ig_AlSC (MU-Sem.7-Comp) ae 345 Knowledge, Reasoning and Planni
ng
a at Location A
“Flight
Scanned by CamScanner
Knowledge, Reasoning and Plannine
~ _ If preconditions are satisfied then the actions are favoured i.e. if the preconditions are satisfied then Positie
effect literals are added for that action else the negative effect literals are deleted for that action. aS
— Perform goal testing by checking if the state will satisfy the goal.
— Lastly keep the step cost for each action as 1.
Progression planner algorithm is supposed to be inefficient because of the irrelevant action problem and requirement
of good heuristics for efficient search.
“Backward state-space search” is also called as “regression planner” from the name of this method you can make out
that the processing will start from the finishing state and then you will go backwards to the initial state.
So basically we try to backtrack the scenario and find out the best possibility, in-order to achieve the goal to achieve
this we have to see what might have been correct action at previous state. 3
In forward state space search we used to need information about the successors of the current state now, for
backward state-space search we will need information about the predecessors of the current state.
Here the problem is that there can be many possible goal states which are equally acceptable. That is why this
approach is not considered as a practical approach when there are large numbers of states which satisfy the goal.
Let us see flight example, here you can see that the goal state is flight 1 is at location B and flight 2 is also at location B,
We can see in Fig. 3.19.1. If this state is checked backwards we have two acceptable states in one state only flight 2is
at location B, but flight 1 is at location A and similarly in 2nd possible state flight 1 is already at location B, but flight2
is at location A.
As we search backwards from goal state to initial state, we have to deal with partial information about the state, since
we do not yet know what actions will get us to goal. This method is complex because we have to achieve a
conjunction of goals.
Flight
i
|
Location:B
j Flight 1 at
\f
In this Fig. 3.19.1, rectangles are goals that must be achieved and lines shows the corresponding actions.
Regression algorithm
- To do this we need to find out which states will lead to the goal state after applying some actions on it.
— We take conjunction of all such states and choose one action to achieve the goal state.
- If we say that “X” action is relevant action for first conjunct then, only if pre-conditions are satisfied itworks.
Scanned by CamScanner
Knowledge, Reasoning and Planning
b eee 7
Actions must be consistent it sh ould r
not undo preferred literals. If there are positive effects of actions which appea
in goal then they are deleted. o
ed. Ot ; dy appears.
herwise Each precondition literal of action is added, except it alrea
Main advant age
geo this me thod l
is only I Y r elevant actio ns
i ns are taken i into co side
i ;
backward search method has much lowe bra cl ing factor
Wearing Shoe :
0.1 Total order planning of . :
Fig, 3.2
" Tt
° Puolications
Scanned by CamScanner
o
. : - :
3.21 Partial Order Planning
7 MU = May 13, May 14, Dec. 14, May 15, Dec. 15
-
LeftSockOn
LeftSockOn,
RightShoeOn
o Here, wearing a right sock is the precondition for wearing the right shoe.
— Once these actions are taken we achieve our goal and reach the finish state.
TechKnowledgé 2
“PF Publications | - 6 sea
Scanned by CamScanner
——— _ _ Knowledge, Reasoning atand Planning
as
-If we auisieree POP a search Problem, deen We say that states are
small I pla
pl ns.
,
States are generally unfinished actions. If of only starting and finishing.
we take an empty plan then, it will consist
actions.
Every
very p plan has fo ur maini components, which can be given
as follows:
Set of actions
Eat Apple
Lin k Par tia l Ord er Pla nni ng (b): Causal Link Example
Fig.3. 91.2: (a) Causal an apple and the
bu y an apple it’s effect can be eating
ca n un de rs ta nd that if you
you
- From Fig.3.21.2(b)
cutting apple.
an app le is
precondition of eating
an effe ct -E and, according| to the ordering constraints it
ion C that has
flict |if there is an act
— There can be con tion can be
action B.
s af te r ac tion A and before wa nt to ma ke a de co rative apple swan. This ac
come d of that we
’t wan t t o eat an apple instea ehintkon 3
-— Say we don "E". ct , Leftsock
t have effe _» Right-shoe
n d It does no
between A and Ba nt se ck ei n
- -sock- scig
Links = {Right }.
Seet of Ca us al
, leftshoe > leftshoeon — Finish
o For example: Rightshoeon _5 Finish
e > links.
conflicts with the causal
ts ho
Leftshoe, Righ sh ou ld not be any
the re
nsistent plan
o Tohavea co
ditions in the plan. Least commitment strategy
Set of open precon cannot beachieved by some actions
It
led open if
~ Preconditions are © al choice dur
ing search
.
he precondition
can be used by delaying ¢
pen
5 fouls not be an
y © TechKaouledy’
nt plan there
euotic ations
To have a consiste
Scanned by CamScanner -
3-500 Knowledge, Reasoning and Planning -
if you are planning to take a trip, then first you have to decide the location. To decide location we can search for
various good locations from internet based on, weather conditions, travelling expenses, etc.
Say we select Rajasthan, with one level planner, first we switch on PC, then we open browser, after that we open
Indian Railways website booking site for ticket booking, then we enter the date, time, etc details to book the railway
ticket. After that we will have to do hotel's ticket booking and so on. ,
This type of planning is called one level planning. If the type of problem is simple then we can make use of one level
planner. For complex problems, one level planner cannot provide good solution.
Scanned by CamScanner
— ee
ing and Planning”
AlaSC (MU-Sem. 7-Comp
) _Knowledge, Reason
nning example
Fig. 3.22.3 : One Level pla
ons
3.22.2 Hierarchy of Acti would cover more precis
e
chy of act ion s can be decided. Minor activities
or actions, hierar king, Hot el
In terms of major and min have railway Ticket Boo
acti viti es. In case of above example, we can
the major es.
activities to a ccomplish oying there, coming
back are the major activiti
g and enj
Booking, Reaching Raj
asthan, Sta yin are the Minor activities.
ligh t din ner in palace, Take photos, etc
t ion, Have candle
to reach railway sta
While take a taxi
.
complex problems of a test match(180 overs).Numbe
r of
In real wo rid there can be of 4 bowlers i n.2 days
am plans the order
am pl e : A ca pt ai n of a cricket te
For ex
. try out a large
probabilities gi? = 16° uce the size of sea rch spa ce. For plan ordering we have to
primitive
be hi nd thi s plannin g is to red hav e lim ite d way s in whi ch we can select and order
Motivati on chie s we
ible plans. with plan hierar
number of poss empt to
steps are decided we att
operators.
giv en mor e imp ortance. Once the major
s are
ing major step
In hierarchical plann d to return
the mi no r det ail ed actions. at a min or ste p of plan. In such case we nee
solve n may run into difficult ies
th at ma jo r steps of pla ordered sequence
to devise the plan.
It is possible oduce a pp ro pr ia te ly
ep agai" to pr
to the major st
3.22.3 Planner
nditions.
ti fy a hi er ar ch y of maj or co ps), so we postpone the
details to next level.
First iden mi no r ste
teps then
wr Pr
Ep Techtn ealedes
lic ations
a a
Poo
ee
ee
~
Scanned by CamScanner
Knowledge, Reasoning and Plan
; La)
| EXTERNAL
| |NTERFACE |
“Criticality" I
1
!
_, Preconditions
of...
PLANNING y
EXECUTIVE |
I
I
|
!|
4|
I
!
| |
| I
; y |
I
Ber < First step of |
_ "Skeleton Plan’ Aertically™ > I &
I
I < minimum
I
I
1
|| | Plan to Achieve a State !
| in which Preconditions of
2 T |
I
| I
| I
| I
I |
! I
J I
|
| I
!
|
|I
| I
I I
7 |
|
| I
!
|
Example:
Actions required for “Travelling to Rajasthan” can be given as follows :
Opening yatra.com (1)
Finding train (2)
Buy Ticket (3) : : a
ler TachMnowledaa
Scanned by CamScanner
AI&SC (MU-Sem. 7-Comp)_
eee Reasoning and Planning_
Get taxi(2)
— Reach railway station(3)
— Pay-driver(1)
— Check in(1)
— Boarding train(2)
- Reach Rajasthan (3)
1* level plan
Finding train (2), Buy ticket (3), Get taxi(2), Reach railway station (3)
Boarding train(2), Reach Rajasthan (3).
3" level plan (final)
Opening yat
pening yatra.com (1), Finding train (2), Buy ticket (3), Get taxi(2), Reach Railway station (3), Pay-driver(1), Check
in(1), Boarding train(2), Reach Rajasthan (3).
Language should be expressive enough to explain a wide variety of problems and restrictive enough to allow efficient
algorithms to operate on it.
- Planning languages are known as action languages.
The differences between STRIPS and ADL are summarised below.
1.  STRIPS only allows positive literals in states (e.g. a valid STRIPS sentence : Intelligent ∧ Beautiful). ADL can support both positive and negative literals (e.g. the same sentence expressed as ¬Stupid ∧ ¬Ugly).
2.  STRIPS makes use of the Closed World Assumption (i.e. unmentioned literals are false). ADL makes use of the Open World Assumption (i.e. unmentioned literals are unknown).
3.  In STRIPS we can only find ground literals in goals (e.g. Intelligent ∧ Beautiful). In ADL we can find quantified variables in goals (e.g. ∃x At(P1, x) ∧ At(P2, x) is the goal of having P1 and P2 in the same place in the blocks example).
4.  In STRIPS goals are conjunctions (e.g. Intelligent ∧ Beautiful). In ADL goals may involve conjunctions and disjunctions (e.g. Intelligent ∧ (Beautiful ∨ Rich)).
5.  In STRIPS effects are conjunctions. In ADL conditional effects are allowed : "when P : E" means E is an effect only if P is satisfied.
6.  STRIPS does not support equality. In ADL the equality predicate (x = y) is built in.
7.  STRIPS does not have support for types. ADL supports types (e.g. the variable p : Person).
Fig. 3.23.1
The standard sequence of actions for the block-world problem (Start state to Goal state) is shown in Fig. 3.23.2.
The block-world operators can be written as rules with preconditions and effects :
Rule 1 : pickup(X) : preconditions handempty, on(X, table), clear(X) ; effects holding(X)
Rule 2 : putdown(X) : preconditions holding(X) ; effects handempty, on(X, table), clear(X)
Rule 3 : stack(X, Y) : preconditions holding(X), clear(Y) ; effects on(X, Y), clear(X)
Rule 4 : unstack(X, Y) : preconditions on(X, Y), clear(X) ; effects holding(X), clear(Y)
-  Based on the above rules, the plan for the block-world problem Start → Goal can be specified as follows :
1. unstack(Z, X)    2. putdown(Z)
3. pickup(Y)        4. stack(Y, Z)
5. pickup(X)        6. stack(X, Y)
Execution of this plan can be done by making use of a data structure called a "Triangular Table". For the plan above (with blocks A, B, C) the table rows are :
1.  on(C, A), clear(C), handempty → unstack(C, A)
2.  holding(C) → putdown(C)
3.  on(B, table), handempty → pickup(B)
4.  clear(C), holding(B) → stack(B, C)
5.  on(A, table), clear(A), handempty → pickup(A)
With the help of the triangular table, a tree is formed as shown in Fig. 3.23.4 to achieve the goal state.
An agent (in this case a robotic arm) can have some amount of fault tolerance. Fig. 3.23.5 shows one such example.
Fig. 3.23.5 : A wrong move from the Start state that is not allowed
3.23.2 Example of the Spare Tire Problem
— Consider the problem of changing a flat tire. More precisely, the goal is to have a good spare tire properly mounted
onto the car's axle, where the initial state has a flat tire on the axle and a good spare tire in the trunk. To keep it
simple, our version of the problem is a very abstract one, with no sticky lug nuts or other complications.
There are just four actions: removing the spare from the trunk, removing the flat tire from the axle, putting the spare
on the axle, and leaving the car unattended overnight. We assume that the car is in a particularly bad neighborhood,
so that the effect of leaving it overnight is that the tires disappear.
— The ADL description of the problem is shown. Notice that it is purely propositional. It goes beyond STRIPS in that it
uses a negated precondition, -At(Flat, Axle), for the PutOn(Spare, Axle) action. This could be avoided by using Clear
(Axle) instead, as we will see in the next example.
For example, the effect of leaving the car unattended overnight is :
EFFECT : ¬At(Spare, Ground) ∧ ¬At(Spare, Axle) ∧ ¬At(Spare, Trunk) ∧ ¬At(Flat, Ground) ∧ ¬At(Flat, Axle)
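As a rough illustration only (the exact ADL listing is not reproduced here), the problem can be encoded with set-based add/delete effects as below; the action names and fluents follow the description above, while the dictionary layout and helper names are just one possible choice.

```python
# Sketch of the spare-tire problem as propositional actions.
# Negative effects are modelled as a "delete" set; PutOn(Spare, Axle) also has
# the negated precondition not At(Flat, Axle) from the ADL description.
init = {"At(Flat, Axle)", "At(Spare, Trunk)"}
goal = {"At(Spare, Axle)"}

actions = {
    "Remove(Spare, Trunk)": {"pre": {"At(Spare, Trunk)"}, "neg_pre": set(),
                             "add": {"At(Spare, Ground)"}, "delete": {"At(Spare, Trunk)"}},
    "Remove(Flat, Axle)":   {"pre": {"At(Flat, Axle)"}, "neg_pre": set(),
                             "add": {"At(Flat, Ground)"}, "delete": {"At(Flat, Axle)"}},
    "PutOn(Spare, Axle)":   {"pre": {"At(Spare, Ground)"}, "neg_pre": {"At(Flat, Axle)"},
                             "add": {"At(Spare, Axle)"}, "delete": {"At(Spare, Ground)"}},
    "LeaveOvernight":       {"pre": set(), "neg_pre": set(), "add": set(),
                             "delete": {"At(Spare, Ground)", "At(Spare, Axle)",
                                        "At(Spare, Trunk)", "At(Flat, Ground)", "At(Flat, Axle)"}},
}

def apply(state, name):
    a = actions[name]
    assert a["pre"] <= state and not (a["neg_pre"] & state), f"{name} not applicable"
    return (state - a["delete"]) | a["add"]

state = init
for step in ["Remove(Spare, Trunk)", "Remove(Flat, Axle)", "PutOn(Spare, Axle)"]:
    state = apply(state, step)
print(goal <= state)   # True: the spare is mounted on the axle
```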
It should be noted that the real world itself is not uncertain; it is our information about the world that is uncertain. In artificial intelligence we try to give human perception ability to machines, and hence a machine also receives a perception of uncertainty about the real world. So a machine has to deal with incomplete and incorrect information, just as a human does.
Determining the condition of a state depends on the available knowledge. In the real world, knowledge availability is almost always limited, so most of the time conditions are non-deterministic.
The amount or degree of indeterminacy depends upon the knowledge available. The indeterminacy is called "bounded indeterminacy" when actions can have unpredictable effects, but the possible effects can still be enumerated.
There are four planning strategies for handling indeterminacy :
(i) Sensorless planning
(ii) Conditional planning
(iii) Execution monitoring and replanning
(iv) Continuous planning
Sensorless planning is also known as conformant planning. This kind of planning is not based on any perception; the algorithm ensures that the plan reaches its goal in any case.
Conditional planning is sometimes termed contingency planning and deals with the bounded indeterminacy discussed earlier. The agent makes a plan, evaluates the plan and then executes it fully or partly depending on the condition.
(i) Co-operation : In the co-operation strategy, agents have joint goals and plans. Goals can be divided into sub-goals, which are then combined to achieve the ultimate goal.
(ii) Multibody planning : Multibody planning is the strategy of constructing a correct joint plan.
(iii) Co-ordination mechanisms : These strategies specify the co-ordination between co-operating agents. A co-ordination mechanism is used in several co-operative plannings.
(iv) Competition : Competition strategies are used when agents are not co-operating but competing with each other. Every agent wants to achieve the goal first.
[AND-OR tree for the vacuum world : from the start state the agent chooses among the actions Left, Right and Suck; branches terminate in GOAL or LOOP leaves.]
Fig. 3.26.1 : Conditional Planning - vacuum world example
-  In conditional planning we can check what is happening in the environment at predetermined points of the plan to deal with ambiguous actions.
-  It can be observed from the vacuum world example that conditional planning needs to take some action at every state and must be able to handle every outcome of the action it takes. A state node is represented with a square and a chance node is represented with a circle.
-  For a state node we have the option of choosing some action. For a chance node the agent has to handle every outcome.
-  Conditional planning can also take place in Partially Observable Environments (POE), where we cannot keep track of every state. Actions can be uncertain because of imperfect sensors.
In the vacuum agent example, the agent may know about the dirt in the Right square but not about the Left square. In such a case dirt might be left behind when the agent leaves a clean square. The initial state is then also called a state set or a belief state.
Sensors play an important role in conditional planning for partially observable environments. Automatic sensing can be useful : with automatic sensing the agent gets a percept at every step without having to act for it, whereas with active sensing percepts are obtained only by executing specific sensing actions.
[Belief-state planning graph for the partially observable vacuum world; nodes such as CleanL label what the agent believes about the Left square (condition 2).]
Fig. 3.26.2 : Conditional Planning - vacuum world example
Q. 1  What is a Knowledge Based Agent?
Q. 2  Describe the WUMPUS World environment. Specify PEAS properties and the type of environment for the same.
Q. 3  What is Logic? Explain various knowledge representation techniques.
Q. 4  What is propositional logic? Write the syntax and semantics and example sentences for propositional logic.
Q. 5  Explain the inference process in case of propositional logic with suitable examples.
Q. 6  Explain Horn Clause with example.
Q. 7  Explain the syntax and semantics of FOL with example.
Q. 11  What is planning in AI?
Q. 14 What are the major approaches of planning? Explain conditional planning with example.
Fuzzy Logic
Unit IV
4.1  Introduction to Fuzzy Sets : Fuzzy set theory, Fuzzy set versus crisp set, Crisp relation & fuzzy relations, membership functions.
4.2  Fuzzy Logic : Fuzzy Logic basics, Fuzzy Rules and Fuzzy Reasoning.
4.3  Fuzzy Inference Systems : Fuzzification of input variables, defuzzification and fuzzy controllers.
For example, a classical set A of real numbers greater than 6 can be expressed as
A = {x | x > 6}
where there is a clear, unambiguous boundary 6 such that if x is greater than this number then x belongs to the set A; otherwise x does not belong to the set.
Although classical sets are suitable for various applications and have proven to be an important tool for mathematics and computer science, they do not reflect the nature of human concepts and thoughts, which are abstract, imprecise and ambiguous.
For example, mathematically we can express the set of all tall persons as a collection of persons whose height is more than 6 ft :
A = {x | x > 6 ft}
The problem with the classical set is that it would classify a person 6.001 ft tall as a tall person, but a person 5.999 ft tall as "not tall". This distinction is intuitively unreasonable.
The flaw comes from the sharp transition between inclusion and exclusion in a set (Fig. 4.2.1).
Fig. 4.2.1 : Sharp-edged (crisp) membership for "tall" : the degree of membership jumps from Not tall (μ = 0.0) to tall (μ = 1.0) at the boundary
Fuzzy logic uses the "degrees of truth" rather than the usual "true or false" (1 or 0) Boolean logic.
Fuzzy logic includes 0 and 1 as extreme cases of truth but also includes the various states of truth in between so that, 3
for example, the result of a comparison between two things could be not "tall" or "short" but "0.38 of tallness."
Fig. 4.2.2 : Continuous membership function for TALL : degree of membership μ plotted against height, e.g. definitely a tall person (μ = 0.95) and really not very tall at all (μ = 0.30)
As shown in Fig. 4.2.2, fuzzy logic defines a smooth transition from 'not tall' to 'tall'. A person's height may now belong to both the groups 'tall' and 'not tall', but it will have a degree of membership associated with it for each group.
A person with 0.30 membership in the 'not tall' group and 0.95 membership in the 'tall' group is definitely categorized as a tall person.
3.  Fuzzy set elements are permitted to be partly accommodated by the set (exhibiting gradual membership degrees), whereas crisp set elements can have only total membership or non-membership.
4.  Fuzzy sets are capable of handling the uncertainty and vagueness present in the data, whereas a crisp set requires precise, complete and finite data.
-  If X is a collection of objects denoted generically by x, then a fuzzy set A in X is defined as a set of ordered pairs :
A = {(x, μ_A(x)) | x ∈ X}
where μ_A(x) is called the Membership Function (MF) for the fuzzy set A.
-  The MF maps each element of X to a membership grade between 0 and 1.
-  If the value of the membership function μ_A(x) is restricted to either 0 or 1, then A is reduced to a classical set.
-  Here, X is referred to as the Universe of discourse, or simply the Universe, and it may consist of discrete objects or a continuous space.
For example,
1.  Fuzzy sets with a discrete non-ordered universe
Let X = {San Francisco, Boston, Los Angeles} be the set of cities one may choose to live in. The fuzzy set A = "desirable city to live in" may be described as follows :
A = {(San Francisco, 0.9), (Boston, 0.8), (Los Angeles, 0.6)}
Here, the universe of discourse X is discrete and it contains non-ordered objects, in this case three big cities in the United States.
2.  Fuzzy sets with a discrete ordered universe
Let X = {0, 1, 2, 3, 4, 5, 6} be the possible numbers of children in a family. Then the fuzzy set A = "sensible number of children in a family" may be described as :
A = {(0, 0.1), (1, 0.3), (2, 0.7), (3, 1), (4, 0.7), (5, 0.3), (6, 0.1)}
Here we have a discrete ordered universe X.
[Plot of μ_A(x) against X = number of children, with membership grades 0.1, 0.3, 0.7, 1, 0.7, 0.3, 0.1.]
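A discrete fuzzy set like this is conveniently handled as a mapping from elements to membership grades. The small sketch below is illustrative only; the helper name is not from the text.

```python
# A discrete fuzzy set as a dict: element -> membership grade in [0, 1].
A = {0: 0.1, 1: 0.3, 2: 0.7, 3: 1.0, 4: 0.7, 5: 0.3, 6: 0.1}

def membership(fuzzy_set, x):
    """Membership grade of x; elements outside the support have grade 0."""
    return fuzzy_set.get(x, 0.0)

print(membership(A, 3))   # 1.0  -> full membership
print(membership(A, 6))   # 0.1  -> marginal membership
print(membership(A, 9))   # 0.0  -> zero grade
```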
3.  Fuzzy sets with a continuous universe
Let X be the set of possible ages. Then the fuzzy set B = "about 50 years old" may be expressed as
B = {(x, μ_B(x)) | x ∈ X}, where μ_B(x) = 1 / (1 + ((x - 50)/10)^4)
This is illustrated in Fig. 4.3.1 over the age axis 0 to 100.
Construction of a fuzzy set thus depends on two things : identification of a suitable universe of discourse and specification of an appropriate membership function.
The specification of a membership function is subjective, which means that the membership functions specified for the same concept by different persons may vary considerably. This subjectivity is the primary difference between the study of fuzzy sets and probability theory.
Ways of representing a fuzzy set
(2)  Using a membership function :
A fuzzy set can be represented by stating its membership function. E.g. to represent "real numbers considerably larger than 10" we define
μ_A(x) = 1 / (1 + (x - 10)^(-2)) for x > 10, and μ_A(x) = 0 for x ≤ 10.
(3)  Using Σ notation :
The fuzzy set for "comfortable type of house for a four-person family" may be described as
A = {0.2/1 + 0.5/2 + 0.8/3 + 1/4 + 0.7/5 + 0.3/6}
i.e. we define A as
A = Σ_i μ_A(x_i) / x_i   (for a discrete universe)
or
A = ∫_X μ_A(x) / x   (for a continuous universe)
4.3.4. Linguistic Variables and Linguistic Values
Suppose that X = "age". Then we can define fuzzy sets "young", "middle aged" and "old" that are characterized by the MFs μ_young(x), μ_middle aged(x) and μ_old(x).
A linguistic variable ("age") can assume different linguistic values, such as "young", "middle aged" and "old" in this case.
Note that the universe of discourse is totally covered by these MFs (MFs for young, middle aged and old) and the transition from one MF to another is smooth and gradual.
1.  Support :
The support of a fuzzy set A is the set of all points x in X such that μ_A(x) > 0.
2.  Core :
The core of a fuzzy set A is the set of all points x in X with full membership :
core(A) = {x | μ_A(x) = 1}
3.  Normality :
A fuzzy set A is normal if its core is non-empty. In other words, there must be at least one point x ∈ X such that μ_A(x) = 1.
4.  Crossover points :
A crossover point of a fuzzy set A is a point x ∈ X at which μ_A(x) = 0.5.
5.  Fuzzy singleton :
A fuzzy set whose support is a single point in X with μ_A(x) = 1 is called a fuzzy singleton.
Fig. 4.3.4 : A fuzzy singleton (membership grade 1 at the single point "45 year old" on the age axis)
Fig. 4.3.5 shows three parameters (core, support and crossover points) of a fuzzy set.
Fig. 4.3.5 : Core, Support and Crossover points of a fuzzy set (illustrated on the "Middle aged" MF over the age axis)
6.  α-cut / α-level set :
The α-cut of a fuzzy set A is defined as
A_α = {x | μ_A(x) ≥ α}
7.  Strong α-cut / strong α-level set :
A'_α = {x | μ_A(x) > α}
Using the above notations, we can express the support and core of a fuzzy set A as
support(A) = A'_0 and core(A) = A_1
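For a discrete fuzzy set these quantities can be computed directly from the definitions; the helper names in the sketch below are illustrative, not from the text.

```python
# Support, core, alpha-cuts and height of a discrete fuzzy set (dict: x -> grade).
A = {3: 0.1, 4: 0.2, 5: 0.3, 6: 0.4, 7: 0.6, 8: 0.8, 10: 1.0, 12: 0.8, 14: 0.6}

def alpha_cut(fs, alpha, strong=False):
    """A_alpha = {x | mu(x) >= alpha}; the strong cut uses a strict inequality."""
    return {x for x, mu in fs.items() if (mu > alpha if strong else mu >= alpha)}

support = alpha_cut(A, 0, strong=True)      # all x with mu(x) > 0
core = alpha_cut(A, 1)                      # all x with mu(x) = 1
height = max(A.values())                    # largest membership grade

print(sorted(support))   # every listed element
print(sorted(core))      # [10]
print(height)            # 1.0 -> the set is normal
```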
8.  Convexity :
A fuzzy set A is convex if and only if, for any x1, x2 ∈ X and any λ ∈ [0, 1],
μ_A(λx1 + (1 - λ)x2) ≥ min(μ_A(x1), μ_A(x2))
[Figure : two convex membership functions and a non-convex membership function]
9. Fuzzy numbers:
A fuzzy number A is a fuzzy set in the real line (R) that satisfies the conditions for normality and convexity.
11. Symmetry :
A fuzzy set A is symmetric if its MF is symmetric around a certain point x = c, i.e. μ_A(c + x) = μ_A(c - x) for all x ∈ X.
12. Open left, open right and closed MFs :
A fuzzy set A is open left if
lim (x → -∞) μ_A(x) = 1 and lim (x → +∞) μ_A(x) = 0
Fig. 4.3.7 : Open left MF (e.g. "Young" plotted against age)
A fuzzy set A is open right if
lim (x → -∞) μ_A(x) = 0 and lim (x → +∞) μ_A(x) = 1
Fig. 4.3.8 : Open right MF (e.g. "Old" plotted against age)
A fuzzy set A is closed if
lim (x → -∞) μ_A(x) = lim (x → +∞) μ_A(x) = 0
Fig. 4.3.9 : Closed MF (plotted against age)
"8. Cardinality :
A is defined as
Cardinality of a fuzzy set
Al = D pA
TechKaeuledge
Puptications
Scanned by CamScanner
WY _Alasc (MU-Sem. 7-Comp)
14. Relative cardinality :
~
WAM =
Al
degree Um
The height of a fuzzy setA in X, is equal to the largest membership
~ Sup
hgt(A) = ye x ha)
hgt
Vv
Fuzzy sets follow the same properties as crisp sets, except for the law of excluded middle and the law of contradiction :
A ∪ A' ≠ U ; A ∩ A' ≠ ∅
The following are the properties of fuzzy sets :
1.  Commutativity :
A ∪ B = B ∪ A ; A ∩ B = B ∩ A
2.  Associativity :
A ∪ (B ∪ C) = (A ∪ B) ∪ C
A ∩ (B ∩ C) = (A ∩ B) ∩ C
3.  Distributivity :
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
4.  Identity :
A ∪ ∅ = A ; A ∪ U = U
A ∩ ∅ = ∅ ; A ∩ U = A
5.  Involution :
(A')' = A
6.  De Morgan's law :
(A ∩ B)' = A' ∪ B' ; (A ∪ B)' = A' ∩ B'
1.  Containment or subset :
A is contained in B (i.e. A ⊆ B) if and only if μ_A(x) ≤ μ_B(x) for all x ∈ X.
Fig. 4.3.11 : Containment or subset
2.  Union (Disjunction) :
The union of two fuzzy sets A and B is a fuzzy set C whose MF is defined as
μ_C(x) = max(μ_A(x), μ_B(x))
3.  Intersection (Conjunction) :
The intersection of two fuzzy sets A and B is a fuzzy set C whose MF is defined as
μ_C(x) = min(μ_A(x), μ_B(x))
Fig. 4.3.13 : Intersection of two fuzzy sets
4.  Complement (Negation) :
μ_A'(x) = 1 - μ_A(x)
Other fuzzy operators include the algebraic product,
μ_(A·B)(x) = μ_A(x) · μ_B(x),
and the bounded sum,
μ_(A⊕B)(x) = min(1, μ_A(x) + μ_B(x)).
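For discrete fuzzy sets over a common universe, these operators reduce to element-wise max, min and complement. A small illustrative sketch (the variable names are not from the text):

```python
# Element-wise fuzzy union, intersection and complement over a shared universe.
A = {1: 0.1, 2: 0.2, 3: 0.3}
B = {1: 0.4, 2: 0.5, 3: 0.6, 4: 0.5}
universe = set(A) | set(B)

union        = {x: max(A.get(x, 0), B.get(x, 0)) for x in universe}
intersection = {x: min(A.get(x, 0), B.get(x, 0)) for x in universe}
complement_A = {x: 1 - A.get(x, 0) for x in universe}

print(sorted(union.items()))         # [(1, 0.4), (2, 0.5), (3, 0.6), (4, 0.5)]
print(sorted(intersection.items()))  # [(1, 0.1), (2, 0.2), (3, 0.3), (4, 0)]
print(sorted(complement_A.items()))  # [(1, 0.9), (2, 0.8), (3, 0.7), (4, 1)]
```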
4.4  Crisp Relation and Fuzzy Relations
4.4.1  Crisp Relation
-  An n-ary relation over M1, M2, ..., Mn is a subset of the Cartesian product M1 × M2 × ... × Mn. When n = 2, the relation is a subset of the Cartesian product M1 × M2. This is called a binary relation.
-  Let X and Y be two universes and X × Y be their Cartesian product. Then X × Y can be defined as
X × Y = {(x, y) | x ∈ X, y ∈ Y}
Every element in X is related to every element in Y.
-  We can define a characteristic function f that gives the strength of the relationship between each element of X and Y :
f_(X×Y)(x, y) = 1 if (x, y) ∈ X × Y ; 0 if (x, y) ∉ X × Y
-  For example, if X = {a, b, c} and Y = {1, 2, 3}, then
X × Y = {(a, 1), (a, 2), (a, 3), (b, 1), (b, 2), (b, 3), (c, 1), (c, 2), (c, 3)}
From this set we may select a subset R. The relation between the sets X and Y can also be represented as a coordinate diagram, as shown in Fig. 4.4.1.
Fig. 4.4.1 : Co-ordinate diagram of a relation
-  A characteristic function is used to assign values of relationship in the mapping of X × Y to the binary values {0, 1} and is given by
χ_R(x, y) = 1 if (x, y) ∈ R ; 0 if (x, y) ∉ R
In matrix form, the complete relation X × Y corresponds to a matrix of all 1s.
Operations on crisp relations :
1. Union
2. Intersection
3. Complement
...
5. Identity
Composition of relations :
Let X = {a1, a2, a3}, Y = {b1, b2, b3} and Z = {c1, c2, c3}, and let A be a relation from X to Y and B a relation from Y to Z.
[Figure : Illustration of the relations A and B]
Then the composition A ∘ B can be written as
A ∘ B = {(a1, c1), (a2, c1), (a2, c3)}
The relations A (on X × Y) and B (on Y × Z) and their composition A ∘ B (on X × Z) can equivalently be represented as 0/1 matrices; an entry of the composition matrix is 1 whenever some intermediate element of Y links the corresponding row element of X to the column element of Z.
3.  Inverse : (A ∘ B)^(-1) = B^(-1) ∘ A^(-1)
Definition :
A fuzzy relation R on U × V is a fuzzy set in the product space, written as
R = ∫_(U×V) μ_R(u, v) / (u, v)   (continuous case)
R = Σ_(U×V) μ_R(u, v) / (u, v)   (discrete case)
We can express a fuzzy relation R on U × V in matrix form, where the entry in row x and column y is μ_R(x, y). For example, the relation "x is approximately equal to y" could have μ_R(x, y) = 1 when x = y, and decreasing values (e.g. 0.8, 0.3) as |x - y| grows.
Fuzzy relations are very important in fuzzy controllers because they can describe the interaction between variables.
Four types of operations can be performed on fuzzy relations :
(1) Intersection   (2) Union   (3) Projection   (4) Cylindrical extension
1.  Intersection
Let R and S be binary relations defined on X × Y. The intersection of R and S is defined by
∀(x, y) ∈ X × Y : μ_(R∩S)(x, y) = min(μ_R(x, y), μ_S(x, y))
Instead of the minimum, any T-norm can be used.
2.  Union
The union of R and S is defined as
∀(x, y) ∈ X × Y : μ_(R∪S)(x, y) = max(μ_R(x, y), μ_S(x, y))
Instead of the maximum, any S-norm can be used.
Given two relations R and S on X × Y, suppose the probabilistic-sum S-norm S(a, b) = a + b - a·b is used instead of max. Then, for the example relations,
R ∪ S = [ 1     0.68  0.84
          0.44  0.44  0.7
          ... ]
This operation is more optimistic than the max operation : all the membership degrees are at least as high as in the max operation.
For the intersection of the same relations, using min,
R ∩ S = [ 0.3  0    0.1
          0.4  0.2  0.1
          0.2  0.2  0.4 ]
Suppose the algebraic-product T-norm T(a, b) = a·b is used instead of min. Then
R ∩ S = [ 0.20  0     0.1
          0.4   0.17  0.10
          0.13  0.13  0.28 ]
This operation is more pessimistic than the min operation : all the membership degrees are less than or equal to those in the min operation.
3.  Projection
Let R be a fuzzy relation on X × Y, for example (rows x1, x2, x3 ; columns y1, y2, y3, y4) :
x1 : [ 0    0.8  0    0   ]
x2 : [ 0.9  1    0.7  0.8 ]
x3 : [ ... ]
The projection on X means that each xi is assigned the maximum membership degree of its row : x1 is assigned the maximum of the first row, x2 the maximum of the second row and x3 the maximum of the third row. Thus,
Proj. R on X = 0.8/x1 + 1/x2 + 1/x3
Similarly, projecting on Y assigns each yj the maximum of its column :
Proj. R on Y = 0.9/y1 + 1/y2 + 0.7/y3 + 0.8/y4
The following properties hold for fuzzy relations :
1.  Commutativity :
R ∪ S = S ∪ R ; R ∩ S = S ∩ R
2.  Distributivity :
R ∪ (S ∩ T) = (R ∪ S) ∩ (R ∪ T)
3.  Idempotency :
R ∪ R = R ; R ∩ R = R
4.  Identity :
R ∪ ∅_R = R ; R ∩ ∅_R = ∅_R
R ∪ E_R = E_R ; R ∩ E_R = R
where ∅_R and E_R are the null relation (null matrix) and the complete relation (matrix of all 1s) respectively.
5.  Involution :
(R')' = R
6.  De Morgan's law :
(R ∩ S)' = R' ∪ S' ; (R ∪ S)' = R' ∩ S'
The law of excluded middle and the law of contradiction are not satisfied, i.e.
R ∪ R' ≠ E_R and R ∩ R' ≠ ∅_R
The composition operation can be used to combine two fuzzy relations defined on different product spaces.
1.  Max - Min Composition
Let R1 be a fuzzy relation defined on X × Y and R2 be a fuzzy relation defined on Y × Z. Then the max-min composition of the two fuzzy relations R1 and R2 is denoted by R1 ∘ R2 and defined as
R1 ∘ R2 = {[(x, z), max_y min(μ_R1(x, y), μ_R2(y, z))] | x ∈ X, y ∈ Y, z ∈ Z}
or μ_(R1∘R2)(x, z) = max_y {min(μ_R1(x, y), μ_R2(y, z))}
2.  Max - Product Composition
The max-product composition is defined as
μ_(R1∘R2)(x, z) = max_y {μ_R1(x, y) · μ_R2(y, z)}
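Both compositions are short loops over the relation matrices. The sketch below is illustrative only; it reuses the relation values that appear later in Ex. 4.9.9.

```python
# Max-min and max-product composition of fuzzy relations given as matrices.
R = [[0.4, 0.5, 0.0],          # relation from A = {a1, a2} to B = {b1, b2, b3}
     [0.2, 0.8, 0.2]]
S = [[0.2, 0.7],               # relation from B to C = {c1, c2}
     [0.3, 0.8],
     [1.0, 0.0]]

def compose(R, S, combine):
    """T(x, z) = max over y of combine(R(x, y), S(y, z))."""
    return [[round(max(combine(R[x][y], S[y][z]) for y in range(len(S))), 4)
             for z in range(len(S[0]))]
            for x in range(len(R))]

max_min = compose(R, S, min)                    # [[0.3, 0.5], [0.3, 0.8]]
max_prod = compose(R, S, lambda a, b: a * b)    # [[0.15, 0.4], [0.24, 0.64]]
print(max_min)
print(max_prod)
```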
4.5  Membership Functions
(MU - May 12, Dec. 12, Dec. 13, Dec. 14)
One way to represent a fuzzy set is by stating its Membership Function (MF). MFs can be represented using any mathematical equation as per requirement, or using one of the standard MFs available :
1. Increasing MFs (Γ function)
2. Decreasing MFs (L function)
3. Triangular MF (Λ function)
4. Trapezoidal MF (Π function)
5. Gaussian MFs
6. Generalized bell MFs
7. Sigmoidal MFs
1.  Increasing MFs (Γ function)
An increasing MF is specified by two parameters (a, b) as follows :
Γ(x; a, b) = 0 for x ≤ a ; (x - a)/(b - a) for a ≤ x ≤ b ; 1 for x ≥ b
[Plot of μ_A(x) rising from 0 at a to 1 at b]
3.  Triangular MF (Λ function)
A triangular MF is specified by three parameters (a, b, c) :
Λ(x; a, b, c) = 0 for x ≤ a ; (x - a)/(b - a) for a ≤ x ≤ b ; (c - x)/(c - b) for b ≤ x ≤ c ; 0 for x ≥ c
4.  Trapezoidal MF (Π function)
Trapezoid(x; a, b, c, d) = 0 for x ≤ a ; (x - a)/(b - a) for a ≤ x ≤ b ; 1 for b ≤ x ≤ c ; (d - x)/(d - c) for c ≤ x ≤ d ; 0 for x ≥ d
[Plot of μ_A(x) with corners at a, b, c, d]
Equivalently,
Trapezoid(x; a, b, c, d) = max(min((x - a)/(b - a), 1, (d - x)/(d - c)), 0)
The parameters {a, b, c, d} (with a < b ≤ c < d) determine the x coordinates of the four corners of the trapezoidal MF.
Triangular and trapezoidal MFs are not smooth at the corner points specified by the parameters. However, due to their simple formulae and computational efficiency, they are used extensively.
Some smooth and non-linear MFs (Gaussian and Generalized Bell) are discussed below.
5.  Gaussian MFs
A Gaussian MF is specified by two parameters {c, σ} :
Gaussian(x; c, σ) = e^(-(1/2)((x - c)/σ)^2)
[Plot of the Gaussian MF]
6.  Generalized bell MFs
A generalized bell MF is specified by three parameters {a, b, c} :
bell(x; a, b, c) = 1 / (1 + |(x - c)/a|^(2b))
[Plot : the slope at the crossover point is -b/2a]
The bell MF is a direct generalization of the Cauchy distribution used in probability theory, so it is also referred to as the Cauchy MF. The bell MF has more parameters than the Gaussian MF, so it has one more degree of freedom to adjust the steepness at the crossover points.
Although Gaussian and bell MFs achieve smoothness, they are unable to specify asymmetric MFs.
Asymmetric MFs
Asymmetric and closed MFs can be achieved by using either the absolute difference or the product of two sigmoidal functions.
7.  Sigmoidal MFs
A sigmoidal MF is specified by two parameters {a, c} :
sig(x; a, c) = 1 / (1 + e^(-a(x - c)))
Depending on the sign of the parameter a, a sigmoidal MF is open right or open left and thus is appropriate for representing concepts such as "very large" or "very negative". Sigmoidal functions are also widely used as the activation function in artificial neural networks.
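The standard parameterised MFs above are one-liners; the sketch below implements the triangular, trapezoidal, Gaussian, generalized bell and sigmoidal forms (the function names simply mirror the text and are not tied to any particular library).

```python
import math

# Standard parameterised membership functions (scalar input x, output in [0, 1]).
def trimf(x, a, b, c):
    """Triangular MF with corners a, b, c."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapmf(x, a, b, c, d):
    """Trapezoidal MF with corners a, b, c, d."""
    return max(min((x - a) / (b - a), 1.0, (d - x) / (d - c)), 0.0)

def gaussmf(x, c, sigma):
    """Gaussian MF centred at c with width sigma."""
    return math.exp(-0.5 * ((x - c) / sigma) ** 2)

def gbellmf(x, a, b, c):
    """Generalized bell MF; b controls the steepness at the crossover points."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def sigmf(x, a, c):
    """Sigmoidal MF; open right for a > 0, open left for a < 0."""
    return 1.0 / (1.0 + math.exp(-a * (x - c)))

print(trapmf(10, 5, 8, 12, 15))   # 1.0 -> "number close to 10" style example
print(trimf(45, 30, 45, 60))      # 1.0
print(round(gaussmf(50, 50, 10), 2), round(gbellmf(55, 10, 2, 50), 2))
```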
-— In Boolean logic we express everything in the form of 1 or Oi.e. true or false respectively
Fuzzy logic handles the concept of partial truth, where the truth value ranges between completely true and completely false, that is between 0 and 1. In other words, fuzzy logic can be considered a multi-valued logic.
In other words, fuzzy logic replaces Boolean truth values with degrees of truth.
The basic elements of fuzzy logic are fuzzy sets, linguistic variables and fuzzy rules.
Usually in mathematics variables take numerical values, whereas fuzzy logic allows non-numeric linguistic variables to be used to form the expressions of rules and facts.
Typically, the linguistic values are words, specifically adjectives like "small", "little", "medium", "high" and so on. A fuzzy set is a collection of couples of elements and their membership grades.
Linguistic Variables and Linguistic Values
A linguistic variable is a variable whose values are words or sentences in a natural or artificial language.
Consider the variable X = "age". The linguistic variable ("age") can assume different linguistic values such as "young", "middle aged", "mature" and "old" in this case.
Then 'age' can be considered a linguistic variable whose values can be "young", "middle aged", "mature" and "old", and these values can be characterized by the MFs μ_young(x), μ_middle aged(x), μ_mature(x) and μ_old(x).
The universe of discourse is totally covered by these MFs (MFs for young, middle aged, mature and old) and the transition from one MF to another is smooth and gradual.
Fig. 4.6.1 : Linguistic variable "age" as membership functions (Young, Middle Age, Mature, Old over the age axis)
4.6.2  Fuzzy Rules and Fuzzy Reasoning
-  Fuzzy inference is the process of obtaining new knowledge from existing knowledge.
-  To perform inference, knowledge must be represented in some form.
-  Fuzzy logic uses IF-THEN rules to represent its knowledge base.
-  The basic rule of inference in traditional two-valued logic is modus ponens, according to which we can infer the truth of a proposition B from the truth of A and the implication A → B.
Ex. If A is identified with "the tomato is red" and B with "the tomato is ripe", then if it is true that "the tomato is red" it is also true that "the tomato is ripe".
i.e.  Premise 1 (fact) : x is A
      Premise 2 (rule) : if x is A then y is B
      Consequence (conclusion) : y is B
-  However, in most human reasoning, modus ponens is employed in an approximate manner.
For e.g. if we have the same implication rule "if the tomato is red, then it is ripe" and we know that "the tomato is more or less red", then we may infer that "the tomato is more or less ripe".
-  When A, B, A' and B' are fuzzy sets of appropriate universes, the inference procedure is called "approximate reasoning" or fuzzy reasoning; it is also called Generalized Modus Ponens (GMP).
Definition : Approximate reasoning / fuzzy reasoning
Let A, A' and B be fuzzy sets of X, X and Y respectively. Assume that the fuzzy implication A → B is expressed as a fuzzy relation R on X × Y. Then the fuzzy set B' induced by "x is A'" (fact) and the fuzzy rule "if x is A then y is B" is defined by
μ_B'(y) = max_x min[μ_A'(x), μ_R(x, y)] = ∨_x [μ_A'(x) ∧ μ_R(x, y)]
or B' = A' ∘ (A → B)
Fig. 4.6.2 : Graphic interpretation of GMP using Mamdani’s fuzzy implication and max-min composition
For a fuzzy rule with two antecedents, "if x is A and y is B then z is C", with facts "x is A'" and "y is B'", the consequence is "z is C'". Here μ_C' can be defined as
μ_C'(z) = (w1 ∧ w2) ∧ μ_C(z)
where w1 and w2 are the maxima of the MFs of A ∩ A' and B ∩ B' respectively.
Thus, w1 denotes the degree of compatibility between A and A', and similarly w2 between B and B'.
Since the antecedent part of the fuzzy rule is constructed using the "and" connective, w1 ∧ w2 is called the firing strength or degree of fulfilment of the fuzzy rule.
The firing strength represents the degree to which the antecedent part of the rule is satisfied.
The MF of the resulting C' is equal to the MF of C clipped by the firing strength w (where w = w1 ∧ w2).
Consequence : z is C'
Multiple rules with multiple antecedents
-  Here C1' and C2' are the inferred fuzzy sets for rule 1 and rule 2 respectively.
-  When a given fuzzy rule assumes the form "if x is A or y is B then z is C", the firing strength is given as the maximum of the degrees of match on the antecedent part for the given condition.
Ex.  If x is A1 or y is B1 then z is C1.
Fig. 4.6.5
-  In the above example, because the two antecedents are connected using or, we take the maximum of w1 and w2 as the firing strength.
-  Since w2 > w1, we take w2 as the firing strength and then apply the min implication operator on the output MF C1.
[Figure : Fuzzy inference system : crisp input → fuzzification → rule evaluation (fuzzy output of each rule) → aggregation → defuzzification → final crisp output y]
1.  A rule base
It contains a number of fuzzy IF-THEN rules.
2.  A database
The database defines the membership functions of the fuzzy sets used in the fuzzy rules.
-  The information in the database includes :
o  Fuzzy membership functions for the input and output control variables.
o  The physical domains of the actual problems and their normalized values along with the scaling factors.
5.  Defuzzification Unit
It performs defuzzification, which converts the overall control output into a single crisp value.
— The rule base and the database are jointly referred to as the knowledge base.
Working
The input to the FIS may be a fuzzy or a crisp value.
1.  The Fuzzification Unit converts the crisp input into fuzzy input by using any of the fuzzification methods.
2.  Next, the rule base is evaluated. The database and the rule base are collectively called the knowledge base.
3.  Finally, the defuzzification process is carried out to produce the crisp output.
Methods of FIS
1) Mamdani FIS
2) Sugeno FIS
Mamdani fuzzy inference is the most commonly seen inference method. This method was introduced by Mamdani and Assilian (1975).
Another well-known inference method is the so-called Sugeno or Takagi-Sugeno-Kang method of fuzzy inference. This method was introduced by Sugeno (1985). It is also called the TS method.
The main difference between the two methods lies in the consequent of the fuzzy rules.
1.  Mamdani FIS
Mamdani FIS was proposed by Ebrahim Mamdani in the year 1975 to control a steam engine and boiler combination. To compute the output of this FIS given the inputs, six steps have to be followed.
The outputs of all the fuzzy rules are then combined to obtain the aggregated fuzzy output. Finally, defuzzification is applied to the aggregated fuzzy output to obtain a crisp output value.
Consider a two-input, two-rule Mamdani fuzzy inference system. Assume the two inputs are crisp values x and y.
Fig. 4.7.2(a) shows the Mamdani fuzzy inference system using min-max decomposition. It illustrates the procedure of deriving the overall output z when presented with the two crisp inputs x and y. In this Mamdani inference system, we have used min as the T-norm and max as the T-conorm operator.
The T-norm operator is used for inferencing the antecedent part of the rule, and the T-conorm operator is used to aggregate the outputs resulting from each rule.
The Mamdani model also supports max-product composition to derive the overall output z. Here the algebraic product is used as the T-norm operator and max is used as the T-conorm operator.
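A compact sketch of the two-input, two-rule Mamdani system described above is given below (all MFs and rule pairings are illustrative; min is used as the implication/T-norm and max for aggregation, with defuzzification left to a later step).

```python
# Two-rule Mamdani inference (min implication, max aggregation).
def tri(x, a, b, c):
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Rule 1: IF x is A1 AND y is B1 THEN z is C1
# Rule 2: IF x is A2 AND y is B2 THEN z is C2
A1, B1, C1 = (lambda x: tri(x, 0, 3, 6)), (lambda y: tri(y, 0, 4, 8)), (lambda z: tri(z, 0, 5, 10))
A2, B2, C2 = (lambda x: tri(x, 3, 6, 9)), (lambda y: tri(y, 4, 8, 12)), (lambda z: tri(z, 5, 10, 15))

def aggregated_output(x, y):
    w1 = min(A1(x), B1(y))                 # firing strength of rule 1
    w2 = min(A2(x), B2(y))                 # firing strength of rule 2
    # Clip each consequent by its firing strength, then aggregate with max.
    return lambda z: max(min(w1, C1(z)), min(w2, C2(z)))

mu = aggregated_output(x=4, y=6)
print([round(mu(z), 2) for z in (2, 5, 8, 10, 12)])   # sampled aggregated output MF
```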
Fig. 4.7.2(a) : Mamdani fuzzy inference systems using max - min decomposition
Fig. 4.7.2(b) : Mamdani fuzzy inference systems using max - product decomposition
2.  Takagi-Sugeno-Kang (TSK) FIS
Takagi-Sugeno FIS was proposed by Takagi, Sugeno and Kang in the year 1985.
A typical fuzzy rule in the TSK model has the form
IF x is A and y is B then z = f(x, y)
where x, y and z are linguistic variables, A and B are fuzzy sets in the antecedent part of the rule, and z = f(x, y) is a crisp function in the consequent part of the rule. Usually f(x, y) is a polynomial in the input variables x and y.
As before, the fuzzy operators are applied to the antecedent to evaluate each rule.
First order Sugeno fuzzy model
When f(x, y) is a first order polynomial (e.g. z = ax + by + c), the resulting FIS is called a first order Sugeno fuzzy model.
Zero order Sugeno fuzzy model
In the zero order fuzzy model, the output z is a constant (i.e. a = b = 0). The typical form of a rule in a zero order FIS is
IF x is A and y is B then z = c
where c is a constant.
In this case the output of each fuzzy rule is a constant, and the overall output is obtained via the weighted average method.
The output level z_i of each rule is weighted by the firing strength w_i of the rule, where w_i is obtained by applying the min or product operator to the antecedent MFs. For example, with two rules whose consequents are
z1 = p1·x + q1·y + r1 and z2 = p2·x + q2·y + r2,
the final output is the weighted average
z = (w1·z1 + w2·z2) / (w1 + w2)
Fig. 4.7.3 : Reasoning in the Sugeno FIS
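A first-order Sugeno system with two rules therefore needs only the firing strengths and the two linear consequents. A minimal sketch (the MF shapes and polynomial coefficients are invented for illustration, and at least one rule is assumed to fire):

```python
# First-order Sugeno (TSK) inference with two rules and weighted-average output.
def tri(x, a, b, c):
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def sugeno_output(x, y):
    # Rule 1: IF x is A1 AND y is B1 THEN z1 = 2x + y + 1
    w1 = min(tri(x, 0, 3, 6), tri(y, 0, 4, 8))
    z1 = 2 * x + y + 1
    # Rule 2: IF x is A2 AND y is B2 THEN z2 = x - y + 5
    w2 = min(tri(x, 3, 6, 9), tri(y, 4, 8, 12))
    z2 = x - y + 5
    # Weighted average of the rule outputs (assumes w1 + w2 > 0).
    return (w1 * z1 + w2 * z2) / (w1 + w2)

print(round(sugeno_output(4, 6), 2))   # crisp output, no separate defuzzification needed
```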
Comparison between the Mamdani and the Sugeno models
The main difference between them is on the basis of the output membership function.
-  Output membership function : The Sugeno output membership functions are either linear or constant. The difference between the two methods thus lies in the consequence of the fuzzy rules.
-  Aggregation and defuzzification procedure : Because the consequents differ, their aggregation and defuzzification procedures also differ.
-  Mathematical rules : More mathematical rules exist for the Sugeno rule than for the Mamdani rule.
-  Adjustable parameters : The Sugeno controller has more adjustable parameters than the Mamdani controller.
4.7.2  Fuzzification of Input Variables
-  Fuzzification is the process of converting a crisp set into a fuzzy set. Here the crisp value is transformed into linguistic variables.
-  In real world problems, many a time the input values are not very precise and accurate; rather they are uncertain, imprecise and may even be unknown. In such cases imprecision may arise due to the vagueness and incompleteness of the data.
-  This uncertainty can be represented as fuzzy and modelled by a fuzzy membership function.
Methods of membership value assignment
1.  Intuition
As the name suggests, this method is based upon the common intelligence of humans. Humans develop membership functions based on their own understanding capability.
[Plot : intuition-based MFs over the temperature axis 0 to 60]
2.  Inference
In the inference method we use knowledge to perform deductive reasoning. To deduce or infer a conclusion, we use the facts and knowledge of the particular problem. Let us consider the example of geometric shapes for the identification of a triangle.
We can infer membership values for all the types of triangles through the method of inference because we possess knowledge about the geometry of their shapes.
With A ≥ B ≥ C denoting the interior angles, the membership values for the five types of triangle can be defined as
μ_R(A, B, C) = 1 - (1/90)|A - 90°|              (approximate right-angle triangle)
μ_I(A, B, C) = 1 - (1/60) min(A - B, B - C)     (approximate isosceles triangle)
μ_E(A, B, C) = 1 - (1/180)|A - C|               (approximate equilateral triangle)
μ_IR(A, B, C) = μ_(I∩R)(A, B, C) = min{μ_I(A, B, C), μ_R(A, B, C)}
μ_T(A, B, C) = μ_((R∪I∪E)') = min{1 - μ_R, 1 - μ_I, 1 - μ_E}   (other triangles)
Example : Let (A, B, C) = (80°, 65°, 35°). Then
μ_R(A, B, C) = 1 - (1/90)|80 - 90| = 8/9
μ_I(A, B, C) = 1 - (1/60) min(15, 30) = 3/4
μ_E(A, B, C) = 1 - (1/180)(80 - 35) = 3/4
μ_T(A, B, C) = min{1/9, 1/4, 1/4} = 1/9
4.  Rank ordering
-  In the rank ordering method, preferences are assigned by a single individual, a committee, a poll or other opinion methods, and these can be used to assign membership values to fuzzy variables.
-  Here the preferences are determined by pairwise comparisons, and they are used to determine the ordering of the membership.
Example : Let us suppose 1000 people respond to a questionnaire and their pairwise preferences among the colours red, orange, yellow and blue are given as below.
Angular fuzzy sets
Example : Water samples are taken from a contaminated pond and their pH values are measured. We know that a pH value of 7 means a neutral solution.
-  Levels of pH between 14 and 7 are labelled Absolutely Basic (AB), Very Basic (VB), Basic (B), Fairly Basic (FB) and Neutral (N), drawn from θ = π/2 to θ = 0.
-  Levels of pH between 7 and 0 are labelled Neutral, Fairly Acidic (FA), Acidic (A), Very Acidic (VA) and Absolutely Acidic (AA), drawn from θ = 0 to θ = -π/2.
-  Linguistic values vary with θ and their membership values are given by the equation
μ(θ) = t · tan θ
[Figure : angular fuzzy set labels AB, VB, B, FB, N, FA, A, VA, AA placed from θ = +π/2 (AB) through 3π/8 (VB) down to θ = -π/2]
4.7.3  Defuzzification
Defuzzification converts the aggregated fuzzy output into a single crisp value.
Methods of defuzzification include : the max membership principle, centre of gravity (centroid), weighted average, mean-max membership (middle of maxima), centre of sums, centre of largest area, first/last of maxima and the bisector method.
Centre of sums (COS) method :
-  This is faster than many defuzzification methods that are presently in use.
-  This method involves the algebraic sum of the individual output fuzzy sets, instead of their union. The idea is to consider the contribution of the area of each output membership curve. In contrast, the centre of area/gravity method considers the union of all the output fuzzy sets.
-  In the COS method we take overlapping areas into account; if such overlapping areas exist, they are reflected more than once.
For a continuous output universe,
x* = [ Σ_(k=1..N) ∫ x · μ_(C_k)(x) dx ] / [ Σ_(k=1..N) ∫ μ_(C_k)(x) dx ]
and for a sampled (discrete) universe,
x* = [ Σ_i x_i · μ_C(x_i) ] / [ Σ_i μ_C(x_i) ]
Fig. 4.7.11 : Weighted average method
-  The weighted average method is formed by weighting each membership function in the output by its respective maximum membership value.
-  The two output functions shown in Fig. 4.7.11 would result in the following general form of defuzzification :
x* = (a × 0.5 + b × 0.9) / (0.5 + 0.9)
5.  Mean-max membership (middle of maxima)
-  This method is closely related to the max-membership principle (height defuzzification) method, except that the locations of the maximum membership can be non-unique (there can be more than one).
-  In that case we take the average of the elements having the maximum membership value of the maximizing MF.
Fig. 4.7.12 : Mean of maximum method
-  Algebraically,
x* = (a + b) / 2
where a and b are the end points of the interval over which the membership is maximum.
-  Centre of largest area : this method uses the overall output (i.e. the union of all the individual output MFs) and, when it is non-convex, takes the centre of gravity of the convex sub-region with the largest area.
-  First of maxima is determined by taking the smallest value of the domain with maximized membership degree.
-  Last of maxima is determined by taking the greatest value of the domain with maximized membership degree.
-  The bisector method chooses the value x* that divides the area under the aggregated MF into two equal halves.
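The common defuzzifiers are easy to compare on a sampled output MF. The sketch below is illustrative (the aggregated output shape and helper names are invented) and implements the centroid, mean-of-maxima, bisector and weighted-average calculations on a discretised universe.

```python
# Defuzzification of a sampled aggregated output MF mu over points zs.
def tri(x, a, b, c):
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Example aggregated output: two consequents clipped at 0.5 and 0.9 (max-aggregated).
mu = lambda z: max(min(0.5, tri(z, 0, 2, 4)), min(0.9, tri(z, 4, 6, 8)))
zs = [z * 0.01 for z in range(0, 801)]        # sampled output universe [0, 8]

def centroid(zs, mu):
    return sum(z * mu(z) for z in zs) / sum(mu(z) for z in zs)

def mean_of_maxima(zs, mu):
    peak = max(mu(z) for z in zs)
    maxima = [z for z in zs if mu(z) >= peak - 1e-9]
    return sum(maxima) / len(maxima)

def bisector(zs, mu):
    total, running = sum(mu(z) for z in zs), 0.0
    for z in zs:
        running += mu(z)
        if running >= total / 2:
            return z

def weighted_average(centres_and_heights):
    return (sum(c * h for c, h in centres_and_heights)
            / sum(h for _, h in centres_and_heights))

print(round(centroid(zs, mu), 2))                        # centre of gravity
print(round(mean_of_maxima(zs, mu), 2))                  # middle of the highest plateau
print(round(bisector(zs, mu), 2))                        # splits the area into equal halves
print(round(weighted_average([(2, 0.5), (6, 0.9)]), 2))  # (2*0.5 + 6*0.9)/1.4 = 4.57
```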
3.  Defuzzification module
In addition to this, it uses two more components, which together form the knowledge base :
-  Data base and
-  Rule base
1.  Fuzzification module :
The fuzzification module performs the following two functions.
3.  Defuzzification module :
(a) Defuzzification : It performs defuzzification, which converts the overall control output into a single crisp value.
(b) Output denormalization : This block maps the crisp value of the control output into the physical domain. This block is optional; it is used only if normalization is performed during the fuzzification phase.
1.  Identification of variables : Here, the input, output and state variables of the plant under consideration must be identified.
2.  Fuzzy subset configuration : The universe of information is divided into a number of fuzzy subsets and each subset is assigned a linguistic label. Always make sure that these fuzzy subsets include all the elements of the universe.
3.  Obtaining membership functions : Now obtain the membership function for each fuzzy subset obtained in the above step.
4.  Fuzzy rule base configuration : Now formulate the fuzzy rule base by assigning relationships between the fuzzy inputs and outputs.
5.  Fuzzification : The fuzzification process is initiated in this step.
6.  Combining fuzzy outputs : By applying fuzzy approximate reasoning, locate the fuzzy outputs and merge them.
7.  Defuzzification : Finally, initiate the defuzzification process to form a crisp output.
4.8.2  Advantages of FLSs
-  It uses very simple mathematical concepts for reasoning.
-  An FIS can be modified by just adding or deleting rules, due to the flexibility of fuzzy logic.
-  Fuzzy logic systems can take imprecise, distorted, noisy input information.
-  FLSs are easy to construct and understand, as fuzzy logic resembles human reasoning.
-  Fuzzy logic is a solution to complex problems in all fields of life, including medicine and decision making.
4.8.3  Disadvantages of FLSs
-  There is no systematic approach to fuzzy system design.
-  They are understandable only when simple.
-  They are suitable for problems which do not need high accuracy.
4.9  Solved Problems
Ex. 4.9.2 :  Model the following fuzzy set using a suitable fuzzy membership function : "Number close to 10".
Soln. :
X = Integers. Table P. 4.9.2 lists x and the corresponding μ_A(x); Fig. P. 4.9.2 shows the plot of the degree of membership for each element.
x       : 5,    6,    7,   8,   9,   10,  11,  12,  13,  14,   15
μ_A(x)  : 0.03, 0.05, 0.1, 0.2, 0.5, 1,   0.5, 0.2, 0.1, 0.05, 0.03
Fig. P. 4.9.2 : Plot of x vs μ_A(x)
Ex. 4.9.3 :  Model the following fuzzy set using a suitable membership function : "Integer number considerably larger than 6".
Soln. :
Here the universe of discourse is the set of all integer numbers : X = Integers.
Then the fuzzy set for "number considerably larger than 6" can be defined by
μ_A(x) = 1 / (1 + (x - 6)^(-2)) for x > 6, and μ_A(x) = 0 for x ≤ 6.
x       : 6,  7,   8,   9,    10,   11,   12
μ_A(x)  : 0,  0.5, 0.8, 0.90, 0.94, 0.96, 0.97
[Plot of μ_A(x) for x > 6]
Soln. :
The following are the α-level sets :
{1, 2, 3, 4, 5, 6}
{2, 3, 4, 5, 6}
{2, 3, 4, 5}
{3, 4, 5}
{3, 4}
{4}
The following are the strong α-level sets :
{2, 3, 4, 5, 6}
{2, 3, 4, 5}
{3, 4, 5}
{3, 4}
Ex. 4.9.5 :  Find out all α-level sets and strong α-level sets for the following fuzzy set :
A = {(3, 0.1), (4, 0.2), (5, 0.3), (6, 0.4), (7, 0.6), (8, 0.8), (10, 1), (12, 0.8), (14, 0.6)}
Soln. :
α-level sets :
A_0.1 = {3, 4, 5, 6, 7, 8, 10, 12, 14}, A_0.2 = {4, 5, 6, 7, 8, 10, 12, 14}, A_0.3 = {5, 6, 7, 8, 10, 12, 14}, A_0.4 = {6, 7, 8, 10, 12, 14}, A_0.6 = {7, 8, 10, 12, 14}, A_0.8 = {8, 10, 12}, A_1 = {10}
The strong α-level sets are obtained in the same way with a strict inequality, e.g. A'_0.1 = {4, 5, 6, 7, 8, 10, 12, 14} and A'_0.8 = {10}.
Al&SC (MU-Sem. 7-Com
Ex. 4.9.6. —A realtor wants to classify the houses he offers to his clients. One indicator of comfort of these houses is the
number of bedrooms in them. Let the available types of houses be represented by the following set.
U={1, 2, 3, 4, 5, 6, 7, 8, 9, 10} a 7
The houses in this set are specified by the number of bedrooms in a house. Describe comfortable house for
4-person family’ using a fuzzy set.
Solin. :
The fuzzy set for “comfortable type of house for a 4-person family” may be described as,
A = {(1, 0.2), (2, 0.5), (3, 0.8), (4, 1), (5, 0.7), (6, 0.3)}
Fig. P. 4.9.6 : Plot of μ_A(u) against U (number of bedrooms)
Scanned by CamScanner
1 l&SC (MU-Sem. 7-Comp)
O . ,x10
Wy (x) TT, x> 10
1+——
(x- 10)
Then,
min (1 + (x10), +@-11y'] »x> 10
Hx ng &) 0 ,xs10
approximately 11”.
and
= {max((1+@- ior +(x-11)y'] x€ X
ppg
1.0 7
0.9 7
0.8 7
0.7 7
0.6 T
0.5 7
0.4 T
0.3 7
0.2 7
0.1 4
ere
Scanned by CamScanner |
Soln. :
"Number close to 10" can be represented by a trapezoidal MF. We select a = 5, b = 8, c = 12 and d = 15 :
Trapezoid(x; 5, 8, 12, 15) = 0 for x < 5 ; (x - 5)/3 for 5 ≤ x < 8 ; 1 for 8 ≤ x ≤ 12 ; (15 - x)/3 for 12 < x ≤ 15 ; 0 for x > 15
[Plot of Trapezoid(x; 5, 8, 12, 15) over x = 4 to 16]
Ex. 4.9.9 :  Let A = {a1, a2}, B = {b1, b2, b3}, C = {c1, c2}. Let R be a relation from A to B defined by the matrix
        b1    b2    b3
a1  [ 0.4   0.5   0   ]
a2  [ 0.2   0.8   0.2 ]
and let S be a relation from B to C defined by the matrix
        c1    c2
b1  [ 0.2   0.7 ]
b2  [ 0.3   0.8 ]
b3  [ 1     0   ]
Find T = R ∘ S using max-min and max-product composition.
Soln. :
1.  Max - min composition :
T (a,c) = max (min (0.4, 0.2), min (0.5, 0.3), min (0, 1) )
max (0.2, 0.3, 0) = 0.3
T (a1, C2) =
max (min (0.4, 0.7), min (0.5, 0.8), min (0, 0) )
max (0.4, 0.5, 0) = 0.5
T (a,c) = max (min (0.2, 0.2), min (0.8, 0.3), min (0.2, 1) )
max (0.2, 0.3, 0.2) = 0.3
T (a2 C2) =
max (min (0.2, 0.7), min (0.8, 0.8), min (0.2, 0) )
max (0.2, 0.8, 0) = 0.8
2.  Max - product composition :
T(a1, c1) = max(0.4 × 0.2, 0.5 × 0.3, 0 × 1) = max(0.08, 0.15, 0) = 0.15
T(a1, c2) = max(0.4 × 0.7, 0.5 × 0.8, 0 × 0) = max(0.28, 0.40, 0) = 0.40
T(a2, c1) = max(0.2 × 0.2, 0.8 × 0.3, 0.2 × 1) = max(0.04, 0.24, 0.2) = 0.24
T(a2, c2) = max(0.2 × 0.7, 0.8 × 0.8, 0.2 × 0) = max(0.14, 0.64, 0) = 0.64
giving the rows T(a1) = [0.15, 0.40] and T(a2) = [0.24, 0.64].
- i,
Ex. 4.9.10: High speed rail monitoring devices sometimes make use of sensitive sensors to measure the deflection
of the
earth when a rail car passes. These deflections are measured with respect to some distance from the rail cay
and, hence are actually very small angles measured in micro-radians. Let a universe of
deflection be
A=[1, 2, 3, 4] where A is the angle in micro-radians, and let a universe of distance be D =[1, 2, 5, 7] where p
is distance in feet, suppose a relation between these two parameters has been determined as follows :
0.3
0.7} 1 0.2
_forlo4a}1
Now let a universe of rail car weights be W = [1, 2], where W is the weight in units of 100,000 pounds.
Suppose the fuzzy relation of W to Ais given by, ,
Scanned by CamScanner
o, | 05]
o, |} 03} 03]
D,)02| 04
position
fa) Using max product com
Tw, [w2
p, {1 | 04
Ts Db, {os |i |
0, |03 | 03)
0, | 0.06 | 0.1 |
03,0%0) = max (1, 0.10 ,= 1
0. Q)
T (Dy Wd max (1 «1,02% 05,0"
O.1,0%0)
max (1 «04, 0.2 * 1.0%
(DI, W2)
= 0.4
max (0.4, 0.2. 0, 0)
O)
mar (0.3 1,1 «x O05, 0.7 O35, O&
(D2, WD
#3
ayar (0 3,0.5.0.21, 9)
7 © O04, 0.1 80)
mar (3 «OA, LX 1.0.
‘7 (D2, We)
t
ryan (0.92, 1. O07, 0) #
bx OS, OA & OF
mar (lx ta x OS,
(p3, WD)
404
ajar (8, 0 13,04, 07
OF EL ROL, Ob & OF
mas (OE id,
y(D3, W2)
Op ad j
evar (UO, O Gb
EL OLAG GPK OR ERO
mas (Ox
Ty (DA, Wt)
OD WOR
pian QUA, 13.00, AN AI,
2e Ot, bk eO)
man (me Od, Oh «ft G@
‘yps, Wa) Ob
" pan Gb, 4nd, Gp
oneet pest SAE OED RHE
a ° Posters —
—
Scanned by CamScanner
Let X be a reasonable age interval of human beings :
X = {0, 1, 2, 3, ..., 100}
Then a fuzzy set "Middle age" can be represented using a trapezoidal MF as follows.
0 ; x $30
(x-30)/10 , 30<x<40
Trapezoid (x; 30, 40, 60, 70) = 1 , 405x560
(70-xV/10 , 60<5x<70
0 » *&>70
[Plot of the trapezoidal MF with corners a = 30, b = 40, c = 60, d = 70]
Fig. P. 4.9.11 : Trapezoidal MF for "Middle age"
Ex. 4.9.12 :  Represent the set of old people as a fuzzy set using an appropriate membership function.
Soln. :
Let X = (0, 120) be the set of all possible ages.
Fig. P. 4.9.12 : Membership function for "old people" (over ages 0 to 120)
μ_old(x) = 0 for 0 ≤ x < 60 ; (x - 60)/20 for 60 ≤ x < 80 ; 1 for x ≥ 80
Scanned by CamScanner
Ex. 4.9.13 :  Develop a graphical representation of membership functions to describe the linguistic variables "cold", "warm", "hot" and "not" for temperature. The temperature ranges from 0 to 100 °C. Also show the plots for "cold and warm" and "warm or hot".
Soln. :
Fig. P. 4.9.13 : MFs for cold, warm and hot temperature (plotted against temperature, 0 to 100 °C)
(a) Plot for "cold and warm"
(b) Plot for "warm or hot"
Fig. P. 4.9.13(b) : MF for "warm or hot"
ee
(2) Intersection
_ .
(3) Set difference WwW [ukrnomielas E
Re n's law.
Scanned by CamScanner
Al&SC (MU-Sem.
7-Comp)
Soln. ;
1. Union
2. Intersection
=| 0.1 0.2 03
ANB = IT +y+3 I
3. Set difference
~ ~ ~ ze
AIB = AN
~ 0.1 02 03
A = pty +>}
= O04 OS 06 0.5
B= iT +7 +57 +7)
B | A = B nT A
os 0.6 0. O4 0
B = TT +5 +> +23)
= 06 05 O04 }
BOA = +> +
" 01 02 03
AS Up y+ +)oO
~ 06 05 04 05
Betpty epee)
LH.S:AUB
~~ 06 05 04
AUB = (pey +e)9,
=——=
AUB =
04apse
05 06 0.
R.H.S:AAB
= 09 08 07 1
e
A = {498,071} |
} ms Ps _———
Fubticatians
Scanned by CamScanner
-Sem. 7-Comp)
poresRa
= i
B . {%4 05 06° i
SU ee Ob 05 P ; if
AMB = {9405 06 05 (2) ‘
since LH.S. = R.H.S. hence proved, rtp+gty} :
:
yu . _ u _
1 ,
0.7
| 1
1 Ay | | 4 mx
J | —x 1 2 3 4 5 6
0 1 2 3 4
Fig. P. 4.9.15
Soln,:
~ .
Wrsneen
Scanned by CamScanner
Therefore, Area of A1 = (1 + 3) × 0.7 / 2 = 1.4
Similarly, Area of A2 = (2 + 4) × 1 / 2 = 3
Centre of A1 = 2.5 ; Centre of A2 = 4
x* = (2.5 × 1.4 + 4 × 3) / (1.4 + 3) = 15.5 / 4.4 = 3.52
Ex. 4.9.16: Consider three fuzzy sets C, , C, and C, given below.-Find defuzzified value using :
0.75
0.5
0.25
1.0
0.75
05+
~ 0.25 +
Scanned by CamScanner
(MU-Der._+ -vomp)
Ae
golf *
First find aggregation of all MFs (union)
55 3.6
set of C, ,C, and 6;
Fig. P. 4.9.16(c) : Aggregated fuzzy
a) Using mean of max membership value :
Since C3 is the maximizing MF, we take the mean (average) of all the elements having the maximum membership value :
x* = (6 + 7) / 2 = 13 / 2 = 6.5
+ t. . oe
a
individual fuzzy S&
Then find centre of each
of C1 = 2.5
Centre
vive sieve
Centre of C2 5
t OVATE RMS eA
Centre of Cs J BS
— : OS
Scanned by CamScanner
, 4-58
AN&SC(MU-Sem. 7-Comp)
+(2x 6.9)
(2.5 x 1.2) +. 5x5)
= 12+15+2
Centre of C = 2.5
Centre of C 5
Centre of C; 6.5
~ = {9+
f01 03,0811,
Ba Grargt
08
sf
-
0.2 | 98, Oe +4
0.5 }
~ f0.1 02
B= {eS
1
02 a}
A= {+
Seaweed
Scanned by CamScanner
2) Algebraic product
MA-BO = BZ®- ns @]
— |_|
3) Bounded sum
4) Bounded difference
A 03 07 1 ~ 0.4 09
A ={ So art and B = {f2+08 }
XX % My Ya
4
0. 0.2 04,06 08, 1) — g
High temperature= 4 734 * 135 * 136 * 137 * 138 139
— . WH Techhnewtedys
Publications
:
Scanned by CamScanner
Al&SC (MU-Sem. 7-Comp) 4-60
FUZZY Logi
01 O02 04 06 O08
High pressure= 4 409 + 600* 700* 800* 300 +7006 }
Temperature ranges are 130° F to 140° F and pressure limit is 400 psi to 1000 psi. Find the following
membership functions :
Verylow = low2
Scanned by CamScanner
dg aigsc (MU-Sem- 7-Comp) Fuzzy Logic.
goln. :
“ ' using max-min composition
= 07
Xi 0.6 05 0.3
T= ROS xy l 08 04 an |
2, Using max-product
= max (0.6x1,0.3 x 0.8)
T(X1, 21)
= max (0.6, 0.24)
= 0.6 |
x 0.4)
T(X1, Z2) = max (0.6 x 0.5 , 0.3
= max (0.30, 0.12)
= 03
= 0.21
Scanned by CamScanner
AIBSC (MU-Sem. 7-Comp)
= max (0.1, 0.36)
= 0.36
T(X2, Z3) = max (0.2x0.3, 0.90.7)
= max (0.06
, 0.63)
= 0.63
Zl Z2 Z3
X1 fF 06 03 0.21
T= ROS= xo [ 0.72 0.36 0.63 ]
Ex. 4.9.22: Given two fuzzy relations R, and R, defined on X x Y and Y x Z respectively, where
R,; = 04 03 07 09
0.6 0.1 08 0.2
Z = {a, b}
Similarly Rz is defined on Y x Z where Y= {a B, y, 5}and
So, : a b
0.1 0.9
a
02 03.
R = &
27" YT! 05 06
51 o7 03
and Z = {a, bh.
So composition of Ry and R2 will be defined on X x Z where X = {1, 2, 3}
Scanned by CamScanner
wy nese U-Sem. 7-Comp)
Mri oR2(2, b) = max (min (0.4, 0.9), min (0.3, 0.3), min (0.7, 0.6), min (0.9,
= max (0.4, 0.3, 0.6, 0.3)
= 06
0.7))
Bri or2 (3, a) = max (min (0.6, 0.1), min (0.1, 0.2), min (0.8, 0.5), min (0.2,
= max (0.1, 0.1, 0.5, 0.2)
= 0.5 .
Uri oR2 (3,b) = max (min (0.6, 0.9), min (0.1, 0.3), min (0.8, 0.6), min (0.2, 0.3))
= max (0.6, 0.1, 0.6, 0.2) .
= 06
2. Max - product composition
a b
0.35 0.18
—
0.40 0.54
= max (0.1 x 0.1, 0.2.x 0.2, 0.3 x 0.5, 0.5 x 0.7)
Uri oR? (1, a)
= max (0.01, 0.04, 0.15, 0.35)
= 0.35
= max (0.1 x 0.9, 0.2 x 0.3, 0.3 x 0.6, 0.5 x 0.3)
Uri oR2 (1,b)
= max (0.09, 0.06, 0.18, 0.15)
= 0.18
= max (0.4 x 0.1, 0.3 x 0.2, 0.7 x 0.5, 0.9 x 0.7)
rio R2 (2, a)
= max (0.04, 0.06, 0.35, 0.63)
= 0.63
= max (0.4 x 0.9, 0.3 x 0.3, 0.7 x 0.6, 0.9 x 0.3)
Pri o R2 (2, b)
= max (0.36, 0.09, 0.42, 0.27)
= 0.42
Uri oR2 Os a)
= max (0.6 x 0.1, 0.1 x 0.2, 0.8 x 0.5, 0.2 x 0.7)
max (0.06, 0.02, 0.40, 0.14)
‘wie Toth Keemloan:
Scanned by CamScanner
WF AlgSc(MU-Sem.7-Comp)
= 0.40 X 0.6 x 0.2
x 0.3)
= max (0.6 x 0.9,
0.1 x 0.3, 0.8
Hei or2 ( 3,b) 20.54
8, 0.0 6
max (0.54, 0.03, 0.4
maturity
the ‘co lor of aa fru it’ and ‘grade of
: wee n fru it’ , wh er e col or, grade ang
Ex. 4.9.23: Let R be the relation that specifies the relationship bet of
and ‘taste
Relation S specifies the relationship between ‘gra de of maturity’
ly as follows.
taste of a fruit are characterized by crisp sets x, y, Z respective
X = {green, yellow, red}
0.5
0.2
Solin. :
-T (green, sour) = max (min (1, 1), min (0.5, 0.7), min (0, 0))
= max (1, 0.5, 0)
= 1
T (green, tasteless) = max (min (1, 0.2), min (0.5, 1), min (0, 0.7))
= max (0.2, 0.5, 0)
= 0.5
T (green, sweet) = max (min (1, 0), min (0.5, 0.3), min 0,1)
= 0.5
T (green, sweet) = max (1 x0, 0.5 x 0.3, 0x 1)
= max (0, 0.15, 0)
= 0.15
T (yellow, sour) = max (0.3 x 1, 1 x 0.7, 0.4 x 0.7)
= max (0.3, 0.7, 0.28)
= 0.7
T (yellow, tasteless) = max (0.3 x 0.2, | x 1, 0.4 x 0.7)
= max (0.06, 1, 0,28)
= 1
Scanned by CamScanner
Al&SC (MU-Sem. 7-Comp)_
Ex. 4.10.1: Design a fuzzy controller to regulate the temperature of a domestic shower. Assume that:
{a) The temperature is adjusted by single mixer tap.
(b) The flow of water is constant. .
(c) Control variable is the ratio of the hot to the cold water input.
The design should clearly mention the descriptors used for fuzzy sets and control variables, set of rules to
generate control action and defuzzification. The design should be supported by figures where ever possible.
Soln. :
Step 1: Identify input and output variables and decide descriptors for the same.
— Here input is the position of mixer tap. Assume that position of mixer tap is measured in degrees (0° to 180°). It
represents opening of the mixer tap in degrees. 0° indicates tap is closed and 180°indicates tap is fully opened.
— Output is temperature of water according to the position of mixer tap. It is measured in°C. We take five descriptors
for each input and output variables.
— Descriptors for input variable (position of mixer tap) are given below.
EL - Extreme Left
L  - Left
C  - Centre
R  - Right
ER - Extreme Right
i.e. the input descriptors are {EL, L, C, R, ER}.
Scanned by CamScanner
Al&SCAME “Sem. /-Comp)
wr - WarmTemperatura Fuzzy Logic
HT - Hot Temperature
° ° x(degree)
45 90° 135° 180°
°
Fig.g. P. P. 4.10.1: ; Membership function for position of mixer
tap
45— ,0<x<45
Hey (x) = =
Xx
a5 «OSKS45
My (x)
90-x
45.” 45<x<90
x-45
He (Xx) =
135 —x
45 , 90<x<135
x-90
45 », 908x135
br (x) = iko—x
“45. °? 135 <x < 180
x- 135
lpr (x) = gg > 135 SxS 180
(2) Membership functions for the output variable : temperature of water (y in °C).
Fig. P. 4.10.1(a) : Membership functions for water temperature
μ_VCT(y) = (10 - y)/10 , 0 ≤ y ≤ 10
μ_CT(y) = y/10 , 0 ≤ y ≤ 10 ; (40 - y)/30 , 10 < y ≤ 40
μ_WT(y) = (y - 10)/30 , 10 ≤ y ≤ 40 ; (80 - y)/40 , 40 < y ≤ 80
μ_VHT(y) = (y - 80)/20 , 80 < y ≤ 100
Table P. 4.10.1 : Rule base
EL → VCT ; L → CT ; C → WT ; R → HT ; ER → VHT
We can read the rule base shown in Table P. 4.10.1 in terms of If-then rules :
Rule 1 : If mixer tap position is EL (Extreme Left) then temperature of water is VCT (Very Cold Temperature).
Rule 2 : If mixer tap position is L (Left) then temperature of water is CT (Cold).
Rule 3 : If mixer tap position is C (Centre) then temperature of water is WT (Warm).
Rule 4 : If mixer tap position is R (Right) then temperature of water is HT (Hot).
Rule 5 : If mixer tap position is ER (Extreme Right) then temperature of water is VHT (Very Hot).
Thus, we have five rules.
Step 4 : Rule evaluation.
Assume that the mixer tap position is 75°. The value x = 75° maps to the following two MFs, of Rule 2 and Rule 3 respectively :
Rule 2 : μ_L(x) = (90 - x)/45
Rule 3 : μ_C(x) = (x - 45)/45
Now substitute the value x = 75 in the above two equations to get the strength of each rule :
Strength of Rule 2 = μ_L(75) = (90 - 75)/45 = 1/3
Strength of Rule 3 = μ_C(75) = (75 - 45)/45 = 2/3
Now find the rule with the maximum strength :
max(strength of Rule 2, strength of Rule 3) = max(μ_L(x), μ_C(x)) = max(1/3, 2/3) = 2/3
Thus, Rule 3 has the maximum strength.
According to Rule 3, if the mixer tap position is C (centre) then the water temperature is Warm. So we use the output MFs of the warm water temperature for defuzzification. We have the following two equations for the warm water temperature :
μ_WT(y) = (y - 10)/30 and μ_WT(y) = (80 - y)/40
Since the strength of Rule 3 is 2/3, substitute μ_WT(y) = 2/3 in the above two equations :
(y - 10)/30 = 2/3, giving y = 30
(80 - y)/40 = 2/3, giving y ≈ 53.3
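The evaluation above fits in a few lines of code. The sketch below is only a rough check of the working, using the tap-position and warm-temperature MFs assumed in this example; it reproduces the rule strengths for x = 75° and the two crisp candidates from the Warm output MF.

```python
# Shower controller example: rule strengths at x = 75 degrees and the
# y values where the Warm output MF equals the winning strength.
def mu_L(x):   return x / 45 if x <= 45 else (90 - x) / 45          # Left
def mu_C(x):   return (x - 45) / 45 if x <= 90 else (135 - x) / 45  # Centre

x = 75
s2, s3 = mu_L(x), mu_C(x)
print(round(s2, 3), round(s3, 3))        # 0.333 and 0.667 -> Rule 3 wins

w = max(s2, s3)                          # strength of the winning rule
y_left = 10 + 30 * w                     # solve (y - 10)/30 = w
y_right = 80 - 40 * w                    # solve (80 - y)/40 = w
print(y_left, round(y_right, 1))         # 30.0 and 53.3
```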
Ex. 4.10.2 :  Design a controller to determine the wash time of a domestic washing machine. Assume that the inputs are dirt and grease on the clothes. Use three descriptors for the input variables and five descriptors for the output variable. Derive the set of rules for controller action and defuzzification. The design should be supported by figures wherever possible. Show that if the clothes are soiled to a larger degree, the wash time will be more, and vice-versa.
Soln.:
decide descriptors for the same.
Step1: Identify input and output variables and
Here inputs are ‘dirt’ and ‘grease’. Assume that they are measured in percentage (%). That is amount of dirt and —
{Sp, MD, LD }
NG . No Grease
‘MG . Medium Grease
SENSE
> Se TechXnowledas
Scanned by CamScanner
Al&SC (MU-Sem. 7-Comp)
‘| LG -. LargeGrease .
i We use five descriptors for output variable.
H So, descriptors for wash time are {VS, S, M, L, VL}
| VS - Very Short
S - Short
M - Medium ~
L - Large
VL - Very Large
: ariables.
Step2: Define membership functions for each of the input and output v
We use triangular MFs because of their simplicity.
(1) Membership functions for dirt
ae)
| x (dirt in %)
| 0 50 100
; Fig. P. 4.10.2 : Membership functions for dirt
μSD(x) = (50 − x)/50 ,   0 ≤ x ≤ 50

μMD(x) = x/50 ,          0 ≤ x ≤ 50
       = (100 − x)/50 ,  50 < x ≤ 100

μLD(x) = (x − 50)/50 ,   50 ≤ x ≤ 100
(2) Membership functions for grease

Fig. P. 4.10.2(a) : Membership functions for grease (y in %, with break points at 0, 50 and 100)

μNG(y) = (50 − y)/50 ,   0 ≤ y ≤ 50

μMG(y) = y/50 ,          0 ≤ y ≤ 50
       = (100 − y)/50 ,  50 < y ≤ 100

μLG(y) = (y − 50)/50 ,   50 ≤ y ≤ 100
(3) Membership functions for wash time

Fig. P. 4.10.2(b) : Membership functions for wash time (z in minutes, with break points at 0, 10, 25, 40 and 60)

μVS(z) = (10 − z)/10 ,  0 ≤ z ≤ 10

μS(z)  = z/10 ,         0 ≤ z ≤ 10
       = (25 − z)/15 ,  10 < z ≤ 25

μM(z)  = (z − 10)/15 ,  10 ≤ z ≤ 25
       = (40 − z)/15 ,  25 < z ≤ 40

μL(z)  = (z − 25)/15 ,  25 ≤ z ≤ 40
       = (60 − z)/20 ,  40 < z ≤ 60

μVL(z) = (z − 40)/20 ,  40 ≤ z ≤ 60
Step 3 : Rule base

              NG      MG      LG
    SD        VS      S       M
    MD        S       M       L
    LD        M       L       VL

The above matrix represents in all nine rules. For example, the first rule can be "If dirt is small and grease is no grease then wash time is very short". Similarly, all nine rules can be defined using if-then statements.
Step 4 : Rule Evaluation

Assume that dirt = 60 % and grease = 70 %.

dirt = 60 % maps to the following two MFs of the "dirt" variable :

μMD(x) = (100 − x)/50   and   μLD(x) = (x − 50)/50

Similarly, grease = 70 % maps to the following two MFs of the "grease" variable :

μMG(y) = (100 − y)/50   and   μLG(y) = (y − 50)/50
Evaluate μMD(x) and μLD(x) for x = 60 :

μMD(60) = (100 − 60)/50 = 4/5        ...(1)
μLD(60) = (60 − 50)/50  = 1/5        ...(2)

Similarly, evaluate μMG(y) and μLG(y) for y = 70 :

μMG(70) = (100 − 70)/50 = 3/5        ...(3)
μLG(70) = (70 − 50)/50  = 2/5        ...(4)
The above four equations lead to the following four rules that we are supposed to evaluate :

(1) Dirt is medium and grease is medium.
(2) Dirt is medium and grease is large.
(3) Dirt is large and grease is medium.
(4) Dirt is large and grease is large.

Since the antecedents of each of the above rules are connected by the AND operator, we use the min operator to evaluate the strength of each rule :

Strength of rule 1 : S1 = min (μMD(60), μMG(70)) = min (4/5, 3/5) = 3/5
Strength of rule 2 : S2 = min (μMD(60), μLG(70)) = min (4/5, 2/5) = 2/5
Strength of rule 3 : S3 = min (μLD(60), μMG(70)) = min (1/5, 3/5) = 1/5
Strength of rule 4 : S4 = min (μLD(60), μLG(70)) = min (1/5, 2/5) = 1/5
Step 5 : Defuzzification

Since we use "mean of max" defuzzification, we first find the rule with the maximum strength :

max (S1, S2, S3, S4) = max (3/5, 2/5, 1/5, 1/5) = 3/5

This corresponds to rule 1 : "If dirt is medium and grease is medium then wash time is M (Medium)". So, we use the output MF of medium wash time, clipped at 3/5 :

(z − 10)/15 = 3/5  =>  z = 19
(40 − z)/15 = 3/5  =>  z = 31

z* = (19 + 31)/2 = 25 min

Fig. P. 4.10.2(d) : Process of rule evaluation and defuzzification (aggregation of the clipped output MFs for dirt = 60 % and grease = 70 %)
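A short Python sketch (assumed helper names, not the book's code) reproduces the two-input evaluation above: the AND antecedents are combined with the min operator and the winner is defuzzified by mean of max. The consequent labels follow the reconstructed 3x3 rule table shown in Step 3.

    def up(v, a, b):   return max(0.0, min(1.0, (v - a) / (b - a)))
    def down(v, a, b): return max(0.0, min(1.0, (b - v) / (b - a)))

    dirt, grease = 60.0, 70.0
    mu_MD = min(up(dirt, 0, 50),   down(dirt, 50, 100))     # 4/5
    mu_LD = up(dirt, 50, 100)                               # 1/5
    mu_MG = min(up(grease, 0, 50), down(grease, 50, 100))   # 3/5
    mu_LG = up(grease, 50, 100)                             # 2/5

    rules = [                                  # (strength via min, consequent)
        (min(mu_MD, mu_MG), "M"),
        (min(mu_MD, mu_LG), "L"),
        (min(mu_LD, mu_MG), "L"),
        (min(mu_LD, mu_LG), "VL"),
    ]
    strength, label = max(rules)               # strongest rule: 3/5 -> "M"

    # Mean of max over the Medium wash-time triangle (10, 25, 40 minutes).
    z1 = 10 + strength * 15                    # (z - 10)/15 = 3/5 -> 19
    z2 = 40 - strength * 15                    # (40 - z)/15 = 3/5 -> 31
    print(label, (z1 + z2) / 2)                # M, 25.0 minutes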
3.  Water Purification Plant Controller

Step 1 : Identify input and output variables and decide descriptors for the same.

Here the input variables are water temperature and grade of water, and the output is the required amount of purifier.
Water temperature is measured in °C; grade of water is measured in percentage.

Descriptors for water temperature are {C, M, H}

C  -  Cold
M  -  Medium
H  -  High

Descriptors for grade are {L, M, H}

L  -  Low
M  -  Medium
H  -  High

Amount of purifier is measured in grams. Descriptors for amount of purifier are {S, M, L}

S  -  Small
M  -  Medium
L  -  Large
Step 2 : Fuzzification

Define membership functions for each of the input and output variables. We use triangular MFs because of their simplicity.

(1) Membership functions for water temperature (t in °C, with break points at 0, 50 and 100)

(2) Membership functions for grade of water

Fig. P. 4.10.3(a) : Membership functions for grade of water (y in %, with break points at 0, 50 and 100)

μL(y) = (50 − y)/50 ,   0 ≤ y ≤ 50

μM(y) = y/50 ,          0 ≤ y ≤ 50
      = (100 − y)/50 ,  50 < y ≤ 100

μH(y) = (y − 50)/50 ,   50 ≤ y ≤ 100
(3) Membership functions for amount of purifier (z in grams)

μL(z) = (z − 5)/5 ,  5 ≤ z ≤ 10   (the remaining purifier MFs are triangles defined in the same way over the purifier range)

Step 3 : Rule base

Each (temperature, grade) pair is mapped to an amount of purifier; in particular, when the water temperature is cold and the grade of water is low, the amount of purifier required is large.

Step 4 : Rule Evaluation

Assume that water temperature = 5 °C and grade = 30 %.

Temperature = 5 maps to the following two MFs of the "temperature" variable :

μC(t) = (50 − t)/50   and   μM(t) = t/50

Similarly, grade = 30 maps to the following two MFs of the "grade" variable :

μL(y) = (50 − y)/50   and   μM(y) = y/50
We get,

μC(5) = 0.9 ,  μM(5) = 0.1 ,  μL(30) = 0.4 ,  μM(30) = 0.6

Using the min operator, the strengths of the four activated rules are :

S1 = min (0.9, 0.4) = 0.4
S2 = min (0.9, 0.6) = 0.6
S3 = min (0.1, 0.4) = 0.1
S4 = min (0.1, 0.6) = 0.1

Step 5 : Defuzzification

Since we use "mean of max" defuzzification, we first find the rule with the maximum strength :

max (S1, S2, S3, S4) = max (0.4, 0.6, 0.1, 0.1) = 0.6
Fig. P. 4.10.3(b) : Process of rule evaluation and defuzzification (the output MFs are clipped at the rule strengths for temp = 5 and grade = 30, their union is taken, and the aggregate output MF is defuzzified)
4.  Train Brake Power Controller

Ex. 4.10.4 : Design a fuzzy controller for a train approaching or leaving a station. The inputs are the distance from the station and the speed of the train. The output is the brake power used. Use,

(i)   Triangular membership functions
(ii)  Four descriptors for each variable
(iii) Five to six rules
(iv)  An appropriate defuzzification method.
Soln. :

Step 1 : Identify input and output variables and decide descriptors for the same.

Here inputs are :

Distance of the train from the station, measured in meters, and
Speed of the train, measured in km/hr.

The output variable is brake power, measured in %.

As mentioned, we take four descriptors for each of the input and output variables.

For distance => {VSD, SD, LD, VLD}

VSD  -  Very Short Distance
SD   -  Short Distance
LD   -  Large Distance
VLD  -  Very Large Distance

For speed => {VLS, LS, HS, VHS}

VLS  -  Very Low Speed
LS   -  Low Speed
HS   -  High Speed
VHS  -  Very High Speed

For brake power :

LP   -  Low Power
HP   -  High Power
VHP  -  Very High Power
Step 2 : Define membership functions for each of the input and output variables.

(1) Membership functions for distance

Fig. P. 4.10.4 : Membership functions for distance (x in meters, with break points at 0, 100, 400 and 600)

μVSD(x) = (100 − x)/100 ,  0 ≤ x ≤ 100

μSD(x)  = x/100 ,          0 ≤ x ≤ 100
        = (400 − x)/300 ,  100 < x ≤ 400

μLD(x)  = (x − 100)/300 ,  100 ≤ x ≤ 400
        = (600 − x)/200 ,  400 < x ≤ 600

μVLD(x) = (x − 400)/200 ,  400 ≤ x ≤ 600
(2) Membership functions for speed

Fig. P. 4.10.4(a) : Membership functions for speed (y in km/hr, with break points at 0, 10, 50 and 60)

μVLS(y) = (10 − y)/10 ,  0 ≤ y ≤ 10

μLS(y)  = y/10 ,         0 ≤ y ≤ 10
        = (50 − y)/40 ,  10 < y ≤ 50

μHS(y)  = (y − 10)/40 ,  10 ≤ y ≤ 50
        = (60 − y)/10 ,  50 < y ≤ 60

μVHS(y) = (y − 50)/10 ,  50 ≤ y ≤ 60
(3) Membership functions for brake power (z in %, with break points at 0, 20, 80 and 100)

μHP(z)  = (z − 20)/60 ,   20 ≤ z ≤ 80
        = (100 − z)/20 ,  80 < z ≤ 100

μVHP(z) = (z − 80)/20 ,   80 ≤ z ≤ 100

Step 3 : Rule base

For example, the first rule can be "If distance of the train is Very Short (VSD) and speed is Very Low (VLS) then required brake power is High (HP)". Similarly, all 16 rules can be defined using If-then rules.

Step 4 : Rule Evaluation

The given speed of 52 km/hr maps to the following two MFs of the "speed" variable :

μHS(52)  = (60 − 52)/10 = 0.8
μVHS(52) = (52 − 50)/10 = 0.2

Similarly, the given distance maps to the two MFs μSD(x) and μLD(x) of the "distance" variable. These membership values lead to the following four rules that we need to evaluate :
(1) Distance is short and speed is high.
(2) Distance is short and speed is very high.
(3) Distance is large and speed is high.
(4) Distance is large and speed is very high.

Fig. P. 4.10.4(c) : (a) Rule strength table and (b) rule base table, and their mapping to the corresponding output MFs
Step 5 : Defuzzification

We use the "mean of max" defuzzification technique. We first find the rule with the maximum strength :

max (S1, S2, S3, S4) = 0.8

This corresponds to rule 1. Thus rule 1 - "If distance is short and speed is high then brake power is HP" - has the maximum strength 0.8. The rule corresponds to the output MF μHP(z); this mapping is shown in Fig. P. 4.10.4(c).

To compute the final defuzzified value, clip μHP(z) at 0.8 :

(100 − z)/20 = 0.8  =>  z = 84
(z − 20)/60  = 0.8  =>  z = 68

z* = (68 + 84)/2 = 76 %
Fig. P. 4.10.4(d) : Process of rule evaluation and defuzzification (the output MFs are clipped at the rule strengths, their union is taken, and the aggregate output MF is defuzzified)
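A quick numerical check of the mean-of-max step above can be written in a few lines of Python; this is only a sketch of the arithmetic, not the book's code.

    strength = 0.8
    z_rise = 20 + strength * 60      # (z - 20)/60 = 0.8  -> 68
    z_fall = 100 - strength * 20     # (100 - z)/20 = 0.8 -> 84
    print(z_rise, z_fall, (z_rise + z_fall) / 2)   # 68.0 84.0 76.0 (approx. 76 % brake power)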
5. Water Tank Temperature Controller
Ex. 4.10.5 : Design a fuzzy controller for maintaining the temperature of water in a tank at a fixed level. Input variables are the cold water flow into the tank and the steam flow into the tank. For cooling, the cold water flow is regulated, and for raising the temperature, the steam flow is regulated. Define the fuzzification scheme for the input variables. Devise a set of rules for control action and defuzzification. Formulate the control problem in terms of fuzzy inference rules incorporating the degree of relevance for each rule. Design a scheme which shall regulate the water and steam flows properly.
Soln. :

Step 1 : Identify input and output variables and decide descriptors for the same.

Here inputs are,

1.  Amount of valve opening for cold water
2.  Amount of valve opening for steam

Descriptors for valve opening of cold water flow are {ELCV, LCV, CCV, RCV, ERCV}.

Descriptors for valve opening of steam flow are {ELSV, LSV, CSV, RSV, ERSV}

ELSV  -  Extreme Left Steam Valve
LSV   -  Left Steam Valve

The output variable is the temperature of water in the tank. Descriptors for water temperature are :

VVCT  -  Very Very Cold Temperature
VCT   -  Very Cold Temperature
CT    -  Cold Temperature
WT    -  Warm Temperature
HT    -  Hot Temperature
VHT   -  Very Hot Temperature
VVHT  -  Very Very Hot Temperature
Step 2 : Define membership functions for each of the input and output variables.

1.  Membership functions for valve opening for cold water flow

Fig. P. 4.10.5 : Membership functions for valve opening of cold water (x in degrees, with break points at 0, 45, 90, 135 and 180)
μELCV(x) = (45 − x)/45 ,   0 ≤ x ≤ 45

μCCV(x)  = (x − 45)/45 ,   45 ≤ x ≤ 90
         = (135 − x)/45 ,  90 < x ≤ 135

μRCV(x)  = (x − 90)/45 ,   90 ≤ x ≤ 135
         = (180 − x)/45 ,  135 < x ≤ 180

(The MFs μLCV and μERCV are triangles defined in the same way over 0 - 90 and 135 - 180 degrees.)

2.  Membership functions for valve opening for steam flow

Fig. P. 4.10.5(a) : Membership functions for valve opening for steam flow (y in degrees, with break points at 0, 45, 90, 135 and 180)

μELSV(y) = (45 − y)/45 ,   0 ≤ y ≤ 45

μLSV(y)  = y/45 ,          0 ≤ y ≤ 45
         = (90 − y)/45 ,   45 < y ≤ 90

μCSV(y)  = (y − 45)/45 ,   45 ≤ y ≤ 90
         = (135 − y)/45 ,  90 < y ≤ 135

μRSV(y)  = (y − 90)/45 ,   90 ≤ y ≤ 135
         = (180 − y)/45 ,  135 < y ≤ 180

μERSV(y) = (y − 135)/45 ,  135 ≤ y ≤ 180
3.  Membership functions for temperature of water in the tank

Fig. P. 4.10.5(b) : Membership functions for temperature of water in the tank (z in °C, with break points at 0, 10, 30, 50, 70, 90 and 100)

μVVCT(z) = (10 − z)/10 ,   0 ≤ z ≤ 10

μVCT(z)  = z/10 ,          0 ≤ z ≤ 10
         = (30 − z)/20 ,   10 < z ≤ 30

μCT(z)   = (z − 10)/20 ,   10 ≤ z ≤ 30
         = (50 − z)/20 ,   30 < z ≤ 50

μWT(z)   = (z − 30)/20 ,   30 ≤ z ≤ 50
         = (70 − z)/20 ,   50 < z ≤ 70

μHT(z)   = (z − 50)/20 ,   50 ≤ z ≤ 70
         = (90 − z)/20 ,   70 < z ≤ 90

μVHT(z)  = (z − 70)/20 ,   70 ≤ z ≤ 90
         = (100 − z)/10 ,  90 < z ≤ 100

μVVHT(z) = (z − 90)/10 ,   90 ≤ z ≤ 100
Step 4 : Rule Evaluation

Assume that the cold water valve opening is x = 95° and the steam valve opening is y = 50°.

Evaluate μCCV(x) and μRCV(x) for x = 95°, we get

μCCV(95) = (135 − 95)/45 = 0.88        ...(1)
μRCV(95) = (95 − 90)/45  = 0.11        ...(2)

Evaluate μLSV(y) and μCSV(y) for y = 50°, we get

μLSV(50) = (90 − 50)/45 = 0.88         ...(3)
μCSV(50) = (50 − 45)/45 = 0.11         ...(4)

Step 5 : Defuzzification

The rule with the maximum strength (0.88) has the consequent CT (Cold Temperature). Clipping μCT(z) at 0.88 :

(z − 10)/20 = 0.88  =>  z = 27.7
(50 − z)/20 = 0.88  =>  z = 32.3

z* = (27.7 + 32.3)/2 = 30 °C
Q.2 Model the following as fuzzy set using trapezoidal membership function : “Numbers close to 10”.
Q.3 Using Mamdani fuzzy model, Design a fuzzy logic controller to determine the wash time of a domestic washing
machine. Assume that the inputs are dirt and grease on cloths. Use three descriptors for each input variables and five
descriptor for the output variable. Derive a set of rules for control action and defuzzitication. The design should be
supported by figures wherever possible.
Q. 4   Find (i) the max-min composition of R and S, and (ii) the max-product composition of R and S.
Q. 5   Define Support, Core, Normality, Crossover points and α-cut for a fuzzy set.
Q. 6   High speed rail monitoring devices sometimes make use of sensitive sensors to measure the deflection of the earth when a rail car passes. These deflections are measured with respect to some distance from the rail car and, hence, are actually very small angles measured in micro-radians. Let a universe of deflections be A = {1, 2, 3, 4}, where A is the angle in micro-radians, and let a universe of distances be D = {1, 2, 5, 7}, where D is distance in feet. Suppose a relation between these two parameters has been determined as follows :
Now let a universe of rail car weights be W = {1, 2}, where W is the weight in units of 100,000 pounds. Suppose the fuzzy relation of W to A is given by,
Design a fuzzy logic controller for a train approaching or leaving a station. The inputs are the distance from the station
and speed of the train. The output is the amount of brake power used. Use four descriptors for each variable use
Mamdani Fuzzy model.
Determine all α-level sets and strong α-level sets for the following fuzzy set :
A = {(1, 0.2), (2, 0.5), (3, 0.8), (4, 1), (5, 0.7), (6, 0.3)}
Design a fuzzy controller to determine the wash time of a domestic washing machine. Assume that the inputs are dirt and grease on clothes. Use three descriptors for each input variable. Devise a set of rules for control action and defuzzification. The design should be supported by figures wherever possible. Clearly indicate that if the clothes are soiled to a larger degree, the wash time required will be more.
Q.14 Explain different methods of defuzzification.
Q. 15 Explain cylindrical extension and projection operations on fuzzy relation with example
Q. 16  Model the following as a fuzzy set using a trapezoidal membership function : "Middle age".
Q. 17  For the given membership function as shown in Fig. Q. 17, determine the defuzzified output value by any 2 methods.

Fig. Q. 17
Q. 18  Compare Mamdani and Sugeno fuzzy models.

Q. 19  Design a fuzzy logic controller for a water purification plant. Assume the grade of water and the temperature of water as the inputs and the required amount of purifier as the output. Use three descriptors for the input and output variables. Derive a set of rules for the control action and defuzzification. The design should be supported by figures. Clearly indicate that if the water temperature is low and the grade of water is low, then the amount of purifier required is large.
mple.
techniques with suitable exa
Q.20 Discuss fuzzy composition
Q. 21  Two fuzzy relations are given by

                 y1    y2                          z1    z2    z3
       R = x1 [  0.6   0.3 ]             S = y1 [  1     0.5   0.3 ]
           x2 [  0.2   0.9 ]                 y2 [  0.8   0.4   0.7 ]

       Obtain the fuzzy relation T as a max-min composition and a max-product composition between the fuzzy relations.

Q. 22  Describe in detail the formation of inference rules in a Mamdani fuzzy inference system.
Artificial Neural Network

5.1  Introduction : Fundamental concept : Basic Models of Artificial Neural Networks : Important Terminologies of ANNs : McCulloch-Pitts Neuron
5.2  Neural Network Architecture : Perceptron, Single layer Feed Forward ANN, Multilayer Feed Forward ANN, Activation functions, Supervised Learning : Delta learning rule, Back Propagation algorithm
5.3  Un-Supervised Learning algorithm : Self Organizing Maps
As shown in Fig. 5.1.1, there are two input neurons and one output neuron.

Each neuron has an internal state of its own called the 'activation of a neuron'. The activation signal of one neuron is transmitted to another neuron. This activation of a neuron can be considered as a function of the inputs the neuron receives.

Consider a set of neurons, say X1 and X2, transmitting signals to another neuron Y. As shown in Fig. 5.1.1, the input neurons X1 and X2 are connected to the output neuron Y over weighted interconnection links (W1 and W2).

For the above simple neural net architecture, the net input is calculated as :

y_in = w1 x1 + w2 x2
where x1 and x2 are the activations of the input neurons X1 and X2 respectively.

The output y of the output neuron Y can be obtained by applying an activation function over the net input, i.e.

y = f(y_in)

The function that we apply over the net input is called the activation function.
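The two-equation computation above fits in a few lines of Python. This is only a sketch; the step activation and the sample weights are illustrative assumptions, not part of the original text.

    def step(net):
        return 1 if net >= 0 else 0

    def neuron_output(x1, x2, w1, w2):
        y_in = w1 * x1 + w2 * x2      # net input y_in = w1*x1 + w2*x2
        return step(y_in)             # y = f(y_in)

    print(neuron_output(1, 1, 0.5, -0.3))   # -> 1, since 0.5 - 0.3 = 0.2 >= 0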
Biological Neural Networks

The fundamental computing element in the human brain is called a neuron. There are of the order of 10^11 neurons in a human brain, with roughly 10^4 synapses per neuron. Communication between neurons happens through a connection network of axons and synapses.

Neurons communicate with each other by means of electrical impulses. The chemical environment surrounding the neurons influences the network of neurons.

Information flow in the nervous system is very complicated, and the huge neural network in the brain is made up of an extremely complex interconnection structure.
Fig. 5.1.2 shows information flow in the human nervous system. Sensory receptors provide input to the network. Receptors deliver stimuli both from within the body and from the sense organs, when the stimuli originate in the external world.

The stimuli are in the form of electrical impulses that convey the information into the network of neurons.

Fig. 5.1.2 : Flow of information processing in the Central Nervous System

On account of information processing in the Central Nervous System (CNS), the effectors produce responses in the form of various actions. The control commands are transmitted to the motor organs. When necessary, the motor organs are monitored in the CNS by both internal and external feedback.

The overall nervous system structure has the characteristics of a closed loop system.
A typical biological neuron has 3 major regions : the cell body (called the soma), the axon and the dendrites. A schematic of a typical biological neuron is shown in Fig. 5.1.3.

Dendrites form a dendrite tree, which is a very fine bush of thin fibres around the neuron's body.
Fig. 5.1.3 : Schematic of a biological neuron (dendrites, cell body, axon, synapse and the terminal of the receiving neuron)
Incoming impulses can be excitatory if they cause the firing, or inhibitory if they hinder the firing of the response.

The condition for firing is that the excitation should exceed the inhibition by an amount called the threshold of the neuron, typically a value of about 40 mV.

The incoming impulses to a neuron can only be generated by neighbouring neurons and by the neuron itself. Hence, impulses that are closely spaced in time and arrive synchronously are more likely to cause the neuron to fire.

The characteristic feature of the biological neuron is that the signals generated do not differ significantly in magnitude; the signal in the nerve fibre is either absent or has the maximum value. In other words, the information is transmitted between the nerve cells by means of binary signals.

After an axon fibre finishes generating a pulse, it goes into a state of total non-excitability for a certain period of time, called the refractory period. The nerve does not conduct any signals during the refractory period, even if the excitation intensity is very high.

The time for modelling biological neurons is of the order of milliseconds. However, the refractory period is not uniform over the cells.
Comparison between the human brain and ANN

Speed :
    Human brain - Low compared to ANN; the speed is in milliseconds.
    ANN - High (in nanoseconds).

Fatigue :
    Human brain - May fatigue if too much information and stress is presented.
    ANN - Does not experience fatigue.

Size and complexity :
    Human brain - The number of neurons in the brain is about 10^11 and the total number of connections is about 10^15; thus the complexity of the human brain is very high.
    ANN - The size of an ANN depends on the application and is chosen by whoever develops the application (usually hundreds or thousands of neurons).

Processing :
    Human brain - Can perform massive parallel computation simultaneously.
    ANN - Can also perform massive parallel computation simultaneously, but much faster than the human brain.

Storage capacity :
    Human brain - Stores the information in the synapses.
    ANN - Stores the information in continuous memory locations.

Fault tolerance :
    Human brain - Biological neuron networks are fault-tolerant due to their topology. Information is stored redundantly, so minor failures will not result in memory loss.
    ANN - Artificial neural networks are not modelled for fault tolerance or self-regeneration.

Control mechanism :
    Human brain - No specific control mechanism external to the computing task.
    ANN - There is a control unit for controlling the computing activities.

Power consumption :
    Human brain - The brain consumes about 20 % of all the human body's energy; an adult brain operates on about 20 watts.
    ANN - A single Nvidia GPU runs on 250 watts alone and requires a power supply. Machines are far less efficient than biological systems.

Learning :
    Human brain - Neuroplasticity allows learning in the human brain : brain fibres grow and reach out to connect to other neurons, new connections can be created, and synapses may strengthen or weaken based on their importance.
    ANN - Artificial neural networks, on the other hand, have a predefined model, where no further neurons or connections can be added or removed.
Comparison between a Biological Neuron and an Artificial Neuron

Dendrites   ->  Weighted inputs
Cell body   ->  Summation and threshold unit
Axon        ->  Output

Fig. 5.1.4 shows the analogy between a biological neuron and an artificial neuron.
The dendrites in the Biological Neural Network are analogous to the weighted inputs (based on their synaptic strengths) in the Artificial Neural Network.

The cell body is comparable to the artificial neuron unit in the ANN, which also comprises the summation and threshold unit.

The axon carries the output and is analogous to the output unit of the Artificial Neural Network.

So, an ANN is modeled on the working of basic biological neurons.
Disadvantages of Neural Networks

1.  The best-known disadvantage of Neural Networks is their "black box" nature. This means that you don't know how and why your NN came up with a certain output. For example, when you put an image of a cat into a neural network and it predicts it to be a car, it is very hard to understand what caused it to come up with this prediction.

2.  Neural Networks usually require much more data than traditional Machine Learning algorithms : at least thousands, if not millions, of labeled samples.

3.  Computationally Expensive : Usually, Neural Networks are also more computationally expensive than traditional algorithms. State of the art deep learning algorithms can take several weeks to train completely from scratch. Most traditional Machine Learning algorithms take much less time to train.

Applications of Neural Networks

1.  Forecasting
2.  Image compression

Digital images require a large amount of memory for storage. As a result, the transmission of a digital image can be very expensive in terms of the time and bandwidth required.

With the explosion of images on the Internet, image compression has become important. Image compression is a technique that reduces the storage size required to store an image by reducing the redundant information in the image without affecting its perceptibility.

NN can be effectively used to compress images. Several NN techniques such as Kohonen's self-organizing maps, the back propagation algorithm, cellular neural networks etc. can be used for image compression.
3.  Industrial process control

Neural networks have been applied successfully in the industrial process control of dynamic systems. Neural networks (especially multi-layer perceptrons) have proved to be the best choice for modelling non-linear systems and implementing general-purpose non-linear controllers, due to their universal approximation capabilities. An example is the control and management of agricultural machinery.
Recognition ) tools that
Ch ar ac te r ct er Re co gnition (OCR
Optica l l Char a
4, on is the Optica
ic ati on us in g image recognititi
pl me computer.
Well known ap ftware for the ho recognizing
both
e st an da rd sc an ni ng so
d sy st em for correctly
th e ba se
available with NN with a rul
su cc es s in combining
d grea t
Scansoft has ha el of accuracy.
an d wo rd s, to g¢ t a high lev
characters
5.  Customer Relationship Management (CRM)

Another popular application for NN is Customer Relationship Management (CRM). Customer Relationship Management requires key information to be derived from the raw data collected for each individual customer in their day to day business processes. This can be achieved by building models using historical data information.

Many companies are now using neural technology to help them achieve better performance, greater insight, faster development and increased productivity.

By using Neural Networks for data mining in the database of the company, patterns can be identified for the different types of customers, thus giving valuable customer information related to CRM, such as forecasting call centre loading, demand and sales levels, monitoring and enhancing databases, and clustering and profiling the client base.

Also, NN could be useful for validating, completing and analysing the market. One example is an airline reservation system which could predict sales of tickets in relation to destination, time of year and ticket price.
6.  Medical science

Medicine is a field that has always taken benefits from the latest and advanced technologies of science, and it is the next promising area of interest for neural networks.

It is believed that Artificial Neural Networks (and also deep neural networks) will have extensive application to biomedical problems in the next few years.

ANN has already been successfully applied in medical applications such as diagnostic systems, biochemical analysis, disease detection and drug development.
5.2.1  Connections

The arrangement of neurons in layers and the connections between them defines the neural network architecture. There are four basic types of neuron connection architectures.
5.2.2 Learning
5.2.2(A)  Supervised Learning

As shown in Fig. 5.2.1, the distance (error) function ρ[d, o] takes two values as input : the actual network output o and the desired output d for an input X, and then computes the error measure.

Since we have assumed adjustable weights, the weights can be adjusted to improve network performance, that is, to reduce the error.

Fig. 5.2.1 : Block diagram of supervised learning (the network produces output o for input X; the error ρ[d, o] between o and the desired response d is fed back to adjust the weights)

This is analogous to classroom learning, with the teacher's questions answered by students and corrected, if needed, by the teacher.
5.2.2(B)  Unsupervised Learning

In unsupervised learning, the network finds suitable weights through a self-adaption mechanism. Since no external instructions regarding potential clusters are available, the network must discover the structure of the data on its own.

5.2.2(C)  Reinforced Learning

In this method, a teacher, though available, does not present the expected answer but only indicates whether the computed output is correct or incorrect. The information provided helps the network in its learning process.

A reward is given for a correct answer computed and a penalty for a wrong answer. Reinforced learning is a very general approach to learning that can be applied when the knowledge required to apply supervised learning is not available.

However, it is usually better to use other methods such as supervised or unsupervised learning, because they are more direct.

Reinforcement training is related to supervised learning. The output in this case may not be indicated as the desired output, but the condition whether it is 'success' (1) or 'failure' (0) may be indicated. Based on this, an error measure may be computed.
Basically, this learning attempts to learn the input-output mapping through trial and error. Here the system knows whether the output is correct or not, but does not know the correct output.

Table 5.2.1 shows the difference between supervised and unsupervised learning.
5.2.3  Activation Functions

The output response of a neuron is calculated using an activation function (also called a transfer function). The sum of weighted inputs (the net input to the neuron) is passed through the activation function to obtain the neuron's response. Neurons placed in the same layer use the same activation function.

There are basically two types of activation functions : linear and non-linear. The non-linear activation functions are used in a multi-layer net.

1.  Unipolar Binary

It is unipolar in nature, meaning it generates only two values, 0 or 1. It is used only at the output layer. It is computed as,

f(net) = 1 ,  net ≥ 0
       = 0 ,  net < 0

Fig. 5.2.3 : Unipolar binary activation function
2.  Bipolar Binary

It is bipolar in nature, meaning it generates two values, +1 or -1. It is used only at the output layer. It is computed as,

f(net) = sgn(net) = +1 ,  net ≥ 0
                  = -1 ,  net < 0

Fig. 5.2.4 : Bipolar binary activation function

The two functions, bipolar binary and unipolar binary, are called hard limiting activation functions.
3.  Unipolar Continuous (Sigmoidal)

Fig. 5.2.5 : Sigmoidal or Unipolar continuous activation function
4.  Tanh Function / Bipolar Sigmoidal

Also called Bipolar Continuous (or Bipolar Sigmoidal). The range is between +1 and -1. It is computed as,

f(net) = 2 / (1 + e^(-λ net)) - 1 ,   where net = Σ (i = 1 to n) w_i x_i

This function is related to the hyperbolic tangent function. λ is called the steepness parameter.

It is usually used in the hidden layers of a neural network, as its values lie between -1 and 1. Its output is zero centered because its range is between -1 and 1. Hence, in practice, it is always preferred over the sigmoid function.
5.  ReLU (Rectified Linear Unit)

f(net) = max (0, net)

Its limitation is that it should only be used within the hidden layers of a neural network model.

Fig. 5.2.7 : ReLU activation function

Linear
Piecewise linear

o = +1   if net > 1
  = net  if |net| ≤ 1
  = -1   if net < -1

Sigmoidal functions (unipolar continuous and bipolar continuous) are used in multilayer back-propagation networks, Radial Basis Function Networks (RBFN) etc.

The reason is that, in a multilayer network, the actual inputs are effectively masked off from the output by the intermediate layer.

The hard-limiting threshold functions (unipolar binary and bipolar binary) remove the information that is needed if the network is to successfully learn. Hence the network is unable to determine which of the input weights should be increased and which should be decreased.
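The activation functions discussed above can be collected in a small Python sketch (function names are illustrative; λ is the steepness parameter).

    import math

    def unipolar_binary(net):                 # hard limiter, outputs 0 or 1
        return 1 if net >= 0 else 0

    def bipolar_binary(net):                  # sgn(net), outputs +1 or -1
        return 1 if net >= 0 else -1

    def unipolar_continuous(net, lam=1.0):    # sigmoid, range (0, 1)
        return 1.0 / (1.0 + math.exp(-lam * net))

    def bipolar_continuous(net, lam=1.0):     # bipolar sigmoid, range (-1, +1)
        return 2.0 / (1.0 + math.exp(-lam * net)) - 1.0

    def relu(net):                            # rectified linear unit
        return max(0.0, net)

    for net in (-2.0, 0.0, 2.0):
        print(net, unipolar_continuous(net), bipolar_continuous(net), relu(net))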
5.3.1  Weights

W = [W1 ; W2 ; ... ; Wn]

where Wi = [wi1, wi2, ..., wim], for i = 1, 2, 3, ..., n, is the weight vector for processing element i, and wij is the individual element of the weight vector that represents the weight on the communication link from the i-th neuron to the j-th neuron.
5.3.2  Bias

Usually, when a bias is not used, the separating line (or plane) passes through the origin. This may not be appropriate for solving a particular problem and can prevent successful learning.

Adding an adjustable bias helps us to shift the separating line (or plane) either to the right or to the left of the origin.

Consider the following network shown in Fig. 5.3.2.

Fig. 5.3.2 : Network with a bias weight (input x with weight 1, bias b, output o)

If we use the sigmoidal activation function, then the output o is calculated as,

o = sigmoid (1 · x + b)
Fig. 5.3.2(a) : sigmoid(1 · x + b) plotted for b = -5, 0 and +5; changing the bias shifts the activation curve along the x-axis
5.3.3 Threshold
Threshold is a set value based upon which the final output of the network is calculated.
Usually a net input to the neuron is calculated and then compared with the threshold value.
If the net value is greater than the threshold, then the neuron fires, otherwise, it does not fire.
The learning constant is used to control the amount of weight adjustment at each step of training. It controls the rate
of learning.
The effectiveness and convergence of learning algorithm-depends significantly on the value of the learning constant.
However, the optimum value of learning rate depends on the problem being solved and therefore there is no single
learning constant that can be used for different training cases. ‘ ce
For example, to solve a problem with broad minima, a large value of learning constant will result in a more rapid
convergence. However, for problems with steep and narrow minima, a small value of learning
constant must be
chosen to avoid overshooting the solution. But choosing a small value of learning constant also
increases the total
number of steps in training.
Convergence is made faster if a momentum factor is added to the weight update process. This is generally done in the back propagation algorithm.

The method involves supplementing the current weight adjustment with a fraction of the most recent weight adjustment :

ΔW(t) = -η ∇E(t) + α ΔW(t - 1)

where the arguments t and t - 1 indicate the current and the most recent training step respectively, η is the learning constant and α is the user-selected positive momentum constant.

In the above equation, the term α ΔW(t - 1) indicates a scaled most recent adjustment of the weights and is called the momentum term.

Typically α is chosen between 0.1 and 0.8.
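A minimal sketch of the momentum update for a single weight is shown below. The learning constant, momentum constant and the gradient values used here are illustrative assumptions, not from the text.

    eta, alpha = 0.1, 0.5
    prev_dw = 0.0
    w = 0.2
    for grad in (1.0, 0.8, 0.6):          # assumed gradient values over three steps
        dw = -eta * grad + alpha * prev_dw   # dW(t) = -eta*gradE(t) + alpha*dW(t-1)
        w += dw
        prev_dw = dw                      # most recent adjustment, reused next step
        print(round(w, 4), round(dw, 4))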
5.3.6  Vigilance Parameter

The vigilance parameter, denoted by ρ, is used in ART networks. It is used to control the degree of similarity required for patterns to be assigned to the same cluster unit. The range of the vigilance parameter is usually 0.7 to 1.
5.4  McCulloch Pitts Neuron Model

The McCulloch Pitts neuron is a synthetic neuron model based on highly simplified considerations of the biological neuron.

The neuron's output signal is denoted by o. The inputs are connected to the neuron by direct weighted paths.

The weights associated with the communication links may be excitatory (weight is positive) or inhibitory (weight is negative).

If the net input to the neuron is greater than or equal to the threshold, then the neuron fires. If the net input to the neuron is less than the threshold, the neuron remains in the inhibition state.

Accordingly, the firing rule for the neuron is defined as follows :

o^(k+1) = 1   if  Σ (i = 1 to n) w_i x_i^k ≥ T
        = 0   if  Σ (i = 1 to n) w_i x_i^k < T

In the above equation, the superscript k = 1, 2, 3, ... denotes discrete time instances.

By analogy with the biological neuron, an effective synapse, which transmits a stronger signal, will have a correspondingly larger weight, while a weak synapse will have a smaller weight.

In the McCulloch Pitts model,

w_i = +1 for excitatory synapses,
w_i = -1 for inhibitory synapses.
Here T is the neuron's threshold value, which needs to be exceeded by the weighted sum of the input signals for the neuron to fire.

Although this neuron model is very simplistic, it has substantial computing potential. It can perform basic logic operations such as AND, OR, NOT etc., provided its weights and threshold are appropriately selected.

The McCulloch Pitts neuron does not have any particular training algorithm. An analysis has to be performed to determine the values of the weights and the threshold.
Ex. 5.4.1 : Design two input AND logic using the McCulloch Pitts Neuron model.

Soln. : Consider the truth table for the AND logic function shown in Table P. 5.4.1.

Here we need to find the appropriate values of w1, w2 and threshold T such that they satisfy the truth table of AND logic.

The firing rule is,

w1 x1 + w2 x2 ≥ T        ...(1)

And the inhibition rule is,

w1 x1 + w2 x2 < T        ...(2)

From the truth table of AND logic, we get the following four inequalities :

0        < T
w1       < T
w2       < T
w1 + w2  ≥ T

Table P. 5.4.1 : Truth table of AND

x1   x2   y
0    0    0
0    1    0
1    0    0
1    1    1
For w1 = 1, w2 = 1 and T = 2, all of the above inequalities are satisfied. So, the solution is,

w1 = 1
w2 = 1
T  = 2
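A short Python sketch of a McCulloch-Pitts neuron, using the AND solution found above (w1 = w2 = 1, T = 2), reproduces the truth table. The function name is an assumption for illustration.

    def mcp_neuron(inputs, weights, T):
        net = sum(w * x for w, x in zip(weights, inputs))
        return 1 if net >= T else 0       # fires only when the weighted sum reaches the threshold

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, mcp_neuron((x1, x2), (1, 1), T=2))   # reproduces the AND truth table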
Ex. 5.4.2 : Design two input OR logic using the McCulloch Pitts Neuron model.

Soln. :

Consider the truth table for the OR logic function shown in Table P. 5.4.2.

Table P. 5.4.2 : Truth table of OR

x1   x2   y
0    0    0
0    1    1
1    0    1
1    1    1

Fig. P. 5.4.2 : MCP Neuron Model for Logical OR

Here, we need to find the appropriate values of w1, w2 and T such that they satisfy the truth table of OR logic.

Here, the firing rule is,

w1 x1 + w2 x2 ≥ T        ...(1)

and the inhibitory rule is,

w1 x1 + w2 x2 < T        ...(2)

From the truth table of OR logic, we get the following four inequalities :

0   < T
w1  ≥ T
w2       ≥ T
w1 + w2  ≥ T

For w1 = 1, w2 = 1 and T = 1, all of the above inequalities are satisfied.

Fig. P. 5.4.2(a) : Final weights and threshold for OR logic
Soln. :

Consider the truth table of the given logic function (y = x1 AND NOT x2) shown in Table P. 5.4.3.

Table P. 5.4.3 : Truth table

x1   x2   y
0    0    0
0    1    0
1    0    1
1    1    0

From the truth table, we get the following four inequalities :

0        < T
w2       < T
w1       ≥ T
w1 + w2  < T

So, for w1 = 1, w2 = -1 and T = 1, all of the above inequalities are satisfied. So, the solution is,

w1 = 1
w2 = -1
T  = 1

Fig. P. 5.4.3(a) : Final weights and threshold for the given logic
Ex. 5.4.4 : Design NOT logic using the McCulloch Pitts neuron model.
Soln. :

Consider the truth table of the NOT logic function.

Table P. 5.4.4 : Truth table of NOT

x    y
0    1
1    0

Fig. P. 5.4.4 : MCP Neuron Model for Logical NOT

Here, we need to find the appropriate values of w and T such that they satisfy the truth table of NOT logic.

The firing rule can be,

w x ≥ T

From the truth table, we get the following two inequalities :

0 ≥ T   (the neuron must fire for x = 0)
w < T   (the neuron must not fire for x = 1)
So, for w = -1 and T = 0, both of the above inequalities are satisfied.

Fig. P. 5.4.4(a) : Final weight and threshold for NOT logic (w = -1, T = 0)
Fig. 5.5.1 shows the architecture of a single layer feed forward network.

A single layer feed forward network consists of neurons arranged in two layers. The first layer is called the input layer and the second layer is called the output layer. The input signals are fed to the neurons of the input layer, and the neurons of the output layer produce the output signals. Every input neuron is connected to every output neuron via synaptic links (weights).

Fig. 5.5.1 : Single layer feed forward network (x_i : input neurons, y_j : output neurons, w_ij : weights)

The transformation performed by each of the m neurons in the network is a non-linear mapping, expressed as,

O_i = f(W_i X) ,  for i = 1, 2, 3, ..., m
Fig. 5.5.3 : Multilayer feed forward network (input layer, hidden layer and output layer)

x_i ,  i = 1, 2, 3, ...   :  Input neurons
z_j ,  j = 1, 2, 3, ...   :  Hidden neurons
y_k ,  k = 1, 2, 3, ...   :  Output neurons
v_ij                      :  Input - hidden layer weights
w_jk                      :  Hidden - output layer weights
Similarly, the hidden layer neurons are connected to the output layer neurons, and the corresponding weights are referred to as "hidden-output layer weights".

A multi-layer feed forward network with m input neurons, n1 neurons in the first hidden layer, n2 neurons in the second hidden layer and k output neurons is written as an m - n1 - n2 - k network.
Fig. 5.5.4 : Single layer discrete time feedback network (lag-free neurons with weight matrix W; the outputs o(t) are fed back to the inputs through delay elements)

The present output, say o(t), controls the output at the following instant, o(t + Δ). The delay Δ introduced by the feedback elements is analogous to the refractory period of a biological neuron.
5.5.4  Multilayer Feedback Networks

In supervised learning, the desired response (class) of each training pattern is known. The following section discusses three main supervised algorithms.
5.6.1  Perceptron

Fig. 5.6.1 : Perceptron (sensory unit (S) of photodetectors, association unit (A) with fixed weights, and response unit (R) with adjustable weights producing the output)

Here the weights are adjusted using the following weight update formula :

ΔW_j = c [ d_j - sgn(W_j X) ] X ,   for j = 1, 2, ..., where c is the learning constant.
Step 3 : Input is presented and the output is computed :

Y <- Y_p ,  d <- d_p
o <- sgn (W^t Y)

Step 4 : Weights are updated :

W <- W + (1/2) c (d - o) Y
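The discrete perceptron update W <- W + (1/2) c (d - o) Y can be sketched in Python as below. The training data (the bipolar AND function with a fixed -1 bias input) and the function name are illustrative assumptions, not from the text.

    def sgn(net):
        return 1 if net >= 0 else -1

    def train_perceptron(pairs, w, c=1.0, epochs=10):
        for _ in range(epochs):
            for y, d in pairs:                       # y: augmented input vector, d: desired output
                o = sgn(sum(wi * yi for wi, yi in zip(w, y)))
                w = [wi + 0.5 * c * (d - o) * yi for wi, yi in zip(w, y)]
        return w

    # Learn the bipolar AND function; the last component (-1) is the fixed bias input.
    pairs = [((-1, -1, -1), -1), ((-1, 1, -1), -1), ((1, -1, -1), -1), ((1, 1, -1), 1)]
    print(train_perceptron(pairs, w=[0.0, 0.0, 0.0]))   # converges to weights that realise AND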
5.6.1(D)  Model for Multilayer Perceptron

A multilayer perceptron consists of :

a.  An input layer
b.  An output layer
c.  One (or more) layers between the input layer and the output layer. These layers are called hidden layers.

For perceptrons in the input layer, we use a linear transfer function, and for the perceptrons in the hidden and output layers, we use the sigmoid (continuous) function.

Here, since the single layer perceptron's model is modified by adding a hidden layer and by changing the transfer (activation) function from a linear function to a nonlinear function, we need to alter the learning rule as well. Now the multilayer perceptron network should be able to recognize more complex things.

The input-output mapping of the multilayer perceptron is shown in Fig. 5.6.3.

Consider the following mapping with bipolar inputs and targets :

x1   x2   Output (y)
 1    1   -1
 1   -1    1
-1    1    1
-1   -1   -1

The task of the network is to classify a binary input vector to output class -1 if the vector has similar inputs (both inputs +1 or both inputs -1), or assign it to output class +1 otherwise.

Let us try to solve this problem using a single-layer perceptron model. Consider the single-layer perceptron model shown in Fig. 5.6.4.
Fig. 5.6.4 : Single layer perceptron model

For the above model, the decision boundary is net = 0. If net ≥ 0, y (the output) = 1, else y = -1.

So, from the truth table and the above conditions, we have the following four inequalities :

b + w1 + w2  < 0
b + w1 - w2  > 0
b - w1 + w2  > 0
b - w1 - w2  < 0
Adding the first and fourth inequalities gives 2b < 0, while adding the second and third gives 2b > 0, which is a contradiction. Hence no set of weights and bias satisfies all four inequalities, and a single-layer perceptron cannot solve this problem; a hidden layer is needed.
To achieve this, a step function can be used as an activation function. That is, the output is +1 if the net input to the neuron is positive, and -1 if the net input to the neuron is negative. The net input to the neuron is computed as

net = b + Σ (i = 1 to n) x_i w_i

It is clear that the boundary between the region where net > 0 and the region where net < 0 is determined by the relation

b + Σ (i = 1 to n) x_i w_i = 0

Fig. 5.6.7 (b) : Network to solve the linearly separable problem in Fig. 5.6.7 (a)

The region where the output is positive is separated from the region where it is negative by the line

x2 = -(w1/w2) x1 - b/w2

Here, net is computed as

net = b + w1 x1 + w2 x2

So, the decision boundary is

b + w1 x1 + w2 x2 = 0
In the above example, there are many different lines that will serve to separate the input vectors into two classes, and there could also be many choices of w1, w2 and b that give exactly the same line.

To understand the concept of linear separability further, consider the simple logic gates AND and OR. The AND gate can be represented by the truth-table shown in Table 5.6.2.

Table 5.6.2 : AND function (bipolar)

x1   x2   y
 1    1    1
 1   -1   -1
-1    1   -1
-1   -1   -1

Then, the desired response for the AND function can be represented as shown in Fig. 5.6.8(a).

Fig. 5.6.8 (b) : AND function decision boundary
Hence, the AND function is linearly separable. Similarly, the OR function is also linearly separable, which is clear from Fig. 5.6.9.

Fig. 5.6.9 : OR function solution (a single line separates the positive and negative response regions)
Fig. 5.6.10 shows the EX-OR function plot. The EX-OR function is non-linearly separable, since there is no single line that can partition the 2D input space into the two regions (class 1 and class 2). This can easily be observed from the plot given in Fig. 5.6.10.

Fig. 5.6.11 shows examples of linearly separable and non-linearly separable patterns.
5.6.2  Delta Learning Rule

Delta learning is a supervised form of learning which uses continuous activation functions. The learning signal for this rule is called delta and is defined as

r = [ d_i - f(W_i^t X) ] f'(W_i^t X)

The term f'(W_i^t X) is the derivative of the activation function f(net) computed for net = W_i^t X.

Fig. 5.6.12 : Delta learning model (the continuous perceptron produces o_i = f(W_i^t X); the error (d_i - o_i) and the derivative f'(net) drive the weight adjustment)

The learning rule can be readily derived from the condition of least squared error between o_i and d_i.

Calculating the gradient vector, with respect to W_i, of the squared error defined as

E = (1/2) (d_i - o_i)^2                                   ...(5.6.1)

we obtain

∇E = - (d_i - o_i) f'(W_i^t X) X

Since the minimization of the error requires the weight changes to be in the negative gradient direction, we take

ΔW_i = - η ∇E                                             ...(5.6.5)

where η is a positive learning constant.
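One delta-rule update with the bipolar continuous activation can be sketched in Python as follows (the helper names are assumptions). With the numbers shown, the result matches the first step of the worked example Ex. 5.8.6 later in this chapter.

    import math

    def f(net):                                # bipolar continuous: 2/(1 + e^-net) - 1
        return 2.0 / (1.0 + math.exp(-net)) - 1.0

    def delta_step(w, x, d, c=0.1):
        net = sum(wi * xi for wi, xi in zip(w, x))
        o = f(net)
        fprime = 0.5 * (1.0 - o * o)           # f'(net) = (1/2)(1 - o^2)
        return [wi + c * (d - o) * fprime * xi for wi, xi in zip(w, x)]

    w = [1.0, -1.0, 0.0, 0.5]
    x = [1.0, -2.0, 0.0, -1.0]
    print(delta_step(w, x, d=-1.0))   # approx. [0.974, -0.948, 0, 0.526]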
5.6.2(B)  Proofs

Proof 1 : For the bipolar continuous activation function, f'(net) = (1/2)(1 - o^2).

The bipolar continuous activation function is

f(net) = 2 / (1 + exp(-λ net)) - 1

Assume λ = 1, so that o = f(net) = 2 / (1 + exp(-net)) - 1.

L.H.S. = f'(net)
       = d/d(net) [ 2 / (1 + exp(-net)) - 1 ]
       = 2 · d/d(net) [ 1 / (1 + exp(-net)) ]
       = 2 exp(-net) / (1 + exp(-net))^2                                  ...(5.6.10)

R.H.S. = (1/2) (1 - o^2)
       = (1/2) [ 1 - ( 2/(1 + exp(-net)) - 1 )^2 ]
       = (1/2) [ 4/(1 + exp(-net)) - 4/(1 + exp(-net))^2 ]
       = 2 [ (1 + exp(-net)) - 1 ] / (1 + exp(-net))^2
       = 2 exp(-net) / (1 + exp(-net))^2                                  ...(5.6.11)

From Equations (5.6.10) and (5.6.11), L.H.S. = R.H.S.
1
es
1 + exp(- Anet)
u
= 1
Assume ©
0 f(net)
W
1 + exp(- net)
At}
f’ (net)
L.H.S
u
” ae
d ——
+ exp(- =|
(- net))
ii
! Gnet) adi
* a+ exp(-net))
(—exp\= net
wat
—
= a + expe net))
Scanned by CamScanner
° rtificial NeuralN rk
GF _AisSc (MU-Sem.7-Comp) 535 Eee
_ . LHS... = exp(— »(— ne net (5.6.
(5.6.12) — - ae
(1 + exp(- net))
R.H.S. = o(1-0)
__1_f,_-—_ 1
|
net) [ 1—T+ exp(- net)
= 1 + exp(-
1 + exp(— net) -1
(1 + exp(— net))
_ — expe net -» (5.6.13)
(1 + exp(— net))
LH.S. = R.H.S.
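The two identities proved above can be spot-checked numerically with a short Python sketch (the value net = 0.7 is an arbitrary test point).

    import math
    net = 0.7
    o_bip = 2 / (1 + math.exp(-net)) - 1
    o_uni = 1 / (1 + math.exp(-net))
    lhs_bip = 2 * math.exp(-net) / (1 + math.exp(-net)) ** 2   # derivative of the bipolar sigmoid
    lhs_uni = math.exp(-net) / (1 + math.exp(-net)) ** 2       # derivative of the unipolar sigmoid
    print(abs(lhs_bip - 0.5 * (1 - o_bip ** 2)) < 1e-12)        # True
    print(abs(lhs_uni - o_uni * (1 - o_uni)) < 1e-12)           # True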
Delta learning rule training algorithm (single continuous perceptron)

Given P training pairs { X_i, d_i }, i = 1, 2, 3, ..., P, where X_i is the augmented input and d_i the desired output. In the following, k denotes the number of training steps and p denotes the step number within a training cycle.

Step 1 : c > 0, λ = 1 and E_max > 0 are chosen. Weights are initialized at small random values; k <- 1, p <- 1, E <- 0.

Step 2 : The training cycle begins here.

Step 3 : Input is presented and the output is computed :

Y <- Y_p ,  d <- d_p
o <- f (W^t Y) ,  with f(net) = 2 / (1 + e^(-λ net)) - 1

Step 4 : Weights are updated :

W <- W + (1/2) c (d - o) (1 - o^2) Y

Step 5 : The cycle error is computed :

E <- E + (1/2) (d - o)^2

Step 6 : If p < P, then k <- k + 1, p <- p + 1 and go to Step 3; otherwise, go to Step 7.

Step 7 : The training cycle is completed. For E < E_max, terminate the training session; output the weights, k and E.
         If E ≥ E_max, then E <- 0, p <- 1 and enter the new training cycle by going to Step 3.
Error Back Propagation Training Algorithm (EBPTA)
(University exam question : Explain the error back propagation training algorithm with the help of a flowchart. - Dec. 11, May 12)
Consider the layered feed forward neural network with two continuous perceptron layers shown in Fig. 5.6.13.

Fig. 5.6.13 : Layered feed forward network (the i-th column of input nodes, the j-th column of hidden nodes and the k-th column of output nodes; fixed inputs z_i = -1 and y_j = -1 feed the dummy bias neurons)

In the above network, I input neurons are connected to each of the J hidden layer neurons. Weights connecting the input layer to the hidden layer are represented by the matrix V. Similarly, the outputs of the J hidden neurons are connected to each of the K output neurons, with weight matrix W.

EBPTA uses the supervised learning mode, therefore the training patterns Z should be arranged in pairs with the desired response d provided by the teacher.

O = Γ [ W Γ [VZ] ]

where Γ [VZ] is the internal mapping and relates to the hidden layer mapping Z -> Y.

V and W are adjusted so that an error value proportional to ||d - o|| is minimized.
Fig. 5.6.15 : Flowchart of the Error Back Propagation Training Algorithm
Consider the flowchart for EBPTA shown in Fig. 5.6.15.

The training begins with the feed forward recall phase (Step 2). After a single pattern vector Z is submitted at the input, the layer responses Y and O are computed.

Then the error signal computation phase (Step 4) follows. The error signal vector must be determined in the output layer first, and then it is propagated back towards the network input nodes.

Next, the K x J weights within the matrix W are adjusted (Step 5).

There are in total J - 1 neurons in the hidden layer producing the output Y. Note that the J-th component of Y is of value (-1), since the hidden layer outputs have also been augmented. Thus Y is (J x 1) and O is (K x 1).

Step 1 : η > 0 and E_max are chosen. Weights W and V are initialized at small random values. q <- 1, p <- 1, E <- 0.
Step 4 : The error signal vectors δ_o (of the output layer) and δ_y (of the hidden layer) are computed. Vector δ_o is (K x 1) and δ_y is (J x 1) :

δ_ok = (1/2) (d_k - o_k) (1 - o_k^2) ,   for k = 1, 2, ..., K

δ_yj = (1/2) (1 - y_j^2) Σ (k = 1 to K) δ_ok w_kj ,   for j = 1, 2, ..., J

Step 5 : Output layer weights are adjusted :  w_kj <- w_kj + η δ_ok y_j ,  for k = 1, 2, ..., K and j = 1, 2, ..., J

Step 6 : Hidden layer weights are adjusted :  v_ji <- v_ji + η δ_yj z_i ,  for j = 1, 2, ..., J and i = 1, 2, 3, ..., I

Step 7 : If p < P, then p <- p + 1, q <- q + 1 and go to Step 2; otherwise go to Step 8.

Step 8 : The training cycle is completed. For E < E_max, terminate the training and output the weights W, V, q and E.
         If E > E_max, then E <- 0, p <- 1, and initiate the new training cycle by going to Step 2.
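A compact Python sketch of one EBPTA step for a small network is given below. The network sizes, random initial weights and the sample pattern are illustrative assumptions, and the augmented -1 bias inputs of the text are omitted for brevity; the error-signal and weight-adjustment formulas follow Steps 2-6 above.

    import math, random

    def f(net):                     # bipolar continuous activation
        return 2.0 / (1.0 + math.exp(-net)) - 1.0

    def ebpta_step(z, d, V, W, eta=0.1):
        # Step 2: feedforward recall.  y_j = f(v_j . z),  o_k = f(w_k . y)
        y = [f(sum(vji * zi for vji, zi in zip(vj, z))) for vj in V]
        o = [f(sum(wkj * yj for wkj, yj in zip(wk, y))) for wk in W]
        # Step 4: error signals, output layer first, then back-propagated to the hidden layer.
        d_o = [0.5 * (dk - ok) * (1 - ok * ok) for dk, ok in zip(d, o)]
        d_y = [0.5 * (1 - yj * yj) * sum(d_o[k] * W[k][j] for k in range(len(W)))
               for j, yj in enumerate(y)]
        # Steps 5 and 6: adjust output-layer weights W, then hidden-layer weights V.
        W = [[wkj + eta * d_o[k] * y[j] for j, wkj in enumerate(wk)] for k, wk in enumerate(W)]
        V = [[vji + eta * d_y[j] * z[i] for i, vji in enumerate(vj)] for j, vj in enumerate(V)]
        return V, W, o

    random.seed(0)
    V = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]   # 3 inputs -> 2 hidden
    W = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(1)]   # 2 hidden -> 1 output
    V, W, o = ebpta_step(z=[1.0, 0.5, -1.0], d=[1.0], V=V, W=W)
    print(o)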
5.7  Un-Supervised Learning Algorithm : Self Organizing Maps

Fig. 5.7.1 presents a relatively simple Kohonen self organizing network with two inputs and 49 outputs.

Fig. 5.7.1 : Kohonen's self organizing network with two inputs and 49 outputs
Fig. 5.7.2 : Neighbourhood of unit C, shrinking over time : NB_c(t = 2), NB_c(t = 1), NB_c(t = 0)
The learning process here is similar to asthat of : competitive learning networks. 5 That at (ori is, Scere
imil ar;
nea
oe to be the one with the largest
measure is selected and the winning unit is considered activation function.
wei
not only the winning unit’s weigh ' :
However, for Kohonen’s feature ‘ maps, ‘ we update ght but also all of the weights in a
;
neighbourhood around the winning unit.
in Fig. 5.7.2
The neighbourhood size generally decreases slowly with each iteration, as shown —
neuron, we can also use a nei
— Instead of defining a neighbourhood of a winning neighbourhood function Qc (i) around a.
winning unit c.
Where,
2
Qe) = en Fe )
20
— Here, o reflects the scope of the neighbourhood and P, and P, are the positions of the output ul nitit// and 7 c respectively:
tively.
Scanned by CamScanner
Kohonen SOM training algorithm :

Step 1 : Initialize the weights w_ij, the learning rate α and the neighbourhood radius.

Step 2 : While the stopping condition is false, perform Steps 3 - 6.

Step 3 : Perform Steps 4 - 6 for each input vector X.

Step 4 : Compute the square of the Euclidean distance for each j = 1 to m (there are m output neurons) :

D(j) = Σ (i = 1 to n) ( x_i - w_ij )^2

Step 5 : Find the winning unit J such that D(J) is minimum.

Step 6 : For all units j within a specific neighbourhood of J, and for all i, calculate the new weights :

w_ij(new) = w_ij(old) + α [ x_i - w_ij(old) ]
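The update above can be sketched in Python as follows. The one-dimensional index neighbourhood, the decay factor and the toy data are illustrative assumptions; the winner selection and the weight update follow Steps 4 - 6.

    import random

    def som_train(data, n_units, alpha=0.5, radius=1, epochs=10):
        random.seed(1)
        w = [[random.random() for _ in data[0]] for _ in range(n_units)]   # one weight vector per output unit
        for _ in range(epochs):
            for x in data:
                # winner J: minimum squared distance D(j) = sum_i (x_i - w_ij)^2
                dists = [sum((xi - wji) ** 2 for xi, wji in zip(x, wj)) for wj in w]
                jstar = dists.index(min(dists))
                # update the winner and its neighbourhood:  w_ij <- w_ij + alpha (x_i - w_ij)
                for j in range(max(0, jstar - radius), min(n_units, jstar + radius + 1)):
                    w[j] = [wji + alpha * (xi - wji) for xi, wji in zip(x, w[j])]
            alpha *= 0.9          # learning rate (and, in practice, the radius) decays over time
        return w

    data = [[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.85, 0.9]]
    print(som_train(data, n_units=4))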
Soln. :

Fig. P. 5.8.1 shows the architecture of the given network.

Fig. P. 5.8.1 : Architecture of the given network (inputs x1 = 5, x2 = 6, x3 = 7, x4 = 8 with weights 1, 2, 3, 4 feeding the output neuron)

net = W^t X = [1 2 3 4] [5 6 7 8]^t = 5 + 12 + 21 + 32 = 70

Since the activation function is linear, f(net) = 2 · net, the output is o = 2 x 70 = 140.
Ex. 5.8.2 : The learning constant is c = 0.1. The teacher's desired responses for X1, X2 and X3 are d1 = -1, d2 = -1 and d3 = 1 respectively. Calculate the weights after one complete cycle of perceptron learning.

Soln. :

For the perceptron learning rule,

net_i = W_i^t X
o_i   = sign (net_i)
ΔW_i  = c (d - o_i) X

Step 1 : Take the first training pair, X = X1, d = d1
1
-2
X = o|> d=-l
-1
1
t -2
Compute : net=W, X = [1 -1 0 0.5] 0 =2.5
—o-4
0,= sign (net,) = sign (2.5) = 1
AW, =c(d-0,)X
1
= 01C¢1-1) “3
~1J
= 0.2
_| 04
a 0
0.2
W, = W, +AW,
1] f-0.2 0.8
_|-1 0.4 - 0.6
= 0 + o| = 0
0.5 0. 0.7
Step 2 : Take second training pair,
SetX - =X, d= d,
Scanned by CamScanner
_Artificia
| Neural Network
Ee W_Al&SC (MU-Sem.7-Comp)
1.5
X =]_o5| 457!
-l
0
— net, = Ww; X=[0.8 -0.60 0.7] | - fe
Compute =-1.6
Let
O,= Sign (net,) = Sign (— 1.6)=- 1
, 0.8
-0.6
= Ww, = 0
0.7
, Set X =X,,d=d,
-1
J
X =los{> ¢=1
-1
Ex. 5.8.3 : Determine the weights after four steps of training with the perceptron learning rule for a single neuron network. Start with initial weights W1 = [0, 0]^t and use the inputs X1 = (2, 2), X2 = (1, -2), X3 = (-2, 2), X4 = (-1, 1) with desired outputs d1 = 0, d2 = 1, d3 = 0, d4 = 1 and learning constant c = 1.   (MU - Dec. 13, 10 Marks)
Soln. :

Here, we use the unipolar binary activation function because the desired values given are 0's and 1's for perceptron learning.

net_i = W_i^t X
o_i   = f (net_i)
ΔW_i  = c (d - o_i) X

Here  o_i = 1 ,  net_i ≥ 0
          = 0 ,  net_i < 0
Step1: Take first training pair,
2
set X = X, and d=d> x=[ 3 | andd=0
net; = W, xX |
Here W, is the initial weight, W, = [0 0]
net, = [0 OQ] [3]
=0
Oo; = fO)=+1
WwW, _[ 0
=- wi+aw, =[5]+[ -2 ]=[72]
75
-2
‘ 1
net, =W,X=62-21[ 1]
=-2+4=2
o = £(2)=+1
AW, = c-(d-o0,)-X
- ma-v[
_
1,1 Jeo
Hence there is no change in weights.
, -2:
Ws = w.=[ 73]
Step3: Take third training pair
net, = WwW; x
Se
Scanned by CamScanner
SSE SS Ea aan
= [2-2] LY]
= 4-4=0
03 = f(0)=+1
AW, = c (d-0)-X
wool 3142 ~2
-2 0
2
2
“= Ws + aw} | (3 l4 ‘]
Step4: Take fourth training pair
“04 = f(-4)=0
AW, =c(d-0,)X
=Ma-9} ATT]
|) |,
W; = W4+ AW,
[E455]
iH
)}—> output 0
Fig. P. 5.8.4
p
Here f (net) = | [Binary
i sigmoid
i i means the unipolar continuous]
+e
Assume 4 =
0.8
Compute o f (net)
1 __1 __. 9.5448
(| =- T+ 0.8352
ite
Scanned by CamScanner
Soln. :
-1/}-1]-1
-1) 41/41
+1/-1)+1
+1) +1) +1
Where x,, x, are the 2 inputs and ‘d’ is the desired output (target)
Fig. P. 5.8.5
Efile)
And, our training pairs would be:
(ons elifo
Assume c = 1 (learning constant)
TathKnowtedge
Publications
Scanned by CamScanner
:
Mi&SC (MU-Sem. 7-Comp) Artificial Neural
Network
: steP 1 ‘
Compute net, = w, x;
t . -1
WX, = [0.5 -0.5 0.5]] -1
; +1
= -0.54+05+05 =0.5
0, = sgn (net,) = sgn (0.5)=+1
Now,d, =-1
“AW, = —-2c¢ x,
-1 2
=-2(1)} -1] =| 2
~ L4l -2,
Aw, =w,tAw
0.5 2 2.5
=|-05/4+] 2 |=] 15
0.5 -2. -15
Step 2:
net, = W,%
-1
= [2.5 1.5-1.5]| +1
+1
= -254+15-15
= -25
0, = sgn (net) = —1
d z+1
1 |=| 2
Awa
= 2cx
?
= 2(1)
il l2
WwW; = w, +A W2
2.5 -2 0.5
15 {=| 2] 3.5
=|
1,5 2 0.5
t
net, = W,%3
1
= [0.5 3.5 0.5) 7
2 0.5-3.5 +05 2-25
0, = fnet)= -1
d, = +l
Ee a
Scanned by CamScanner
WF _AI&SC (MU-Sem. 7-Comp) 46 Artificial Neural Network
1 [=
“AW; 2¢x;=2(1)|-1]=) —2
” of L2
Wg w,+Aw;
0.5 2 2.5
3.5 |=|-2] =| 1.5
0.5 2 2.5
Step 4:
net, W, Xs
+1
(2.5 1.5-2.5] 3
+1
25415425 =6.5
sgn(net,) = sgn (6.5)=+1
+1
No change in weights
ie. Aw,
Ws
Step 5:
net,
-1
[2.5 1.5 28-1]
+1
22.5-154+2.5=-1.5
sgn (net 5) = sgn (— 1.5) = -1
-1
. No change in weights
2.
Aw, We = Ws= 3
0
2.5
Ex. 5.8.6 : Consider the following set of input training vectors

X1 = [1, -2, 0, -1]^t ,  X2 = [0, 1.5, -0.5, -1]^t ,  X3 = [-1, 1, 0.5, -1]^t

with the initial weight vector W1 = [1, -1, 0, 0.5]^t, learning constant c = 0.1 and desired responses d1 = -1, d2 = -1, d3 = 1. Train the network using the delta learning rule with the bipolar continuous activation function (λ = 1).

Soln. :

For delta learning with the bipolar continuous activation function :

net_i = W_i^t X                                          ...(1)
o_i   = f(net_i) = 2 / (1 + exp(-λ net_i)) - 1           ...(2)
f'(net_i) = (1/2) (1 - o_i^2)                            ...(3)
ΔW_i  = c (d - o_i) f'(net_i) X                          ...(4)

Step 1 : Take the 1st training pair, X = X1 and d = d1
-2
= . 0 , d = -1
x
-1
t _1 0 0.5) a)
= W, * =(1
Compute— net, -1 =f Rig
2 mate ~1= 9-848
o, = = Tr
flmets)exp (= 2.5) -1=T7 0.0820
1: 2) 2 (4 0,848") = 0.140
f’(net,) = 5 (17%) *2 O co pits 2 RRM
4 *
Scanned by CamScanner
‘Artificial Neural Network
“Al&SC (MU-Sem. 7-Comp) 551 ;
AD re , is _2
ne aS Fat
— 0.848) (0.14) | 9
c
1 — 0.026
: -2} :| 0.052
= -— 0.026 o|= 0
1 0.026,
WwW, = W, +AW,
1 - 0.026 0.974
1 - 0.052 —0.948
= 0 + 0 = 0
0
t 1.5
Compute —net, =W, X=([0.974 -0.948 0 0.526] 05
-1
= -— 1.422 -—0.526
= — 1.948
2
-1=s
O,= finet,) = tap a = 8.0146 ~1 =~ 9.75
f'(net,) = $ (i -0,) =3 (1 -(-0.75)’) = 0.2187
0
AW, = ¢(4~0,)£" (net) X = (0.1) (1 -(0.75)) x (0.2187) | _'¢
-1
0 0
= (- 0.00546)| _5nee ae ‘hee
a 0.00546
W,#AW,= -|-| 094974
8) , ~f008t907 _| -0.£0974
Ws; 956
.00273 |=] 0.0027
L:0.52 0.00546] 0.5315
Step3: Take 3 training pair, X = X, and d=d,
' oy
Compute net, =W, X =[0.974-0.956 0.0027 0.531] 05
Scanned by CamScanner
x
9.0027 | +] 0.0133
=
0.016
0.5315 L-0.0267 0.504.
. Soln.:
For delta learning rule : t
net; = W, X
2
95 f(net;)= T+ exp (— net, ) vA
ul
f’ (net) 31-4) |
AW, «a-oi[z(1-°)]
jes
X= d=4d,.
Step1; Take 1° training pa!" X=X,andd=C
x
i}
Jy ot
e enh et ey
tye a) } Of=2-4
0 4) E
|
-eS Compute —» net, = W, X= / 0.463
__ 2 _ 1=
7 1+ exp 1)
il
= 0.393
f (net)
e ; a- (0.463))
2
,
463) (0.393) E
AW,
‘= (0.25)C 1- 0.
Scanned by CamScanner
samo
2 Artificial Neural Network
|
}
}
| = (-0.1437)} 0] = ao
-l
1 ons)
X =s/-2]|,d=1
-1
Bea boty. 1
Compute TP netgn Wa X= (OTS: 1.1437] |-2
“a
Scanned by CamScanner
hh
-AN&SC (MU-Sem.:7-Comp)
work
esvney) Aftificlal Neural Net
We know that the correction
ha S been P perf tformed at every step this means, actual output and desired outputs were
not same.
.
This means they we exactly opposite of each
i ¥ oth er,
Since d, ° =-1>0,=41
d, =+1>0,=-1
d, =-1>0,=+1 sts ?
-2 44
0 0
“AW, a, = =()¢6
(Q)EG1- 1) 3 = 6
é | -4 2
. W, = W,+AW;
» W3 = W,-AW,
3 4 -1
2 0 2
“!6 || 6 |] 0
1 2 -1
Step2: W; = W,+AW, .
AW, = 0(d,-9)Xo
0 0
Voy 2
= (1)(1-C 1) 2 \F 4
-1 -2
i Now W, = W,-AW,
t “ -1 0 l “1
: =ah = - :
Wi = W2-AW1
4
: 177277 |
4 4 a
=| .4|7| -6 |] 2
{ ait -!
Puolicatigns
A
Scanned by CamScanner
‘ ' Artificial Neural Network
Step 1 :
net1 = w1ᵗ x1 = [3  2  6  1] [1  -2  3  -1]ᵗ = 3 - 4 + 18 - 1 = 16
o1 = sgn(net1) = +1, but d1 = -1
Δw1 = c (d1 - o1) x1 = (1)(-1 - 1) [1  -2  3  -1]ᵗ = [-2  4  -6  2]ᵗ
w2 = w1 + Δw1 = [3  2  6  1]ᵗ + [-2  4  -6  2]ᵗ = [1  6  0  3]ᵗ

Step 2 :
net2 = w2ᵗ x2 = [1  6  0  3] [0  -1  2  -1]ᵗ = 0 - 6 + 0 - 3 = -9
o2 = sgn(-9) = -1, but d2 = +1
Δw2 = c (d2 - o2) x2 = (1)(1 - (-1)) [0  -1  2  -1]ᵗ = [0  -2  4  -2]ᵗ
w3 = w2 + Δw2 = [1  6  0  3]ᵗ + [0  -2  4  -2]ᵗ = [1  4  4  1]ᵗ

Step 3 :
x3 = [-2  0  -3  -1]ᵗ, d3 = -1
net3 = w3ᵗ x3 = [1  4  4  1] [-2  0  -3  -1]ᵗ = -2 + 0 - 12 - 1 = -15
o3 = sgn(-15) = -1 = d3
∴ Δw3 = c (d3 - o3) x3 = (1)(-1 - (-1)) x3 = 0
Hence w4 = w3, and the weights after one training cycle are
w2 = [1  6  0  3]ᵗ, w3 = [1  4  4  1]ᵗ and w4 = w3 = [1  4  4  1]ᵗ.
Soln. :
Fig. P. 5.8.9(a)
(I) Forward Pass : Computing forward activations.
(i) Computing the hidden layer activations (z1 and z2) :
net_z1 = v11 x1 + v21 x2 + v01 = 2(0) + 1(1) + 0.1 = 1.1
Using the bipolar continuous activation function (assume λ = 1),
    f(net) = 2 / (1 + e^(-net)) - 1
z1 = f(net_z1) = 2 / (1 + e^(-1.1)) - 1 = 0.5005
net_z2 = v12 x1 + v22 x2 + v02 = 1(0) + 2(1) + 0.3 = 2.3
z2 = f(net_z2) = 2 / (1 + e^(-2.3)) - 1 = 0.8178
(ii) Computing the output layer activation (y) :
net_y = w1 z1 + w2 z2 + w0 = 0.4(0.5005) + 0.2(0.8178) + (-0.4) = -0.036
y = f(net_y) = -0.0181
(II) Backward Pass :
(i) Computing the error signal of the output layer :
δ_o = (1/2)(d - y)(1 - y²) = (1/2)(1 - (-0.0181))(1 - 0.0181²) = 0.5089
(ii) Computing the error signals of the hidden layer (z_j) :
δ_zj = (1/2)(1 - z_j²) δ_o w_j,  j = 1, 2
δ_z1 = (1/2)(1 - 0.5005²)(0.5089)(0.4) = 0.0763
δ_z2 = (1/2)(1 - 0.8178²)(0.5089)(0.2) = 0.0169
(iii) Adjusting the output layer weights (assume η = 1) :
w_j(new) = w_j(old) + η δ_o z_j
w1 = 0.4 + (0.5089)(0.5005) = 0.6547
w2 = 0.2 + (0.5089)(0.8178) = 0.6162
Bias : w0 = w0(old) + η δ_o (1) = -0.4 + 0.5089 = 0.1089
(iv) Adjusting the hidden layer weights (v11, v21, v01, v12, v22, v02) :
v_ij(new) = v_ij(old) + η δ_zj x_i  for j = 1, 2 and i = 1, 2 (and the bias inputs)
Final weights : w1 = 0.6547, w2 = 0.6162, w0 = 0.1089,
v11 = 2, v21 = 1 + 0.0763 = 1.0763, v12 = 1, v22 = 2 + 0.0169 = 2.0169,
v01 = 0.1 + 0.0763 = 0.1763, v02 = 0.3 + 0.0169 = 0.3169.
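For reference, the same single training step can be written out in code. This is a minimal sketch (not a general EBP implementation) using the inputs, desired output and weight values of the example above, with λ = 1 and learning rate η = 1 as assumed there; it reproduces δ_o, δ_z1, δ_z2 and the adjusted weights.

import numpy as np

def f(net):                      # bipolar continuous activation (lambda = 1)
    return 2.0 / (1.0 + np.exp(-net)) - 1.0

x1, x2, d = 0.0, 1.0, 1.0        # training pattern and desired output
v = {"v11": 2.0, "v21": 1.0, "v01": 0.1,     # hidden layer weights (input 1, input 2, bias)
     "v12": 1.0, "v22": 2.0, "v02": 0.3}
w1, w2, w0 = 0.4, 0.2, -0.4                  # output layer weights and bias

# Forward pass
z1 = f(v["v11"] * x1 + v["v21"] * x2 + v["v01"])     # 0.5005
z2 = f(v["v12"] * x1 + v["v22"] * x2 + v["v02"])     # 0.8178
y  = f(w1 * z1 + w2 * z2 + w0)                       # -0.0181

# Error signals (delta terms), using f'(net) = 0.5 (1 - o^2)
delta_o  = 0.5 * (d - y) * (1.0 - y ** 2)            # 0.5089
delta_z1 = 0.5 * (1.0 - z1 ** 2) * delta_o * w1      # 0.0763
delta_z2 = 0.5 * (1.0 - z2 ** 2) * delta_o * w2      # 0.0169

# Weight updates with learning rate eta = 1
w1 += delta_o * z1          # 0.6547
w2 += delta_o * z2          # 0.6162
w0 += delta_o * 1.0         # 0.1089
v["v21"] += delta_z1 * x2   # 1.0763  (v11 unchanged because x1 = 0)
v["v22"] += delta_z2 * x2   # 2.0169
v["v01"] += delta_z1 * 1.0  # 0.1763
v["v02"] += delta_z2 * 1.0  # 0.3169
print(round(w1, 4), round(w2, 4), round(w0, 4), {k: round(val, 4) for k, val in v.items()})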
Q.1  Explain linearly separable and non-linearly separable patterns with examples.
Q.2  Explain Error Back Propagation Algorithm with the help of flowchart.
Q.3  Write a short note on Kohonen self organizing network.
Q.4  Explain McCulloch Pitts Neuron Model with help of an example.
Q.5  A neuron with 4 inputs has the weight vector w = [1, 2, 3, 4]ᵗ. The activation function is linear, that is, the activation function is given by f(net) = 2 × net. If the input vector is X = [5, 6, 7, 8]ᵗ, then find the output of the neuron.
Q.6  Explain Error back propagation training algorithm with the help of a flowchart.
Q.7  A single neuron network using f(net) = sgn(net) has been trained using the pairs of (Xi, di) as given below :
     X1 = [1  -2  3  -1]ᵗ, d1 = -1
     X2 = [0  -1  2  -1]ᵗ, d2 = 1
     X3 = [-2  0  -3  -1]ᵗ, d3 = -1
     The final weights obtained using the perceptron rule are :
Expert System
- . Hybrid systems are those for which more than one soft computing technique is integrated to solve a real-world
problem.
Neural networks, fuzzy logic and genetic algorithms are soft computing techniques which have been inspired by
biological computational processes.
Each of these technologies has its own advantages and limitations. For example, while neural networks are good at
recognizing patterns, they are not good at explaining how they reach their decisions.
Fuzzy logic systems, which can reason with imprecise information, are good at explaining their decisions but they
cannot automatically acquire the rules they use to make those decisions.
These limitations have been a central driving force behind the creation of intelligent hybrid systems, where two or more techniques are combined in a manner that overcomes the limitations of the individual techniques.
Fig. 6.1.1 : A sequential hybrid system
2. Auxiliary hybrid systems
In this type, a technology treats another technology as a "subroutine" and calls it to process or generate whatever information is needed by it. Fig. 6.1.2 illustrates the auxiliary hybrid system.
Fig. 6.1.2 : An auxiliary hybrid system (Technology A calls Technology B as a subroutine)
3. Embedded hybrid systems
- In embedded hybrid systems, the participating technologies are integrated in such a manner that they appear to be intertwined.
- The fusion is so complete that it would appear that no technology can be used without the other for solving the problem. Fig. 6.1.3 depicts the schema for an embedded hybrid system.
- An example of this type is an NN-FL (Neural Network - Fuzzy Logic) hybrid system that has an NN which receives fuzzy inputs, processes them and generates fuzzy outputs as well.
Fig. 6.1.3 : An embedded hybrid system (Technology B embedded within Technology A)
Fuzzy neural system is a system with seamless integration of fuzzy logic and neural networks.
While fuzzy logic provides an inference mechanism under cognitive uncertainty, Neural networks offer advantages,
such as learning, adaptation, fault-tolerance, parallelism and generalization.
To enable a system to deal with cognitive uncertainties like humans, we need to incorporate the concept of fuzzy logic
into the neural networks.
The first step is to develop a "fuzzy neuron" based on the understanding of biological neuronal morphologies. This leads to the following three steps in a fuzzy neural computational process.
(a) Model 1:
In this model the output of fuzzy system is fed as an input to the neural networks.
- The input to the system is linguistic statements. The fuzzy interface block converts these linguistic statements into an input vector which is then fed to a multi-layer neural network. The neural network can be adapted (trained) to yield desired command outputs or decisions.
Fig. : Model 1 — linguistic statements are converted by the fuzzy interface into neural inputs (perception as neural inputs); the neural network produces decisions (neural outputs).
- A well-known and practically used Neuro-Fuzzy system is ANFIS (Adaptive Neuro-Fuzzy Inference System), explained below. It is based on the Takagi-Sugeno fuzzy inference system developed in the 1990s.
- ANFIS is a neural network which is functionally equivalent to a fuzzy inference model.
- Since it integrates both neural networks and fuzzy logic, it has the potential to capture the benefits of both in a single framework.
ANFIS Architecture
- Fig. 6.1.6 shows the first-order Sugeno model and its equivalent ANFIS architecture is shown in Fig. 6.1.7.
f1 = p1 x + q1 y + r1
f2 = p2 x + q2 y + r2
f = (w1 f1 + w2 f2) / (w1 + w2) = w̄1 f1 + w̄2 f2
Fig. 6.1.6 : A two-input first-order Sugeno model
Fig. 6.1.7 : The equivalent ANFIS architecture (Layer 1 to Layer 5)
Layer 3 : Normalization layer
O3,i = w̄i = wi / (w1 + w2),  i = 1, 2
Layer 4 :
O4,i = w̄i fi = w̄i (pi x + qi y + ri),  i = 1, 2
where w̄i is the normalized firing strength from layer 3 and {pi, qi, ri} is the parameter set of this node.
Layer 5 :
O5,1 = Σi w̄i fi = (Σi wi fi) / (Σi wi)
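As an illustration of how layers 3 to 5 combine the rule outputs, the sketch below computes the overall output of a two-rule first-order Sugeno model. The firing strengths and consequent parameters passed in the call are made-up values for demonstration only; they are not taken from any figure in this chapter.

def sugeno_two_rule_output(x, y, w1, w2, p, q, r):
    """First-order Sugeno / ANFIS layers 3-5 for two rules.

    w1, w2 : rule firing strengths from layer 2
    p, q, r: consequent parameter pairs, e.g. p = (p1, p2)
    """
    # Layer 3: normalized firing strengths
    w1_bar = w1 / (w1 + w2)
    w2_bar = w2 / (w1 + w2)
    # Layer 4: weighted rule consequents f_i = p_i*x + q_i*y + r_i
    f1 = p[0] * x + q[0] * y + r[0]
    f2 = p[1] * x + q[1] * y + r[1]
    # Layer 5: overall output = sum of the weighted consequents
    return w1_bar * f1 + w2_bar * f2

# Illustrative (made-up) firing strengths and consequent parameters:
print(sugeno_two_rule_output(x=3.0, y=2.0, w1=0.7, w2=0.3,
                             p=(1.0, 2.0), q=(0.5, -1.0), r=(0.0, 1.0)))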
Artificial intelligence aims to implement intelligence in machines by developing computer programs that exhibit intelligent behavior. To develop systems which are capable of solving complex problems, it needs efficient access to substantial domain knowledge, a good reasoning mechanism and an effective and efficient way of representing knowledge and inferences, to apply the knowledge to the problems they are supposed to solve. Expert systems also need to explain to the users how they have reached their decisions.
Expert systems are generally based on the way of knowledge representation, production rule formation, searching
mechanism and so on. Usually, to build an expert system, system’s shell is used, which is an existing knowledge
independent framework, into which domain knowledge can be added to produce a working expert system. This avoids
programming for each new system from scratch.
Often, the two terms, Expert Systems (ES) and Intelligent Knowledge Based Systems (IKBS), are used synonymously. Expert systems are programs whose knowledge base contains the knowledge used by human experts, in place of knowledge gathered from textbooks or non-experts. Expert systems have the most widespread areas of artificial intelligence application.
Expert systems simulate human reasoning process in the given problem domain, whereas computer applications try to
simulate the domain itself.
Expert systems use various methods to represent the domain knowledge gathered from human experts. They perform reasoning over representations of human knowledge, in addition to numerical calculations or data retrieval. In order to do this, expert systems have corresponding distinct modules referred to as the inference engine and the knowledge base. In case of conventional computer applications it might be just the calculations performed on available data, without inference knowledge.
Use approximations
Expert systems tend to solve problems using heuristic, approximate or probabilistic methods, very much like humans do in general; most of the time these methods do not guarantee that the result is correct or optimal. In case of conventional computer applications, strict algorithms are followed to produce the solutions.
Provide explanations
Expert systems usually need to provide explanations and justifications of their solutions or recommendations in order to make the user understand the reasoning process used to produce a particular solution. This type of behavior is hardly observed in case of conventional computer applications.
- Data interpretation : There are different types of data to be interpreted by an expert system, which have various formats and features. Example : sonar data, geophysical measurements.
- Diagnosis of malfunctions : While collecting data from machines or from experts there can be a shortfall of accuracy or mistakes in readings. Example : equipment faults or human diseases.
- Structural analysis : If the system is built for a domain where complex objects like chemical compounds or computer systems are used, the configuration of these complex objects must be studied by the expert system.
- Planning : Expert systems are required to plan sequences of actions in order to perform some task.
- Following are the two principal components of every expert system.
1. Knowledge base
2. Inference engine
Fig. : Architecture of an expert system (knowledge acquisition subsystem, knowledge base, inference engine, explanation subsystem and user interface, connecting the knowledge engineer, domain expert and user)
- If we consider the inference engine as the brain of the expert system, then the knowledge base is its heart. The more powerful the heart, the faster and more efficiently the brain can function. Hence the success of any expert system more or less depends on the quality of the knowledge base it works on.
1. Knowledge base
There are two types of knowledge expert systems can have about the task domain.
(i) Factual knowledge : It is the knowledge which is accepted widely as standard knowledge. It is available in text books, research journals and the internet. It is generally accepted and verified by domain experts or researchers of that particular field.
(ii) Heuristic knowledge : It is experiential, judgmental and may not be approved or acknowledged publically. This type of knowledge is rarely discussed and is largely individualistic. It doesn't have standards for evaluation of its correctness. It is the knowledge of good practice, good judgment and probable reasoning in the domain. It is the knowledge that is based on the "art of good guessing." It is very subjective to the practitioner's knowledge and experience in the respective problem domain.
The information an expert uses is based on his learning from various sources over a time period. Hence, the information increases with the number of years of experience in the given field, which allows him to use his databases to advantage in diagnosis, analysis and design.
2. Inference engine
The inference engine has a problem-solving module which organizes and controls the steps required to solve the problem. It involves chaining of IF...THEN rules to form a line of inference. Forward chaining and backward chaining are two simple but powerful reasoning practices used to solve a problem.
Forward Chaining : This type of reasoning strategy starts from a set of conditions and moves toward some conclusion; it is also called a data driven approach.
Backward Chaining : Backward chaining is a goal driven approach. In this type of reasoning, the conclusion is known and the path to the conclusion needs to be found out. For example, a goal state is given, but the path to that state from the start state is not known; then backward reasoning is used.
The inference engine is nothing but these methods implemented as program modules. The inference engine manipulates and uses knowledge in the knowledge base to generate a line of reasoning.
Data : x = 1, y = 2
Rules : if x = 1 and y = 2 then z = 3 ;  if z = 3 then a = 4
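The two rules above can be fired by a data-driven interpreter. Below is a minimal sketch of forward chaining over these facts and rules (the rule representation is only illustrative, not any particular shell's format); the recorded trace is the kind of line of reasoning an explanation subsystem can later report.

# Facts and rules from the example above: x=1, y=2;
# R1: if x=1 and y=2 then z=3;  R2: if z=3 then a=4.
facts = {"x": 1, "y": 2}
rules = [
    ("R1", {"x": 1, "y": 2}, ("z", 3)),
    ("R2", {"z": 3},         ("a", 4)),
]

def forward_chain(facts, rules):
    """Data-driven inference: keep firing rules whose conditions hold until nothing new is added."""
    trace = []
    fired = True
    while fired:
        fired = False
        for name, conditions, (var, value) in rules:
            if var not in facts and all(facts.get(k) == v for k, v in conditions.items()):
                facts[var] = value
                trace.append(f"{name}: {conditions} => {var}={value}")
                fired = True
    return facts, trace

facts, trace = forward_chain(facts, rules)
print(facts)    # {'x': 1, 'y': 2, 'z': 3, 'a': 4}
print(trace)    # the line of reasoning, usable for a "how" explanation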
Knowledge is most of the times incomplete and uncertain. There are various ways to deal with uncertainty in knowledge. One of the simplest methods is to associate a weight or a confidence factor with each rule. The set of methods used to represent uncertain knowledge in combination with uncertain data in the reasoning process is called reasoning with uncertainty. "Fuzzy Logic" is an important area of artificial intelligence which provides methods for reasoning with uncertainty, and the systems that use it are called "fuzzy systems".
with unce rtainty
:Most of the times, human experts can guess or use their gut feeling to approximate or
on subsystem
3. Explanation Its. As an expert system try to mimic human thinking, it uses uncertain or heuristic
knowledge as we
estimate the rem cult its credibility of the system is often in question, as is the case with humans. As an answer toa ©
humans do. Asa r =afiondble, one urges to know the rationale or the reasoning behind the answer. If the rationale
problem ee eeptable generally the answer is considered to be correct. So is the case with expert systems!!
seems to be a ’ ;
rt systems have the ability to answer questions of the form: "Why is the answer : X?" Infe rence engine
i
Most of the expeplanations by tracing the line of reasoning used by it.
x
can generate & ;
: Building an expert system is also known as “knowledge engineering” and its practitioners are
4. Knowledge & ngineer
ge engine's The primary job of knowledge engineer is to make sure that the expert system has all the
called knowle ‘ d to solve a problem. He does a vital task of gathering knowledge from domain experts.
knowledg' e neede
WF TechKnowledys
Pooticarions
Scanned by CamScanner
ett Expert System -
mp)__
¥. Al&SC (MU-Sem. 7-Co
them. Then
Knowledge engineer needs to learn how the domain expert reasons with their knowledg
e by interviewing
the interface engine. There might Be some
he translates his knowledge into programs using which he designs
how to integrate this with
uncertain knowledge involved in the knowledgebase; knowledge engineer needs to decide
be required by the user,
the available knowledgebase. Lastly, he needs to decide upon what kind of explanations will
and according to that he designs the inference levels.
6. Maintain the ES
Keep the knowledge base up-to-date by regular review and update.
Cater for new Interfaces with other information systems, as those systems evolve,
Knowledge affects the development, efficiency, speed, and maintenance of the system. Knowledge representation is a
way to transform human knowledge to machine understandable format. It Is a very challenging task in expert systems,
as the knowledge is very vast, unformatted and most of the times It Is uncertain. Knowledge representation formalizes
and organizes the knowledge required to bulld an expert system.
- The knowledge engineer must identify one or more forms in which the required knowledge should be represented in the system. He must also ensure that reasoning is possible with the selected representation.
- As the quality of knowledge matters, the representation used also matters, since it should make it possible for the programmer to write the code for the system.
- A number of knowledge-representation techniques have been devised till date to represent the knowledge efficiently. Few of the knowledge representation techniques are mentioned below, but finally the choice depends on the application and the design of the system.
o Production rules
o Decision trees
o Semantic nets
o Factor tables
o Attribute-value pairs
o Frames
o Scripts
o Logic
o Conceptual graphs
- Out of these, the most commonly used methods for representing domain knowledge are Production Rules, Semantic Nets and Frames.
a. Production Rules
(Example of a production rule : IF stability of ELEMENT is greater than 2 THEN ... initial level count of ELEMENT ...)
b. Semantic Nets
- Asemantic net or semantic network is a knowledge representation technique used for propositional information,
so it is also called a propositional net. In semantic networks the knowledge is represented as objects and
relationships between objects. They are two dimensional representations of knowledge. It conveys meaning.
Relationships provide the basic structure for organizing knowledge. It uses graphical notations to draw the
networks. Mathematically, a semantic net can be defined as a labelled directed graph. As nodes are associated
with other nodes semantic nets are also referred to as associative nets.
—- Semantic nets consist of nodes, links and link labels. Objects are denoted by nodes of the graph while the links
indicate relations among the objects. Nodes can appear as circles or ellipses or rectangles to represent objects
such as physical objects, concepts or situations. Links are drawn as arrows to express the relationships between
objects, and link labels specify specifications of relationships.
— The two nodes connected to each other via a link are related to each other. The relationships can be of two types:
“IS-A” relationship or “HAS” relationship. IS-A relationship stands for one object being “part of” the other related
object. And “HAS” relationship indicates one object “consists of” the other related object. These relationships are
nothing but the super class subclass relationships. It is assumed that all members of a subclass will inherit all the
properties of their super classes. That’s how semantic network allows efficient representation of inheritance
reasoning.
Fig. 6.5.1 : An example semantic network (Mammals, Persons, Female Persons, Jill and Jack, with SubsetOf, MemberOf, SisterOf and Legs links)
- For example, Fig. 6.5.1 shows an instance of a semantic net. In Fig. 6.5.1 all the objects are within rectangles and connected using labeled arcs. The links are given labels according to the relationships. This makes the network more readable and conveys more information about all the related objects. For example, the "MemberOf" link between Jill and Female Persons indicates that Jill belongs to the category of Female Persons.
— It does also indicate the inheritance among the related objects. Like, Jill inherits the property of having two legs
as she belongs to the category of Female Persons which in turn belongs to the category of Persons which has a
boxed Legs link with value 2.
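A semantic net of this kind is straightforward to encode as a labelled directed graph. The sketch below mirrors the description of Fig. 6.5.1 (the MalePersons node and the exact link set are assumptions based on that description), and shows how a property such as Legs is inherited along MemberOf / SubsetOf links, with a local value overriding the inherited default.

# Nodes and labelled links of a small semantic net (after Fig. 6.5.1).
links = {
    ("Persons", "Mammals"): "SubsetOf",
    ("FemalePersons", "Persons"): "SubsetOf",
    ("MalePersons", "Persons"): "SubsetOf",
    ("Jill", "FemalePersons"): "MemberOf",
    ("Jack", "MalePersons"): "MemberOf",
    ("Jill", "Jack"): "SisterOf",
}
properties = {"Persons": {"Legs": 2}, "Jack": {"Legs": 1}}   # Jack overrides the default

def get_property(node, prop):
    """Look up a property, inheriting along MemberOf/SubsetOf links if it is not local."""
    if prop in properties.get(node, {}):
        return properties[node][prop]
    for (src, dst), label in links.items():
        if src == node and label in ("MemberOf", "SubsetOf"):
            value = get_property(dst, prop)
            if value is not None:
                return value
    return None

print(get_property("Jill", "Legs"))   # 2, inherited from Persons
print(get_property("Jack", "Legs"))   # 1, local value overrides the default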
- Semantic nets can also represent multiple inheritance, through which an object can belong to more than one category and an object can be a part of more than one other object. They also allow a common form of inference known as inverse links. Inverse links make the job of inference algorithms much easier while answering reverse queries. For example, in Fig. 6.5.1 the HasSister link is the inverse of the SisterOf link. If there is a query such as "Who is the sister of Jack?", the inference system will discover that HasSister is the inverse of SisterOf; to answer the query, the inference algorithm will follow the HasSister link from Jack to Jill.
ollow the link HAS Sister
A dvantages of Semantic Nets
it can
using semantic nets. For example,
1. Semanti Ic nets are very exp res si ‘ ute detail. s can be represented
min by explicitly mentionin
g as
ues for siv e and be rep res ent ed
val ‘ 6.5.1, Jack has one leg can
represent default c i . In Fig.
ategories
person object.
shown. . Thi This specified v: a
the default value 2 for number of legs for
lue replaces
an nets canre present semantics
Semantic manti of relationships in a transparent manner.
pwn
Semanti
ic net s are eas y to im plement using PROLOG.
Disadvantages of Semantic Nets
1. The links between the objects represent only binary relations. Relations where more than two objects are involved cannot be represented using semantic nets. For example, the sentence Run (Chennai Express, Chennai, Bangalore, Today) cannot be represented directly.
2. There is no standard definition of link names.
c. Frames
- Frames provide a convenient structure for representing objects that are typical to stereotypical situations, for example visual scenes, structures of complex physical objects, etc. In this technique knowledge is decomposed into a highly modular form. Frames are a type of schema used in many Artificial Intelligence applications including vision and natural language processing, and are also useful for representing commonsense knowledge.
- Frames can represent concepts, situations, attributes of concepts, relationships between concepts and also procedures. They allow nodes to have structures, and hence the representation is regarded as three-dimensional, in contrast to the two-dimensional representation of semantic nets.
- A frame is also known as a unit, schema or list. Typically, a frame consists of a list of properties of the object and associated values, also called slots and slot fillers. The properties can be members, functions, procedures, etc., similar to the fields and values of a record.
- A frame is a collection of attributes and associated slot fillers that defines a stereotypical object. Rather than a single frame, frame systems usually consist of frames connected to each other; one of the attribute values of a frame can be another frame.
- Fig. 6.5.2 shows a frame for a book object.
- This is one of the simplest frames, but frames can have more complex structures in the real world. A powerful knowledge system can be built with the filler slots and inheritance provided by frames. The following Fig. 6.5.3 is an example of a generic frame.
Fig. 6.5.3 : A generic frame as slot-filler pairs (e.g. Slot : Name, Filler : Laptop)
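A frame of this kind maps naturally onto a slot-filler dictionary with inheritance. The sketch below is illustrative only: the slot names and fillers are assumed for demonstration rather than taken from Fig. 6.5.3, and the is_a chain shows how an instance frame inherits unfilled slots from the generic frame while overriding a default filler.

# Generic "Laptop" frame and a specific instance frame; slots are illustrative.
frames = {
    "Laptop": {
        "is_a": None,
        "slots": {"Name": "Laptop", "RAM": "8 GB", "OS": "Linux", "Weight": "1.5 kg"},
    },
    "OfficeLaptop17": {
        "is_a": "Laptop",                                   # inherits unspecified slots
        "slots": {"Owner": "Admin dept", "RAM": "16 GB"},   # RAM overrides the default filler
    },
}

def get_slot(frame_name, slot):
    """Return a slot filler, searching up the is_a chain when the slot is not filled locally."""
    frame = frames[frame_name]
    if slot in frame["slots"]:
        return frame["slots"][slot]
    if frame["is_a"] is not None:
        return get_slot(frame["is_a"], slot)
    return None

print(get_slot("OfficeLaptop17", "RAM"))   # 16 GB (local filler)
print(get_slot("OfficeLaptop17", "OS"))    # Linux (inherited from the generic Laptop frame)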
- Fig. 6.6.1 depicts the generic components of an expert system shell. It includes the knowledge acquisition subsystem, knowledge base, inference engine, explanation subsystem and user interface.
Fig. 6.6.1 : Components of an expert system shell
1. Knowledge acquisition system : Knowledge acquisition is the first and the fundamental step in building an expert system. It helps to collect the expert's knowledge required to solve the problems and build the knowledgebase. But this is also the biggest bottleneck of building an ES.
2. Knowledge base : This component is the heart of the expert system. It stores all the factual and heuristic knowledge for all the data.
6.7 Explanations
- Expert systems are developed with the aim of efficient and maximum utilization of technology in place of human expertise. To achieve this aim, along with accuracy in its working, the user interface must also be good.
- The user must be able to interact with the system easily. To facilitate user interaction, the system must possess the following two properties :
1. Expert systems must be able to explain their reasoning. In many cases users are interested in not only knowing the answers to their queries but also how the system has generated that answer. That will ensure the accuracy of the reasoning process that has produced those answers. Such kind of reasoning will be required typically in medical applications, where the doctor needs to know why a particular medicine is advised for a particular patient, as he owns the ultimate responsibility for the medicines he prescribes. Hence, it is important that the system must store enough meta-knowledge about the reasoning process and should be able to explain it to the user in an understandable way.
2. The system should be able to update its old knowledge by acquiring new knowledge. As the knowledgebase is where the system's power resides, an expert system should be able to maintain complete, accurate and up to date knowledge about the domain. This is easier said than done!! As the system is programmed based on the available knowledgebase, it is very difficult to adapt to changes in the knowledgebase. It must have some mechanism through which the programs will learn its expert behavior from raw data. Another comparatively simple way is to keep on interacting with human experts and update the system.
- TEIRESIAS was the first expert system with both these properties implemented in it. MYCIN used TEIRESIAS as its user interface.
- As the TEIRESIAS-MYCIN system answers the user's questions, the user might be satisfied or might want to know the reasoning behind the answers. The user can very well find it out by asking a "HOW" question.
- The system will interpret it as "How do you know that?" and answers it by using backward chaining, starting from the answer back to one of the given facts or rules. TEIRESIAS-MYCIN can do a fairly good job in satisfying the user's query and providing proper reasoning for it.
Types of Explanations
There are four types of explanations the expert system is generally asked for. These are :
1. Report on rule trace on progress of consultation.
The knowledge acquisition component allows the expert to enter their knowledge or expertise into the expert system. It can be refined as and when required. Nowadays automated systems that allow the expert to interact directly with the system are becoming increasingly common, thereby reducing the work pressure of knowledge engineers.
The knowledge acquisition process has three principal steps. Fig. 6.8.1 depicts them in a diagrammatic form. They are as follows :
1. Knowledge elicitation : The knowledge engineer needs to interact with the domain expert and get all the knowledge. He also needs to format it in a systematic way so that it can be used while developing the expert system shell.
2. Intermediate knowledge representation : The knowledge obtained from the domain expert needs to be stored in some intermediate representation, such that it can be worked upon to produce the final refined version.
3. Knowledgebase representation : The intermediate representation of the knowledge needs to be compiled and transformed into an executable format, e.g. production rules, that the inference engine can process. This version of knowledge is ready to get uploaded to the system shell as it is.
In the process of expert system development, a number of iterations through these three stages are required in order to equip the system with good quality knowledge. The iterative nature of the knowledge acquisition process is represented in Fig. 6.8.1.
Fig. 6.8.1 : The knowledge acquisition process (Identify problem characteristics → Find concepts to represent knowledge → Design a structure to organize knowledge → Formulate rules to embody knowledge, with reformulations, redesigns and refinements feeding back)
As quality of knowledge base determines the success of an expert system, Al researchers are continually exploring and
adding new methods of knowledge representation and reasoning. Future of expert systems depends on breaking the
knowledge acquisition bottleneck and codifying and representing a large knowledge infrastructure.
Knowledge elicitation is the first step of knowledge acquisition. In this process itself there are several stages. Generally the knowledge engineer performs these steps, and they need to be carried out before meeting the domain expert to collect the quality knowledge. They are as follows.
1. Gather the maximum possible data about the problem domain from books, manuals, the internet, etc., in order to become familiar with the specialist terminology and jargon of the problem domain.
2. Identify the types of reasoning and problem solving tasks that the system will be required to perform.
3. Find a domain expert or a team of experts who are willing to work on the project. Sometimes experts are frightened of being replaced by a computer system!
4. Interview the domain experts multiple times during the course of building the system. Find out how they solve the problems that the system is expected to solve. Have them check and refine the intermediate knowledge representation.
Knowledge elicitation is a time consuming process. There exist automated learning techniques which are increasingly being used as common modern alternatives.
Comparison of Expert Systems (ES) and Traditional Systems (TS) :
1. An expert system is knowledge base + inference engine, whereas traditional systems are algorithms + data structures.
2. Expert systems can predict future events based on the current data input patterns using their knowledge. TS cannot do prediction tasks so efficiently, as they do not have a strong inference engine.
3. ES have a very strong inference system to deduce knowledge from the given facts. TS do not have a strong inference system to deduce knowledge; they do not have an inference process.
4. ES have an explanation subsystem which can explain and justify the results at any intermediate stage. TS do not have any mechanism to justify the results; manual debugging is required to be done.
5. ES can do tasks like planning, scheduling, prediction and diagnosis, which require dealing with current data input and knowledge from past experiences, and which are generally handled by human experts. TS cannot do such expert tasks without human intervention.
6. Expert systems are able to match human expertise in a particular domain, provided with a complete knowledgebase and a powerful inference engine. TS can only provide data based on the available data; they cannot provide the user with knowledge about the domain, and human expertise is required to analyse the data further to deduce knowledge from it. Hence TS cannot eliminate human experts, they can only assist them.
6.10 The Applications of Expert Systems
The applications of expert systems find their way into most areas of knowledge work. Expert systems are used in the place of human interaction for industrial and commercial problems.
There exists a wide spectrum of applications for expert systems. These applications are so big in number that they cannot be categorized easily. All these applications are clustered into seven major classes.
1. Diagnosis and troubleshooting of devices and systems of all kinds
This category of applications comprises expert systems that detect faults and suggest corrective actions for a malfunctioning device or process. Medical diagnosis was one of the first knowledge areas to which expert system technology was applied.
3. Configuration of manufactured objects from subassemblies
Configuration applications are historically one of the most important expert system applications. In this category of applications the solution to a problem is synthesized from a given set of elements related by a set of constraints. These applications are used in many different industries. Examples include modular home building, manufacturing, and complex engineering design and manufacturing.
4. Financial decision making
Expert systems are used vigorously in financial services. Advisory programs are widely used to assist bankers to take decisions about loans. Insurance companies use expert systems to assess the risk associated with the customer in order to determine the price for the insurance. A typical application in the financial markets is in foreign exchange trading.
5. Knowledge Publishing
Usage of expert systems in this area is relatively new, but has good potential for further exploration. There are many applications based on user information access preferences.
6. Process monitoring and control
In this category, expert systems performing analysis of real time data are designed. These systems obtain data from physical devices and produce results specifying anomalies, predicting trends, etc. We have existing expert systems to monitor the manufacturing processes in the steel making and oil refining industries.
7. Design and manufacturing
These expert systems assist in the design of physical devices and processes, ranging from high-level conceptual design of abstract entities all the way to factory floor configuration of manufacturing processes.
Review Questions
Q.1 Explain the need of hybrid approach.
Q.3 Explain the architecture of ANFIS with the help of block diagram.
Q.4 What is expert system? Give examples of real time expert systems.
Q.5 What are the characteristics of expert system?