Expert System

This document discusses expert systems and knowledge representation. It begins by defining expert systems and providing examples. It then discusses some famous early expert systems like MYCIN, PROSPECTOR, and INTERNIST. The document outlines the typical structure of expert systems including the database of facts, knowledge base, inference engine, and explanation mechanism. It explains forward and backward chaining and how rules can represent uncertainty. The document provides examples of rules from different expert systems and discusses desirable features of rules. It also discusses generating explanations in expert systems.

Uploaded by

Alman Albo
Copyright
© Attribution Non-Commercial (BY-NC)

EXPERT SYSTEMS AND KNOWLEDGE REPRESENTATION

Ivan Bratko Faculty of Computer and Info. Sc. University of Ljubljana

EXPERT SYSTEMS

An expert system behaves like an expert in some narrow area of expertise


Typical examples:
- Diagnosis in an area of medicine
- Allocating seats between economy class and business class for future flights
- Diagnosing paper problems in rotary printing

A very fashionable area of AI in the period 1985-95

SOME FAMOUS EXPERT SYSTEMS

MYCIN (Shortliffe, Feigenbaum; late seventies)
Diagnosis of infections

PROSPECTOR (Duda, Hart et al., ~1980)
Ore and oil exploration

INTERNIST (Pople, mid eighties)
Internal medicine

STRUCTURE OF EXPERT SYSTEMS

Components:
- database of facts
- knowledge base
- inference engine
- explanation mechanism

Database of facts
This holds the user's input about the current problem. The user may begin by entering as much as they know about the problem, or the inference engine may prompt for details or ask whether certain conditions exist. Gradually a database of facts is built up, which the inference engine will use to come to a decision. The quality and quantity of data gained from the user will influence the reliability of the decision. The following form gives a very simple example of how a database of facts might be built up in an expert system to identify types of animals. Known facts would be entered.

Animal-ID
Backbone: Yes / No
Body parts: Two / Three / Four
Outer body: Hair / Feathers / Scales / Exoskeleton
Number of legs: Two / Four / Six / Eight

Knowledge base
Expert systems differ from other information systems in that they hold knowledge. Knowledge is not the same as data or information because it is active and can generate new understanding. It consists not only of data and information but also of interrelationships, consequences and predictions. Someone who is very knowledgeable in a specific field is known as an expert. While knowledge in humans is gained by learning, experience and experimentation, knowledge in a computer is often represented by rules. The knowledge base contains the facts and rules, or knowledge, of the expert. Below is an example of how IF-THEN rules might be applied in our Animal-ID expert system.

IF animal has backbone THEN vertebrate
IF animal is vertebrate AND has hair THEN mammal
IF animal is mammal AND has pointed teeth AND has claws THEN carnivore

Inference engine
The inference engine connects the knowledge base and the database of facts. It interprets the rules and draws conclusions. In rule-based expert systems there are two main types of reasoning: forward chaining and backward chaining. The user interface for both may be similar; it is how they use the rules that differs. Some expert systems use both.

Forward chaining
Forward chaining is a 'data driven' method of reasoning. It begins with the available data, compares it with the facts and rules held in the knowledge base, and then infers the most likely conclusion (IF ... THEN). Forward chaining starts with the symptoms and works forward to find a solution.

Backward chaining
Backward chaining is a 'goal driven' method of reasoning. It begins with a goal and then examines the evidence (data and rules) to determine whether or not it is correct (THEN ... IF). Backward chaining starts with a hypothesis and works backwards to prove or disprove it.

Fuzzy logic
In expert systems there are usually many different variables, so how does the system know which are the most or least important? Fuzzy logic allows expert systems to reach decisions when there are no completely true or completely false answers. It is based on the way that humans make decisions. Each variable is assigned a value between 0 and 1, such as 0.4 (40%); the variables can then be processed by the system to reach a decision. In our Animal-ID system, for example, the user may or may not know whether the animal has pointed teeth, so most but not all rules would match and the certainty value would be reduced to, say, 0.8.
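The data-driven loop described above can be sketched in a few lines of Python. (The lecture's own code examples are in Prolog; this transliteration, and the rule and fact names from the Animal-ID example, are illustrative only.)

```python
# Minimal forward-chaining sketch for the Animal-ID example.
# Each rule is (set_of_conditions, conclusion); facts grow until no rule fires.

RULES = [
    ({"has backbone"}, "vertebrate"),
    ({"vertebrate", "has hair"}, "mammal"),
    ({"mammal", "has pointed teeth", "has claws"}, "carnivore"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:                      # repeat until no rule adds a new fact
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # data-driven: derive from what we know
                changed = True
    return facts

print(forward_chain({"has backbone", "has hair",
                     "has pointed teeth", "has claws"}))
```

Starting from the symptoms, the loop fires vertebrate, then mammal, then carnivore, exactly the "work forward to a solution" behaviour described above.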

Explanation mechanism
The explanation mechanism is an important feature of expert systems. It explains how the system has reached a decision and can also allow users to find out why it has asked a particular question. Because the system is rule based, it can show the user which rules were used during the inference process. This allows the user to judge the reliability of the decision. For example:

The animal has no backbone, therefore it is an invertebrate
The invertebrate has an exoskeleton, therefore it is an arthropod
The arthropod has three main body parts, therefore it is an insect
The insect has a triangular shaped head and elliptical shaped body, therefore it is a cockroach

Certainty
In the examples from the Animal-ID expert system, all of the data held in the database of facts matched all of the IF-THEN rules. The system could therefore predict with full confidence (1, or 100%) that the conclusions were true. However, if some parts of the IF-THEN rules could not be matched with data held in the database of facts, the system could not be as confident.
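The "which rules were used" explanation above amounts to keeping a trace of every rule firing. A hedged Python sketch (rule and fact names invented for this illustration, not taken from any particular system):

```python
# Sketch: forward chaining that records each rule firing, so the
# fired-rule trace can be shown to the user as a "How?" explanation.

RULES = [
    ({"no backbone"}, "invertebrate"),
    ({"invertebrate", "exoskeleton"}, "arthropod"),
    ({"arthropod", "three body parts"}, "insect"),
]

def explain(facts):
    facts, trace = set(facts), []
    changed = True
    while changed:
        changed = False
        for conds, concl in RULES:
            if conds <= facts and concl not in facts:
                facts.add(concl)
                # remember the step so the user can ask "How did you get this?"
                trace.append(f"{' and '.join(sorted(conds))} therefore {concl}")
                changed = True
    return facts, trace

facts, trace = explain({"no backbone", "exoskeleton", "three body parts"})
for step in trace:
    print(step)
```

Each printed line corresponds to one "... therefore ..." step of the example explanation chain above.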

Below are rules for an expert system used to identify animals. Read through them and then complete the activity at the end, which looks at the difference between forward and backward chaining.

FEATURES OF EXPERT SYSTEMS


- Problem solving in the area of expertise
- Interaction with the user during and after problem solving
- Heavy reliance on domain knowledge
- Explanation: ability to explain results to the user

REPRESENTING KNOWLEDGE WITH IF-THEN RULES

Traditionally the most popular form of knowledge representation in expert systems


Examples of rules:

if condition P then conclusion C
if situation S then action A
if conditions C1 and C2 hold then condition C does not hold

DESIRABLE FEATURES OF RULES


- Modularity: each rule defines a relatively independent piece of knowledge
- Incrementability: new rules can be added relatively independently of other rules
- Support for explanation
- Can represent uncertainty

TYPICAL TYPES OF EXPLANATION

How explanation
Answers the user's questions of the form: How did you reach this conclusion?

Why explanation
Answers the user's questions of the form: Why do you need this information?

RULES CAN ALSO HANDLE UNCERTAINTY

If condition A then conclusion B with certainty F

EXAMPLE RULE FROM MYCIN


if
  1  the infection is primary bacteremia, and
  2  the site of the culture is one of the sterile sites, and
  3  the suspected portal of entry of the organism is the gastrointestinal tract
then
  there is suggestive evidence (0.7) that the identity of the organism is bacteroides.

EXAMPLE RULES FROM AL/X
Diagnosing equipment failure on oil platforms, Reiter 1980

if
  the pressure in V-01 reached relief valve lift pressure
then
  the relief valve on V-01 has lifted [N = 0.005, S = 400]

if
  NOT the pressure in V-01 reached relief valve lift pressure, and
  the relief valve on V-01 has lifted
then
  the V-01 relief valve opened early (the set pressure has drifted) [N = 0.001, S = 2000]

EXAMPLE RULE FROM AL3
Game playing, Bratko 1982


if
  1  there is a hypothesis, H, that a plan P succeeds, and
  2  there are two hypotheses, H1, that a plan R1 refutes plan P, and H2, that a plan R2 refutes plan P, and
  3  there are facts: H1 is false, and H2 is false
then
  1  generate the hypothesis, H3, that the combined plan 'R1 or R2' refutes plan P, and
  2  generate the fact: H3 implies not(H)

A TOY KNOWLEDGE BASE: WATER IN FLAT

(Figures: flat floor plan; inference network)

IMPLEMENTING THE LEAKS EXPERT SYSTEM

Possible directly as Prolog rules:

leak_in_bathroom :-
    hall_wet,
    kitchen_dry.


Employ the Prolog interpreter as inference engine

Deficiencies of this approach:
- Limited syntax
- Limited inference (Prolog's backward chaining)
- Limited explanation

TAILOR THE SYNTAX WITH OPERATORS

Rules:

if
    hall_wet and kitchen_dry
then
    leak_in_bathroom.

Facts:

fact( hall_wet).
fact( bathroom_dry).
fact( window_closed).

BACKWARD CHAINING RULE INTERPRETER


is_true( P) :-
    fact( P).

is_true( P) :-
    if Condition then P,        % A relevant rule
    is_true( Condition).        % whose condition is true

is_true( P1 and P2) :-
    is_true( P1),
    is_true( P2).

is_true( P1 or P2) :-
    is_true( P1)
    ;
    is_true( P2).
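For readers without a Prolog system, the same goal-driven interpreter can be transliterated into Python. This is a sketch: composed conditions are encoded as nested tuples, and the fact set is chosen here so that the sample query succeeds (the slide's own fact list has bathroom_dry instead of kitchen_dry).

```python
# Backward-chaining sketch mirroring the Prolog is_true/1 clauses above.
# A condition is either an atomic fact name, or ("and", c1, c2) / ("or", c1, c2).

FACTS = {"hall_wet", "kitchen_dry", "window_closed"}
RULES = [(("and", "hall_wet", "kitchen_dry"), "leak_in_bathroom")]

def is_true(p):
    if isinstance(p, tuple):               # composed condition
        op, a, b = p
        if op == "and":
            return is_true(a) and is_true(b)
        return is_true(a) or is_true(b)
    if p in FACTS:                         # is_true(P) :- fact(P).
        return True
    # is_true(P) :- (if Cond then P), is_true(Cond).
    return any(concl == p and is_true(cond) for cond, concl in RULES)

print(is_true("leak_in_bathroom"))   # True
```

Like the Prolog version, the query starts from the goal (leak_in_bathroom) and works backwards through rule conditions to the stored facts.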

A FORWARD CHAINING RULE INTERPRETER


forward :-
    new_derived_fact( P),            % A new fact
    !,
    write( 'Derived: '), write( P), nl,
    assert( fact( P)),
    forward                          % Continue
    ;
    write( 'No more facts').         % All facts derived

FORWARD CHAINING INTERPRETER, CTD.


new_derived_fact( Concl) :-
    if Cond then Concl,              % A rule
    not fact( Concl),                % Rule's conclusion not yet a fact
    composed_fact( Cond).            % Condition true?

composed_fact( Cond) :-
    fact( Cond).                     % Simple fact

composed_fact( Cond1 and Cond2) :-
    composed_fact( Cond1),
    composed_fact( Cond2).           % Both conjuncts true

composed_fact( Cond1 or Cond2) :-
    composed_fact( Cond1)
    ;
    composed_fact( Cond2).

FORWARD VS. BACKWARD CHAINING

Inference chains connect various types of info.:

data ... goals
evidence ... hypotheses
findings, observations ... explanations, diagnoses
manifestations ... diagnoses, causes

Backward chaining: goal driven Forward chaining: data driven

FORWARD VS. BACKWARD CHAINING


What is better?
- Checking a given hypothesis: backward chaining is more natural
- If there are many possible hypotheses: forward chaining is more natural (e.g. monitoring: data driven)

Expert reasoning often combines both, e.g. medical diagnosis.
Water in flat: observe hall_wet, infer forward leak_in_bathroom, check backward kitchen_dry.

GENERATING EXPLANATION


How explanation: How did you find this answer?
Explanation = proof tree of the final conclusion

E.g.:
System: There is a leak in the kitchen.
User: How did you get this?

% is_true( P, Proof): Proof is a proof that P is true


:- op( 800, xfx, <=).

is_true( P, P) :-
    fact( P).

is_true( P, P <= CondProof) :-
    if Cond then P,
    is_true( Cond, CondProof).

is_true( P1 and P2, Proof1 and Proof2) :-
    is_true( P1, Proof1),
    is_true( P2, Proof2).

is_true( P1 or P2, Proof) :-
    is_true( P1, Proof)
    ;
    is_true( P2, Proof).

WHY EXPLANATION

Exploring leak in kitchen, the system asks: Was there rain?
User (not sure) asks: Why do you need this?
System: To explore whether water came from outside.

Why explanation = chain of rules between the current goal and the original (main) goal

UNCERTAINTY

Propositions are qualified with: certainty factors, degrees of belief, subjective probabilities, ...

Proposition: CertaintyFactor
if Condition then Conclusion: Certainty

A SIMPLE SCHEME FOR HANDLING UNCERTAINTY


c( P1 and P2) = min( c(P1), c(P2))
c( P1 or P2) = max( c(P1), c(P2))

Rule "if P1 then P2: C" implies: c(P2) = c(P1) * C
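The three combination rules above fit in a few lines of Python (a sketch of the scheme only; the example certainty values are invented):

```python
# The simple certainty scheme:
#   c(P1 and P2) = min, c(P1 or P2) = max, rule propagation c(P2) = c(P1) * C.

def c_and(c1, c2):
    return min(c1, c2)

def c_or(c1, c2):
    return max(c1, c2)

def apply_rule(c_condition, rule_certainty):
    return c_condition * rule_certainty

# Example: condition certainties 0.8 and 0.6, rule certainty 0.7
cond = c_and(0.8, 0.6)          # conjunction takes the minimum: 0.6
concl = apply_rule(cond, 0.7)   # conclusion certainty = 0.6 * 0.7
print(round(concl, 2))
```

Note the multiplication means certainty can only shrink along a chain of rules, matching the intuition that a long derivation is less certain than any of its steps.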

DIFFICULTIES IN HANDLING UNCERTAINTY



In our simple scheme, for example:
Let c(a) = 0.5, c(b) = 0; then c( a or b) = 0.5
Now suppose that c(b) increases to 0.5
Still: c( a or b) = 0.5
This is counter-intuitive!

Proper probability calculus would fix this.
Problem with probability calculus: needs conditional probabilities - too many?!
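The counter-intuitive step above can be demonstrated directly:

```python
# c(a or b) = max(c(a), c(b)): raising c(b) from 0 to 0.5
# does not change the disjunction's certainty at all.

def c_or(c1, c2):
    return max(c1, c2)

print(c_or(0.5, 0.0))   # 0.5
print(c_or(0.5, 0.5))   # still 0.5, although b is now much more certain
```

Under probability calculus, independent evidence for b would raise P(a or b); the max rule simply ignores it.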

TWO SCHOOLS RE. UNCERTAINTY


School 1: Probabilities are impractical! Humans don't think in terms of probability calculus anyway!

School 2: Probabilities are mathematically sound and are the right way!

Historical development of this debate: School 2 eventually prevailed (Bayesian nets); see e.g. Russell & Norvig 2003 (AI: A Modern Approach), Bratko 2001 (Prolog Programming for AI)
