
AIML Module 3,4

The document discusses uncertainty in artificial intelligence, outlining four levels of uncertainty: a clear future, alternative futures, a range of futures, and true ambiguity. It explains probability notation, axiomatic probability, and Bayesian networks as methods to represent and manage uncertainty in AI. Additionally, it highlights the role of statistical learning in assessing certainty and making predictions based on data.

Uploaded by

Mukesh Kumar

Module-IV

Acting under Uncertainty in Artificial Intelligence
With this knowledge representation, we might write A → B, which means if A is true then B is true. But consider a situation where we are not sure whether A is true or not; then we cannot express this statement. This situation is called 'uncertainty'.

What are the four levels of uncertainty?

Four levels of uncertainty:

• Level one: A clear enough future
• Level two: Alternative futures
• Level three: A range of futures
• Level four: True ambiguity
Example of Uncertainty

(Diagram: kinds of uncertainty — Epistemological, Objective, Ontological, Subjective, Moral, and Rule uncertainty — leading to knowledge-guided, quasi-rational, rule-guided, and intuition-guided decisions)

Uncertainty is everywhere and you cannot escape from it. For example, if it is unknown whether or not it will rain tomorrow, then there is a state of uncertainty.
Probability notation in AI

The value of probability always remains between 0 and 1: 0 ≤ P(X) ≤ 1, where P(X) is the probability of an event X. P(X) = 0 indicates total uncertainty in an event X; P(X) = 1 indicates total certainty in an event X.
What are probability axioms?

Axiomatic probability is just another way of describing the probability of an event. As the word itself says, in this approach some axioms are predefined before assigning probabilities. This is done to quantize the event and hence to ease the calculation of the occurrence or non-occurrence of the event.

Probability is a set function P(E) that assigns to every event E a number called the "probability of E" such that:

1. The probability of an event is greater than or equal to zero: P(E) ≥ 0
2. The probability of the sample space is one: P(Ω) = 1
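The two axioms can be checked directly in code. A minimal sketch (the fair-die distribution below is a hypothetical example, not from the notes):

```python
# Checking the probability axioms for a hypothetical fair six-sided die.
die = {face: 1 / 6 for face in range(1, 7)}

# Axiom 1: every event probability is non-negative.
assert all(p >= 0 for p in die.values())

# Axiom 2: the probability of the sample space is one.
assert abs(sum(die.values()) - 1.0) < 1e-9

# Probability of a compound event, e.g. "the roll is even".
p_even = sum(p for face, p in die.items() if face % 2 == 0)
print(round(p_even, 6))  # 0.5
```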

Inference using Full Joint Distributions in AI:

A probability model is completely determined by the full joint distribution of its random variables. E.g., if the variables are Cavity, Toothache and Weather, then the full joint distribution is given by P(Cavity, Toothache, Weather).

Example: The probability that a card is a five and black is P(five and black) = 2/52 = 1/26. (There are two black fives in a deck of 52 cards, the five of spades and the five of clubs.)
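The card example can be reproduced by brute-force enumeration of the sample space (a sketch; the list names are my own):

```python
# P(five AND black) over a standard 52-card deck, by enumeration.
from fractions import Fraction

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["spades", "clubs", "hearts", "diamonds"]  # spades and clubs are black
deck = [(rank, suit) for rank in ranks for suit in suits]

favorable = [c for c in deck if c[0] == "5" and c[1] in ("spades", "clubs")]
p = Fraction(len(favorable), len(deck))
print(p)  # 1/26
```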

Bayes' rule and its use:

Bayes' Theorem, named after the 18th-century British mathematician Thomas Bayes, is a mathematical formula for determining conditional probability. Conditional probability is the likelihood of an outcome occurring, based on a previous outcome having occurred in similar circumstances.

For example, if a disease is related to age, then, using Bayes' theorem, a person's age can be used to more accurately assess the probability that they have the disease, compared to the assessment of the probability of disease made without knowledge of the person's age.

P(A | B) = P(B | A) · P(A) / P(B)
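The disease example above can be worked numerically. A sketch of the rule (all the prevalence and test figures below are invented for illustration, not taken from the notes):

```python
# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B), with hypothetical numbers.
def bayes(p_b_given_a, p_a, p_b):
    """Return P(A|B) from P(B|A), P(A) and P(B)."""
    return p_b_given_a * p_a / p_b

p_disease = 0.01            # prior: 1% of people have the disease
p_pos_given_disease = 0.9   # test sensitivity
p_pos_given_healthy = 0.05  # false-positive rate

# Total probability of a positive test (law of total probability).
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

p_disease_given_pos = bayes(p_pos_given_disease, p_disease, p_pos)
print(round(p_disease_given_pos, 4))  # 0.1538
```

Note how the extra evidence (a positive test) raises the probability from the 1% prior, yet the posterior stays far below the test's 90% sensitivity because the disease is rare.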

Representing knowledge from Uncertainty:

What are the ways to represent knowledge?
There are mainly four ways of knowledge representation, which are given as follows:

• Logical Representation
• Semantic Network Representation
• Frame Representation
• Production Rules

(Diagram: Learning, Knowledge representation, Reasoning, Planning, and Execution as interconnected components — Logical Flow Diagram of Uncertainty)

Semantics of Bayesian network:

The syntax of a Bayes net consists of a directed acyclic graph with some local probability information attached to each node. The semantics defines how the syntax corresponds to a joint distribution over the variables of the network.

(Diagram: Bayesian network in AI — Accident, Rainy day, and Wake Up Late lead to Traffic Jam and Meeting Postponed; Traffic Jam leads to Late for work, which leads to Late for meeting)
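The way local tables define a joint distribution can be sketched on a chain from the figure, Rainy day → Traffic Jam → Late for meeting (the conditional-probability numbers below are hypothetical, chosen only to illustrate the factorization):

```python
# Local probability tables (hypothetical numbers).
p_rain = {True: 0.3, False: 0.7}
p_jam_given_rain = {True: {True: 0.8, False: 0.2},   # P(jam | rain)
                    False: {True: 0.2, False: 0.8}}
p_late_given_jam = {True: {True: 0.7, False: 0.3},   # P(late | jam)
                    False: {True: 0.1, False: 0.9}}

def joint(rain, jam, late):
    """P(rain, jam, late) as the product of the local probabilities."""
    return p_rain[rain] * p_jam_given_rain[rain][jam] * p_late_given_jam[jam][late]

# The joint distribution must sum to 1 over all eight assignments.
total = sum(joint(r, j, l) for r in (True, False)
            for j in (True, False) for l in (True, False))
print(round(total, 10))  # 1.0
```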

Example of Semantic Network:

Semantic networks are a way of representing the relationships between objects. For example, a semantic network might tell a computer the relationship between different animals (a cat IS A mammal, a cat HAS whiskers).
Use of Conditional distribution:

A conditional distribution is a distribution of values for one variable that exists when you specify the values of other variables. This type of distribution allows you to assess the dispersal of your variable of interest under specific conditions.

Example of Conditional Distribution:

Suppose you're selling computers, and you record the type of computer and the gender of the customer for each sale. Now imagine that you want to assess the dispersal of computer types for only female customers. That's an example of a conditional distribution: we're conditioning computer types on the gender variable value of female.
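The computer-sales example can be computed from raw records. A sketch with made-up sales data:

```python
# Conditional distribution of computer type given gender == "female".
from collections import Counter

sales = [("laptop", "female"), ("desktop", "female"), ("laptop", "male"),
         ("laptop", "female"), ("desktop", "male"), ("tablet", "female")]

# Condition on the female customers only, then normalize the counts.
female_sales = [ctype for ctype, gender in sales if gender == "female"]
counts = Counter(female_sales)
conditional = {ctype: n / len(female_sales) for ctype, n in counts.items()}
print(conditional)  # {'laptop': 0.5, 'desktop': 0.25, 'tablet': 0.25}
```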
Inference in Bayesian networks in AI:

In exact inference, one can analytically compute the conditional probability distribution over the variables of interest.

Bayesian inference is a way to learn from data by combining explicit prior knowledge with the data.

■ Prior knowledge is defined by a prior distribution over upcoming possible models.
■ Learning means deducing the posterior distribution by using different models with the given set of data.
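The prior-to-posterior step can be sketched with a tiny model space (the three candidate coin biases and the observed flips are invented for illustration):

```python
# Bayesian learning: a prior over candidate models, updated to a posterior.
models = [0.3, 0.5, 0.7]          # hypothetical coin biases P(heads)
prior = {m: 1 / 3 for m in models}  # uniform prior knowledge

data = ["H", "H", "T", "H"]       # observed flips

def likelihood(m, flips):
    p = 1.0
    for f in flips:
        p *= m if f == "H" else (1 - m)
    return p

# Posterior ∝ likelihood × prior, normalized over all models.
unnorm = {m: likelihood(m, data) * prior[m] for m in models}
z = sum(unnorm.values())
posterior = {m: v / z for m, v in unnorm.items()}
print(max(posterior, key=posterior.get))  # 0.7 — the model favored by 3 heads in 4
```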

(Example of Bayesian Network)


Bayesian networks provide a useful tool to visualize the probabilistic model for a domain, review all of the relationships between the random variables, and reason about causal probabilities for scenarios given available evidence.

Bayesian probability is the study of subjective probabilities, or belief in an outcome, compared to the frequentist approach where probabilities are based purely on the past occurrence of the event. A Bayesian Network captures the joint probabilities of the events represented by the model.

"A Bayesian belief network describes the joint probability distribution for a set of variables in an appropriate manner."

• Visualization. The model provides a direct way to visualize the structure of the model and motivate the design of new models.
• Relationships. Provides insights into the presence and absence of the relationships between random variables.
• Computations. Provides a way to structure complex probability calculations.


Module-3

Statistical Learning:

Statistical learning plays a key role in many areas of science, finance and industry. A few examples have already been considered. Some more examples of learning problems are: predict whether a patient, hospitalized due to a heart attack, will have a second heart attack.

Statistical learning theory is a framework for machine learning that draws from statistics and functional analysis. It deals with finding a predictive function based on the data presented. The main idea in statistical learning theory is to build a model that can draw conclusions from data and make predictions.

Why is statistics used in AI?

Assessment of certainty or uncertainty in results: Statistics can help to enable or improve the quantification of uncertainty in, and the interpretability of, AI methods. By adopting specific statistical models, mathematical proofs of validity can also be provided.


(Table: Machine learning vs. Statistics)

• Machine learning is a subfield of artificial intelligence. | Statistics is a subfield of mathematics.
• The models in machine learning are designed to make the most accurate predictions possible. | Statistical models are designed for inference about the relationships between variables.
• Machine learning is all about predictions. | Statistics is all about learning relationships between variables and their significance.
Learning with Complex Data:

Machine Learning (ML) is automated learning with little or no human intervention. It involves programming computers so that they learn from the available inputs. The main purpose of machine learning is to explore and construct algorithms that can learn from the previous data and make predictions on new input data.

What is data complexity in AI?

The "universal data complexity" is defined for a data set as the "Kolmogorov complexity" of the mapping enforced by the data set. (Kolmogorov was the Soviet mathematician who made important contributions to the theoretical foundations of probability.) It is closely related to several existing principles used in machine learning, such as Occam's razor, the minimum description length, and the Bayesian approach.

The complexity of your data is likely to indicate the level of difficulty you'll face when trying to translate it into business value: a complex data set is typically more difficult to prepare and analyze than simple data, and will often require a different set of BI tools to do so.

(Diagram: dimensions of data complexity — size, structure, detail, growth rate, query language, and type)

Hidden variables in AI:

A hidden variable, or latent variable, is a variable in a belief network whose value is not observed for any of the examples. That is, there is no column in the data corresponding to that variable.

A serious problem in learning probabilistic models is the presence of hidden variables. These variables are not observed, yet interact with several of the observed variables. As such, they induce seemingly complex dependencies among the latter. In recent years, much attention has been devoted to the development of algorithms for learning parameters, and in some cases structure, in the presence of hidden variables.

(An Example of Hidden Variable)

Rote learning in AI:

Rote learning is the process of memorizing specific new items as they are encountered. The basic idea is simple and easy to realize within a computer program: each time a new and useful piece of information is encountered, it is stored away for future use.

Rote learning is the process of memorizing information based on repetition. It enhances students' ability to quickly recall basic facts and helps develop foundational knowledge of a topic. Examples of rote learning include memorizing the alphabet, numbers, multiplication tables, or the periodic table of elements. Some consider rote learning to be a necessary step in learning certain subjects.

Advantages of rote learning in AI

• Rote learning allows one to recall information wholly, and even to retain it for life.
• Rote learning makes it easier to score for people who find it difficult to understand or master reading and maths concepts.
• Rote learning can help improve short-term memory.

(ROTE, or "Return on Tangible Equity", is a ratio that helps measure a company's profitability.)
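The idea of storing each new result for future use maps naturally to a lookup cache. A minimal sketch (the function names are my own; this is not from the notes):

```python
# Rote learning as a cache: a new item is stored away when first
# encountered, and simply recalled on every later encounter.
cache = {}

def slow_square(n):
    return n * n  # stands in for any expensive computation

def rote_square(n):
    if n not in cache:             # new item encountered:
        cache[n] = slow_square(n)  # ...store it away for future use
    return cache[n]                # otherwise recall it directly

rote_square(12)
rote_square(12)  # the second call is pure recall, no recomputation
print(cache)  # {12: 144}
```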
(Diagram: Components of a Learning System — Sensors deliver Percepts, a Performance standard guides changes to the agent's knowledge, and Effectors carry out the Actions of the Learning Agent)

Learning by taking Advice

Learning by taking advice is the easiest and simplest way of learning. In this type of learning, a programmer writes a program giving the computer some instructions to perform a task. Once it is learned (i.e. programmed), the system will be able to do new things.

Four pillars of learning

There are four fundamental elements known as the "four pillars of learning":

• The first pillar: Attention. (One can't learn without paying attention to what must be learned.)
• The second pillar: Active engagement.
• The third pillar: Feedback.
• The fourth pillar: Consolidation.


Steps in solving a problem (by an agent based on AI):

1) Define a problem.
2) Analyze the state space.
3) Gather knowledge.
4) Planning (decide data structure and control strategy).
5) Applying and executing.

Learning from Examples:

Learning is one of the fundamental building blocks of artificial intelligence (AI) solutions. From a conceptual standpoint, learning is a process that improves the knowledge of an AI program by making observations about its environment.

What is learning by example called?

In psychology, this is referred to as observational learning. Observational learning is sometimes called shaping, modeling, and vicarious reinforcement.

Best Examples of Learning

• Speaking and writing a foreign language.
• Greeting the teacher by folding hands.
• Gaining speed in mathematical calculations.
• Students can categorize types of animals.
• Students can apply the acquired skills or knowledge.
• Students can accurately describe their observations.

Induction Based Learning:

Inductive teaching and learning is an umbrella term that encompasses a range of methods, including inquiry learning, problem-based learning, project-based learning, case-based teaching, discovery learning, and just-in-time teaching.

In inductive learning, we are given examples of a function as pairs (x, f(x)). The goal of inductive learning is to learn a hypothesis that approximates f.

Here the learners must analyze the information in front of them and come up with logical conclusions, and even if they're wrong, the process helps them engage better with the information. It helps them understand the underlying logic in a way that's more memorable.
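The (x, f(x)) formulation can be sketched concretely: given example pairs, induce a hypothesis h(x) = a·x + b by least squares and use it on unseen inputs (the data below is a made-up example where the hidden target is f(x) = 2x + 1):

```python
# Inductive learning from examples: fit a linear hypothesis to (x, f(x)) pairs.
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]  # examples generated by the hidden target f(x) = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least-squares slope and intercept.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

h = lambda x: a * x + b  # the induced hypothesis
print(h(10))  # 21.0 — the hypothesis generalizes to an unseen input
```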

Explanation Based Learning:

Explanation-based learning (EBL) is a technique by which an intelligent system can learn by observing examples. EBL systems are characterized by the ability to create justified generalizations from single training instances.

(Diagram: Explanation Based Learning — a specific goal/problem is input to a Problem Solver, which outputs a generalized, justified concept)

An explanation text is typically written in the present tense with formal, to-the-point language that doesn't deviate from the topic. It uses separate text with headings and sub-headings to make the explanation simple and easy to understand.

Discovery in AI

A discovery system is an artificial intelligence system that attempts to discover new scientific concepts or laws. The aim of discovery systems is to automate scientific data analysis and the scientific discovery process.

(John McCarthy was one of the most influential people in the field. He is known as the "father of artificial intelligence" because of his fantastic work in Computer Science and AI. McCarthy coined the term "artificial intelligence" in the 1950s.)

Analogy:

Analogy is a fundamental mechanism of thought that will help AI get to where we want it to be.

Learning by analogy means acquiring new knowledge about an input entity by transferring it from a known similar entity.

For example, you might analogize driving to project management. In both cases it helps to have a map (i.e., a plan) for where you're going.

Formal Learning Theory:

Formal learning theory is the mathematical embodiment of a normative epistemology. It deals with the question of how an agent should use observations about her environment to arrive at correct and informative conclusions.

Why is formal theory important?

The development, testing, and application of formal theories allows researchers to systematically revise and expand theories and leads to cumulative knowledge, which, ultimately, advances understanding of the phenomenon in question.

Formal learning is the learning that takes place through a structured program of learning that leads to the full or partial achievement of an officially accredited qualification.

(Example of Formal Learning Theory: a learner conjectures "all ravens are black"; at each point either a black or a nonblack raven is observed, and the conjecture is revised to "not all ravens are black" when a nonblack raven appears)

Neural Net learning and Genetic learning:

First of all, genetic algorithms are search-based optimization algorithms used to find optimal or near-optimal solutions for search problems and optimization problems. Neural networks, on the other hand, are mathematical models that map between complex inputs and outputs.

ANN is a widely accepted machine learning method that uses past data to predict future trends, while GA is an algorithm that can find better subsets of input variables for importing into the ANN, hence enabling more accurate prediction through its efficient feature selection.

Genetic algorithms in machine learning are mainly adaptive heuristic or search algorithms that provide solutions for search and optimization problems in machine learning. A genetic algorithm is a methodology that solves unconstrained and constrained optimization problems based on natural selection.

What is a Genetic Algorithm?

(Diagram: Different Phases of a Genetic Algorithm — initialization of the population, fitness function, selection, crossover, and mutation)
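The phases in the figure can be sketched end to end on a toy problem (maximizing the number of 1-bits in an 8-bit string; all parameters below are arbitrary choices for illustration):

```python
# A compact GA: initialize population -> fitness -> selection -> crossover -> mutation.
import random

random.seed(0)
LENGTH, POP, GENS = 8, 20, 40

def fitness(bits):
    return sum(bits)  # toy fitness: count of 1-bits

def crossover(a, b):
    cut = random.randrange(1, LENGTH)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

# Initialization of the population.
population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for _ in range(GENS):
    # Selection: keep the fitter half as parents.
    parents = sorted(population, key=fitness, reverse=True)[:POP // 2]
    # Crossover + mutation produce the next generation.
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP)]

best = max(population, key=fitness)
print(fitness(best))  # typically at or near the optimum of 8
```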

Expert Systems:

An expert system is a computer program that uses artificial intelligence (AI) technologies to simulate the judgment and behavior of a human or an organization that has expertise and experience in a particular field. Expert systems are usually intended to complement, not replace, human experts.

Examples of Expert Systems

• MYCIN: It could recognize different bacteria that might cause acute infections and was based on backward chaining.
• DENDRAL: A molecular structure prediction tool for chemical analysis.
• CaDet: One of the best examples of an expert system that can detect cancer in its earliest stages.

Representing and Using Domain Knowledge:

An expert system is built around a knowledge base module. The expert system contains a formal representation of the information provided by the domain expert. This information may be in the form of problem-solving rules, procedures, or data intrinsic to the domain.

In software engineering, domain knowledge is knowledge about the environment in which the target system operates, for example, software agents. Domain knowledge usually must be learned from software users in the domain (as domain specialists/experts), rather than from software developers.

With domain knowledge, you will know what data is best to use to train and test your model. You will also realize how you can tailor the model you use to better represent the data set and the problem you're trying to tackle, and how to make the best use of the insights your model produces.


Expert System Shells:

The E.S. shell simplifies the process of creating a knowledge base. It is the shell that actually processes the information entered by a user, relates it to the concepts contained in the knowledge base, and provides an assessment or solution for a particular problem.

Your interface to the operating system is called a shell. The shell is the outermost
layer of the operating system. Shells incorporate a programming language to control
processes and files, as well as to start and control other programs.
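What a shell does — take user facts, relate them to the rules in a knowledge base, and return an assessment — can be sketched as a tiny forward-chaining engine (the medical rules below are invented for illustration, not MYCIN's actual rules):

```python
# A toy expert-system shell: rules fire repeatedly until no new
# conclusions can be derived from the user's facts.
knowledge_base = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Forward chaining: add each rule's conclusion once its conditions hold."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in knowledge_base:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = infer({"fever", "cough", "short_of_breath"})
print("see_doctor" in result)  # True — derived through the chained rules
```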

Components of an Expert System

These are:

1) The inference engine
2) The knowledge base
3) The user interface

Knowledge Acquisition:

Knowledge acquisition is the process of extracting, structuring, and organizing knowledge from one or more sources, and its transfer to the knowledge base and sometimes to the inference engine. This process has been identified by many researchers and practitioners as a major bottleneck.

One of the methods of knowledge acquisition is machine learning. It may be a process of autonomous knowledge creation or refinement through the use of computer programs. The newly acquired knowledge should be integrated with existing knowledge in some meaningful way.

(Knowledge Acquisition)

Reference Books:

1. Artificial Intelligence: A Modern Approach
2. Machine Learning (Hands On)
3. Artificial Intelligence For Dummies (2nd Edition)
4. Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms, Worked Examples, and Case Studies (2nd Edition)
