AIML Lab Manual
Ex. No.1.A UNINFORMED SEARCH ALGORITHM - BFS
Date:
Aim:
To write a Python program to implement Breadth First Search (BFS).
Algorithm:
Step 1. Start
Step 2. Put any one of the graph's vertices at the back of the queue.
Step 3. Take the front item of the queue and add it to the visited list.
Step 4. Create a list of that vertex's adjacent nodes. Add those which are not within the visited list to the back of the queue.
Step 5. Repeat Steps 3 and 4 until the queue is empty.
Step 6. Stop
Program:
# graph represented as an adjacency list (dictionary); the node values
# are an assumption consistent with the printed output below
graph = {
    '3' : ['7', '2'],
    '7' : ['4'],
    '2' : ['8'],
    '4' : [],
    '8' : []
}

visited = []   # list of visited nodes
queue = []     # FIFO queue of nodes to explore

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        s = queue.pop(0)            # dequeue the front node
        print(s, end=" ")
        for neighbour in graph[s]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
bfs(visited, graph, '3')
Output:
3 7 2 4 8
Result:
Thus the Python program to implement Breadth First Search (BFS) was developed successfully.
Ex. No.1.B UNINFORMED SEARCH ALGORITHM - DFS
Date:
Aim:
To write a Python program to implement Depth First Search (DFS).
Algorithm:
Step 1. Start
Step 2. Put any one of the graph's vertices on top of the stack.
Step 3. After that, take the top item of the stack and add it to the visited list.
Step 4. Next, create a list of the adjacent nodes of that vertex. Add the ones which aren't in the visited list to the top of the stack.
Step 5. Repeat Steps 3 and 4 until the stack is empty.
Step 6. Stop
Program:
# graph represented as an adjacency list (dictionary); the node values
# are an assumption following the classic example used for this exercise
graph = {
    '5' : ['3', '7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}

visited = set()   # set of visited nodes

def dfs(visited, graph, node):
    if node not in visited:
        print(node, end=" ")
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
dfs(visited, graph, '5')
Result:
Thus the Python program to implement Depth First Search (DFS) was developed successfully.
Ex. No.2.A INFORMED SEARCH ALGORITHM - A* SEARCH
Date:
Aim:
To write a Python program to implement the A* search algorithm.
Algorithm:
Step 1: Create a priority queue and push the starting node onto the queue. Initialize the minimum distance g of the start node to zero.
Step 2: Pop the node with the lowest f() = g() + heuristic() value; if it is the goal node, reconstruct and return the path.
Step 3: Otherwise, expand its neighbours, updating their distances and parents, and push unvisited neighbours onto the queue.
Step 4: If the queue is empty and the goal has not been found, return None (no path found).
Step 5: Stop
Program:
def aStarAlgo(start_node, stop_node):
    open_set = set(start_node)
    closed_set = set()
    g = {}                # store distance from starting node
    parents = {}          # parents contains an adjacency map of all nodes
    # distance of starting node from itself is zero
    g[start_node] = 0
    # start_node is root node i.e. it has no parent nodes
    # so start_node is set to its own parent node
    parents[start_node] = start_node
    while len(open_set) > 0:
        n = None
        # node with lowest f() is found
        for v in open_set:
            if n == None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v
        if n == stop_node or Graph_nodes[n] == None:
            pass
        else:
            for (m, weight) in get_neighbors(n):
                # nodes 'm' not in first and last set are added to first
                # n is set its parent
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                # for each node m, compare its distance from start i.e. g(m) to the
                # distance from start through n node
                else:
                    if g[m] > g[n] + weight:
                        # update g(m)
                        g[m] = g[n] + weight
                        # change parent of m to n
                        parents[m] = n
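                        # if m was already closed, move it back to the
                        # open set so the cheaper path can propagate
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)
        if n == None:
            print('Path does not exist!')
            return None
        # if the current node is the stop node, reconstruct the path
        # from start_node to stop_node by following the parent links
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        # remove n from the open list and add it to the closed list
        open_set.remove(n)
        closed_set.add(n)
    print('Path does not exist!')
    return None

# The helpers below are a sketch of the definitions the function relies
# on; the example graph, edge weights, heuristic values, and driver call
# are assumptions, not part of the listing above.
def get_neighbors(v):
    # return the (neighbour, weight) pairs of v, or None for a dead end
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None

def heuristic(n):
    # assumed heuristic estimates for each node
    H_dist = {'A': 11, 'B': 6, 'C': 99, 'D': 1, 'E': 7, 'G': 0}
    return H_dist[n]

# assumed example graph: adjacency map with edge weights
Graph_nodes = {
    'A': [('B', 2), ('E', 3)],
    'B': [('C', 1), ('G', 9)],
    'C': None,
    'E': [('D', 6)],
    'D': [('G', 1)],
}

# Driver Code
aStarAlgo('A', 'G')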
Result:
Thus the Python program for A* Search was developed and the output was verified successfully.
Ex. No.2.B IMPLEMENTATION OF INFORMED SEARCH ALGORITHMS (MEMORY-BOUNDED A*)
Date:
Aim:
To write a Python program to implement the memory-bounded A*.
Algorithm:
Step 1: Start the program
Program:
graph = {'A':['B','C'], 'B':['D','E'], 'C':['F','G'], 'D':['H','J'], 'E':['J','K'],
         'F':['L','M'], 'G':['N','O'], 'H':[], 'I':[], 'J':[], 'K':[], 'L':[],
         'M':[], 'N':[], 'O':[]}

# depth-limited search; the print statements and the driver below are
# reconstructed to match the printed output of this exercise
def DLS(start, goal, path, level, maxD):
    print('Goal node testing for', start)
    path.append(start)
    if (start == goal):
        print('Goal test successful')
        return path
    print('Goal node test failed')
    if (level == maxD):
        return False
    for child in graph[start]:
        # level of the child about to be tested (1-indexed)
        print('level-->', level + 2)
        if DLS(child, goal, path, level + 1, maxD):
            return path
        path.pop()
    return False

# Driver Code: goal node and depth limit are read from the user
start = 'A'
goal = input('Enter the goal node: ')
maxD = int(input('Enter the maximum depth limit: '))
path = list()
if DLS(start, goal, path, 0, maxD):
    print('Path to goal node available')
    print('Path:', path)
else:
    print('No path available for the goal node within the given depth limit')
Output:
Goal node testing for A
Goal node test failed
level--> 2
Goal node testing for B
Goal node test failed
level--> 3
Goal node testing for D
Goal node test failed
level--> 4
Goal node testing for H
Goal node test failed
level--> 4
Goal node testing for J
Result:
Thus the program for implementing informed search algorithms - memory-bounded A* was developed and the output was verified successfully.
Ex. No.3 NAIVE BAYES MODEL
Date:
Aim:
To write a Python program to implement the Naïve Bayes model.
Algorithm:
Step 1. Load the libraries: import the required libraries such as pandas, numpy, and sklearn.
Step 2. Load the dataset into a pandas DataFrame.
Step 3. Clean and preprocess the data as necessary. For example, you can handle missing values, convert categorical variables into numerical variables, and normalize the data.
Step 4. Split the data into training and test sets using the train_test_split function from scikit-learn.
Step 5. Train the Gaussian Naive Bayes model using the training data.
Step 6. Evaluate the performance of the model using the test data and the accuracy_score function.
Step 7. Finally, you can use the trained model to make predictions on new data.
Program:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Load the data
df = pd.read_csv('Data.csv')
X = df.drop('buy_computer', axis=1)   # features: all columns except the label
y = df['buy_computer']

# Split the data into training and test sets (random_state assumed)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Train the model
model = GaussianNB()
model.fit(X_train, y_train)

# Test the model
y_pred = model.predict(X_test)
print('Accuracy:', accuracy_score(y_test, y_pred))
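# Make a prediction on new, unseen data; the sample values below are an
# assumed example (age, income, student, credit_rating)
new_sample = pd.DataFrame([[45, 75000, 0, 100]], columns=X.columns)
print('Prediction:', model.predict(new_sample))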
Data.csv
age,income,student,credit_rating,buy_computer
30,45000,0,10,0
32,54000,0,100,0
35,61000,1,10,1
40,65000,0,50,1
45,75000,0,100,0
Output:
Accuracy: 0.5
Prediction: [0]
Result:
Thus the Python program for implementing the Naïve Bayes model was developed and the output was verified successfully.
Ex. No.4 BAYESIAN NETWORKS
Date:
Aim:
To write a Python program to implement Bayesian Networks.
Algorithm:
Step 1. Start by importing the required libraries from pybbn.
Step 2. Define the discrete probability distribution for the guest's initial choice of door.
Step 3. Define the discrete probability distribution for the prize door.
Step 4. Define the conditional probability table for the door that Monty picks based on the guest's choice and the prize door.
Step 5. Create BbnNode objects for the guest, prize, and Monty's choice.
Step 6. Create a Bayesian Network object and add the nodes and the edges between them.
Step 7. Convert the network to a join tree using InferenceController.apply().
Step 8. Use the join tree's potentials to calculate the beliefs for each node.
Step 9. Print the marginal probabilities of the nodes.
Step 10. Stop
Program:
from pybbn.graph.dag import Bbn
from pybbn.graph.edge import Edge, EdgeType
from pybbn.graph.jointree import EvidenceBuilder
from pybbn.graph.node import BbnNode
from pybbn.graph.variable import Variable
from pybbn.pptc.inferencecontroller import InferenceController

# the guest's initial door selection is completely random
guest = BbnNode(Variable(0, 'guest', ['A', 'B', 'C']), [1.0/3, 1.0/3, 1.0/3])

# the door the prize is behind is also completely random
prize = BbnNode(Variable(1, 'prize', ['A', 'B', 'C']), [1.0/3, 1.0/3, 1.0/3])

# monty is dependent on both guest and prize
monty = BbnNode(Variable(2, 'monty', ['A', 'B', 'C']),
                [0, 0.5, 0.5,    # A, A
                 0, 0, 1,        # A, B
                 0, 1, 0,        # A, C
                 0, 0, 1,        # B, A
                 0.5, 0, 0.5,    # B, B
                 1, 0, 0,        # B, C
                 0, 1, 0,        # C, A
                 1, 0, 0,        # C, B
                 0.5, 0.5, 0])   # C, C

# Create Network
bbn = Bbn() \
    .add_node(guest) \
    .add_node(prize) \
    .add_node(monty) \
    .add_edge(Edge(guest, monty, EdgeType.DIRECTED)) \
    .add_edge(Edge(prize, monty, EdgeType.DIRECTED))

# Convert the BBN to a join tree
join_tree = InferenceController.apply(bbn)

# Define a function for printing marginal probabilities
def print_probs():
    for node in join_tree.get_bbn_nodes():
        potential = join_tree.get_bbn_potential(node)
        print("Node:", node)
        print("Values:")
        print(potential)
        print('')

print_probs()
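The EvidenceBuilder import above can also be used to condition the network on an observation. As a sketch (an assumed extension of this exercise, following pybbn's documented usage), the following sets the evidence that the guest picked door A and prints the updated beliefs:

# build evidence: the guest chose door 'A'
ev = EvidenceBuilder() \
    .with_node(join_tree.get_bbn_node_by_name('guest')) \
    .with_evidence('A', 1.0) \
    .build()

# apply the observation and print the updated marginal probabilities
join_tree.set_observation(ev)
print_probs()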
Output:
Node: 1|prize|A,B,C
Values:
1=A|0.33333
1=B|0.33333
1=C|0.33333
Node: 2|monty|A,B,C
Values:
2=A|0.33333
2=B|0.33333
2=C|0.33333
Node: 0|guest|A,B,C
Values:
0=A|0.33333
0=B|0.33333
0=C|0.33333
Result:
Thus, the Python program for implementing Bayesian Networks was successfully developed and the output was verified.
Ex. No. 5 REGRESSION MODEL
Date:
Aim:
To write a Python program to build a simple linear regression model.
Algorithm:
Step 1. Import necessary libraries: numpy, pandas, matplotlib.pyplot, LinearRegression, mean_squared_error, and r2_score.
Step 2. Create numpy arrays for waist and weight values and store them in separate variables.
Step 3. Create a pandas DataFrame with waist and weight columns using the numpy arrays.
Step 4. Extract input (X) and output (y) variables from the DataFrame.
Step 5. Create a LinearRegression model object.
Step 6. Fit the LinearRegression model to the input and output variables.
Step 7. Create a new DataFrame holding a new waist value.
Step 8. Use the predict() method of the LinearRegression model to predict the weight for the new waist value.
Step 9. Calculate the mean squared error and R-squared values using mean_squared_error() and r2_score().
Step 10. Plot the actual and predicted values using matplotlib.pyplot.scatter() and matplotlib.pyplot.plot().
Program:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

# import sample data using pandas
waist = np.array([70, 71, 72, 73, 74, 75, 76, 77, 78, 79])
weight = np.array([55, 57, 59, 61, 63, 65, 67, 69, 71, 73])
data = pd.DataFrame({'waist': waist, 'weight': weight})

# extract input and output variables
X = data[['waist']]
y = data['weight']

# fit a linear regression model
model = LinearRegression()
model.fit(X, y)

# make predictions on new data
new_data = pd.DataFrame({'waist': [80]})
predicted_weight = model.predict(new_data[['waist']])
print("Predicted weight for new waist value:", int(predicted_weight[0]))

# calculate MSE and R-squared
y_pred = model.predict(X)
mse = mean_squared_error(y, y_pred)
print('Mean Squared Error:', mse)
r2 = r2_score(y, y_pred)
print('R-squared:', r2)

# plot the actual and predicted values
plt.scatter(X, y, marker='*', edgecolors='g')
plt.scatter(new_data, predicted_weight, marker='*', edgecolors='r')
plt.plot(X, y_pred, color='y')
plt.xlabel('Waist (cm)')
plt.ylabel('Weight (kg)')
plt.title('Linear Regression Model')
plt.show()
Output:
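[Printed prediction, Mean Squared Error, and R-squared values, followed by a scatter plot of the actual values (green) and the predicted value for the new waist measurement (red) with the fitted regression line.]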
Result:
Thus the Python program to build a simple linear Regression model was developed successfully.
Ex. No. 6 DECISION TREE AND RANDOM FOREST
Date:
Aim:
To write a Python program to build a decision tree and a random forest.
Algorithm:
Step 1. Import necessary libraries: numpy, matplotlib, seaborn, pandas, train_test_split, LabelEncoder, DecisionTreeClassifier, plot_tree, and RandomForestClassifier.
Step 2. Read the data from 'flowers.csv' into a pandas DataFrame.
Step 3. Extract the features into an array X, and the target variable into an array y.
Step 4. Encode the target variable using the LabelEncoder.
Step 5. Split the data into training and testing sets using the train_test_split function.
Step 6. Create a DecisionTreeClassifier object, fit the model to the training data, and visualize the decision tree using plot_tree.
Step 7. Create a RandomForestClassifier object with 100 estimators, fit the model to the training data, and visualize the random forest by displaying 6 trees.
Step 8. Print the accuracy of the decision tree and random forest models using the score method on the test data.
Program:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.ensemble import RandomForestClassifier

# read the data
data = pd.read_csv('flowers.csv')
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values

# encode the labels
le = LabelEncoder()
y = le.fit_transform(y)

# split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# create and fit a decision tree model
tree = DecisionTreeClassifier().fit(X_train, y_train)

# visualize the decision tree
plt.figure(figsize=(10, 6))
plot_tree(tree, filled=True)
plt.title("Decision Tree")
plt.show()

# create and fit a random forest model
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# visualize the random forest by displaying 6 trees
plt.figure(figsize=(20, 12))
for i, tree_in_forest in enumerate(rf.estimators_[:6]):
    plt.subplot(2, 3, i + 1)
    plt.axis('off')
    plot_tree(tree_in_forest, filled=True, rounded=True)
    plt.title("Tree " + str(i + 1))
plt.suptitle("Random Forest")
plt.show()

# calculate and print the accuracy of decision tree and random forest
print("Accuracy of decision tree: {:.2f}".format(tree.score(X_test, y_test)))
print("Accuracy of random forest: {:.2f}".format(rf.score(X_test, y_test)))
Output:
Accuracy of decision tree: 0.50
Accuracy of random forest: 1.00
Result:
Thus the Python program to build a decision tree and random forest was developed successfully.
Ex. No.7 SVM MODELS
Date:
Aim:
To write a Python program to build an SVM model.
Algorithm:
Step 1. Import the necessary libraries (matplotlib.pyplot, numpy, and svm from sklearn).
Step 2. Define the features (X) and labels (y) for the fruit dataset.
Step 3. Create an SVM classifier with a linear kernel using svm.SVC(kernel='linear').
Step 4. Train the classifier on the fruit data using clf.fit(X, y).
Step 5. Plot the fruits and decision boundary using plt.scatter(X[:, 0], X[:, 1], c=colors), where colors is a list of colors assigned to each fruit based on its label.
Step 6. Create a meshgrid to evaluate the decision function using np.meshgrid(np.linspace(xlim[0], xlim[1], 100), np.linspace(ylim[0], ylim[1], 100)).
Step 7. Use the decision function to create a contour plot of the decision boundary and margins using ax.contour(xx, yy, Z, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--']).
Step 8. Show the plot using plt.show().
Program:
import matplotlib.pyplot as plt
import numpy as np
from sklearn import svm

# Define the fruit features (size and color)
X = np.array([[5, 2], [4, 3], [1, 7], [2, 6], [5, 5], [7, 1], [6, 2], [5, 3], [3, 6], [2, 7],
              [6, 3], [3, 3], [1, 5], [7, 3], [6, 5], [2, 5], [3, 2], [7, 5], [1, 3], [4, 2]])

# Define the fruit labels (0 = apples, 1 = oranges)
y = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0])

# Create an SVM classifier with a linear kernel
clf = svm.SVC(kernel='linear')

# Train the classifier on the fruit data
clf.fit(X, y)

# Plot the fruits and decision boundary
colors = ['red' if label == 0 else 'yellow' for label in y]
plt.scatter(X[:, 0], X[:, 1], c=colors)
ax = plt.gca()
ax.set_xlabel('Size')
ax.set_ylabel('Color')
xlim = ax.get_xlim()
ylim = ax.get_ylim()

# Create a meshgrid to evaluate the decision function
xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 100), np.linspace(ylim[0], ylim[1], 100))
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)

# Plot the decision boundary and margins
ax.contour(xx, yy, Z, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--'])
plt.show()
Output:
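[Scatter plot of the fruits (red apples, yellow oranges) with the SVM decision boundary and margins drawn as contour lines.]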
Result:
Thus, the Python program to build an SVM model was developed, and the output was successfully verified.
Ex. No: 8 ENSEMBLE METHODS
Date:
Aim:
To implement the ensembling technique.
Algorithm:
Step 1: Split the training dataset into train, test and validation datasets.
Step 2: Fit the base models on the train dataset.
Step 3: Make predictions with the base models on the validation dataset.
Step 4: These predictions are used as features to build a second-level model.
Step 5: This model is used to make predictions on the test and meta-features (see the sketch below).
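The steps above describe stacking. As an illustrative sketch of that idea (an assumption added for clarity; the program for this exercise instead combines predictions by majority vote), scikit-learn's StackingClassifier builds the second-level model automatically:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# a small synthetic dataset split into train and test sets
X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# base (first-level) models whose predictions become meta-features
base_models = [('rf', RandomForestClassifier(random_state=0)),
               ('dt', DecisionTreeClassifier(random_state=0))]

# the second-level model is fit on the base models' predictions
stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
print('Stacking accuracy:', stack.score(X_test, y_test))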
Program:
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier

# create the dataset
X = [[0, 0], [1, 1]]
y = [0, 1]

# create the random forest classifier
rf_clf = RandomForestClassifier()

# create the adaboost classifier
adb_clf = AdaBoostClassifier()

# create the gradient boosting classifier
gb_clf = GradientBoostingClassifier()

# fit the classifiers to the dataset
rf_clf.fit(X, y)
adb_clf.fit(X, y)
gb_clf.fit(X, y)

# make predictions
rf_preds = rf_clf.predict(X)
adb_preds = adb_clf.predict(X)
gb_preds = gb_clf.predict(X)

# combine the predictions by majority vote
ensemble_preds = []
for i in range(len(X)):
    preds = [rf_preds[i], adb_preds[i], gb_preds[i]]
    ensemble_preds.append(max(set(preds), key=preds.count))

# print the ensemble predictions
print(ensemble_preds)
Output:
[0, 1]
Result:
Thus the program for the ensemble method was executed successfully and verified.
Ex. No: 9 IMPLEMENT CLUSTERING ALGORITHMS
Date:
Aim:
To find patterns and structure in data that may not be immediately apparent, and to discover groups of similar data points.
Algorithm:
Step 1: Data preparation
Program:
from numpy import unique, where
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from matplotlib import pyplot

# define a synthetic two-feature dataset
X, y = make_classification(n_samples=1000, n_features=2, n_informative=2, n_redundant=0,
                           n_clusters_per_class=1, random_state=4)
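# fit a clustering model (assumption: k-means with two clusters; any
# other sklearn clustering algorithm could be substituted here)
model = KMeans(n_clusters=2, n_init=10)
yhat = model.fit_predict(X)

# create a scatter plot of the samples in each discovered cluster
for cluster in unique(yhat):
    row_ix = where(yhat == cluster)
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])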
# show the plot
pyplot.show()
Output:
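[Scatter plot of the dataset coloured by the discovered clusters.]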
Result:
Thus the program for the clustering method was executed successfully and verified.
Ex. No: 10 IMPLEMENT EM FOR BAYESIAN NETWORKS
Date:
Aim:
To implement the Expectation Maximization (EM) algorithm for Bayesian networks.
Algorithm:
Step 1: Initialize the model parameters with starting values.
Step 2: Expectation Step - This step is used to estimate the values of the missing values in the data. It uses the observed data to basically guess the values of the missing data.
Step 3: Maximization Step - This step generates complete data after the Expectation step updates the missing values, and re-estimates the model parameters from it.
Step 4: Repeat Steps 2 and 3 until the parameter estimates converge.
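Since the Expectation and Maximization steps are easiest to see on a concrete latent-variable model, a minimal sketch follows. It is an illustrative assumption rather than this exercise's fixed listing: it fits a two-component Gaussian mixture with scikit-learn's GaussianMixture, which alternates the E and M steps internally.

import numpy as np
from sklearn.mixture import GaussianMixture

# generate data from two hidden (latent) clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

# fit the mixture with EM: the E-step estimates cluster responsibilities
# (the "missing" assignments), the M-step re-estimates the parameters
model = GaussianMixture(n_components=2, max_iter=100, random_state=0)
model.fit(X)

print('Means:\n', model.means_)
print('Converged:', model.converged_)
print('EM iterations:', model.n_iter_)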
Result:
Thus the program for the Expectation Maximization algorithm was executed and verified.
Ex. No: 11 SIMPLE NEURAL NETWORK MODEL
Date:
Aim:
To implement the neural network model for the given numpy array.
Algorithm:
Step 1: Use numpy arrays to store inputs (x) and outputs (y).
Step 2: Create an MLPClassifier model and train it on the dataset.
Step 3: Make predictions on new data with the trained model.
Step 4: Visualize the results using a scatter plot.
Program:
# import libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier

# create a sample dataset
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# create and train the model
model = MLPClassifier(hidden_layer_sizes=(2,), activation='relu', solver='lbfgs')
model.fit(x, y)

# make predictions
predictions = model.predict([[2, 2], [2, 3]])
print(predictions)

# visualize the results
plt.scatter(x[:, 0], x[:, 1], c=y)
plt.xlabel('x1')
plt.ylabel('x2')
plt.title('Neural Network Model')
plt.show()
Output:
[0 0]
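[Scatter plot of the four inputs coloured by their class labels.]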
Result:
Thus the program for the neural network was executed and verified successfully.
Ex. No: 12 BUILD DEEP LEARNING NN MODELS
Date:
Aim:
To implement and build a neural network model which predicts data using the given pre-trained models.
Algorithm:
Step 1: Import the required packages.
Step 2: Prepare the training and test data.
Step 3: Define the model and add the required layers.
Step 4: Compile the model with a loss function, an optimizer, and metrics.
Step 5: Train the model on the training data for the chosen number of epochs.
Step 6: Evaluate the model on the test data.
Step 7: Print the test loss and test accuracy.
Step 8: Use the model to make predictions.
Step 9: Save the model.
Program:
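The listing below is a minimal sketch consistent with the training log in the Output section: it assumes randomly generated 10-class data (1000 training and 100 test samples, matching 32 training and 4 evaluation batches), 20 input features, a small fully connected network, batch size 32, and 10 epochs; the dataset and layer sizes are illustrative assumptions.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# create a random dataset (assumed: 1000 train / 100 test samples,
# 20 features, 10 classes)
x_train = np.random.random((1000, 20))
y_train = keras.utils.to_categorical(np.random.randint(10, size=(1000,)), 10)
x_test = np.random.random((100, 20))
y_test = keras.utils.to_categorical(np.random.randint(10, size=(100,)), 10)

# define the model and add the required layers
model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(64, activation='relu'),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax'),
])

# compile the model with a loss function, an optimizer, and metrics
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])

# train the model
model.fit(x_train, y_train, epochs=10, batch_size=32)

# evaluate the model on the test data
score = model.evaluate(x_test, y_test)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

# save the model
model.save('model.keras')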
Output:
Epoch 1/10
1/32 ━━━━━━━━━━━━━━━━━━━━ 20s 656ms/step - accuracy: 0.0625 - loss: 2.4216
28/32 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.0850 - loss: 2.3813
32/32 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.0862 - loss: 2.3768
Epoch 2/10
1/32 ━━━━━━━━━━━━━━━━━━━━ 0s 27ms/step - accuracy: 0.0625 - loss: 2.3771
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.0877 - loss: 2.3365
Epoch 3/10
1/32 ━━━━━━━━━━━━━━━━━━━━ 0s 26ms/step - accuracy: 0.0938 - loss: 2.2964
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.1047 - loss: 2.3198
Epoch 4/10
1/32 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step - accuracy: 0.1250 - loss: 2.2607
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.1015 - loss: 2.3173
Epoch 5/10
1/32 ━━━━━━━━━━━━━━━━━━━━ 0s 26ms/step - accuracy: 0.0625 - loss: 2.2883
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 981us/step - accuracy: 0.1038 - loss: 2.3060
Epoch 6/10
1/32 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step - accuracy: 0.0625 - loss: 2.3399
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.1002 - loss: 2.3144
Epoch 7/10
1/32 ━━━━━━━━━━━━━━━━━━━━ 0s 26ms/step - accuracy: 0.1250 - loss: 2.2408
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.0961 - loss: 2.2996
Epoch 8/10
1/32 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step - accuracy: 0.0000e+00 - loss: 2.2885
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.0938 - loss: 2.3019
Epoch 9/10
1/32 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step - accuracy: 0.1250 - loss: 2.3026
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.1259 - loss: 2.2870
Epoch 10/10
1/32 ━━━━━━━━━━━━━━━━━━━━ 0s 26ms/step - accuracy: 0.1250 - loss: 2.3257
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.1072 - loss: 2.3022
1/4 ━━━━━━━━━━━━━━━━━━━━ 0s 106ms/step - accuracy: 0.1250 - loss: 2.2888
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.0954 - loss: 2.3295
Test loss: 2.3462753295898438
Test accuracy: 0.09000000357627869
Result:
Thus the program for deep learning in neural networks was executed and the output was successfully verified.