AIML Lab Manual

Ex. No. 1.A UNINFORMED SEARCH ALGORITHM - BFS

Date:

Aim:
To write a Python program to implement Breadth First Search (BFS).

Algorithm:
Step 1. Start

Step 2. Put any one of the graph's vertices at the back of the queue.

Step 3. Take the front item of the queue and add it to the visited list.

Step 4. Create a list of that vertex's adjacent nodes. Add those which are not within the visited list to the rear of the queue.

Step 5. Continue steps 3 and 4 till the queue is empty.

Step 6. Stop

Program:
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = []  # List for visited nodes
queue = []    # Initialize a queue

def bfs(visited, graph, node):  # function for BFS
    visited.append(node)
    queue.append(node)
    while queue:  # Creating loop to visit each node
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5')  # function calling
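The list-based queue above pops from the front in O(n) time. A variant using collections.deque (a sketch, not part of the original listing) gives O(1) pops and returns the visit order instead of printing it:

```python
from collections import deque

graph = {'5': ['3', '7'], '3': ['2', '4'], '7': ['8'],
         '2': [], '4': ['8'], '8': []}

def bfs(graph, start):
    visited = [start]       # order in which nodes are first seen
    queue = deque([start])  # deque supports O(1) pops from the left
    while queue:
        m = queue.popleft()
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
    return visited

print(bfs(graph, '5'))  # ['5', '3', '7', '2', '4', '8']
```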


Output:
Following is the Breadth-First Search
5 3 7 2 4 8

Result:
Thus the Python program to implement Breadth First Search (BFS) was developed successfully.
Ex. No. 1.B UNINFORMED SEARCH ALGORITHM - DFS

Date:

Aim:
To write a Python program to implement Depth First Search (DFS).

Algorithm:
Step 1. Start

Step 2. Put any one of the graph's vertices on top of the stack.

Step 3. After that, take the top item of the stack and add it to the visited list.

Step 4. Next, create a list of the adjacent nodes of that vertex. Add the ones which aren't in the visited list to the top of the stack.

Step 5. Repeat steps 3 and 4 until the stack is empty.

Step 6. Stop

Program:
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = set()  # Set to keep track of visited nodes of the graph

def dfs(visited, graph, node):  # function for DFS
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')
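The algorithm above describes an explicit stack, while the listing uses recursion (which relies on the call stack). An iterative sketch that follows the stated steps literally (an alternative, not part of the original listing) produces the same visit order:

```python
graph = {'5': ['3', '7'], '3': ['2', '4'], '7': ['8'],
         '2': [], '4': ['8'], '8': []}

def dfs_iterative(graph, start):
    visited = []
    stack = [start]          # explicit stack, as in the algorithm
    while stack:
        node = stack.pop()   # take the top item of the stack
        if node not in visited:
            visited.append(node)
            # push neighbours in reverse so they are popped in listed order
            for neighbour in reversed(graph[node]):
                if neighbour not in visited:
                    stack.append(neighbour)
    return visited

print(dfs_iterative(graph, '5'))  # ['5', '3', '2', '4', '8', '7']
```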


Output:
Following is the Depth-First Search
5
3
2
4
8
7

Result:
Thus the Python program to implement Depth First Search (DFS) was developed successfully.
Ex. No. 2.A INFORMED SEARCH ALGORITHM - A* SEARCH

Date:

Aim:
To write a Python program to implement the A* search algorithm.

Algorithm:
Step 1: Create a priority queue and push the starting node onto the queue. Initialize the minimum value (min_index) to location 0.

Step 2: Create a set to store the visited nodes.

Step 3: Repeat the following steps until the queue is empty:

3.1: Pop the node with the lowest cost + heuristic from the queue.

3.2: If the current node is the goal, return the path to the goal.

3.3: If the current node has already been visited, skip it.

3.4: Mark the current node as visited.

3.5: Expand the current node and add its neighbors to the queue.

Step 4: If the queue is empty and the goal has not been found, return None (no path found).

Step 5: Stop

Program:
def aStarAlgo(start_node, stop_node):
    open_set = set(start_node)
    closed_set = set()
    g = {}  # store distance from starting node
    parents = {}  # parents contains an adjacency map of all nodes
    # distance of starting node from itself is zero
    g[start_node] = 0
    # start_node is the root node, i.e. it has no parent nodes,
    # so start_node is set to its own parent node
    parents[start_node] = start_node
    while len(open_set) > 0:
        n = None
        # node with lowest f() is found
        for v in open_set:
            if n == None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v
        if n == stop_node or Graph_nodes[n] == None:
            pass
        else:
            for (m, weight) in get_neighbors(n):
                # nodes 'm' not in the open and closed sets are added to open,
                # and n is set as their parent
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                # for each node m, compare its distance from start, i.e. g(m),
                # to the distance from start through node n
                else:
                    if g[m] > g[n] + weight:
                        # update g(m)
                        g[m] = g[n] + weight
                        # change parent of m to n
                        parents[m] = n
                        # if m is in the closed set, remove it and add it to open
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)
        if n == None:
            print('Path does not exist!')
            return None
        # if the current node is the stop_node,
        # then we begin reconstructing the path from it to the start_node
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        # remove n from the open list and add it to the closed list,
        # because all of its neighbors were inspected
        open_set.remove(n)
        closed_set.add(n)
    print('Path does not exist!')
    return None

# define a function to return the neighbors of the passed node
# and their distances from it
def get_neighbors(v):
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None

# for simplicity we'll consider the heuristic distances given,
# and this function returns the heuristic distance for all nodes
def heuristic(n):
    H_dist = {
        'A': 11,
        'B': 6,
        'C': 99,
        'D': 1,
        'E': 7,
        'G': 0,
    }
    return H_dist[n]

# Describe your graph here
Graph_nodes = {
    'A': [('B', 2), ('E', 3)],
    'B': [('C', 1), ('G', 9)],
    'C': None,
    'E': [('D', 6)],
    'D': [('G', 1)],
}

aStarAlgo('A', 'G')
Output:
Path found: ['A', 'E', 'D', 'G']

Result:
Thus the Python program for A* Search was developed and the output was verified successfully.
Ex. No. 2.B IMPLEMENTATION OF INFORMED SEARCH ALGORITHMS
(MEMORY-BOUNDED A*)

Date:

Aim:
To write a Python program to implement the memory-bounded A*.

Algorithm:
Step 1: Start the program.

Step 2: Get the graph as user input.

Step 3: Create a function MA with parameters (start, goal, path, level, maxD).

Step 4: Get the depth limit as user input.

Step 5: Check for the goal node and return present or not present.

Step 6: Call the function with the required parameters.

Step 7: Stop the program and exit.

Program:
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'], 'D': ['H', 'J'],
         'E': ['J', 'K'], 'F': ['L', 'M'], 'G': ['N', 'O'], 'H': [], 'I': [],
         'J': [], 'K': [], 'L': [], 'M': [], 'N': [], 'O': []}

def MA(start, goal, path, level, maxD):
    print("\nCurrent level-->", level)
    print("Goal node testing for", start)
    path.append(start)
    if (start == goal):
        print("Goal test successful")
        return path
    else:
        print("Goal node test failed")
        if (level == maxD):
            return False
    print("\nExpanding the current node", start)
    for child in graph[start]:
        if MA(child, goal, path, level + 1, maxD):
            return path
        path.pop()
    return False

start = 'A'
goal = input("Enter the goal state:")
maxD = int(input("Enter the max depth limit:"))

print()
path = list()
res = MA(start, goal, path, 1, maxD)
if (res):
    print("Path to goal node available")
    print("path", path)
else:
    print("No path available for the goal node in the given depth limit")


Output:
Enter the goal state:J
Enter the max depth limit:5

Current level--> 1
Goal node testing for A
Goal node test failed

Expanding the current node A
Current level--> 2
Goal node testing for B
Goal node test failed

Expanding the current node B
Current level--> 3
Goal node testing for D
Goal node test failed

Expanding the current node D
Current level--> 4
Goal node testing for H
Goal node test failed

Expanding the current node H
Current level--> 4
Goal node testing for J
Goal test successful
Path to goal node available
path ['A', 'B', 'D', 'J']

Result:
Thus the program for implementing informed search algorithms - memory-bounded A* was verified successfully.
Ex. No.3 NAIVE BAYES MODEL

Date:

Aim:
To write a Python program to implement the Naïve Bayes model.

Algorithm:
Step 1. Load the libraries: import the required libraries such as pandas, numpy, and sklearn.

Step 2. Load the data into a pandas dataframe.

Step 3. Clean and preprocess the data as necessary. For example, you can handle missing values, convert categorical variables into numerical variables, and normalize the data.

Step 4. Split the data into training and test sets using the train_test_split function from scikit-learn.

Step 5. Train the Gaussian Naive Bayes model using the training data.

Step 6. Evaluate the performance of the model using the test data and the accuracy_score function from scikit-learn.

Step 7. Finally, you can use the trained model to make predictions on new data.

Program:
import pandas as pd
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load the data
df = pd.read_csv('data.csv')

# Split the data into training and test sets
X = df.drop('buy_computer', axis=1)
y = df['buy_computer']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Train the model
model = GaussianNB()
model.fit(X_train.values, y_train.values)

# Test the model
y_pred = model.predict(X_test.values)
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)

# Make a prediction on new data
new_data = np.array([[35, 60000, 1, 100]])
prediction = model.predict(new_data)
print("Prediction:", prediction)

Data.csv
age,income,student,credit_rating,buy_computer
30,45000,0,10,0
32,54000,0,100,0
35,61000,1,10,1
40,65000,0,50,1
45,75000,0,100,0
Output:
Accuracy: 0.5
Prediction: [0]

Result:
Thus the Python program for implementing the Naïve Bayes model was developed and the output was verified successfully.


Ex. No: 4 BAYESIAN NETWORK
Date:

Aim:
To implement the Bayesian network.

Algorithm:

Step 1. Start by importing the required classes from the pybbn library.

Step 2. Define the discrete probability distribution for the guest's initial choice of door.

Step 3. Define the discrete probability distribution for the prize door.

Step 4. Define the conditional probability table for the door that Monty picks based on the guest's choice and the prize door.

Step 5. Create BbnNode objects for the guest, the prize, and Monty's choice.

Step 6. Create a Bbn network object and add the nodes and the edges between them.

Step 7. Convert the network to a join tree with InferenceController to prepare for inference.

Step 8. Retrieve the potential of each node from the join tree to obtain its marginal beliefs.

Step 9. Display the beliefs for each node.

Step 10. Stop
Program:
from pybbn.graph.dag import Bbn
from pybbn.graph.edge import Edge, EdgeType
from pybbn.graph.jointree import EvidenceBuilder
from pybbn.graph.node import BbnNode
from pybbn.graph.variable import Variable
from pybbn.pptc.inferencecontroller import InferenceController

# the guest's initial door selection is completely random
guest = BbnNode(Variable(0, 'guest', ['A', 'B', 'C']), [1.0/3, 1.0/3, 1.0/3])
# the door the prize is behind is also completely random
prize = BbnNode(Variable(1, 'prize', ['A', 'B', 'C']), [1.0/3, 1.0/3, 1.0/3])
# monty is dependent on both guest and prize
monty = BbnNode(Variable(2, 'monty', ['A', 'B', 'C']), [0, 0.5, 0.5,  # A, A
                                                        0, 0, 1,      # A, B
                                                        0, 1, 0,      # A, C
                                                        0, 0, 1,      # B, A
                                                        0.5, 0, 0.5,  # B, B
                                                        1, 0, 0,      # B, C
                                                        0, 1, 0,      # C, A
                                                        1, 0, 0,      # C, B
                                                        0.5, 0.5, 0   # C, C
                                                        ])

# Create Network
bbn = Bbn() \
    .add_node(guest) \
    .add_node(prize) \
    .add_node(monty) \
    .add_edge(Edge(guest, monty, EdgeType.DIRECTED)) \
    .add_edge(Edge(prize, monty, EdgeType.DIRECTED))

# Convert the BBN to a join tree
join_tree = InferenceController.apply(bbn)

# Define a function for printing marginal probabilities
def print_probs():
    for node in join_tree.get_bbn_nodes():
        potential = join_tree.get_bbn_potential(node)
        print("Node:", node)
        print("Values:")
        print(potential)
        print('')

print_probs()
Output:
Node: 1|prize|A,B,C
Values:
1=A|0.33333
1=B|0.33333
1=C|0.33333

Node: 2|monty|A,B,C
Values:
2=A|0.33333
2=B|0.33333
2=C|0.33333

Node: 0|guest|A,B,C
Values:
0=A|0.33333
0=B|0.33333
0=C|0.33333

Result:
Thus, the Python program for implementing Bayesian Networks was successfully developed and the output was verified.
Ex. No. 5 REGRESSION MODEL
Date:

Aim:
To write a Python program to build Regression models.

Algorithm:

Step 1. Import necessary libraries: numpy, pandas, matplotlib.pyplot, LinearRegression, mean_squared_error, and r2_score.

Step 2. Create numpy arrays for waist and weight values and store them in separate variables.

Step 3. Create a pandas DataFrame with waist and weight columns using the numpy arrays.

Step 4. Extract input (X) and output (y) variables from the DataFrame.

Step 5. Create an instance of the LinearRegression model.

Step 6. Fit the LinearRegression model to the input and output variables.

Step 7. Create a new DataFrame with a single value of waist.

Step 8. Use the predict() method of the LinearRegression model to predict the weight for the new waist value.

Step 9. Calculate the mean squared error and R-squared values using the mean_squared_error() and r2_score() functions respectively.

Step 10. Plot the actual and predicted values using the matplotlib.pyplot.scatter() and matplotlib.pyplot.plot() functions.

Program:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

# import sample data using pandas
waist = np.array([70, 71, 72, 73, 74, 75, 76, 77, 78, 79])
weight = np.array([55, 57, 59, 61, 63, 65, 67, 69, 71, 73])
data = pd.DataFrame({'waist': waist, 'weight': weight})

# extract input and output variables
X = data[['waist']]
y = data['weight']

# fit a linear regression model
model = LinearRegression()
model.fit(X, y)

# make predictions on new data
new_data = pd.DataFrame({'waist': [80]})
predicted_weight = model.predict(new_data[['waist']])
print("Predicted weight for new waist value:", int(predicted_weight))

# calculate MSE and R-squared
y_pred = model.predict(X)
mse = mean_squared_error(y, y_pred)
print('Mean Squared Error:', mse)
r2 = r2_score(y, y_pred)
print('R-squared:', r2)

# plot the actual and predicted values
plt.scatter(X, y, marker='*', edgecolors='g')
plt.scatter(new_data, predicted_weight, marker='*', edgecolors='r')
plt.plot(X, y_pred, color='y')
plt.xlabel('Waist (cm)')
plt.ylabel('Weight (kg)')
plt.title('Linear Regression Model')
plt.show()
Output:
Predicted weight for new waist value: 75

Mean Squared Error: 0.0
R-squared: 1.0

Result:
Thus the Python program to build a simple linear Regression model was developed successfully.
Ex. No. 6 DECISION TREE AND RANDOM FOREST
Date:

Aim:
To write a Python program to build a decision tree and random forest.
Algorithm:
Step 1. Import necessary libraries: numpy, matplotlib, seaborn, pandas, train_test_split, LabelEncoder, DecisionTreeClassifier, plot_tree, and RandomForestClassifier.

Step 2. Read the data from 'flowers.csv' into a pandas DataFrame.

Step 3. Extract the features into an array X, and the target variable into an array y.

Step 4. Encode the target variable using the LabelEncoder.

Step 5. Split the data into training and testing sets using the train_test_split function.

Step 6. Create a DecisionTreeClassifier object, fit the model to the training data, and visualize the decision tree using plot_tree.

Step 7. Create a RandomForestClassifier object with 100 estimators, fit the model to the training data, and visualize the random forest by displaying 6 trees.

Step 8. Print the accuracy of the decision tree and random forest models using the score method on the test data.
Program:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.ensemble import RandomForestClassifier

# read the data
data = pd.read_csv('flowers.csv')
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values

# encode the labels
le = LabelEncoder()
y = le.fit_transform(y)

# split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# create and fit a decision tree model
tree = DecisionTreeClassifier().fit(X_train, y_train)

# visualize the decision tree
plt.figure(figsize=(10, 6))
plot_tree(tree, filled=True)
plt.title("Decision Tree")
plt.show()

# create and fit a random forest model
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# visualize the random forest
plt.figure(figsize=(20, 12))
for i, tree_in_forest in enumerate(rf.estimators_[:6]):
    plt.subplot(2, 3, i + 1)
    plt.axis('off')
    plot_tree(tree_in_forest, filled=True, rounded=True)
    plt.title("Tree " + str(i + 1))
plt.suptitle("Random Forest")
plt.show()

# calculate and print the accuracy of decision tree and random forest
print("Accuracy of decision tree: {:.2f}".format(tree.score(X_test, y_test)))
print("Accuracy of random forest: {:.2f}".format(rf.score(X_test, y_test)))
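The listing depends on a local 'flowers.csv' file that is not reproduced in this manual. If that file is unavailable, scikit-learn's bundled iris dataset can stand in (a sketch under that assumption; the plotting steps are omitted for brevity):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# iris: 150 samples, 4 numeric features, 3 classes (already label-encoded)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("Accuracy of decision tree: {:.2f}".format(tree.score(X_test, y_test)))
print("Accuracy of random forest: {:.2f}".format(rf.score(X_test, y_test)))
```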
Output:
Accuracy of decision tree: 0.50
Accuracy of random forest: 1.00

Result:
Thus the Python program to build a decision tree and random forest was developed successfully.
Ex. No.7 SVM MODELS
Date:
Aim:
To write a Python program to build an SVM model.
Algorithm:
Step 1. Import the necessary libraries (matplotlib.pyplot, numpy, and svm from sklearn).

Step 2. Define the features (X) and labels (y) for the fruit dataset.

Step 3. Create an SVM classifier with a linear kernel using svm.SVC(kernel='linear').

Step 4. Train the classifier on the fruit data using clf.fit(X, y).

Step 5. Plot the fruits and decision boundary using plt.scatter(X[:, 0], X[:, 1], c=colors), where colors is a list of colors assigned to each fruit based on its label.

Step 6. Create a meshgrid to evaluate the decision function using np.meshgrid(np.linspace(xlim[0], xlim[1], 100), np.linspace(ylim[0], ylim[1], 100)).

Step 7. Use the decision function to create a contour plot of the decision boundary and margins using ax.contour(xx, yy, Z, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--']).

Step 8. Show the plot using plt.show().
Program:
import matplotlib.pyplot as plt
import numpy as np
from sklearn import svm

# Define the fruit features (size and color)
X = np.array([[5, 2], [4, 3], [1, 7], [2, 6], [5, 5], [7, 1], [6, 2], [5, 3],
              [3, 6], [2, 7], [6, 3], [3, 3], [1, 5], [7, 3], [6, 5], [2, 5],
              [3, 2], [7, 5], [1, 3], [4, 2]])
# Define the fruit labels (0 = apples, 1 = oranges)
y = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0])

# Create an SVM classifier with a linear kernel
clf = svm.SVC(kernel='linear')

# Train the classifier on the fruit data
clf.fit(X, y)

# Plot the fruits and decision boundary
colors = ['red' if label == 0 else 'yellow' for label in y]
plt.scatter(X[:, 0], X[:, 1], c=colors)
ax = plt.gca()
ax.set_xlabel('Size')
ax.set_ylabel('Color')
xlim = ax.get_xlim()
ylim = ax.get_ylim()

# Create a meshgrid to evaluate the decision function
xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 100), np.linspace(ylim[0], ylim[1], 100))
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)

# Plot the decision boundary and margins
ax.contour(xx, yy, Z, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--'])
plt.show()
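Once trained, the classifier can also label new fruit. A small usage sketch on a reduced version of the same data (the new feature values here are made up for illustration):

```python
import numpy as np
from sklearn import svm

# four linearly separable fruit samples: large/low-color apples (0),
# small/high-color oranges (1)
X = np.array([[5, 2], [4, 3], [1, 7], [2, 6]])
y = np.array([0, 0, 1, 1])

clf = svm.SVC(kernel='linear').fit(X, y)
print(clf.predict([[6, 1], [1, 6]]))  # [0 1]: apple, then orange
```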
Output:

Result:
Thus, the Python program to build an SVM model was developed, and the output was successfully verified.
Ex. No: 8 ENSEMBLE METHODS
Date:

Aim:
To implement the ensembling technique.

Algorithm:
Step 1: Split the training dataset into train, test and validation datasets.

Step 2: Fit all the base models using the train dataset.

Step 3: Make predictions on the validation and test datasets.

Step 4: These predictions are used as features to build a second-level model.

Step 5: This model is used to make predictions on the test and meta-features.

Program:
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier

# create the dataset
X = [[0, 0], [1, 1]]
y = [0, 1]

# create the random forest classifier
rf_clf = RandomForestClassifier()
# create the adaboost classifier
adb_clf = AdaBoostClassifier()
# create the gradient boosting classifier
gb_clf = GradientBoostingClassifier()

# fit the classifiers to the dataset
rf_clf.fit(X, y)
adb_clf.fit(X, y)
gb_clf.fit(X, y)

# make predictions
rf_preds = rf_clf.predict(X)
adb_preds = adb_clf.predict(X)
gb_preds = gb_clf.predict(X)

# combine the predictions by majority vote
ensemble_preds = []
for i in range(len(X)):
    preds = [rf_preds[i], adb_preds[i], gb_preds[i]]
    ensemble_preds.append(max(set(preds), key=preds.count))

# print the ensemble predictions
print(ensemble_preds)
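The hand-rolled majority vote above can also be expressed with scikit-learn's built-in VotingClassifier (a sketch of an equivalent alternative; voting='hard' means majority vote on predicted labels):

```python
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              GradientBoostingClassifier, VotingClassifier)

X = [[0, 0], [1, 1]]
y = [0, 1]

# combine the three base models in a single hard-voting ensemble
voter = VotingClassifier(estimators=[
    ('rf', RandomForestClassifier(random_state=0)),
    ('adb', AdaBoostClassifier(random_state=0)),
    ('gb', GradientBoostingClassifier(random_state=0)),
], voting='hard')
voter.fit(X, y)
print(voter.predict(X))  # expected [0 1] on this toy data
```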
Output:
[0, 1]

Result:
Thus the program for the ensemble method was executed successfully and verified.
Ex. No: 9 IMPLEMENT CLUSTERING ALGORITHMS
Date:

Aim:
To find patterns and structure in data that may not be immediately apparent, and to discover relationships and associations between data points.

Algorithm:
Step 1: Data preparation

Step 2: Choosing a distance metric

Step 3: Choosing a clustering algorithm

Step 4: Choosing the number of clusters

Step 5: Cluster assignment

Step 6: Interpretation and evaluation

Program:
from numpy import where
from sklearn.datasets import make_classification
from matplotlib import pyplot

# define dataset
X, y = make_classification(n_samples=1000, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1, random_state=4)

# create scatter plot for samples from each class
for class_value in range(2):
    # get row indexes for samples with this class
    row_ix = where(y == class_value)
    # create scatter of these samples
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])

# show the plot
pyplot.show()
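The listing above only generates and plots the labelled data; it does not itself assign clusters. An actual clustering step (a sketch using k-means, not part of the original listing) could look like this:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans

# same synthetic dataset as above; the true labels are discarded
X, _ = make_classification(n_samples=1000, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1,
                           random_state=4)

# fit k-means with the chosen number of clusters (Step 4 of the algorithm)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=4)
labels = kmeans.fit_predict(X)   # cluster assignment for each sample (Step 5)

print(np.bincount(labels))       # number of samples assigned to each cluster
```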
Output:

Result:
Thus the program for the clustering method was executed successfully and verified.
Ex. No: 10 IMPLEMENT EM FOR BAYESIAN NETWORKS
Date:

Aim:
To implement EM for Bayesian networks.

Algorithm:

Step 1: Consider a set of starting parameters in incomplete data.

Step 2: Expectation Step – This step is used to estimate the missing values in the data. It uses the observed data to guess the values in the missing data.

Step 3: Maximization Step – This step generates complete data after the Expectation step updates the missing values in the data.

Step 4: Execute steps 2 and 3 until convergence is met.


Program:
from numpy import hstack
from numpy.random import normal
from sklearn.mixture import GaussianMixture

# generate a sample
sample1 = normal(loc=20, scale=5, size=4000)
sample2 = normal(loc=40, scale=5, size=8000)
sample = hstack((sample1, sample2))

# reshape into a table with one column
sample = sample.reshape((len(sample), 1))

# fit model
model = GaussianMixture(n_components=2, init_params='random')
model.fit(sample)

# predict latent values
yhat = model.predict(sample)

# check latent value for first few points
print(yhat[:80])
# check latent value for last few points
print(yhat[-80:])
Output:
[1 1 0 0 0 0 0 0 1 1 1 1 1 0 1 0 1 0 1 1 1 0 1 0 1 1 1 1 0 0 0 0 0 0 1 1 1
 1 1 1 1 0 0 1 1 1 0 0 1 1 0 1 0 0 1 0 0 1 0 1 1 0 1 0 1 1 0 0 1 0 1 0 0 1
 1 1 1 0 0 1]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0]

Result:

Thus the program for the Expectation Maximization Algorithm was executed and verified.
Ex. No: 11 SIMPLE NEURAL NETWORK MODEL
Date:

Aim:
To implement the neural network model for the given numpy array.

Algorithm:
Step 1: Use numpy arrays to store inputs (x) and outputs (y).

Step 2: Define the network model and its arguments.

Step 3: Set the number of neurons/nodes for each layer.

Step 4: Compile the model and calculate its accuracy.

Step 5: Print a summary of the model.

Program:
# import libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier

# create a sample dataset
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# create and train the model
model = MLPClassifier(hidden_layer_sizes=(2), activation='relu', solver='lbfgs')
model.fit(x, y)

# make predictions
predictions = model.predict([[2, 2], [2, 3]])
print(predictions)

# visualize the results
plt.scatter(x[:, 0], x[:, 1], c=y)
plt.xlabel('x1')
plt.ylabel('x2')
plt.title('Neural Network Model')
plt.show()
Output:
[0 0]

Result:
Thus the program for the neural network was executed and verified successfully.
Ex. No: 12 BUILD DEEP LEARNING NN MODELS
Date:

Aim:
To implement and build a neural network model which predicts data using the given pre-trained models.

Algorithm:
Step 1: Import the required packages.

Step 2: Define the model architecture.

Step 3: Compile the model.

Step 4: Generate random training data and one-hot encode the labels.

Step 5: Train the model on the data.

Step 6: Evaluate the model on the test data.

Step 7: Save the model.

Step 8: Load the saved model.

Step 9: Make predictions on the test data using the loaded model.

Program:
# Import necessary libraries
import numpy as np
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense

# Define the neural network model
model = Sequential()
model.add(Dense(units=64, activation='relu', input_dim=100))
model.add(Dense(units=10, activation='softmax'))

# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])

# Generate some random data for training and testing
data = np.random.random((1000, 100))
labels = np.random.randint(10, size=(1000, 1))
one_hot_labels = to_categorical(labels, num_classes=10)

# Train the model on the data
model.fit(data, one_hot_labels, epochs=10, batch_size=32)

# Evaluate the model on a test set
test_data = np.random.random((100, 100))
test_labels = np.random.randint(10, size=(100, 1))
test_one_hot_labels = to_categorical(test_labels, num_classes=10)
loss_and_metrics = model.evaluate(test_data, test_one_hot_labels, batch_size=32)
print("Test loss:", loss_and_metrics[0])
print("Test accuracy:", loss_and_metrics[1])
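The algorithm's final steps (saving the model, reloading it, and predicting with the loaded model) do not appear in the listing above. A minimal self-contained sketch of those steps with Keras could look like this ('my_model.keras' is an assumed file name, and the model is rebuilt untrained here for brevity):

```python
import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense, Input

# rebuild a model of the same shape as in the listing (untrained)
model = Sequential([Input(shape=(100,)),
                    Dense(64, activation='relu'),
                    Dense(10, activation='softmax')])
model.compile(loss='categorical_crossentropy', optimizer='sgd')

model.save('my_model.keras')            # save the model to disk
loaded = load_model('my_model.keras')   # load it back

test_data = np.random.random((100, 100))
preds = loaded.predict(test_data)       # predict with the loaded model
print(preds.argmax(axis=1)[:10])        # predicted class for first 10 samples
```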
Output:
Epoch 1/10
32/32 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.0862 - loss: 2.3768
Epoch 2/10
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.0877 - loss: 2.3365
Epoch 3/10
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.1047 - loss: 2.3198
Epoch 4/10
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.1015 - loss: 2.3173
Epoch 5/10
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 981us/step - accuracy: 0.1038 - loss: 2.3060
Epoch 6/10
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.1002 - loss: 2.3144
Epoch 7/10
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.0961 - loss: 2.2996
Epoch 8/10
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.0938 - loss: 2.3019
Epoch 9/10
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.1259 - loss: 2.2870
Epoch 10/10
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.1072 - loss: 2.3022
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.0954 - loss: 2.3295
Test loss: 2.3462753295898438
Test accuracy: 0.09000000357627869

Result:
Thus the program for deep learning in neural networks was executed and the output was successfully verified.
