
Learn Python

There are 16 programs to explain various concepts in Python programming, such as:
Syntax,
Loops,
if-else,
Data Structures,
Strings,
File Handling,
Exception Handling,
Random Numbers,
Command Line Arguments,
Use of Libraries

Self learning resource

Tutorial on Python: A Byte of Python

1 Hello World

Learning: How to print and run python program

print ("Hello World")

Hello World

Assignment 1.1: WAP to print your name three times

x = "akshita"
for i in range(3):
    print(x)

akshita
akshita
akshita

2 Add numbers and Concatenate strings

Learning: How to declare variables, add, concatenate and print the result.

2.1 Add two numbers

a = 10
b = 220
c = a + b  # Add two numbers
print(a, " + ", b, " --> ", c)

10 + 220 --> 230

2.2 Concatenate two strings

a = "Bhagat"
b = " Singh"
c = a + b  # Concatenate two strings
print(a, " + ", b, " --> ", c)

Bhagat + Singh --> Bhagat Singh

2.3 Concatenate a string with a number

a = "Bhagat"
b = 100
c = a + str(b)  # Concatenate string with number
print(a, " + ", b, " --> ", c)

Bhagat + 100 --> Bhagat100

Assignment 2.1: WAP to add three numbers and print the result.
Assignment 2.2: WAP to concatenate three strings and print the result.

a = 100
b = 200
c = 500
f = a + b + c
print(f)

800

a = "Ram"
b = "krishan"
c = 100
s = a + b + str(c)
print(a, "+", b, "+", c, "-->", s)

Ram + krishan + 100 --> Ramkrishan100

3 Input from user

Learning: How to take input from the user

3.1 Input two strings from user and concatenate them

a = input("Enter First String: ")
b = input("Enter Second String: ")
c = a + b  # concatenate two strings
print(a, " + ", b, " --> ", c)

# Run the program with (1) two strings and (2) two numbers

Enter First String: Thapar
Enter Second String: Institute
Thapar + Institute --> ThaparInstitute

3.2 Input two numbers from user and add them

a = int(input("Enter First No: "))
b = int(input("Enter Second No: "))
c = a + b
print(a, " + ", b, " --> ", c)

Enter First No: 4
Enter Second No: 6
4 + 6 --> 10

4 Loop

Learning: The various loop constructs in Python.

4.1 While Loop

i = 1
while i <= 10:
    print(i)
    i = i + 1
1
2
3
4
5
6
7
8

9
10

4.2 Range Function

print("range(10) --> ", list(range(10)))
print("range(10,20) --> ", list(range(10,20)))
print("range(2,20,2) --> ", list(range(2,20,2)))
print("range(-10,-20,2) --> ", list(range(-10,-20,2)))
print("range(-10,-20,-2)--> ", list(range(-10,-20,-2)))

range(10) --> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
range(10,20) --> [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
range(2,20,2) --> [2, 4, 6, 8, 10, 12, 14, 16, 18]
range(-10,-20,2) --> []
range(-10,-20,-2)--> [-10, -12, -14, -16, -18]

4.3 For loop

4.3.1 For loop - Version 1

for i in range(0,10):
print ( i )

0
1
2
3
4
5
6
7
8
9

4.3.2 For loop - Version 2

for i in range(0,20,2):
print ( i )

0
2
4
6
8
10
12
14
16
18

4.3.3 For loop - Version 3


for i in range(0,-10,-1):
print ( i )

0
-1
-2
-3
-4
-5
-6
-7
-8

-9

4.4 Print table of 5

for i in range(1,11):
print (5," * ", i , " = ", i * 5)

5 * 1 = 5
5 * 2 = 10
5 * 3 = 15
5 * 4 = 20
5 * 5 = 25
5 * 6 = 30
5 * 7 = 35
5 * 8 = 40
5 * 9 = 45
5 * 10 = 50

4.5 Sum all numbers from 1 to 10

4.5.1 Version 1

s = 0
for i in range(1, 11):
    s = s + i
print("Sum is --> ", s)

Sum is --> 55

4.5.2 Version 2

print("Sum is --> ", sum(range(1,11)))

Sum is --> 55

Assignment 4.1: WAP to print the table of 7 and 9.
Assignment 4.2: WAP to print the table of n, where n is given by the user.
Assignment 4.3: WAP to add all the numbers from 1 to n, where n is given by the user.

for i in range(1,11):
print(7," * ", i ," = ", i * 7)

7 * 1 = 7
7 * 2 = 14
7 * 3 = 21
7 * 4 = 28
7 * 5 = 35
7 * 6 = 42
7 * 7 = 49
7 * 8 = 56
7 * 9 = 63
7 * 10 = 70

for i in range(1,11):
print(9," * ", i ," = ", i * 9)

9 * 1 = 9
9 * 2 = 18
9 * 3 = 27
9 * 4 = 36
9 * 5 = 45
9 * 6 = 54
9 * 7 = 63
9 * 8 = 72
9 * 9 = 81
9 * 10 = 90

n = 18  # table of n for Assignment 4.2; here n = 18 was used
for i in range(1, 11):
    print(n, " * ", i, " = ", i * n)

18 * 1 = 18
18 * 2 = 36
18 * 3 = 54
18 * 4 = 72
18 * 5 = 90
18 * 6 = 108
18 * 7 = 126
18 * 8 = 144
18 * 9 = 162
18 * 10 = 180

a = int(input("Enter First No: "))
b = int(input("Enter Second No: "))
c = a + b
print(a, " + ", b, " --> ", c)

Enter First No: 1


Enter Second No: 100

1 + 100 --> 101
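Assignment 4.3 asks for the sum of all numbers from 1 to n. A minimal sketch, with a fixed value standing in for the user's input:

```python
n = 100  # stand-in for: n = int(input("Enter n: "))
total = 0
for i in range(1, n + 1):
    total = total + i
print("Sum is --> ", total)
```

The same result comes from the built-in sum(range(1, n + 1)), as in section 4.5.2.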

5 If-Else - Conditional Checking

Learning: if-else condition

5.1 Input two numbers from user and compare them

a = int(input("Enter First No: "))
b = int(input("Enter Second No: "))
if a > b:
    print(a, " > ", b)
else:
    print(a, " < ", b)  # note: equal inputs also land here

Enter First No: 3
Enter Second No: 4
3 < 4

5.2 Check whether a number is odd or even

n = int(input("Enter a No: "))
if n % 2 == 0:
    print(n, " is even")
else:
    print(n, " is odd")

Enter a No: 5
5 is odd

5.3 Check whether a number is prime or not

n = int(input("Enter a No: "))
f = 0
for i in range(2, n//2 + 1):
    if n % i == 0:
        f = 1
        break

if f == 0:
    print("Prime")  # note: this simple check also reports 0 and 1 as prime
else:
    print("Not Prime")

Enter a No: 5
Prime

5.4 Conditional Checking - Compare strings

a = input("Enter First String : ")
b = input("Enter Second String: ")

if a == b:
    print("a == b")
elif a > b:
    print("a > b")
else:
    print("a < b")

Enter First String : user
Enter Second String: friendly
a > b

Assignment 5.1: WAP to find the max among three numbers input from the user. [Try the max() function]
Assignment 5.2: WAP to add all numbers divisible by 7 and 9 from 1 to n, where n is given by the user.

Assignment 5.3: WAP to add all prime numbers from 1 to n, where n is given by the user.

a = int(input("Enter First No: "))
b = int(input("Enter Second No: "))
c = int(input("Enter third No: "))
s = a + b + c
print(a, " + ", b, " + ", c, " --> ", s)

Enter First No: 512


Enter Second No: 20
Enter third No: 358
512 + 20 + 358 --> 890
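Assignment 5.1 hints at the max() built-in. A small sketch, with fixed values standing in for the three int(input(...)) calls:

```python
a, b, c = 512, 20, 358  # stand-ins for three int(input(...)) calls
biggest = max(a, b, c)  # max() takes any number of arguments
print("Max among", a, b, c, "--> ", biggest)
```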

n = int(input("Enter the value of n: "))

sum_divisible = 0
for i in range(1, n + 1):
    if i % 7 == 0 and i % 9 == 0:
        sum_divisible += i

# Print the result
print("The sum of all numbers from 1 to", n, "that are divisible by both 7 and 9 is", sum_divisible)

Enter the value of n: 100


The sum of all numbers from 1 to 100 that are divisible by both 7 and 9 is 63
def is_prime(num):
    """Check if a number is prime."""
    if num <= 1:
        return False
    if num <= 3:
        return True
    if num % 2 == 0 or num % 3 == 0:
        return False
    i = 5
    while i * i <= num:
        if num % i == 0 or num % (i + 2) == 0:
            return False
        i += 6
    return True

n = int(input("Enter the value of n: "))
sum_primes = 0
for i in range(1, n + 1):
    if is_prime(i):
        sum_primes += i

# Print the result
print("The sum of all prime numbers from 1 to", n, "is", sum_primes)

Enter the value of n: 501


The sum of all prime numbers from 1 to 501 is 21536

6 Functions

Learning: How to declare and call functions


6.1 Add two numbers

def Add(a,b):
c=a+b
return c

print ("Add(10,20) -->", Add(10,20))


print ("Add(20,50) -->", Add(20,50))
print ("Add(80,200) -->", Add(80,200))

Add(10,20) --> 30
Add(20,50) --> 70
Add(80,200) --> 280

6.2 Prime number

def IsPrime(n):
for i in range(2, n//2 + 1):
if n%i==0: return 0
return 1

print ("IsPrime(20) --> ", IsPrime(20))


print ("IsPrime(23) --> ", IsPrime(23))
print ("IsPrime(200) --> ", IsPrime(200))
print ("IsPrime(37) --> ", IsPrime(37))

IsPrime(20) --> 0
IsPrime(23) --> 1
IsPrime(200) --> 0
IsPrime(37) --> 1

6.3 Add 1 to n

def AddN(n):
    s = sum(range(n + 1))
    return s

print ("AddN(10) --> ", AddN(10))


print ("AddN(20) --> ", AddN(20))
print ("AddN(50) --> ", AddN(50))
print ("AddN(200) --> ", AddN(200))

AddN(10) --> 55
AddN(20) --> 210
AddN(50) --> 1275
AddN(200) --> 20100

Assignment 6.1: WAP using a function that adds all odd numbers from 1 to n, where n is given by the user.

Assignment 6.2: WAP using a function that adds all prime numbers from 1 to n, where n is given by the user.

def sum_of_odds(n):
    """Calculate the sum of all odd numbers from 1 to n."""
    total_sum = 0
    for i in range(1, n + 1, 2):
        total_sum += i
    return total_sum

n = int(input("Enter the value of n: "))
result = sum_of_odds(n)

# Print the result
print("The sum of all odd numbers from 1 to", n, "is", result)

Enter the value of n: 200


The sum of all odd numbers from 1 to 200 is 10000

def is_prime(num):
    """Check if a number is prime."""
    if num <= 1:
        return False
    if num <= 3:
        return True
    if num % 2 == 0 or num % 3 == 0:
        return False
    i = 5
    while i * i <= num:
        if num % i == 0 or num % (i + 2) == 0:
            return False
        i += 6
    return True

n = int(input("Enter the value of n: "))
sum_primes = 0
for i in range(1, n + 1):
    if is_prime(i):
        sum_primes += i
print("The sum of all prime numbers from 1 to", n, "is", sum_primes)

Enter the value of n: 100


The sum of all prime numbers from 1 to 100 is 1060

7 Math library

Learning: Use the math library

import math as m
print("exp(-200) --> ", m.exp(-200))      # exponential function
print("log(100,2) --> ", m.log(100, 2))   # log base 2
print("log(100,10) --> ", m.log(100, 10)) # log base 10
print("log10(100) --> ", m.log10(100))    # log base 10
print("m.cos(30) --> ", m.cos(30))        # cos (argument in radians, not degrees)
print("m.sin(30) --> ", m.sin(30))        # sin
print("m.tan(30) --> ", m.tan(30))        # tan
print("m.sqrt(324) --> ", m.sqrt(324))
print("m.ceil(89.9) --> ", m.ceil(89.9))
print("m.floor(89.9)--> ", m.floor(89.9))

exp(-200) --> 1.3838965267367376e-87
log(100,2) --> 6.643856189774725
log(100,10) --> 2.0
log10(100) --> 2.0
m.cos(30) --> 0.15425144988758405
m.sin(30) --> -0.9880316240928618
m.tan(30) --> -6.405331196646276
m.sqrt(324) --> 18.0
m.ceil(89.9) --> 90
m.floor(89.9)--> 89

8 Strings

Learning: How to handle string

8.1 Indexing in string

var = 'Hello World!'
print("var --> ", var)
print("var[0] --> ", var[0])
print("var[1:5] --> ", var[1:5])
print("var[:-5] --> ", var[:-5])

var --> Hello World!
var[0] --> H
var[1:5] --> ello
var[:-5] --> Hello W

8.2 String length, upper, lower

var = 'Hello World!'
print("String --> ", var)
print("Length --> : ", len(var))
print("Upper --> : ", var.upper())
print("Lower --> : ", var.lower())

String --> Hello World!
Length --> : 12
Upper --> : HELLO WORLD!
Lower --> : hello world!
8.3 String formatting

name = input("Enter your name: ")
age = int(input("Enter your age : "))
price = float(input("Enter the book price: "))
s = "\nYour name is %s, age is %d and book price is %f" % (name.upper(), age, price)
print(s)

Enter your name: Rahul
Enter your age : 20
Enter the book price: 1000

Your name is RAHUL, age is 20 and book price is 1000.000000
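The same output can be produced with an f-string (Python 3.6+). A small sketch, with fixed values standing in for the input() calls:

```python
name, age, price = "Rahul", 20, 1000.0  # stand-ins for the input() calls
# {price:f} formats a float just like the %f placeholder above
s = f"Your name is {name.upper()}, age is {age} and book price is {price:f}"
print(s)
```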

8.4 String in Triple Quotes

para_str = """This is a long string that is made up of
several lines and non-printable characters such as TAB ( \t )
and they will show up that way when displayed.
NEWLINEs within the string, whether explicitly given like
this within the brackets [ \n ], or just a NEWLINE within
the variable assignment will also show up. """
print(para_str)

This is a long string that is made up of
several lines and non-printable characters such as TAB ( )
and they will show up that way when displayed.
NEWLINEs within the string, whether explicitly given like
this within the brackets [
 ], or just a NEWLINE within
the variable assignment will also show up.

8.5 String strip

var = " Indian Army "
print("String --> ", var)
print("Length --> ", len(var))
print("var strip --> ", var.strip())
print("Length of var after strip --> ", len(var.strip()))

String --> Indian Army
Length --> 18
var strip --> Indian Army
Length of var after strip --> 13

8.6 String split

var = " Indian, Army "
print("String --> ", var)
print("Length --> ", len(var))
print("var split --> ", var.split())
print("var split --> ", var.split(' '))
print("var split --> ", var.split(','))

# Strip + Split
print("var split --> ", var.strip().split(','))

String --> Indian, Army
Length --> 19
var split --> ['Indian,', 'Army']
var split --> ['', 'Indian,', '', '', 'Army', '', '', '', '']
var split --> [' Indian', ' Army ']
var split --> ['Indian', ' Army']

8.7 Count in string

var = " Indian Army "
print("String --> ", var)
print("Count of ' ' --> ", var.count(' '))
print("Count of 'a' --> ", var.count('a'))
print("Count of 'an' --> ", var.count('an'))

String --> Indian Army
Count of ' ' --> 6
Count of 'a' --> 1
Count of 'an' --> 1

8.8 Reverse a String

var = "Indian Army"
print("String --> ", var)
print("var[::1] --> ", var[::1])
print("var[::2] --> ", var[::2])
print("var[::-1] --> ", var[::-1])
print("var[::-2] --> ", var[::-2])

var = var[::-1]
print("var after reverse --> ", var)

String --> Indian Army
var[::1] --> Indian Army
var[::2] --> Ida ry
var[::-1] --> ymrA naidnI
var[::-2] --> yr adI
var after reverse --> ymrA naidnI

8.9 Palindrome

s1 = "Indian Army"
s2 = "malayalam"
s3 = "madam"
s4 = "teacher"
print("s1 --> ", s1 == s1[::-1])
print("s2 --> ", s2 == s2[::-1])
print("s3 --> ", s3 == s3[::-1])
print("s4 --> ", s4 == s4[::-1])

s1 --> False
s2 --> True
s3 --> True
s4 --> False
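The slice comparison above is case- and space-sensitive. A sketch of a normalized check (the helper name is illustrative):

```python
def is_palindrome(s):
    t = s.replace(" ", "").lower()  # ignore spaces and case
    return t == t[::-1]

print(is_palindrome("Nurses Run"))   # palindrome once normalized
print(is_palindrome("Indian Army"))  # still not a palindrome
```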

9 Random Numbers/String

Learning: Generate Random Numbers/String

9.1 Generate random number between 0 and 1

import random as r
print(r.random())
print(r.random())
print(round(r.random(), 4))

0.4192513686511823
0.8252917405886293
0.1728

9.2 Generate random integer number

import random as r
print(r.randint(1, 100))
print(r.randint(1, 100))
print(r.randint(-10, 10))
print(r.randint(-10, 10))

70
34
6
6


9.3 Generate random real number

import random as r
print(r.uniform(1, 100))
print(r.uniform(1, 100))
print(r.uniform(-10, 10))
print(r.uniform(-10, 10))
print(round(r.uniform(-10, 10), 2))

96.94103499219948
10.376334587064289
-6.087968259755884
-2.942454747413537
-1.12

9.4 Select sample from a list of elements

import random as r

A = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

print(r.sample(A, 4))
print(r.sample(A, 2))
print(r.sample(range(0, 100), 2))
print(r.sample(range(-100, 100), 5))

[2, 9, 5, 7]
[5, 2]
[97, 12]
[-87, -24, 27, 25, 1]

9.5 Generate random string

import string as s
import random as r

print("String --> ", s.ascii_letters)

passwd = r.sample(s.ascii_letters, 6)
print("Selected Char --> ", passwd)

passwd1 = "".join(passwd)
print("passwd1 --> ", passwd1)

passwd2 = "+".join(passwd)
print("passwd2 --> ", passwd2)

passwd3 = "*".join(passwd)
print("passwd3 --> ", passwd3)

String --> abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ
Selected Char --> ['B', 'u', 'x', 'p', 'U', 'C']
passwd1 --> BuxpUC
passwd2 --> B+u+x+p+U+C
passwd3 --> B*u*x*p*U*C


9.6 Generate random digits

import string as s
import random as r

print("Digits --> ", s.digits)

otp = r.sample(s.digits, 5)
print("Selected num1 --> ", otp)
otp = "".join(otp)
print("otp1 --> ", otp)

otp = r.sample(s.digits, 5)
print("Selected num2 --> ", otp)
otp = "".join(otp)
print("otp2 --> ", otp)

otp = r.sample(s.digits, 5)
print("Selected num3 --> ", otp)
otp = "".join(otp)
print("otp3 --> ", otp)

Digits --> 0123456789
Selected num1 --> ['3', '5', '8', '7', '4']
otp1 --> 35874
Selected num2 --> ['6', '4', '8', '9', '3']
otp2 --> 64893
Selected num3 --> ['2', '8', '1', '0', '9']
otp3 --> 28109

9.7 Generate random string + digits

import string as s
import random as r

print("String + Digits --> ", s.ascii_letters + s.digits)

mixPasswd = r.sample(s.ascii_letters + s.digits, 5)
print("\nSelected Str1 --> ", mixPasswd)
mixPasswd = "".join(mixPasswd)
print("mixPasswd1 --> ", mixPasswd)

mixPasswd = r.sample(s.ascii_letters + s.digits, 6)
print("\nSelected Str2 --> ", mixPasswd)
mixPasswd = "".join(mixPasswd)
print("mixPasswd2 --> ", mixPasswd)

splChar = "#@!~%^&*()_+=-[]{}|"
mixPasswd = r.sample(splChar + s.ascii_letters + s.digits, 8)
print("\nSelected Str3 --> ", mixPasswd)
mixPasswd = "".join(mixPasswd)
print("mixPasswd3 --> ", mixPasswd)

String + Digits --> abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789

Selected Str1 --> ['B', 'K', 'J', 'k', '4']
mixPasswd1 --> BKJk4

Selected Str2 --> ['5', 'N', '2', 'a', '1', 't']
mixPasswd2 --> 5N2a1t

Selected Str3 --> ['c', '8', 'C', '+', '4', '~', '-', 'A']
mixPasswd3 --> c8C+4~-A
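random.sample is fine for demos, but for passwords that matter the standard library's secrets module is the usual tool, since it uses a cryptographically strong source. A minimal sketch:

```python
import secrets
import string

alphabet = string.ascii_letters + string.digits
# secrets.choice draws with replacement, so characters may repeat
pwd = "".join(secrets.choice(alphabet) for _ in range(8))
print("pwd --> ", pwd)
```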

10 Exception Handling

Learning: How to handle exceptions

10.1 Error Generation

for i in range(-5, 6):
    print("100/", i, " --> ", 100/i)

100/ -5 --> -20.0
100/ -4 --> -25.0
100/ -3 --> -33.333333333333336
100/ -2 --> -50.0
100/ -1 --> -100.0
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-39-5eb017879ab5> in <cell line: 1>()
1 for i in range(-5,6):
----> 2 print ("100/",i," --> ", 100/i)

ZeroDivisionError: division by zero

10.2 Exception handling for division by zero

for i in range(-5, 6):
    try:
        print("100/", i, " --> ", 100/i)
    except:
        print("error")

100/ -5 --> -20.0
100/ -4 --> -25.0
100/ -3 --> -33.333333333333336
100/ -2 --> -50.0
100/ -1 --> -100.0
error
100/ 1 --> 100.0
100/ 2 --> 50.0
100/ 3 --> 33.333333333333336
100/ 4 --> 25.0
100/ 5 --> 20.0

10.3 Exception handling for list index out of range

L = [1, 2, 3, 4, 5]

for i in range(8):
    try:
        print(i, " --> ", L[i])
    except:
        print("error")

0 --> 1
1 --> 2
2 --> 3
3 --> 4
4 --> 5
error
error
error

10.4 Exception handling for file not found

fileName = input("Enter File Name: ")
fp = open(fileName)  # Open the file in reading mode
fp.close()
print("Done")

Enter File Name: test.txt


---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-43-df3155df6860> in <cell line: 2>()
1 fileName=input("Enter File Name: ")
----> 2 fp=open(fileName) # Open the file in reading mode
3 fp.close()
4 print ("Done")

FileNotFoundError: [Errno 2] No such file or directory: 'test.txt'

10.5 Exception handling for file not found

fileName = input("Enter File Name: ")

try:
    fp = open(fileName)  # Open the file in reading mode
    fp.close()
except:
    print("Error !! \"%s\" File Not Found" % (fileName))
print("Done")

Enter File Name: test.txt

Error !! "test.txt" File Not Found
Done
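A more idiomatic variant catches FileNotFoundError specifically and uses a with block so the file is closed automatically. A sketch, with a fixed name standing in for the input() call:

```python
fileName = "test.txt"  # stand-in for input("Enter File Name: ")
try:
    with open(fileName) as fp:  # closed automatically on exit
        data = fp.read()
except FileNotFoundError:
    print('Error !! "%s" File Not Found' % fileName)
print("Done")
```

Catching the specific exception avoids silently swallowing unrelated errors, which a bare except would hide.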

11 Data Structure 1 - List

Learning: How to use list, add, delete and search in the list.
Note: Read more about list and try yourself

11.1 List Declaration

L = ["Pratham", 'Sharma', 3.14, 3]

print("Original List: ", L)
print("Number of elements in list: ", len(L))

Original List: ['Pratham', 'Sharma', 3.14, 3]
Number of elements in list: 4

11.2 List Iteration

L = ["Pratham", 'Sharma', 3.14, 3]
print("Original List: ", L)
i = 0
while i < len(L):
    print(L[i])
    i += 1

Original List: ['Pratham', 'Sharma', 3.14, 3]
Pratham
Sharma
3.14
3

11.3 List Iteration using for loop

L = ["Pratham", 'Sharma', 3.14, 3]
print("Original List: ", L)
for i in range(0, len(L)):
    print(L[i])

Original List: ['Pratham', 'Sharma', 3.14, 3]


Pratham
Sharma
3.14
3

11.4 List Iteration using for loop

L = ["Pratham", 'Sharma', 3.14, 3]
print("Original List --> ", L)
for s in L:
    print(s)

Original List --> ['Pratham', 'Sharma', 3.14, 3]


Pratham
Sharma
3.14
3
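When both the position and the value are needed, enumerate() is the idiomatic form:

```python
L = ["Pratham", 'Sharma', 3.14, 3]
for i, item in enumerate(L):  # yields (index, value) pairs
    print(i, " --> ", item)
```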

11.5 Adding and deleting from list

L = ["Pratham", 'Sharma', 3.14, 3]
print("Original List --> ", L)

L.append("Rahul")
print("List After Adding --> ", L)

del L[1]
print("List After Deleting --> ", L)

Original List --> ['Pratham', 'Sharma', 3.14, 3]


List After Adding --> ['Pratham', 'Sharma', 3.14, 3, 'Rahul']
List After Deleting --> ['Pratham', 3.14, 3, 'Rahul']

11.6 Sum/Average of List

L = [3, 6, 9, 12, 5, 3, 2]
print("Original List --> ", L)

print("Sum --> ", sum(L))
print("Average --> ", sum(L)/len(L))
print("Average --> ", sum(L)//len(L))

print("L * 3 --> ", L * 3)  # List repeated three times, elements unchanged
print("L + L --> ", L + L)  # Two copies of the list concatenated

Original List --> [3, 6, 9, 12, 5, 3, 2]


Sum --> 40
Average --> 5.714285714285714
Average --> 5
L * 3 --> [3, 6, 9, 12, 5, 3, 2, 3, 6, 9, 12, 5, 3, 2, 3, 6, 9, 12, 5, 3, 2]
L + L --> [3, 6, 9, 12, 5, 3, 2, 3, 6, 9, 12, 5, 3, 2]

11.7 Min/Max/Sort the list

L = [3, 6, 9, 12, 5, 3, 2]
print("Original List --> ", L)

print("max --> ", max(L))
print("min --> ", min(L))

print("\nBefore Sort --> ", L)

L.sort()
print("After Sort (Ascending) --> ", L)

L.sort(reverse=True)
print("After Sort (Descending) --> ", L)

Original List --> [3, 6, 9, 12, 5, 3, 2]

max --> 12
min --> 2

Before Sort --> [3, 6, 9, 12, 5, 3, 2]

After Sort (Ascending) --> [2, 3, 3, 5, 6, 9, 12]
After Sort (Descending) --> [12, 9, 6, 5, 3, 3, 2]
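L.sort() reorders the list in place; the sorted() built-in instead returns a new list and leaves the original untouched. A small sketch:

```python
L = [3, 6, 9, 12, 5, 3, 2]
newL = sorted(L)            # new sorted list
print("Original --> ", L)   # unchanged
print("Sorted   --> ", newL)
```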

11.8 Merge lists and select elements

L1 = [3, 6, 9]
L2 = [12, 5, 3, 2]
L3 = L1 + L2
print("L1 --> ", L1)
print("L2 --> ", L2)
print("L3 --> ", L3)

print("\nL3[2:] --> ", L3[2:])
print("L3[2:5] --> ", L3[2:5])
print("L3[:-1] --> ", L3[:-1])
print("L3[::2] --> ", L3[::2])

L1 --> [3, 6, 9]
L2 --> [12, 5, 3, 2]
L3 --> [3, 6, 9, 12, 5, 3, 2]

L3[2:] --> [9, 12, 5, 3, 2]


L3[2:5] --> [9, 12, 5]
L3[:-1] --> [3, 6, 9, 12, 5, 3]
L3[::2] --> [3, 9, 5, 2]

11.9 Multiply all elements of the list by a constant

L = [12, 5, 3, 2, 7]
print("Original List --> ", L)

newL = [ i * 5 for i in L ]
print ("After Multiply with constant --> ", newL)

Original List --> [12, 5, 3, 2, 7]
After Multiply with constant --> [60, 25, 15, 10, 35]

11.10 Searching in the list

L = [3, 6, 9, 12, 5, 3, 2]
print("6 in L --> ", 6 in L)
print("10 in L --> ", 10 in L)
print("12 in L --> ", 12 in L)

if 6 in L:
    print("Present")
else:
    print("Not Present")

# Note: "10 in L == False" would be evaluated as a chained
# comparison and give the wrong answer; use "not in" instead.
if 10 not in L:
    print("Not Present")
else:
    print("Present")

6 in L --> True
10 in L --> False
12 in L --> True
Present
Not Present
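Beyond the in operator, lists also offer index() and count() for searching:

```python
L = [3, 6, 9, 12, 5, 3, 2]
print("L.index(9) --> ", L.index(9))  # position of the first 9
print("L.count(3) --> ", L.count(3))  # how many times 3 appears
```

Note that index() raises ValueError when the element is absent, so it is usually paired with an in check.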

12 Data Structure 2 - Dictionary

Learning: How to use Dictionary, add, delete, search in Dictionary

Note: Read more about Dictionary and try yourself

12.1 Declare Dictionary

CGPA = {1:8.9, 2:5.6, 4:6.7, 7:9.1, 8:5.3}

print("Dictionary --> ", CGPA)
print("Num of elements --> ", len(CGPA))

print("CGPA of 1 --> ", CGPA[1])
print("CGPA of 4 --> ", CGPA[4])
print("CGPA of 7 --> ", CGPA[7])
print("CGPA of 3 --> ", CGPA[3])  # Key 3 does not exist; raises KeyError

Dictionary --> {1: 8.9, 2: 5.6, 4: 6.7, 7: 9.1, 8: 5.3}
Num of elements --> 5
CGPA of 1 --> 8.9
CGPA of 4 --> 6.7
CGPA of 7 --> 9.1
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-55-598d6ecb7ab2> in <cell line: 8>()
6 print ("CGPA of 4 --> ", CGPA[4])
7 print ("CGPA of 7 --> ", CGPA[7])
----> 8 print ("CGPA of 3 --> ", CGPA[3])

KeyError: 3

12.2 Traverse dictionary

CGPA = {1:8.9, 2:5.6, 4:6.7, 7:9.1, 8:5.3}
for k in CGPA:
    print("CGPA of ", k, " --> ", CGPA[k])

12.3 Getting Keys and Values

CGPA = {1:8.9, 2:5.6, 4:6.7, 7:9.1, 8:5.3}
print("Dictionary --> ", CGPA)
print("Keys --> ", list(CGPA.keys()))
print("Values --> ", list(CGPA.values()))

12.4 Updating, Adding and Deleting from Dictionary

CGPA = {1:8.9, 2:5.6, 4:6.7, 7:9.1, 8:5.3}
print("Original Dictionary --> ", CGPA)

CGPA[4] = 9.2
print("After Updating (4) --> ", CGPA)

CGPA[3] = 8.6
print("After Adding (3) --> ", CGPA)

del CGPA[1]
print("After Deleting (1) --> ", CGPA)

CGPA.clear()
print("After Clear --> ", CGPA)

del CGPA
print("After Delete --> ", CGPA)  # NameError: CGPA is no longer defined

12.5 Checking for Key in Dictionary

CGPA = {1:8.9, 2:5.6, 4:6.7, 7:9.1, 8:5.3}
print("Original Dictionary --> ", CGPA)
print("Is Key 2 Present --> ", 2 in CGPA)
print("Is Key 9 Present --> ", 9 in CGPA)
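The KeyError seen in section 12.1 can be avoided with dict.get(), which returns a default value instead of raising:

```python
CGPA = {1: 8.9, 2: 5.6, 4: 6.7, 7: 9.1, 8: 5.3}
print("CGPA of 3 --> ", CGPA.get(3, "Not Found"))  # missing key, no KeyError
print("CGPA of 7 --> ", CGPA.get(7, "Not Found"))
```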

12.6 More example 1

HomeTown = {"Robin":"Delhi", "Govind":"Gwalior", "Anil":"Morena", "Pankaj":"Agra"}

print("Original Dictionary --> ", HomeTown)
print("Home Town of Robin is --> ", HomeTown["Robin"])
print("Home Town of Govind is --> ", HomeTown["Govind"])
print("Home Town of Anil is --> ", HomeTown["Anil"])
print("Home Town of Pankaj is --> ", HomeTown["Pankaj"])

12.7 More example 2

HomeTown = {"Robin":"Delhi", "Govind":"Gwalior", "Anil":"Morena", "Pankaj":"Agra"}
print("Original Dictionary --> ", HomeTown)

for d in HomeTown:
    print("Home Town of ", d, " is --> ", HomeTown[d])

13 Data Structure 3 - Tuple

Learning: How to use Tuple, add, delete, search in Tuple


Note: Read more about Tuple and try yourself

13.1 Declare Tuple

# Method 1
T = ("Pratham", 'Sharma', 3.14, 3)

print("T -->", T)
print("Num of elements -->", len(T))
print("Type of Object -->", type(T))

# Method 2
T = tuple(["Pratham", 'Sharma', 3.14, 3])  # Convert list to tuple
#T = tuple(("Pratham", 'Sharma', 3.14, 3)) # Also works

print("T -->", T)
print("Num of elements -->", len(T))
print("Type of Object -->", type(T))

13.2 Tuple Iteration

T = ("Pratham", 'Sharma', 3.14, 3)
print("T -->", T)

i = 0
while i < len(T):
    print(T[i])
    i += 1

13.3 Tuple iteration using for loop

T = ("Pratham", 'Sharma', 3.14, 3)
print("T -->", T)

for i in range(0, len(T)):
    print(T[i])

13.4 Tuple iteration using for loop

T = ("Pratham", 'Sharma', 3.14, 3)
print("T -->", T)

for s in T:
    print(s)

13.5 Accessing/Selecting in Tuple

# Example 1:
T = (3, 6, 9, 12, 5, 3, 2)
print("T -->", T)

print("T[1] -->", T[1])
print("T[2] -->", T[2])
print("T[-1] -->", T[-1])
print("T[-2] -->", T[-2])

# Example 2:
T = (3, 6, 9, 12, 5, 3, 2)
print("T -->", T)

print("T[1:3] -->", T[1:3])
print("T[2:] -->", T[2:])
print("T[2:5] -->", T[2:5])
print("T[:2] -->", T[:2])
print("T[:-1] -->", T[:-1])
print("T[-4:-1] -->", T[-4:-1])


13.6 Sum/Average of Tuple

T = (3, 6, 9, 12, 5, 3, 2)
print("T -->", T)
print("Sum -->", sum(T))
print("Average -->", sum(T)/len(T))
print("Average -->", sum(T)//len(T))

13.7 Min/Max in Tuple

# Example 1
T = (3, 6, 9, 12, 5, 3, 2)  # Integer Tuple
print("T -->", T)
print("Max -->", max(T))
print("Min -->", min(T))

# Example 2
T = ("Ram", "Shyam", "Human", "Ant")  # String Tuple
print("T -->", T)
print("Max -->", max(T))
print("Min -->", min(T))

13.8 Merging Tuples

T1 = (3, 6, 9)
T2 = (12, 5, 3, 2)
print("T1 -->", T1)
print("T2 -->", T2)

T3 = T1 + T2
print("T3 -->", T3)

T4 = T1 + T2 + T1 + T2
print("T4 -->", T4)

13.11 Adding element to Tuple (Error)

T = ("Pratham", 'Sharma', 3.14, 3)
print("T -->", T)

T[2] = 900  # Error: 'tuple' object does not support item assignment
print("T -->", T)

# Tuples are unchangeable. We cannot add items to them.

13.12 Adding element to Tuple - (Workaround)

T = ("Pratham", 'Sharma', 3.14, 3)
print("T -->", T)

T1 = list(T)
T1.append(9.8)
T = tuple(T1)
print("After Add -->", T)

13.13 Inserting element in Tuple - (Workaround)

T = ("Pratham", 'Sharma', 3.14, 3)
print("T -->", T)

T1 = list(T)
T1.insert(2, "Rahul")
T = tuple(T1)
print("After Insert -->", T)

13.14 Deleting from Tuple (Error)

T = ("Pratham", 'Sharma', 3.14, 3)
print("T -->", T)
del T[1]  # Error: 'tuple' object doesn't support item deletion
print("After Delete -->", T)

13.15 Deleting from Tuple - (Workaround)

T = ("Pratham", 'Sharma', 3.14, 3)
print("T -->", T)

T1 = list(T)
del T1[1]
T = tuple(T1)
print("After Delete -->", T)

14 Data Structure 4 - Set

Learning: How to use Set, add, delete, search in Set

Note: Read more about Set and try yourself

14.1 Declare Set

s = set(['A', 'B', 'E', 'F', 'E', 'F'])  # duplicates are dropped
print("Original set --> ", s)
print("Num of elements in set --> ", len(s))

14.2 Operations on Sets

a = set(['A', 'B', 'E', 'F'])
b = set(["A", "C", "D", "E"])

print("Original set a --> ", a)
print("Original set b --> ", b)
print("Union of a and b --> ", a.union(b))
print("Intersection of a,b --> ", a.intersection(b))
print("Difference a - b --> ", a - b)
print("Difference a - b --> ", a.difference(b))
print("Difference b - a --> ", b - a)
print("Difference b - a --> ", b.difference(a))
print("Symmetric Diff a - b --> ", a.symmetric_difference(b))
print("Symmetric Diff b - a --> ", b.symmetric_difference(a))

14.3 Add, delete, pop element from set

a = set(['A', 'B', 'E', 'F'])
print("Original set a --> ", a)
a.add("D")
print("Set After Adding (D) --> ", a)
a.add("D")  # adding a duplicate has no effect
print("Set After Adding (D) --> ", a)

a.remove("D")
print("Set After Deleting(D)--> ", a)
a.pop()  # removes an arbitrary element
print("Set After pop --> ", a)
a.pop()
print("Set After pop --> ", a)

15 Command Line Argument

Learning: How to take input from the command line and process it

Note: Run the program at the cmd line

15.1 Add two numbers given at cmd line

Note: To run the program at the cmd line:
python Program.py 10 20

import sys
print(sys.argv)
a = int(sys.argv[1])  # First Number
b = int(sys.argv[2])  # Second Number
c = a + b
print(a, " + ", b, " --> ", c)
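For anything beyond a couple of positional values, the standard argparse module handles parsing, type conversion and --help automatically. A brief sketch (parse_args is given a fixed list here in place of the real command line):

```python
import argparse

parser = argparse.ArgumentParser(description="Add numbers given at cmd line")
parser.add_argument("nums", nargs="+", type=int, help="numbers to add")
args = parser.parse_args(["10", "20", "30"])  # stand-in for the real argv
print("Sum is --> ", sum(args.nums))
```

In a real script, calling parse_args() with no argument reads sys.argv.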

15.2 Concatenate two strings given at cmd line

Note: To run the program at the cmd line:
python Program.py FirstString SecondString

import sys
print(sys.argv)
s = sys.argv[1] + " " + sys.argv[2]
print(sys.argv[1], " + ", sys.argv[2], " --> ", s)

15.3 Add all the numbers given at cmd line

Note: To run the program at the cmd line:
python Program.py 10 20 30 40

import sys
print(sys.argv)
total = 0  # avoid shadowing the built-in sum()
for s in sys.argv[1:]:
    total += int(s)

print("Sum is --> ", total)
16 File Handling

Learning: How to open a file, read the file and write to the file

16.1 Writing 1 to 10 in a file
8/20/24, 9:22 PM Copy of Welcome To Colab - Colab


import numpy as np
import matplotlib.pyplot as plt

# Define the function


def f(x):
return 3 * x**2 - 3 * x + 4

# Define the gradient of the function


def gradient_f(x):
return 6 * x - 3

# Gradient Descent Algorithm


def gradient_descent(starting_point, learning_rate, num_iterations):
x = starting_point
history = [x]

for _ in range(num_iterations):
grad = gradient_f(x)
x = x - learning_rate * grad
history.append(x)

return x, history

# Parameters for Gradient Descent


starting_point = 0.0
learning_rate = 0.1
num_iterations = 100

# Perform Gradient Descent


min_x, history = gradient_descent(starting_point, learning_rate, num_iterations)

# Theoretical minimum value


theoretical_min_x = 0.5
theoretical_min_value = f(theoretical_min_x)

# Result from Gradient Descent


min_value = f(min_x)

# Plotting the function and gradient descent path


x_vals = np.linspace(-1, 2, 400)
y_vals = f(x_vals)

plt.figure(figsize=(12, 6))

# Plot the function


plt.plot(x_vals, y_vals, label='f(x) = 3x^2 - 3x + 4', color='blue')

# Plot the gradient descent path


history_vals = [f(x) for x in history]
plt.plot(history, history_vals, 'ro-', label='Gradient Descent Path')

# Highlight the theoretical minimum


plt.plot(theoretical_min_x, theoretical_min_value, 'go', label='Theoretical Minimum')

plt.xlabel('x')
plt.ylabel('f(x)')
plt.title('Gradient Descent on the Function f(x) = 3x^2 - 3x + 4')
plt.legend()
plt.grid(True)
plt.show()

# Print results
print(f"Result from Gradient Descent: x = {min_x}, f(x) = {min_value}")
print(f"Theoretical Minimum: x = {theoretical_min_x}, f(x) = {theoretical_min_value}")

Result from Gradient Descent: x = 0.5, f(x) = 3.25


Theoretical Minimum: x = 0.5, f(x) = 3.25
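For this quadratic the update has a closed form: x_{k+1} = x_k - 0.1*(6*x_k - 3) = 0.4*x_k + 0.3, so the error x_k - 0.5 shrinks by a factor 0.4 each step. A quick pure-Python check of that geometric convergence:

```python
# Gradient descent on f(x) = 3x^2 - 3x + 4 with learning rate 0.1:
# x_{k+1} = x_k - 0.1*(6*x_k - 3) = 0.4*x_k + 0.3
# hence x_k = 0.5 + (x_0 - 0.5) * 0.4**k, converging geometrically to 0.5
x = 0.0
for k in range(1, 11):
    x = 0.4 * x + 0.3
    closed_form = 0.5 + (0.0 - 0.5) * 0.4 ** k
    assert abs(x - closed_form) < 1e-12

print(x)  # within 0.5 * 0.4**10 (about 5e-5) of the minimum at 0.5
```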

import numpy as np

class Neuron:
    def __init__(self, n_inputs, bias = 0., weights = None):
        self.b = bias
        if weights: self.ws = np.array(weights)
        else: self.ws = np.random.rand(n_inputs)

    def __call__(self, xs): # calculate the neuron's output: multiply the inputs with the weights, sum the values together, add the bias
        return self._f(xs @ self.ws + self.b)

    def _f(self, x): # activation function (default: leaky_relu)
        return max(x*.1, x)

perceptron = Neuron(n_inputs = 3, bias = -0.1, weights = [0.7, 0.6, 1.4]) # using the same weights and bias value as in the example above
perceptron([1.0, 0.5, -1.0]) # using the same inputs (and a leaky relu activation function), calculate the output value

-0.04999999999999999
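The same idea extends to a layer of neurons sharing one input vector. A sketch restating the Neuron class so it is self-contained; the Layer class and the second neuron's weights are our additions:

```python
import numpy as np

class Neuron:
    def __init__(self, n_inputs, bias=0., weights=None):
        self.b = bias
        self.ws = np.array(weights) if weights else np.random.rand(n_inputs)

    def __call__(self, xs):
        return self._f(xs @ self.ws + self.b)

    def _f(self, x):  # leaky ReLU
        return max(x * .1, x)

class Layer:
    # A layer is just a list of neurons applied to the same inputs
    def __init__(self, neurons):
        self.neurons = neurons

    def __call__(self, xs):
        return [n(xs) for n in self.neurons]

layer = Layer([
    Neuron(3, bias=-0.1, weights=[0.7, 0.6, 1.4]),  # same neuron as above
    Neuron(3, bias=0.0, weights=[1.0, -1.0, 0.5]),  # a second, made-up neuron
])
out = layer([1.0, 0.5, -1.0])
print(out)  # first entry matches the single-neuron result above
```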

8/28/24, 4:18 PM Welcome To Colab - Colab

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt

np.random.seed(0)
X = np.random.rand(1000, 2)
y = np.where(X[:, 0] + X[:, 1] > 1, 1, 0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model = Sequential()
model.add(Dense(1, input_dim=2, activation='sigmoid')) # Single layer with 1 neuron

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

history = model.fit(X_train, y_train, epochs=50, batch_size=10, validation_split=0.2, verbose=1)

loss, accuracy = model.evaluate(X_test, y_test)


print(f"Test Accuracy: {accuracy:.2f}")

plt.plot(history.history['accuracy'], label='Training Accuracy')


plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.title('Training and Validation Accuracy')
plt.show()

y_pred = model.predict(X_test)
y_pred_classes = np.where(y_pred > 0.5, 1, 0)

plt.scatter(X_test[:, 0], X_test[:, 1], c=y_pred_classes.flatten(), cmap='coolwarm', edgecolors='k')


plt.title('Classification Results')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.show()
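Under the hood the single sigmoid neuron computes sigma(w . x + b) and thresholds at 0.5. A NumPy-only sketch of that forward pass, using hypothetical weights (not the ones the model above learned):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical parameters implementing the rule "label 1 iff x0 + x1 > 1"
w = np.array([5.0, 5.0])   # equal weight on both features
b = -5.0                   # decision boundary at x0 + x1 = 1

X = np.array([[0.9, 0.8],   # sum 1.7 -> class 1
              [0.2, 0.3]])  # sum 0.5 -> class 0
probs = sigmoid(X @ w + b)
preds = (probs > 0.5).astype(int)
print(preds)  # [1 0]
```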


/usr/local/lib/python3.10/dist-packages/keras/src/layers/core/dense.py:87: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.

super().__init__(activity_regularizer=activity_regularizer, **kwargs)
Epoch 1/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - accuracy: 0.8111 - loss: 0.5817 - val_accuracy: 0.8438 - val_loss: 0.5554
Epoch 2/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8859 - loss: 0.5427 - val_accuracy: 0.8625 - val_loss: 0.5319
Epoch 3/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8996 - loss: 0.5190 - val_accuracy: 0.8813 - val_loss: 0.5107
Epoch 4/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8923 - loss: 0.5101 - val_accuracy: 0.9000 - val_loss: 0.4910
Epoch 5/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.9108 - loss: 0.4870 - val_accuracy: 0.9062 - val_loss: 0.4729
Epoch 6/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.9130 - loss: 0.4718 - val_accuracy: 0.9187 - val_loss: 0.4563
Epoch 7/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.9180 - loss: 0.4429 - val_accuracy: 0.9187 - val_loss: 0.4407
Epoch 8/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.9374 - loss: 0.4332 - val_accuracy: 0.9187 - val_loss: 0.4264
Epoch 9/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9258 - loss: 0.4221 - val_accuracy: 0.9312 - val_loss: 0.4133
Epoch 10/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9402 - loss: 0.4026 - val_accuracy: 0.9375 - val_loss: 0.4010
Epoch 11/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9369 - loss: 0.3925 - val_accuracy: 0.9438 - val_loss: 0.3896
Epoch 12/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9509 - loss: 0.3723 - val_accuracy: 0.9563 - val_loss: 0.3790
Epoch 13/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9453 - loss: 0.3710 - val_accuracy: 0.9625 - val_loss: 0.3690
Epoch 14/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9527 - loss: 0.3646 - val_accuracy: 0.9688 - val_loss: 0.3597
Epoch 15/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9454 - loss: 0.3567 - val_accuracy: 0.9688 - val_loss: 0.3510
Epoch 16/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9589 - loss: 0.3365 - val_accuracy: 0.9688 - val_loss: 0.3427
Epoch 17/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9609 - loss: 0.3400 - val_accuracy: 0.9688 - val_loss: 0.3351
Epoch 18/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9667 - loss: 0.3298 - val_accuracy: 0.9688 - val_loss: 0.3277
Epoch 19/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9608 - loss: 0.3193 - val_accuracy: 0.9750 - val_loss: 0.3209
Epoch 20/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9697 - loss: 0.3106 - val_accuracy: 0.9750 - val_loss: 0.3144
Epoch 21/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9743 - loss: 0.2993 - val_accuracy: 0.9750 - val_loss: 0.3082
Epoch 22/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9620 - loss: 0.3085 - val_accuracy: 0.9875 - val_loss: 0.3023
Epoch 23/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9756 - loss: 0.2989 - val_accuracy: 0.9875 - val_loss: 0.2967
Epoch 24/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9714 - loss: 0.2828 - val_accuracy: 0.9875 - val_loss: 0.2914
Epoch 25/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9567 - loss: 0.2696 - val_accuracy: 0.9937 - val_loss: 0.2864
Epoch 26/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9670 - loss: 0.2779 - val_accuracy: 0.9937 - val_loss: 0.2815
Epoch 27/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9730 - loss: 0.2655 - val_accuracy: 0.9937 - val_loss: 0.2769
Epoch 28/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9730 - loss: 0.2629 - val_accuracy: 0.9937 - val_loss: 0.2725
Epoch 29/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9757 - loss: 0.2603 - val_accuracy: 0.9937 - val_loss: 0.2682
Epoch 30/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9752 - loss: 0.2664 - val_accuracy: 0.9937 - val_loss: 0.2642
Epoch 31/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9835 - loss: 0.2384 - val_accuracy: 0.9937 - val_loss: 0.2602
Epoch 32/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9775 - loss: 0.2598 - val_accuracy: 0.9937 - val_loss: 0.2565
Epoch 33/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9828 - loss: 0.2281 - val_accuracy: 1.0000 - val_loss: 0.2529
Epoch 34/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9776 - loss: 0.2318 - val_accuracy: 1.0000 - val_loss: 0.2494
Epoch 35/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9706 - loss: 0.2500 - val_accuracy: 1.0000 - val_loss: 0.2460
Epoch 36/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9940 - loss: 0.2466 - val_accuracy: 1.0000 - val_loss: 0.2428
Epoch 37/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9953 - loss: 0.2291 - val_accuracy: 1.0000 - val_loss: 0.2397
Epoch 38/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9959 - loss: 0.2429 - val_accuracy: 1.0000 - val_loss: 0.2366
Epoch 39/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9880 - loss: 0.2163 - val_accuracy: 1.0000 - val_loss: 0.2337
Epoch 40/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9910 - loss: 0.2468 - val_accuracy: 1.0000 - val_loss: 0.2309
Epoch 41/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9949 - loss: 0.2110 - val_accuracy: 1.0000 - val_loss: 0.2281
Epoch 42/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9936 - loss: 0.2078 - val_accuracy: 1.0000 - val_loss: 0.2255
Epoch 43/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9933 - loss: 0.2156 - val_accuracy: 1.0000 - val_loss: 0.2230
Epoch 44/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9981 - loss: 0.2100 - val_accuracy: 1.0000 - val_loss: 0.2204
Epoch 45/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9950 - loss: 0.2041 - val_accuracy: 1.0000 - val_loss: 0.2180
Epoch 46/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.9985 - loss: 0.1903 - val_accuracy: 1.0000 - val_loss: 0.2157
Epoch 47/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.9986 - loss: 0.1988 - val_accuracy: 1.0000 - val_loss: 0.2133
Epoch 48/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.9997 - loss: 0.1937 - val_accuracy: 1.0000 - val_loss: 0.2111
Epoch 49/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.9972 - loss: 0.2006 - val_accuracy: 1.0000 - val_loss: 0.2090
Epoch 50/50
64/64 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.9990 - loss: 0.2046 - val_accuracy: 1.0000 - val_loss: 0.2068
7/7 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 1.0000 - loss: 0.1711
Test Accuracy: 1.00

7/7 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step

11/11/24, 9:46 AM Untitled6.ipynb - Colab

# Import libraries
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt

# Load and preprocess MNIST data


(train_images, _), (test_images, _) = mnist.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0 # Normalize
train_images = train_images.reshape(-1, 784) # Flatten images to 784 (28x28) vector
test_images = test_images.reshape(-1, 784)

# Define the autoencoder model


autoencoder = models.Sequential([
layers.Dense(64, activation='relu', input_shape=(784,)), # Encoder
layers.Dense(784, activation='sigmoid') # Decoder
])

# Compile and train the model


autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(train_images, train_images, epochs=5, batch_size=256, validation_data=(test_images, test_images))

# Generate and display some reconstructed images


reconstructed_images = autoencoder.predict(test_images[:10])

# Plot original and reconstructed images


plt.figure(figsize=(10, 4))
for i in range(10):
# Original images
plt.subplot(2, 10, i + 1)
plt.imshow(test_images[i].reshape(28, 28), cmap='gray')
plt.axis('off')

# Reconstructed images
plt.subplot(2, 10, i + 11)
plt.imshow(reconstructed_images[i].reshape(28, 28), cmap='gray')
plt.axis('off')
plt.show()
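A common way to score an autoencoder is the per-image reconstruction error. A NumPy sketch of that metric on stand-in arrays (random data here, not the MNIST reconstructions above):

```python
import numpy as np

rng = np.random.default_rng(0)
originals = rng.random((10, 784))                              # stand-ins for flattened test images
reconstructions = originals + rng.normal(0, 0.01, (10, 784))   # stand-ins for autoencoder output

# Mean squared error per image: average the squared pixel differences over the 784 pixels
mse_per_image = np.mean((originals - reconstructions) ** 2, axis=1)
print(mse_per_image.shape)           # one score per image
print(float(mse_per_image.mean()))   # around 1e-4 for noise with std 0.01
```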

/usr/local/lib/python3.10/dist-packages/keras/src/layers/core/dense.py:87: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.

super().__init__(activity_regularizer=activity_regularizer, **kwargs)
Epoch 1/5
235/235 ━━━━━━━━━━━━━━━━━━━━ 4s 12ms/step - loss: 0.3467 - val_loss: 0.1640
Epoch 2/5
235/235 ━━━━━━━━━━━━━━━━━━━━ 6s 17ms/step - loss: 0.1543 - val_loss: 0.1284
Epoch 3/5
235/235 ━━━━━━━━━━━━━━━━━━━━ 4s 12ms/step - loss: 0.1243 - val_loss: 0.1098
Epoch 4/5
235/235 ━━━━━━━━━━━━━━━━━━━━ 3s 11ms/step - loss: 0.1075 - val_loss: 0.0979
Epoch 5/5
235/235 ━━━━━━━━━━━━━━━━━━━━ 3s 12ms/step - loss: 0.0968 - val_loss: 0.0901
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 50ms/step

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist

# Load the MNIST dataset


(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize the pixel values (0 to 255) to range 0 to 1


x_train, x_test = x_train / 255.0, x_test / 255.0

# Reshape the data to add a channel dimension (needed for Conv2D layers)
x_train = x_train.reshape(-1, 28, 28, 1)
x_test = x_test.reshape(-1, 28, 28, 1)

# Build the neural network model


model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax') # 10 classes for the 10 digits
])

# Compile the model


model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])

# Train the model


history = model.fit(x_train, y_train, epochs=5,
validation_data=(x_test, y_test))

# Evaluate the model on test data


test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print(f'Test accuracy: {test_acc}')

# Visualize the accuracy and loss


plt.figure(figsize=(12, 4))

# Plot training & validation accuracy values


plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(['Train', 'Test'], loc='upper left')

# Plot training & validation loss values


plt.subplot(1, 2, 2)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(['Train', 'Test'], loc='upper left')

plt.show()

# Visualize some predictions


predictions = model.predict(x_test)

# Function to plot some test images with predicted labels


def plot_images(images, labels, predictions, index, num=5):
    plt.figure(figsize=(10, 5))
    for i in range(num):
        plt.subplot(1, num, i+1)
        plt.imshow(images[index + i].reshape(28, 28), cmap='gray')
        plt.title(f"True: {labels[index + i]}\nPred: {np.argmax(predictions[index + i])}")
        plt.axis('off')
    plt.show()

# Plot 5 test images with predictions


plot_images(x_test, y_test, predictions, index=0)
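The layer shapes above can be checked by hand: a 'valid' 3x3 convolution shrinks each side by 2, and 2x2 max-pooling halves it with floor division. A small sketch of that arithmetic for this model:

```python
def conv_out(size, kernel=3):
    # 'valid' convolution: output = input - kernel + 1
    return size - kernel + 1

def pool_out(size, window=2):
    # non-overlapping pooling: floor division
    return size // window

s = 28
s = conv_out(s)   # Conv2D(32): 28 -> 26
s = pool_out(s)   # MaxPooling2D: 26 -> 13
s = conv_out(s)   # Conv2D(64): 13 -> 11
s = pool_out(s)   # MaxPooling2D: 11 -> 5
s = conv_out(s)   # Conv2D(64): 5 -> 3
print(s, s * s * 64)  # Flatten feeds 3*3*64 = 576 values into Dense(64)
```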

/usr/local/lib/python3.10/dist-packages/keras/src/layers/
convolutional/base_conv.py:107: UserWarning: Do not pass an
`input_shape`/`input_dim` argument to a layer. When using Sequential
models, prefer using an `Input(shape)` object as the first layer in
the model instead.
super().__init__(activity_regularizer=activity_regularizer,
**kwargs)

Epoch 1/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 65s 33ms/step - accuracy: 0.8968 -
loss: 0.3309 - val_accuracy: 0.9856 - val_loss: 0.0448
Epoch 2/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 82s 33ms/step - accuracy: 0.9864 -
loss: 0.0458 - val_accuracy: 0.9882 - val_loss: 0.0350
Epoch 3/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 62s 33ms/step - accuracy: 0.9912 -
loss: 0.0286 - val_accuracy: 0.9898 - val_loss: 0.0310
Epoch 4/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 78s 31ms/step - accuracy: 0.9923 -
loss: 0.0245 - val_accuracy: 0.9906 - val_loss: 0.0341
Epoch 5/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 85s 33ms/step - accuracy: 0.9942 -
loss: 0.0197 - val_accuracy: 0.9907 - val_loss: 0.0287
313/313 - 3s - 8ms/step - accuracy: 0.9907 - loss: 0.0287
Test accuracy: 0.9907000064849854

313/313 ━━━━━━━━━━━━━━━━━━━━ 3s 9ms/step


!pip install tensorflow
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras
from tensorflow.keras import layers
# Generate synthetic dataset
X, y = make_classification(n_samples=1000, n_features=20, n_classes=2, random_state=42)

# Split the dataset


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize the features


scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
def create_model(dropout_rate=0.0):
    model = keras.Sequential([
        layers.Dense(64, activation='relu', input_shape=(X_train.shape[1],)),
        layers.Dense(64, activation='relu'),
        layers.Dense(1, activation='sigmoid') # Output layer for binary classification
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

# Train the model without dropout
model_no_dropout = create_model()
history_no_dropout = model_no_dropout.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.2)

def create_model_with_dropout(dropout_rate=0.5):
    model = keras.Sequential([
        layers.Dense(64, activation='relu', input_shape=(X_train.shape[1],)),
        layers.Dropout(dropout_rate), # Dropout layer
        layers.Dense(64, activation='relu'),
        layers.Dropout(dropout_rate), # Dropout layer
        layers.Dense(1, activation='sigmoid') # Output layer for binary classification
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

# Train the model with dropout
model_with_dropout = create_model_with_dropout()
history_with_dropout = model_with_dropout.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.2)
# Plot the results
plt.figure(figsize=(12, 5))

# Without Dropout
plt.subplot(1, 2, 1)
plt.plot(history_no_dropout.history['accuracy'], label='Train Accuracy')
plt.plot(history_no_dropout.history['val_accuracy'], label='Validation Accuracy')
plt.title('Model Without Dropout')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

# With Dropout
plt.subplot(1, 2, 2)
plt.plot(history_with_dropout.history['accuracy'], label='Train Accuracy')
plt.plot(history_with_dropout.history['val_accuracy'], label='Validation Accuracy')
plt.title('Model With Dropout')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.tight_layout()
plt.show()
loss_no_dropout, accuracy_no_dropout = model_no_dropout.evaluate(X_test, y_test)
loss_with_dropout, accuracy_with_dropout = model_with_dropout.evaluate(X_test, y_test)

print(f"Test Accuracy (No Dropout): {accuracy_no_dropout:.4f}")


print(f"Test Accuracy (With Dropout): {accuracy_with_dropout:.4f}")
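The dropout mechanism itself is simple to state: at train time each activation is zeroed with probability p, and the survivors are scaled by 1/(1-p) ("inverted dropout", the scheme Keras uses) so the expected activation is unchanged. A NumPy sketch of that mask:

```python
import numpy as np

def dropout(a, p, rng):
    # Inverted dropout: zero each unit with probability p, scale survivors by 1/(1-p)
    mask = (rng.random(a.shape) >= p) / (1.0 - p)
    return a * mask

rng = np.random.default_rng(42)
acts = np.ones((100000,))
dropped = dropout(acts, p=0.5, rng=rng)

print(float(dropped.mean()))        # close to 1.0: the expectation is preserved
print(float((dropped == 0).mean())) # close to 0.5: about half the units dropped
```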

Requirement already satisfied: tensorflow in /usr/local/lib/python3.10/dist-packages (2.17.0)

/usr/local/lib/python3.10/dist-packages/keras/src/layers/core/
dense.py:87: UserWarning: Do not pass an `input_shape`/`input_dim`
argument to a layer. When using Sequential models, prefer using an
`Input(shape)` object as the first layer in the model instead.
super().__init__(activity_regularizer=activity_regularizer,
**kwargs)

Epoch 1/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 10s 84ms/step - accuracy: 0.6430 - loss:
0.6481 - val_accuracy: 0.7750 - val_loss: 0.5422
Epoch 2/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 22ms/step - accuracy: 0.8231 - loss:
0.4992 - val_accuracy: 0.8250 - val_loss: 0.4421
Epoch 3/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 22ms/step - accuracy: 0.8607 - loss:
0.4148 - val_accuracy: 0.8562 - val_loss: 0.3733
Epoch 4/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 20ms/step - accuracy: 0.8604 - loss:
0.3505 - val_accuracy: 0.8562 - val_loss: 0.3330
Epoch 5/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 26ms/step - accuracy: 0.8815 - loss:
0.3260 - val_accuracy: 0.8500 - val_loss: 0.3102
Epoch 6/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 22ms/step - accuracy: 0.8861 - loss:
0.3155 - val_accuracy: 0.8562 - val_loss: 0.2971
Epoch 7/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 36ms/step - accuracy: 0.8925 - loss:
0.2850 - val_accuracy: 0.8562 - val_loss: 0.2886
Epoch 8/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 31ms/step - accuracy: 0.9048 - loss:
0.2825 - val_accuracy: 0.8562 - val_loss: 0.2850
Epoch 9/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 18ms/step - accuracy: 0.9051 - loss:
0.2756 - val_accuracy: 0.8687 - val_loss: 0.2818
Epoch 10/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 20ms/step - accuracy: 0.9014 - loss:
0.2510 - val_accuracy: 0.8562 - val_loss: 0.2766
Epoch 11/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 21ms/step - accuracy: 0.9138 - loss:
0.2324 - val_accuracy: 0.8625 - val_loss: 0.2791
Epoch 12/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 23ms/step - accuracy: 0.9210 - loss:
0.2429 - val_accuracy: 0.8625 - val_loss: 0.2725
Epoch 13/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 21ms/step - accuracy: 0.9153 - loss:
0.2370 - val_accuracy: 0.8625 - val_loss: 0.2705
Epoch 14/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step - accuracy: 0.9192 - loss:
0.2314 - val_accuracy: 0.8625 - val_loss: 0.2721
Epoch 15/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 29ms/step - accuracy: 0.9319 - loss:
0.2082 - val_accuracy: 0.8625 - val_loss: 0.2666
Epoch 16/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 22ms/step - accuracy: 0.9216 - loss:
0.2049 - val_accuracy: 0.8687 - val_loss: 0.2726
Epoch 17/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 23ms/step - accuracy: 0.9237 - loss:
0.2079 - val_accuracy: 0.8813 - val_loss: 0.2709
Epoch 18/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 23ms/step - accuracy: 0.9231 - loss:
0.2077 - val_accuracy: 0.8500 - val_loss: 0.2738
Epoch 19/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 20ms/step - accuracy: 0.9461 - loss:
0.1544 - val_accuracy: 0.8687 - val_loss: 0.2673
Epoch 20/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 23ms/step - accuracy: 0.9568 - loss:
0.1641 - val_accuracy: 0.8687 - val_loss: 0.2717
Epoch 21/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 27ms/step - accuracy: 0.9415 - loss:
0.1690 - val_accuracy: 0.8687 - val_loss: 0.2752
Epoch 22/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 1s 16ms/step - accuracy: 0.9543 - loss:
0.1475 - val_accuracy: 0.8750 - val_loss: 0.2680
Epoch 23/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - accuracy: 0.9591 - loss:
0.1356 - val_accuracy: 0.8750 - val_loss: 0.2750
Epoch 24/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.9718 - loss:
0.1278 - val_accuracy: 0.8687 - val_loss: 0.2748
Epoch 25/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.9663 - loss:
0.1365 - val_accuracy: 0.8875 - val_loss: 0.2821
Epoch 26/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.9652 - loss:
0.1278 - val_accuracy: 0.8687 - val_loss: 0.2796
Epoch 27/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.9838 - loss:
0.1022 - val_accuracy: 0.8875 - val_loss: 0.2832
Epoch 28/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.9722 - loss:
0.1124 - val_accuracy: 0.8750 - val_loss: 0.2850
Epoch 29/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.9733 - loss:
0.1173 - val_accuracy: 0.8750 - val_loss: 0.2845
Epoch 30/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.9821 - loss:
0.0952 - val_accuracy: 0.8750 - val_loss: 0.2843
Epoch 31/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.9787 - loss:
0.0995 - val_accuracy: 0.8625 - val_loss: 0.2976
Epoch 32/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.9845 - loss:
0.0956 - val_accuracy: 0.8687 - val_loss: 0.2914
Epoch 33/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.9847 - loss:
0.0814 - val_accuracy: 0.8750 - val_loss: 0.2946
Epoch 34/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.9823 - loss:
0.0829 - val_accuracy: 0.8687 - val_loss: 0.2979
Epoch 35/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.9848 - loss:
0.0744 - val_accuracy: 0.8750 - val_loss: 0.3102
Epoch 36/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.9888 - loss:
0.0736 - val_accuracy: 0.8813 - val_loss: 0.2900
Epoch 37/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.9893 - loss:
0.0609 - val_accuracy: 0.8750 - val_loss: 0.3087
Epoch 38/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.9873 - loss:
0.0616 - val_accuracy: 0.8750 - val_loss: 0.3063
Epoch 39/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.9878 - loss:
0.0640 - val_accuracy: 0.8750 - val_loss: 0.3166
Epoch 40/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.9933 - loss:
0.0552 - val_accuracy: 0.8875 - val_loss: 0.3114
Epoch 41/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.9884 - loss:
0.0527 - val_accuracy: 0.8750 - val_loss: 0.3219
Epoch 42/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.9889 - loss:
0.0513 - val_accuracy: 0.8813 - val_loss: 0.3199
Epoch 43/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.9965 - loss:
0.0425 - val_accuracy: 0.8875 - val_loss: 0.3195
Epoch 44/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.9946 - loss:
0.0464 - val_accuracy: 0.8813 - val_loss: 0.3318
Epoch 45/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.9953 - loss:
0.0387 - val_accuracy: 0.8750 - val_loss: 0.3297
Epoch 46/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.9958 - loss:
0.0327 - val_accuracy: 0.8687 - val_loss: 0.3469
Epoch 47/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.9978 - loss:
0.0314 - val_accuracy: 0.8750 - val_loss: 0.3387
Epoch 48/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.9994 - loss:
0.0327 - val_accuracy: 0.8813 - val_loss: 0.3388
Epoch 49/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 1.0000 - loss:
0.0321 - val_accuracy: 0.8813 - val_loss: 0.3462
Epoch 50/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 1.0000 - loss:
0.0229 - val_accuracy: 0.8875 - val_loss: 0.3498
Epoch 1/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 2s 16ms/step - accuracy: 0.5152 - loss:
0.7853 - val_accuracy: 0.7188 - val_loss: 0.5864
Epoch 2/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.6228 - loss:
0.6686 - val_accuracy: 0.7937 - val_loss: 0.5273
Epoch 3/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.6706 - loss:
0.6189 - val_accuracy: 0.8313 - val_loss: 0.4856
Epoch 4/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.6945 - loss:
0.5651 - val_accuracy: 0.8438 - val_loss: 0.4484
Epoch 5/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7278 - loss:
0.5187 - val_accuracy: 0.8562 - val_loss: 0.4172
Epoch 6/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7402 - loss:
0.5468 - val_accuracy: 0.8625 - val_loss: 0.3909
Epoch 7/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7315 - loss:
0.5248 - val_accuracy: 0.8625 - val_loss: 0.3676
Epoch 8/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.7987 - loss:
0.4617 - val_accuracy: 0.8687 - val_loss: 0.3507
Epoch 9/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.7858 - loss:
0.4741 - val_accuracy: 0.8687 - val_loss: 0.3344
Epoch 10/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.8021 - loss:
0.4215 - val_accuracy: 0.8750 - val_loss: 0.3219
Epoch 11/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.8331 - loss:
0.3714 - val_accuracy: 0.8750 - val_loss: 0.3135
Epoch 12/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8179 - loss:
0.4187 - val_accuracy: 0.8813 - val_loss: 0.3047
Epoch 13/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.8565 - loss:
0.3632 - val_accuracy: 0.8750 - val_loss: 0.2969
Epoch 14/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8331 - loss:
0.3782 - val_accuracy: 0.8687 - val_loss: 0.2915
Epoch 15/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.8367 - loss:
0.4313 - val_accuracy: 0.8750 - val_loss: 0.2847
Epoch 16/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.8407 - loss:
0.3637 - val_accuracy: 0.8750 - val_loss: 0.2769
Epoch 17/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.8581 - loss:
0.3331 - val_accuracy: 0.8750 - val_loss: 0.2717
Epoch 18/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8755 - loss:
0.3441 - val_accuracy: 0.8750 - val_loss: 0.2714
Epoch 19/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.8439 - loss:
0.3797 - val_accuracy: 0.8750 - val_loss: 0.2688
Epoch 20/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.8549 - loss:
0.3881 - val_accuracy: 0.8875 - val_loss: 0.2685
Epoch 21/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.8679 - loss:
0.3121 - val_accuracy: 0.8813 - val_loss: 0.2637
Epoch 22/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.8549 - loss:
0.3219 - val_accuracy: 0.8687 - val_loss: 0.2599
Epoch 23/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - accuracy: 0.8244 - loss:
0.3870 - val_accuracy: 0.8687 - val_loss: 0.2599
Epoch 24/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - accuracy: 0.8538 - loss:
0.3253 - val_accuracy: 0.8687 - val_loss: 0.2603
Epoch 25/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.8482 - loss:
0.3312 - val_accuracy: 0.8687 - val_loss: 0.2591
Epoch 26/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - accuracy: 0.8662 - loss:
0.3036 - val_accuracy: 0.8687 - val_loss: 0.2552
Epoch 27/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.8561 - loss:
0.3585 - val_accuracy: 0.8625 - val_loss: 0.2550
Epoch 28/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.8735 - loss:
0.3187 - val_accuracy: 0.8687 - val_loss: 0.2524
Epoch 29/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - accuracy: 0.8841 - loss:
0.3073 - val_accuracy: 0.8687 - val_loss: 0.2500
Epoch 30/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - accuracy: 0.8419 - loss:
0.3552 - val_accuracy: 0.8687 - val_loss: 0.2533
Epoch 31/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.8861 - loss:
0.3104 - val_accuracy: 0.8687 - val_loss: 0.2538
Epoch 32/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - accuracy: 0.8707 - loss:
0.3173 - val_accuracy: 0.8687 - val_loss: 0.2510
Epoch 33/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.8880 - loss:
0.3089 - val_accuracy: 0.8625 - val_loss: 0.2485
Epoch 34/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 7ms/step - accuracy: 0.8619 - loss:
0.3288 - val_accuracy: 0.8625 - val_loss: 0.2529
Epoch 35/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8815 - loss:
0.3180 - val_accuracy: 0.8625 - val_loss: 0.2525
Epoch 36/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8688 - loss:
0.3215 - val_accuracy: 0.8625 - val_loss: 0.2504
Epoch 37/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8829 - loss:
0.2907 - val_accuracy: 0.8625 - val_loss: 0.2489
Epoch 38/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8665 - loss:
0.3169 - val_accuracy: 0.8625 - val_loss: 0.2502
Epoch 39/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8518 - loss:
0.3335 - val_accuracy: 0.8687 - val_loss: 0.2480
Epoch 40/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8722 - loss:
0.3199 - val_accuracy: 0.8687 - val_loss: 0.2475
Epoch 41/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8779 - loss:
0.2914 - val_accuracy: 0.8687 - val_loss: 0.2485
Epoch 42/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8829 - loss:
0.2764 - val_accuracy: 0.8687 - val_loss: 0.2472
Epoch 43/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.8906 - loss:
0.3278 - val_accuracy: 0.8687 - val_loss: 0.2463
Epoch 44/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8838 - loss:
0.2934 - val_accuracy: 0.8687 - val_loss: 0.2460
Epoch 45/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8769 - loss:
0.2908 - val_accuracy: 0.8687 - val_loss: 0.2435
Epoch 46/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8931 - loss:
0.2945 - val_accuracy: 0.8687 - val_loss: 0.2458
Epoch 47/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - accuracy: 0.9131 - loss:
0.2687 - val_accuracy: 0.8687 - val_loss: 0.2485
Epoch 48/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8891 - loss:
0.2726 - val_accuracy: 0.8750 - val_loss: 0.2484
Epoch 49/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8665 - loss:
0.3065 - val_accuracy: 0.8750 - val_loss: 0.2484
Epoch 50/50
20/20 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - accuracy: 0.8853 - loss:
0.2774 - val_accuracy: 0.8750 - val_loss: 0.2430
7/7 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.8196 - loss: 0.6601

7/7 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - accuracy: 0.8461 - loss: 0.4109

Test Accuracy (No Dropout): 0.8350


Test Accuracy (With Dropout): 0.8600
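The two runs above compare the same network trained without and with dropout, and the dropout model generalizes slightly better (0.8600 vs 0.8350 on the test set). The regularization mechanism itself can be sketched in a few lines of NumPy. This is a minimal illustration of *inverted* dropout (the variant Keras uses), not the library's actual implementation; the function name and the rate of 0.5 are choices made here for the example.

```python
import numpy as np

def dropout(x, rate=0.5, training=True, rng=None):
    """Inverted dropout: zero a fraction `rate` of units and scale the
    survivors by 1/(1-rate), so the expected activation is unchanged.
    At inference time (training=False) the layer is the identity."""
    if not training or rate == 0.0:
        return x
    rng = rng or np.random.default_rng(0)  # fixed seed for reproducibility
    mask = rng.random(x.shape) >= rate     # True = unit is kept
    return np.where(mask, x / (1.0 - rate), 0.0)

x = np.ones((4, 8))
y = dropout(x, rate=0.5)
print(y)  # entries are either 0.0 (dropped) or 2.0 (kept and rescaled)
print(dropout(x, training=False))  # identical to x at inference time
```

Because the kept units are rescaled by 1/(1-rate), no extra scaling is needed at test time; the network sees activations with the same expected magnitude during training and inference, which is why the dropout layers can simply be switched off when `model.evaluate` runs.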
