DL Lab Manual A.Y 2022-23-1
For
B.E - VI Semester
Prepared By
…………………………………….
Assistant Professor
Name: K. SRINIVAS
Academic Year: 2022-2023
VISION & MISSION
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING (AI & ML)
VISION
MISSION
DEPARTMENT VISION
DEPARTMENT MISSION
➢ To provide the state-of-the-art infrastructure to the faculty and students that facilitates
continuous professional development and research in fundamental aspects and
emerging computing trends alike.
➢ To forge collaborative research between academia and industry for seamless transfer
of knowledge resulting in sponsored projects and consultancy.
➢ To inculcate an environmental sense and to engage with research, industry and the community to set
high standards in academic excellence and in fulfilling societal responsibilities.
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING (AI & ML)
The following Program Educational Objectives (PEOs) have been adopted to attain the
vision and mission of the Institution and the Department of Computer Science and
Engineering.
➢ PEO1: Graduates will build successful careers in software-related industries or will
pursue higher studies in elite institutions in India / abroad.
➢ PEO2: Graduates will apply the computer engineering principles learnt to provide
solutions for challenging problems in their profession.
➢ PEO3: Graduates will adapt to challenging work environments through life-long
learning for continuous professional development.
➢ PEO4: Graduates will work effectively in teams and exhibit a high level of
professionalism and ethical standards.
LIST OF POs
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING (AI & ML)
PO1 Engineering Knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an
engineering specialization to the solution of complex engineering problems.
PO2 Problem Analysis: Identify, formulate, review research literature, and analyze complex engineering
problems, reaching substantiated conclusions using first principles of mathematics, natural sciences, and
engineering sciences.
PO3 Design/Development of Solutions: Design solutions for complex engineering problems and design system
components or processes that meet the specified needs with appropriate consideration for the public health
and safety and the cultural, societal, and environmental considerations.
PO4 Conduct investigations of complex problems: Use research-based knowledge and research methods
including design of experiments, analysis and interpretation of data, and synthesis of the information to
provide valid conclusions.
PO5 Modern tool usage: Create, select and apply appropriate techniques, resources and modern engineering and
IT tools including prediction and modeling to complex engineering activities with an understanding of the
limitations.
PO6 The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health,
safety, legal and cultural issues and the consequent responsibilities relevant to the professional engineering
practice.
PO7 Environment and sustainability: Understand the impact of the professional engineering solutions in
societal and environmental contexts, and demonstrate the knowledge of, and need for sustainable
development.
PO8 Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the
engineering practice.
PO9 Individual and team work: Function effectively as an individual and as a member or leader in diverse teams
and in multidisciplinary settings.
PO10 Communication: Communicate effectively on complex engineering activities with the engineering
community and with society at large, such as, being able to comprehend and write effective reports and
design documentation, make effective presentations, and give and receive clear instructions.
PO11 Project Management and finance: Demonstrate knowledge and understanding of the engineering and
management principles and apply these to one’s own work, as a member and leader in a team, to manage
projects and in multidisciplinary environments.
PO12 Life-long learning: Recognize the need for and have the preparation and ability to engage in independent
and life-long learning in the broadest context of technological change.
LIST OF PSOs
At the end of the 4-year course period, Computer Science and Engineering graduates at NGIT will
be able to:
PSO1 Shall have the ability to find or create opportunities to design and develop appropriate Computer
Science solutions for improving the living standards of the people.
PSO2 Shall have expertise in a few of the trending technologies like Python, Machine Learning, Deep
Learning, Internet of Things (IoT), Data Science, Full Stack Development, Social Networks,
Cyber Security, Big Data, Mobile Apps, CRM, ERP, etc.
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING (AI & ML)
Course Code: PC 651 CSM
Course Title: DEEP LEARNING TECHNIQUES LAB
Core/Elective: CORE
Prerequisite: -
Contact Hours Per Week: L: -   T: -   D: -   P: 2
CIE: 25    SEE: 50    Credits: 1
Course Objectives
1. Understand the concepts of Artificial Neural Networks and Deep Learning.
2. Implement ANN and DL algorithms with TensorFlow and Keras.
3. Gain knowledge on Sequence learning with RNN.
4. Gain knowledge on image processing and analysis with CNN.
5. Get information on advanced concepts of computer vision.
Course Outcomes
After learning the concepts of this course, the student will be able to
1. Develop ANN without using Machine Learning / Deep Learning libraries.
2. Understand the training of ANN models with backpropagation.
3. Develop models for sequence learning using RNN.
4. Develop image classification models using ANN and CNN.
5. Generate a new image with auto-encoder and GAN.
List of Programs
Text Books:
1. Data Science for Beginners- Comprehensive Guide to Most Important Basics in Data Science, Alex Campbell.
2. Artificial Intelligence Technologies, Applications, and Challenges - Lavanya Sharma, Amity University, and Pradeep
Kumar Garg, IIT Roorkee, India.
3. Artificial Intelligence Fundamentals and Applications- Cherry Bhargava and Pardeep Kumar Sharma, CRC Press.
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING (AI & ML)
1. Students are advised to come to the laboratory at least 5 minutes before the starting
time; those who come more than 5 minutes late will not be allowed into the lab.
2. Plan your task properly well before the commencement and come prepared to the
lab with the program / experiment details.
3. Student should enter into the laboratory with:
a. Laboratory observation notes with all the details (Problem statement, Aim,
Algorithm, Procedure, Program, Expected Output, etc.,) filled in for the lab
session.
b. Laboratory Record updated up to the last session experiments.
c. Formal dress code and Identity card.
4. Sign in the laboratory login register, write the TIME-IN, and occupy the computer
system allotted to you by the faculty.
5. Execute your task in the laboratory, and record the results / output in the lab observation
note book, and get certified by the concerned faculty.
6. All the students should be polite and cooperative with the laboratory staff, and must maintain
discipline and decency in the laboratory.
7. Computer labs are established with sophisticated and high-end branded systems, which
should be utilized properly.
8. Students / Faculty must keep their mobile phones in SWITCHED OFF mode during the
lab sessions. Misuse of the equipment or misbehaviour with the staff and systems
will attract severe punishment.
9. Students must take the permission of the faculty in case of any urgency to go out. Anybody
found loitering outside the lab / class without permission during working hours
will be dealt with seriously and punished appropriately.
10. Students should SHUT DOWN the computer system before leaving the lab after
completing the task (experiment) in all aspects, and must ensure the system / seat is
left in proper condition.
• All students must observe the dress code while in the laboratory
• Footwear is NOT allowed
• Food, drinks and smoking are NOT allowed
• All bags must be left at the indicated place
• The lab timetable must be strictly followed
• Be PUNCTUAL for your laboratory session
• All programs must be completed within the given time
• Noise must be kept to a minimum
• Workspace must be kept clean and tidy at all times
• All students are liable for any damage to system due to their own negligence
• Students are strictly PROHIBITED from taking out any items from the laboratory
• Report any damage to equipment immediately to the lab programmer
Lab In – charge
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
LIST OF EXPERIMENTS
In physics, tensors are used to represent physical quantities that have both
magnitude and direction, such as velocity, force, and stress. In machine learning
and deep learning, tensors are used to represent data, such as images, sound,
and text, as well as the weights and biases of neural networks.
TensorFlow:
TensorFlow is an open-source machine learning framework developed by the Google Brain
team. It allows developers to build and train machine learning models using a variety of
techniques, such as neural networks, decision trees, and clustering algorithms.
Keras:
Keras is a high-level neural network API that is written in Python and runs on top of
TensorFlow. Keras is designed to make it easy to build and experiment with deep neural
networks. It provides a simple interface for defining and training neural networks, which
allows developers to quickly prototype and test their ideas.
Proportion
The actual number of individuals in any given category is called the frequency for
that category. A proportion, or relative frequency, represents the percentage of
individuals that falls into each category. The proportion of a given category,
denoted by p, is the frequency divided by the total sample size.
So to calculate the proportion, you
1. Count up all the individuals in the sample who fall into the specified category.
2. Divide by n, the number of individuals in the sample.
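For example, a short NumPy sketch of this calculation (the survey responses below are made-up values for illustration):

import numpy as np

# made-up survey responses for illustration
responses = np.array(["yes", "no", "yes", "yes", "no", "yes", "no", "yes"])

n = responses.size                      # total sample size
frequency = np.sum(responses == "yes")  # individuals in the specified category
p = frequency / n                       # proportion (relative frequency)
print("frequency =", frequency)         # 5
print("proportion =", p)                # 0.625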
import numpy as np
t = np.array([[1, 2, 3], [4, 5, 6]])
This creates a 2x3 tensor with values 1, 2, 3, 4, 5, and 6.
import numpy as np
t = np.array([[1, 2, 3], [4, 5, 6]])

# Element-wise multiplication
t1 = t * 2
print("\n", t1)

# Element-wise addition
t2 = t + 3
print("\n", t2)

# Element-wise exponentiation
t3 = np.power(t, 2)
print("\n", t3)

# Transpose
t4 = np.transpose(t)
print("\n", t4)
Output:
[[ 2 4 6]
[ 8 10 12]]
[[4 5 6]
[7 8 9]]
[[ 1 4 9]
[16 25 36]]
[[1 4]
[2 5]
[3 6]]
Assuming you already have a tensor with numerical values, here are some
basic operations you can perform:
1. Mean: You can calculate the mean of a tensor by summing all of its values
and then dividing by the total number of values. In Python, this can be
done using the mean function in the NumPy library.
2. Standard deviation: The standard deviation of a tensor can be calculated
using the std function in NumPy. This measures the spread of the values
in the tensor.
3. Variance: Variance is another measure of spread, and can be calculated
using the var function in NumPy.
4. Maximum and minimum: You can find the maximum and minimum values
in a tensor using the max and min functions in NumPy.
5. Reshaping: You can reshape a tensor using the reshape function in
NumPy. This allows you to change the dimensions of the tensor while
maintaining the same number of values.
6. Transpose: You can transpose a tensor using the transpose function in
NumPy. This swaps the rows and columns of the tensor.
7. Dot product: You can perform a dot product between two tensors using
the dot function in NumPy. This calculates the sum of the products of the
corresponding elements in each tensor.
import numpy as np
output:
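A minimal NumPy sketch of the operations listed above (the tensor values are illustrative, not the manual's original data):

import numpy as np

t = np.array([[1, 2, 3], [4, 5, 6]])

print("Mean:", np.mean(t))                    # 3.5
print("Standard deviation:", np.std(t))       # ~1.708
print("Variance:", np.var(t))                 # ~2.917
print("Max:", np.max(t), "Min:", np.min(t))   # 6 1
print("Reshaped to 3x2:\n", t.reshape(3, 2))
print("Transpose:\n", np.transpose(t))
print("Dot product:\n", np.dot(t, np.transpose(t)))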
3) Create tensors and apply split and merge operations by taking input
from the user
import numpy as np
output:
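A minimal sketch of one possible approach (the prompt text and the two-way split are assumptions):

import numpy as np

# read numbers from the user and build a 1-D tensor
values = list(map(int, input("Enter numbers separated by spaces: ").split()))
t = np.array(values)
print("Original tensor:", t)

# split the tensor into two (roughly equal) parts
part1, part2 = np.array_split(t, 2)
print("Split parts:", part1, part2)

# merge the parts back into a single tensor
merged = np.concatenate((part1, part2))
print("Merged tensor:", merged)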
Iteration: 0 Cost: 50
Iteration: 1 Cost: 50
Iteration: 2 Cost: 50
Iteration: 3 Cost: 50
Iteration: 4 Cost: 50
Iteration: 5 Cost: 50
Iteration: 6 Cost: 50
Iteration: 7 Cost: 50
Iteration: 8 Cost: 50
Iteration: 9 Cost: 50
Iteration: 10 Cost: 50
Iteration: 11 Cost: 50
Iteration: 12 Cost: 50
Iteration: 13 Cost: 50
Iteration: 14 Cost: 50
Iteration: 15 Cost: 50
Iteration: 16 Cost: 50
Iteration: 17 Cost: 50
Iteration: 18 Cost: 50
Iteration: 19 Cost: 47
Iteration: 20 Cost: 50
Iteration: 21 Cost: 47
Iteration: 22 Cost: 50
Iteration: 23 Cost: 44
Iteration: 24 Cost: 50
Iteration: 25 Cost: 45
Iteration: 26 Cost: 50
Iteration: 27 Cost: 46
Iteration: 28 Cost: 50
Iteration: 29 Cost: 45
Iteration: 30 Cost: 50
Iteration: 31 Cost: 45
Iteration: 32 Cost: 50
Iteration: 33 Cost: 45
Iteration: 34 Cost: 50
Iteration: 35 Cost: 45
Iteration: 36 Cost: 50
Iteration: 37 Cost: 45
Iteration: 38 Cost: 50
Iteration: 39 Cost: 45
Iteration: 40 Cost: 50
Iteration: 41 Cost: 46
Iteration: 42 Cost: 50
Iteration: 43 Cost: 45
Iteration: 44 Cost: 48
Iteration: 45 Cost: 44
Iteration: 46 Cost: 46
Iteration: 47 Cost: 42
Iteration: 48 Cost: 46
Iteration: 49 Cost: 42
Iteration: 50 Cost: 46
Iteration: 51 Cost: 40
Iteration: 52 Cost: 39
Iteration: 53 Cost: 21
Iteration: 54 Cost: 2
Iteration: 55 Cost: 1
Iteration: 56 Cost: 1
Iteration: 57 Cost: 1
Iteration: 58 Cost: 1
Iteration: 59 Cost: 1
Iteration: 60 Cost: 1
Iteration: 61 Cost: 2
Iteration: 62 Cost: 2
Iteration: 63 Cost: 1
Iteration: 64 Cost: 2
Iteration: 65 Cost: 2
Iteration: 66 Cost: 1
Iteration: 67 Cost: 2
Iteration: 68 Cost: 2
Iteration: 69 Cost: 2
Iteration: 70 Cost: 1
Iteration: 71 Cost: 2
Iteration: 72 Cost: 2
Iteration: 73 Cost: 2
Iteration: 74 Cost: 1
Iteration: 75 Cost: 2
Iteration: 76 Cost: 2
Iteration: 77 Cost: 2
Iteration: 78 Cost: 1
Iteration: 79 Cost: 2
Iteration: 80 Cost: 2
Iteration: 81 Cost: 1
Iteration: 82 Cost: 2
Iteration: 83 Cost: 2
Iteration: 84 Cost: 2
Iteration: 85 Cost: 1
Iteration: 86 Cost: 2
Iteration: 87 Cost: 2
Iteration: 88 Cost: 2
Iteration: 89 Cost: 1
Iteration: 90 Cost: 2
Iteration: 91 Cost: 2
Iteration: 92 Cost: 2
Iteration: 93 Cost: 1
Iteration: 94 Cost: 2
Iteration: 95 Cost: 2
Iteration: 96 Cost: 1
Iteration: 97 Cost: 2
Iteration: 98 Cost: 2
Iteration: 99 Cost: 2
Page 26
NGIT DEEP LEARNING TECHNIQUES LAB MANUAL
The key feature of RNNs is the use of a hidden state that is updated at each time
step, based on the current input and the previous hidden state. The hidden state
is then used to produce an output and to update the hidden state at the next time
step. This allows the network to capture temporal dependencies in the input data,
making it well-suited for tasks that involve predicting future values or generating
new sequences of data.
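A minimal NumPy sketch of this hidden-state update (the sizes and random weights are arbitrary illustrative choices):

import numpy as np

input_size, hidden_size = 3, 4
W_xh = np.random.randn(hidden_size, input_size) * 0.1   # input-to-hidden weights
W_hh = np.random.randn(hidden_size, hidden_size) * 0.1  # hidden-to-hidden weights
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # the new hidden state depends on the current input and the previous hidden state
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(hidden_size)
for x_t in np.random.randn(5, input_size):  # a sequence of 5 time steps
    h = rnn_step(x_t, h)
print("final hidden state:", h)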
Page 27
NGIT DEEP LEARNING TECHNIQUES LAB MANUAL
5). Design, train and test an MLP for tabular data and verify various
activation functions and optimizers with TensorFlow
Sol)
In this example, we load the breast cancer dataset and split it into
training and test sets. We define a function create_model that takes an
activation function and optimizer as arguments and returns an MLP with two
hidden layers and dropout regularization. We then define a list of activation
functions and optimizers to try, and loop over them to train and test models
with different combinations of activation functions and optimizers. We print
the test loss and accuracy for each model.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import SGD, Adam, RMSprop
from tensorflow.keras.activations import relu, sigmoid, tanh
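# NOTE: the data-loading and create_model pieces referenced in the description above are
# sketched here; the layer sizes, dropout rate and use of StandardScaler are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# load the breast cancer dataset and split it into training and test sets
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# an MLP with two hidden layers and dropout regularization
def create_model(activation, optimizer):
    model = Sequential([
        Dense(32, activation=activation, input_shape=(X_train.shape[1],)),
        Dropout(0.5),
        Dense(16, activation=activation),
        Dropout(0.5),
        Dense(1, activation='sigmoid')
    ])
    model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])
    return model

# activation functions and optimizers to try
activation_funcs = [relu, sigmoid, tanh]
optimizers = [SGD(), Adam(), RMSprop()]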
# train and test the models with different activation functions and optimizers
for activation_func in activation_funcs:
    for optimizer in optimizers:
        model = create_model(activation_func, optimizer)
        print(f'Training model with activation function {activation_func.__name__} '
              f'and optimizer {optimizer.__class__.__name__}...')
        model.fit(X_train, y_train, epochs=50, batch_size=16, verbose=0)
        loss, accuracy = model.evaluate(X_test, y_test)
        print(f'Test loss: {loss:.3f}, Test accuracy: {accuracy:.3f}\n')
OUTPUT:
Test loss: 0.202, Test accuracy: 0.939
NOTE: https://www.youtube.com/watch?v=iajq0xQZ2cQ
6). Design and implement a simple RNN model with TensorFlow and
check accuracy
Ans)
Implementation of a Recurrent Neural Network (RNN) using TensorFlow.
We'll be using the MNIST dataset for training and testing the model.
Let's get started!
# First, let's import the necessary libraries:
import tensorflow as tf
from tensorflow.keras.datasets import mnist

# Next, let's load the MNIST dataset:
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Now, we need to normalize the input data and convert the labels to one-hot encoding:
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)
# Let's define the hyperparameters:
input_shape = (28, 28)
num_classes = 10
hidden_size = 128
batch_size = 128
epochs = 10
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=input_shape),
    tf.keras.layers.Reshape(target_shape=(input_shape[0], input_shape[1])),
    tf.keras.layers.LSTM(units=hidden_size, activation='tanh'),
    tf.keras.layers.Dense(num_classes, activation='softmax')
])
# Compile and train the model, then evaluate its accuracy on the test data:
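# A minimal sketch of the compile / train / evaluate steps (the optimizer and loss
# are assumptions consistent with the one-hot labels prepared above):
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])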
OUTPUT:
Test loss: 0.049647483974695206
Test accuracy: 0.9848999977111816
The above gives a working RNN model for the MNIST dataset in TensorFlow.
7). Design and implement a simple LSTM model with TensorFlow and
check accuracy
Ans).
We defined here a simple LSTM model with one layer of 32
units and a dense output layer with a sigmoid activation function. We
compile the model using the Adam optimizer and binary cross-entropy
loss. We then generate some random data and train the model for 10
epochs using a batch size of 32. Finally, we evaluate the model on the same
data and print the test loss and accuracy.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
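# The model-building and training code is sketched below following the description
# above; the random-data shapes are illustrative assumptions.
import numpy as np

# a simple LSTM model: one LSTM layer of 32 units and a sigmoid output layer
model = Sequential([
    LSTM(32, input_shape=(10, 8)),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# generate some random sequence data and binary labels
X = np.random.random((1000, 10, 8))
y = np.random.randint(2, size=(1000, 1))

# train for 10 epochs with a batch size of 32, then evaluate on the same data
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
loss, accuracy = model.evaluate(X, y, verbose=0)
print(f'Test loss: {loss}, Test accuracy: {accuracy}')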
Output:
Test loss: 0.6935951709747314, Test accuracy: 0.5
GRUs use a gating mechanism to control the flow of information through the
network, allowing it to selectively remember or forget information from
previous time steps. Specifically, a GRU has two types of gates: an update gate
and a reset gate. The update gate determines how much of the previous hidden
state should be retained, while the reset gate determines how much of the new
input should be combined with the previous hidden state.
In plain notation, the GRU update at time step t can be written as:
z_t  = sigmoid(W_z x_t + U_z h_(t-1))         (update gate)
r_t  = sigmoid(W_r x_t + U_r h_(t-1))         (reset gate)
h~_t = tanh(W_h x_t + U_h (r_t * h_(t-1)))    (candidate state)
h_t  = (1 - z_t) * h_(t-1) + z_t * h~_t       (new hidden state)
where:
x_t is the input at time step t, h_(t-1) is the previous hidden state, W and U are weight
matrices, and * denotes element-wise multiplication.
In summary, the GRU model is a type of RNN that uses gating mechanisms to
selectively retain or forget information from previous time steps, allowing it to
better capture long-term dependencies in sequential data.
8). Design and implement a simple GRU model with TensorFlow and
check accuracy.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GRU, Dense
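# The model-building and training code is sketched below (random-data shapes and
# training settings are assumptions, mirroring the LSTM example in experiment 7).
import numpy as np

# a simple GRU model: one GRU layer of 32 units and a sigmoid output layer
model = Sequential([
    GRU(32, input_shape=(10, 8)),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# random sequence data and binary labels
X = np.random.random((1000, 10, 8))
y = np.random.randint(2, size=(1000, 1))

model.fit(X, y, epochs=10, batch_size=32, verbose=0)
loss, accuracy = model.evaluate(X, y, verbose=0)
print(f'Test loss: {loss}, Test accuracy: {accuracy}')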
output:
Test loss: 0.6837542653083801, Test accuracy: 0.5699999928474426
Sample Solution:-
Python Code:
import numpy as np
l = [12.23, 13.32, 100, 36.32]
print("Original List:",l)
a = np.array(l)
print("One-dimensional NumPy array: ",a)
Sample Output:
Original List: [12.23, 13.32, 100, 36.32]
One-dimensional NumPy array:  [ 12.23  13.32 100.    36.32]
Sample Solution:-
Python Code:
import numpy as np
x = np.arange(2, 11).reshape(3,3)
print(x)
Sample Output:
[[ 2 3 4]
[ 5 6 7]
[ 8 9 10]]
Expected Output:
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Update sixth value to 11
[ 0. 0. 0. 0. 0. 0. 11. 0. 0. 0.]
Sample Solution:-
Python Code:
import numpy as np
x = np.zeros(10)
print(x)
x[6] = 11   # index 6 is the seventh element (zero-based indexing)
print(x)
Sample Output:
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Update sixth value to 11
[ 0. 0. 0. 0. 0. 0. 11. 0. 0. 0.]
Sample Solution:-
Python Code:
import numpy as np
print("Add:")
print(np.add(1.0, 4.0))
print("Subtract:")
print(np.subtract(1.0, 4.0))
print("Multiply:")
print(np.multiply(1.0, 4.0))
print("Divide:")
print(np.divide(1.0, 4.0))
Sample Output:
Add:
5.0
Subtract:
-3.0
Multiply:
4.0
Divide:
0.25
Sample Output:
original matrix:
[[1, 0], [0, 1]]
[[1, 2], [3, 4]]
Result of the said matrix multiplication:
[[1 2]
[3 4]]
Sample Solution :
import numpy as np
p = [[1, 0], [0, 1]]
q = [[1, 2], [3, 4]]
print("original matrix:")
print(p)
print(q)
result1 = np.dot(p, q)
print("Result of the said matrix multiplication:")
print(result1)
Sample Solution:-
import numpy as np
x = np.arange(12, 38)
print("Original array:")
print(x)
print("Reverse array:")
x = x[::-1]
print(x)
Sample Output:
Original array:
[12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
 36 37]
Reverse array:
[37 36 35 34 33 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14
 13 12]
Sample Output:
Original array
[1, 2, 3, 4]
Array converted to a float type:
[ 1. 2. 3. 4.]
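The solution code for this exercise is not shown above; one minimal sketch (the conversion call is a standard NumPy approach, not necessarily the manual's original):

import numpy as np

a = [1, 2, 3, 4]
print("Original array")
print(a)
x = np.array(a, dtype=float)   # convert the integer list to a float array
print("Array converted to a float type:")
print(x)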
Expected Output:
Size of the array: 3
Length of one array element in bytes: 8
Total bytes consumed by the elements of the array: 24
Sample Solution:-
NumPy Code:
import numpy as np
x = np.array([1,2,3], dtype=np.float64)
print("Size of the array: ", x.size)
print("Length of one array element in bytes: ",
x.itemsize)
print("Total bytes consumed by the elements of the array:
",
x.nbytes)
Expected Output:
(6,)
[[1 2 3]
[4 5 6]
[7 8 9]]
[[1 2 3]
[4 5 6]
[7 8 9]]
Sample Solution:-
NumPy Code:
import numpy as np
x = np.array([1, 2, 3, 4, 5, 6])
print(x.shape)
y = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(y)
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
x.shape = (3, 3)
print(x)
Sample Output:
(6,)
[[1 2 3]
[4 5 6]
[7 8 9]]
[[1 2 3]
[4 5 6]
[7 8 9]]
case = 1
output =
8
[ 760 870 1290 1290 1450 2510 2900 2900]
[2900 2900]
import numpy as np
L2=[2900,2900,2510,1450,1290,1290,870,760]
Larr1=np.array(L2)
print(len(L2))
b=Larr1[::-1]
print(b)
b=Larr1[0:2]
print(b)
12 :
Create a numpy array with the dimension 4X4X4 using arange()
Create a View with the second row of each element of the 0th dimension
Display the View and display the shape of the array

case = 1
output = [[[ 4 5 6 7]]

 [[20 21 22 23]]

 [[36 37 38 39]]

 [[52 53 54 55]]]
(4, 4, 4)
import numpy as np
arr = np.arange(64).reshape(4,4,4)
a=arr[:,1:2,:]
print(a)
print(arr.shape)
13 :
38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54]
15
54
import numpy as np
v = np.arange(15,55)
print(v[1:-1])
print(np.insert(v,11,55))
print(np.min(v))
print(np.max(v))
11 :
# read the problem statement carefully and write a python program
import numpy as np
m=4   # dont edit this
n=201 # dont edit this

Create a numpy array with the following:
1. Elements from 'm' to 'n' ------- m & n are already defined
2. Data type is float (m=4, n=201)
3. Print only even numbers

case = 1
output = [  4.   6.   8.  10.  12.  14.  16.  18.  20.  22.  24.  26.  28.  30.
  32.  34.  36.  38.  40.  42.  44.  46.  48.  50.  52.  54.  56.  58.
  60.  62.  64.  66.  68.  70.  72.  74.  76.  78.  80.  82.  84.  86.
  88.  90.  92.  94.  96.  98. 100. 102. 104. 106. 108. 110. 112. 114.
 116. 118. 120. 122. 124. 126. 128. 130. 132. 134. 136. 138. 140. 142.
 144. 146. 148. 150. 152. 154. 156. 158. 160. 162. 164. 166. 168. 170.
 172. 174. 176. 178. 180. 182. 184. 186. 188. 190. 192. 194. 196. 198.
 200.]
import numpy as np
m=4
n=201
a=np.arange(m,n,dtype=float)
even=a[a%2==0]
print(even)
12 :
# read the problem statement carefully and write a python program
1. Create a 1 Dimension array A
2. Reshape A with 4 dimensions: 1D = 2 Elements, 2D = 3 Elements, 3D = 4 Elements, 4D = 5 Elements. Data type is int32
3. Display A

case = 1
output = [[[[  0   1   2   3   4]
   [  5   6   7   8   9]
   [ 10  11  12  13  14]
   [ 15  16  17  18  19]]

  [[ 20  21  22  23  24]
   [ 25  26  27  28  29]
   [ 30  31  32  33  34]
   [ 35  36  37  38  39]]

  [[ 40  41  42  43  44]
   [ 45  46  47  48  49]
   [ 50  51  52  53  54]
   [ 55  56  57  58  59]]]


 [[[ 60  61  62  63  64]
   [ 65  66  67  68  69]
   [ 70  71  72  73  74]
   [ 75  76  77  78  79]]

  [[ 80  81  82  83  84]
   [ 85  86  87  88  89]
   [ 90  91  92  93  94]
   [ 95  96  97  98  99]]

  [[100 101 102 103 104]
   [105 106 107 108 109]
   [110 111 112 113 114]
   [115 116 117 118 119]]]]
import numpy as np
A=np.arange(120,dtype=int)
A=A.reshape(2,3,4,5)
print(A)
13 :
# read the problem statement carefully and write a python program
A and B are 2 pre-defined lists. Consider the given lists and create numpy arrays a, b.
Apply the arithmetic operators +, -, *, /, % on them and display the output as given in the sample output.
A=[4,3,6,8,2,9,1,45,34,87,22,98,34,62,71,23,67,37,82,45,11,23,37,47,98]
B=[5,8,67,43,22,54,33,12,36,73,89,32,12,67,44,87,33,65,22,89,22,39,22,44,33]
SAMPLE OUTPUT:
Array a = [ 4 3 6 …]
Array b = [ 5 8 67 43 …]
a + b = [ 9 11 73 …]
a - b = [ -1 -5 -61 …]
a * b = [ 20 24 402 …]
a / b = [0.8 0.375 0.08955224 0.18604651 …]
a % b = [ 4 3 6 …]

case = 1
output = Array a = [ 4 3 6 8 2 9 1 45 34 87 22 98 34 62 71 23 67 37 82 45 11 23 37 47 98]
Array b = [ 5 8 67 43 22 54 33 12 36 73 89 32 12 67 44 87 33 65 22 89 22 39 22 44 33]
a + b = [ 9 11 73 51 24 63 34 57 70 160 111 130 46 129 115 110 100 102 104 134 33 62 59 91 131]
a - b = [ -1 -5 -61 -35 -20 -45 -32 33 -2 14 -67 66 22 -5 27 -64 34 -28 60 -44 -11 -16 15 3 65]
a * b = [ 20 24 402 344 44 486 33 540 1224 6351 1958 3136 408 4154 3124 2001 2211 2405 1804 4005 242 897 814 2068 3234]
a / b = [0.8 0.375 0.08955224 0.18604651 0.09090909 0.16666667 0.03030303 3.75 0.94444444 1.19178082 0.24719101 3.0625 2.83333333 0.92537313 1.61363636 0.26436782 2.03030303 0.56923077 3.72727273 0.50561798 0.5 0.58974359 1.68181818 1.06818182 2.96969697]
a % b = [ 4 3 6 8 2 9 1 9 34 14 22 2 10 62 27 23 1 37 16 45 11 23 15 3 32]
import numpy as np
A=[4,3,6,8,2,9,1,45,34,87,22,98,34,62,71,23,67,37,82,45,11,23,37,47,98]
B=[5,8,67,43,22,54,33,12,36,73,89,32,12,67,44,87,33,65,22,89,22,39,22,44,33]
a=np.array(A,dtype=int)
b=np.array(B,dtype=int)
print("Array a = {0}".format(a))
print("Array b = {0}".format(b))
c=a+b
d=a-b
e=a*b
f=a/b
g=a%b
print("a + b = {0}".format(c))
print("a - b = {0}".format(d))
print("a * b = {0}".format(e))
print("a / b = {0}".format(f))
print("a % b = {0}".format(g))
14 :
# read the problem statement carefully and write a python program
1. Create a numpy array and convert it into 3X3X3 dimension
2. Create a View "v1" with the elements present in the second row of each element of the 0th dimension
3. Create a View "v2" with the elements present in the second col of each element of the 0th dimension
4. Add v1 and v2 and store it in v3
5. Display v1, v2, v3

case = 1
output = [[[ 3 4 5]]

 [[12 13 14]]

 [[21 22 23]]]
[[[ 1]
  [ 4]
  [ 7]]

 [[10]
  [13]
  [16]]

 [[19]
  [22]
  [25]]]
[[[ 4 5 6]
  [ 7 8 9]
  [10 11 12]]

 [[22 23 24]
  [25 26 27]
  [28 29 30]]

 [[40 41 42]
  [43 44 45]
  [46 47 48]]]
import numpy as np
a = np.arange(27).reshape(3,3,3)
b = a[:,1:2,:]
c = a[:,:,1:2]
print(b)
print(c)
print(b+c)
14 :
# read the problem statement carefully and write a python program
The table below provides the population of daily order volumes for a recent week.
Calculate the mean, median, variance, and standard deviation of this population and display them as expected.

Day   Order Volume
1     16
2     10
3     15
4     12
5     11

EXPECTED OUTPUT:
mean = xx
median = yy
variance = zz
standard deviation = ss

case = 1
output = mean = 12.8
median = 12.0
variance = 5.359999999999999
standard deviation = 2.315167380558045
import numpy as np
ordervolume=[16,10,15,12,11]
arr=np.array(ordervolume)
print("mean = ",np.mean(arr))
print("median = ",np.median(arr))
print("variance = ",np.var(arr))
print("standard deviation = ",np.std(arr))
15 :
# read the problem statement carefully and write the python program
A well-known manufacturer of sugarless food products has invested a great deal of time and money in developing
the formula for a new kind of sweetener. Although costly to develop, this sweetener is significantly less
expensive to produce than the sweeteners the manufacturer had been using. The manufacturer would like to know
if the new sweetener is as good as the traditional product. The manufacturer knows that when consumers are asked
to indicate their level of satisfaction with the traditional sweeteners, they respond that on average their level
of satisfaction is 5.5. The manufacturer conducts market research to determine the level of acceptance of this
new product. Consumer taste acceptance data are collected from 25 consumers of sugarless products.
The data collected can be seen in the LIST below.
list_sati=[5,6,7,5,6,5,7,4,5,5,6,6,7,5,5,7,5,6,6,7,7,7,6,5,7]   (note: all values in the list are float values)
Display the following as expected:
- Average satisfaction of all consumers
- The value in the middle of the satisfaction values
- The average of total squared differences of all elements from the mean
- How far the elements are from the mean

expected output:
Mean = 5.84
Median = 6.0
Variance = 0.7744
Standard deviation = 0.88

case = 1
output = Mean = 5.88
Median = 6.0
Variance = 0.8256
Standard deviation = 0.9086253353280438
import numpy as np
list_sati=[5,6,7,5,6,5,7,4,5,5,6,6,7,5,5,7,5,6,6,7,7,7,6,5,7]
list_satisifaction=np.array(list_sati,dtype=float)
print("Mean = ",np.mean(list_satisifaction))
print("Median = ",np.median(list_satisifaction))
print("Variance = ",np.var(list_satisifaction))
print("Standard deviation = ",np.std(list_satisifaction)
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator
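# The hyperparameters, directory paths and data-generator definitions are sketched
# below; the image size, batch size, class count and paths are assumptions.
image_height, image_width = 128, 128
batch_size = 32
num_epochs = 10
num_classes = 4  # assumed number of image categories

train_data_dir = 'path/to/train/directory'
test_data_dir = 'path/to/test/directory'

# rescale pixel values and hold out 20% of the training images for validation
train_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)
test_datagen = ImageDataGenerator(rescale=1./255)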
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(image_height, image_width),
batch_size=batch_size,
class_mode='categorical',
subset='training'
)
validation_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(image_height, image_width),
batch_size=batch_size,
class_mode='categorical',
subset='validation'
)
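# A simple CNN, its compilation and the training call are sketched below; the exact
# architecture is an assumption, not necessarily the manual's original model.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu',
                  input_shape=(image_height, image_width, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(train_generator, validation_data=validation_generator, epochs=num_epochs)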
test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(image_height, image_width),
batch_size=batch_size,
class_mode='categorical'
)
accuracy = model.evaluate(test_generator)
print("Test accuracy:", accuracy[1])
predictions = model.predict(new_image)
predicted_label = tf.argmax(predictions, axis=1)[0]
print("Predicted label:", predicted_label)
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
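# Hyperparameters, paths and the training data generator are sketched below; the
# values and paths are assumptions.
image_height, image_width = 128, 128
batch_size = 32
num_epochs = 10
num_classes = 4  # assumed number of image categories

train_data_dir = 'path/to/train/directory'
train_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)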
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(image_height, image_width),
batch_size=batch_size,
class_mode='categorical',
subset='training'
)
validation_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(image_height, image_width),
batch_size=batch_size,
class_mode='categorical',
subset='validation'
)
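# Build the CNN with a dropout layer (rate 0.5) and an L2-regularized dense layer,
# as described in the notes after this listing; the remaining layer sizes are assumptions.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu',
                  input_shape=(image_height, image_width, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu',
                 kernel_regularizer=regularizers.l2(0.001)),
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])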
# Train the model
history = model.fit(
train_generator,
validation_data=validation_generator,
epochs=num_epochs
)
# Evaluate the model
test_data_dir = 'path/to/test/directory'
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(image_height, image_width),
batch_size=batch_size,
class_mode='categorical'
)
accuracy = model.evaluate(test_generator)
print("Test accuracy:", accuracy[1])
# Check for overfitting or underfitting
import matplotlib.pyplot as plt
train_loss = history.history['loss']
val_loss = history.history['val_loss']
train_acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
epochs_range = range(num_epochs)
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.show()
Note that TIFF images are not natively supported by the Keras ImageDataGenerator. You would
need to preprocess your TIFF images and convert them to a compatible format (e.g., JPEG) before
using this code.
This code includes a dropout layer with a dropout rate of 0.5 and a dense layer with L2
regularization to help prevent overfitting. The model is trained using the fit() function, and the
evaluation is performed using the evaluate() function.
After training, the code plots the training and validation loss as well as the training and validation
accuracy over the epochs to help you analyze whether the model is overfitting, underfitting, or
achieving a good fit.
Please ensure that you have TensorFlow and Keras installed in your Python environment before
running this code.
10. Implement a CNN architecture (LeNet, AlexNet, VGG, etc.) model to classify multi-
category satellite images with TensorFlow / Keras and check the accuracy. Check
whether your model is overfit / underfit / perfect fit and apply techniques to
avoid overfitting and underfitting.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import EarlyStopping
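# Hyperparameters, paths, data generators and a VGG-style model builder are sketched
# below; all sizes, paths and architecture details are assumptions, and LeNet- or
# AlexNet-style builders could be written the same way.
image_height, image_width = 128, 128
batch_size = 32
num_classes = 4  # assumed number of satellite-image categories

train_data_dir = 'path/to/train/directory'
test_data_dir = 'path/to/test/directory'

train_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)

# an augmented generator to help reduce overfitting
train_datagen_augmented = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2,
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True
)

test_datagen = ImageDataGenerator(rescale=1./255)

def build_vgg_model():
    # a small VGG-style stack of 3x3 convolution blocks
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', padding='same',
                      input_shape=(image_height, image_width, 3)),
        layers.Conv2D(32, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(256, activation='relu'),
        layers.Dense(num_classes, activation='softmax')
    ])
    model.compile(optimizer=optimizers.Adam(),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model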
# Select the model architecture
model = build_vgg_model()  # change the function name to choose a different architecture
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(image_height, image_width),
batch_size=batch_size,
class_mode='categorical',
subset='training'
)
validation_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(image_height, image_width),
batch_size=batch_size,
class_mode='categorical',
subset='validation'
)
train_generator_augmented = train_datagen_augmented.flow_from_directory(
train_data_dir,
target_size=(image_height, image_width),
batch_size=batch_size,
class_mode='categorical',
subset='training'
)
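# Train with early stopping to guard against overfitting; the patience and epoch
# count below are assumptions.
early_stopping = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
history = model.fit(
    train_generator_augmented,
    validation_data=validation_generator,
    epochs=20,
    callbacks=[early_stopping]
)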
test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(image_height, image_width),
batch_size=batch_size,
class_mode='categorical'
)
accuracy = model.evaluate(test_generator)
print("Test accuracy:", accuracy[1])
train_loss = history.history['loss']
val_loss = history.history['val_loss']
train_acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
epochs_range = range(len(train_loss))
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.show()
Autoencoders are a type of neural network that can learn to reconstruct input data by
encoding it into a lower-dimensional representation and then decoding it back to the
original shape. Denoising autoencoders are specifically designed to remove noise from
the input data. Let's go through the steps:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
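# Data preparation is sketched below: load MNIST, scale to [0, 1], add Gaussian noise,
# and define the input layer. The noise factor of 0.5 is an assumption.
import numpy as np

(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = np.expand_dims(x_train.astype('float32') / 255.0, -1)   # shape (60000, 28, 28, 1)
x_test = np.expand_dims(x_test.astype('float32') / 255.0, -1)

noise_factor = 0.5
x_train_noisy = np.clip(x_train + noise_factor * np.random.normal(size=x_train.shape), 0.0, 1.0)
x_test_noisy = np.clip(x_test + noise_factor * np.random.normal(size=x_test.shape), 0.0, 1.0)

input_img = keras.Input(shape=(28, 28, 1))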
# Encoder
x = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
# Decoder
x = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = layers.UpSampling2D((2, 2))(x)
decoded = layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
We use the Adam optimizer and binary cross-entropy loss since we are
treating the problem as a pixel-wise binary classification task.
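The model construction, training, prediction and plot set-up are sketched below (the epoch count and batch size are assumptions):

import matplotlib.pyplot as plt

autoencoder = keras.Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# train the denoising autoencoder: noisy images in, clean images as targets
autoencoder.fit(x_train_noisy, x_train,
                epochs=10, batch_size=128, shuffle=True,
                validation_data=(x_test_noisy, x_test))

# reconstruct the noisy test images and set up the comparison plot
decoded_imgs = autoencoder.predict(x_test_noisy)

n = 10  # number of test images to display
plt.figure(figsize=(20, 4))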
for i in range(n):
    # Original images
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(tf.squeeze(x_test_noisy[i]))
    plt.title("Original + Noise")
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # Decoded images
    ax = plt.subplot(2, n, i + n + 1)
    plt.imshow(tf.squeeze(decoded_imgs[i]))
    plt.title("Denoised")
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
This code will display a comparison between the original images with added
noise and the denoised images generated by the autoencoder.