LP4 Lab Manual
CLASS: B.E. SEMESTER: I
SUBJECT: 414447: Lab Practice IV
Assignment No.1
Title: Study of Deep Learning Packages: TensorFlow, Keras, Theano and PyTorch.
Document the distinct features and functionality of the packages.
Steps/ Algorithm
Installation of TensorFlow on Ubuntu:
1. Install the Python Development Environment:
You need to download Python, the PIP package, and a virtual environment. If these packages are
already installed, you can skip this step.
You can download and install what is needed by visiting the following links:
https://www.python.org/
https://pip.pypa.io/en/stable/installing/
https://docs.python.org/3/library/venv.html
To install these packages, run the following commands in the terminal:
sudo apt update
sudo apt install python3-dev python3-pip python3-venv
2. Create a Virtual Environment
Navigate to the directory where you want to store your Python 3 virtual environment. It can be
in your home directory, or any other directory where your user has read and write permissions.
mkdir tensorflow_files
cd tensorflow_files
Now, you are inside the directory. Run the following command to create a virtual environment:
python3 -m venv virtualenv
The command above creates a directory named virtualenv. It contains a copy of the Python
binary, the PIP package manager, the standard Python library, and other supporting files.
3. Activate the Virtual Environment
source virtualenv/bin/activate
Once the environment is activated, the virtual environment's bin directory will be added to the
beginning of the $PATH variable. Your shell's prompt will change to show the name of the
virtual environment you are currently using, i.e. virtualenv.
4. Update PIP
pip install --upgrade pip
5. Install TensorFlow
The virtual environment is activated, and it’s up and running. Now, it’s time to install the
TensorFlow package.
pip install --upgrade tensorflow
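A quick way to confirm the install worked, from inside the activated environment, is to import the package and print its version (the exact version number will vary):
python3 -c "import tensorflow as tf; print(tf.__version__)"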
Installation of Keras on Ubuntu:
Prerequisite: Python version 3.5 or above.
Verify the installation was successful by checking the software package information:
pip3 show tensorflow
STEP 4: Install Keras
pip3 install keras
Verify the installation by displaying the package information:
pip3 show keras
Installation of PyTorch on Ubuntu:
First, check whether you are using the latest version of Python, because PyTorch requires
Python 3.7 or a higher version:
python3 --version
pip3 --version
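The pip install command itself is not listed above; a typical CPU-only install (check the referenced guide or pytorch.org for the exact command for your platform and CUDA version) is:
pip3 install torch torchvision torchaudio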
[Ref: https://www.geeksforgeeks.org/install-pytorch-on-linux/]
1. TensorFlow, Keras
numpy: NumPy is a Python library used for working with arrays. It also has functions for
working in the domains of linear algebra, Fourier transforms, and matrices. NumPy stands for
Numerical Python. To import NumPy use
import numpy as np
pandas: pandas is a fast, powerful, flexible and easy to use open source data analysis and
manipulation tool, built on top of the Python programming language. To import pandas use
import pandas as pd
sklearn: Scikit-learn (Sklearn) is the most useful and robust library for machine learning in
Python. It provides a selection of efficient tools for machine learning and statistical modeling,
including classification, regression, clustering and dimensionality reduction, via a consistent
interface in Python. This library, which is largely written in Python, is built upon NumPy,
SciPy and Matplotlib. For importing train_test_split use
from sklearn.model_selection import train_test_split
2. For Theano, the requirements are:
•Python3
•Python3-pip
•NumPy
•SciPy
•BLAS
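With these prerequisites in place, Theano itself can be installed with pip (inside the virtual environment, if you created one):
pip install theano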
#
# Load MNIST data
#
from tensorflow.keras import datasets
(train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data()
#
# Check the dataset loaded
#
train_images.shape, test_images.shape
3. Theano test program
# Python program showing
# addition of two scalars
import theano.tensor as T
from theano import function

# Declare two symbolic scalars and define their sum
x = T.dscalar('x')
y = T.dscalar('y')
z = x + y

# Compile into a callable function and evaluate it
f = function([x, y], z)
f(5, 7)
4. Test program for PyTorch
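The program listing is not included here; a minimal sketch that exercises PyTorch (tensor creation and addition) would be:
# Minimal PyTorch check: build two random tensors and add them
import torch
x = torch.rand(2, 3)
y = torch.rand(2, 3)
print(torch.add(x, y))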
Output of Code:
Note: Run the code and attach your output here.
Conclusion:
TensorFlow, PyTorch, Keras and Theano are all installed and ready for deep learning
applications. As per the application domain and dataset, we can choose the appropriate
package and build the required type of neural network.
Assignment No.2
Aim: Implementing feedforward neural networks with Keras and TensorFlow on the MNIST or CIFAR-10 dataset.
Steps/ Algorithm
1. Dataset link and libraries:
Dataset: MNIST or CIFAR-10, from kaggle.com
You can download the dataset from the above-mentioned website.
Libraries required:
Pandas and NumPy for data manipulation
TensorFlow/Keras for neural networks
Scikit-learn library for splitting the data into train-test samples, and for some basic model
evaluation
https://pyimagesearch.com/2021/05/06/implementing-feedforward-neural-networks-with-keras-and-tensorflow/
a) Import the following libraries from sklearn: i) LabelBinarizer (sklearn.preprocessing)
ii) classification_report (sklearn.metrics).
b) Import the following from tensorflow.keras: models, layers, optimizers, datasets,
backend, and set them to their respective values (a sketch is given below).
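A minimal sketch of these imports, assuming the TensorFlow 2.x Keras API:
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import classification_report
from tensorflow.keras import models, layers, optimizers, datasets, backend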
Sample Code with comments and Output: Attach printout with output.
Assignment No.3
Aim: Build the image classification model by dividing the model into the following 4 stages:
a. Loading and pre-processing the image data
b. Defining the model's architecture
c. Training the model
d. Estimating the model's performance
Steps/ Algorithm
1. Choose a dataset of your interest, or you can also create your own image dataset
(Ref: https://www.kaggle.com/datasets/). Import all necessary files.
(Ref: https://www.analyticsvidhya.com/blog/2021/01/image-classification-using-convolutional-neural-networks-a-step-by-step-guide/)
Libraries and functions required:
1. TensorFlow, Keras
numpy: NumPy is a Python library used for working with arrays. It also has functions for
working in the domains of linear algebra, Fourier transforms, and matrices. NumPy stands for
Numerical Python. To import NumPy use
import numpy as np
pandas: pandas is a fast, powerful, flexible and easy to use open source data analysis and
manipulation tool, built on top of the Python programming language. To import pandas use
import pandas as pd
sklearn: Scikit-learn (Sklearn) is the most useful and robust library for machine learning in
Python. It provides a selection of efficient tools for machine learning and statistical modeling,
including classification, regression, clustering and dimensionality reduction, via a consistent
interface in Python. This library, which is largely written in Python, is built upon NumPy, SciPy
and Matplotlib. For importing train_test_split use
from sklearn.model_selection import train_test_split
2. Prepare Dataset for Training: // Preparing our dataset for training will involve assigning paths,
creating categories (labels), and resizing our images.
3. Create the Training Data: // the training data is an array that will contain each image's pixel
values and the index at which the image's category sits in the CATEGORIES list.
4. Shuffle the Dataset
5. Assigning Labels and Features
6. Normalising X and converting labels to categorical data
7. Split X and Y for use in CNN
8. Define, compile and train the CNN Model
9. Accuracy and Score of model (a sketch of steps 6-9 is given below).
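A minimal sketch of steps 6-9, assuming the images and labels from steps 2-5 are already in arrays X and y; the stand-in random data, image size and num_classes below are illustrative placeholders only:
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, models, utils

# Stand-in data: replace with the real image array and labels from steps 2-5
num_classes = 3
X = np.random.randint(0, 256, (120, 64, 64, 1)).astype('float32')  # 120 images, 64x64, 1 channel
y = np.random.randint(0, num_classes, 120)

X = X / 255.0                                    # step 6: normalise pixel values
y = utils.to_categorical(y, num_classes)         # convert labels to categorical
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)  # step 7

model = models.Sequential([                      # step 8: define the CNN
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=X.shape[1:]),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5, validation_data=(X_test, y_test))
loss, acc = model.evaluate(X_test, y_test)       # step 9: accuracy and score of model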
Sample Code with comments and Output: Attach printout with output.
Conclusion:
As per the evaluation of the model, write about the accuracy and other evaluation parameters
in line with your output.
Assignment No.4
Aim: Use Autoencoder to implement anomaly detection. Build the model by using:
a. Import required libraries
b. Upload / access the dataset
c. Encoder converts it into latent representation
d. Decoder networks convert it back to the original input
e. Compile the models with Optimizer, Loss, and Evaluation Metrics
Theory:
1) What is anomaly detection?
2) What are Autoencoders in deep learning?
3) Enlist different applications of Autoencoders in DL.
4) Enlist different types of anomaly detection algorithms.
5) What is the difference between anomaly detection and novelty detection?
6) Explain the different blocks and the working of Autoencoders.
7) What are reconstruction and reconstruction errors?
8) What is MinMaxScaler from sklearn?
9) Explain train_test_split from sklearn.
10) What are anomaly scores?
11) Explain the TensorFlow dataset.
12) Describe the ECG dataset.
13) Explain Keras optimizers.
14) Explain the Keras Dense and Dropout layers.
15) Explain Keras losses and MeanSquaredLogarithmicError.
16) Explain the ReLU activation function.
Steps/ Algorithm
1. Dataset link and libraries:
Dataset: http://storage.googleapis.com/download.tensorflow.org/data/ecg.csv
Libraries required:
Pandas and NumPy for data manipulation
TensorFlow/Keras for neural networks
Scikit-learn library for splitting the data into train-test samples, and for some basic model
evaluation
For model building and evaluation, the following imports:
from sklearn.metrics import accuracy_score
from tensorflow.keras.optimizers import Adam
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.losses import MeanSquaredLogarithmicError
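A sketch of loading and scaling the data, assuming (as in the referenced walkthrough) that the CSV has no header row and that its last column is the label:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Load the ECG dataset directly from the URL above
df = pd.read_csv('http://storage.googleapis.com/download.tensorflow.org/data/ecg.csv', header=None)
features = df.drop(df.columns[-1], axis=1)   # every column except the label
labels = df[df.columns[-1]]
x_train, x_test, y_train, y_test = train_test_split(features, labels, test_size=0.2, random_state=42)

scaler = MinMaxScaler()                      # scale features to [0, 1]
x_train_scaled = scaler.fit_transform(x_train)
x_test_scaled = scaler.transform(x_test)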
Ref: https://www.analyticsvidhya.com/blog/2021/05/anomaly-detection-using-autoencoders-a-walk-through-in-python/
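A minimal model sketch along the lines of the referenced walkthrough, reusing x_train_scaled from the data-preparation sketch above; the layer sizes and latent dimension are illustrative:
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.losses import MeanSquaredLogarithmicError
from tensorflow.keras.optimizers import Adam

class AutoEncoder(Model):
    def __init__(self, output_units, latent_dim=8):
        super().__init__()
        # Encoder compresses the input down to the latent representation
        self.encoder = Sequential([
            Dense(64, activation='relu'),
            Dropout(0.1),
            Dense(32, activation='relu'),
            Dense(latent_dim, activation='relu'),
        ])
        # Decoder expands the latent representation back to the input size
        self.decoder = Sequential([
            Dense(32, activation='relu'),
            Dense(64, activation='relu'),
            Dense(output_units, activation='sigmoid'),
        ])

    def call(self, inputs):
        return self.decoder(self.encoder(inputs))

model = AutoEncoder(output_units=x_train_scaled.shape[1])
model.compile(loss=MeanSquaredLogarithmicError(), optimizer=Adam(), metrics=['mse'])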
history = model.fit(
    x_train_scaled,
    x_train_scaled,
    epochs=20,
    batch_size=512,
    validation_data=(x_test_scaled, x_test_scaled)
)
Assignment No.5
Aim: Implement the Continuous Bag of Words (CBOW) Model. Stages can be:
a. Data preparation
b. Generate training data
c. Train model
d. Output
Theory:
1) What is NLP?
2) What is word embedding in relation to NLP?
3) Explain Word2Vec techniques.
4) Enlist applications of word embedding in NLP.
5) Explain the CBOW architecture.
6) What will be the input and the output of the CBOW model?
7) What is a Tokenizer?
8) Explain the window size parameter in detail for the CBOW model.
9) Explain the Embedding and Lambda layers from Keras.
10) What is yield()?
Steps/ Algorithm
1. Dataset link and libraries:
Create any English paragraph of 5 to 10 sentences as input.
Import the following from Keras:
from keras.models import Sequential
from keras.layers import Dense, Embedding, Lambda
from keras.utils import np_utils
from keras.preprocessing import sequence
from keras.preprocessing.text import Tokenizer
Import Gensim for NLP operations. Requirements:
Gensim runs on Linux, Windows and Mac OS X, and should run on any other platform that
supports Python 3.6+ and NumPy. Gensim depends on the following software: Python, tested
with versions 3.6, 3.7 and 3.8, and NumPy for number crunching.
Ref: https://analyticsindiamag.com/the-continuous-bag-of-words-cbow-model-in-nlp-hands-on-implementation-with-codes/
a) Import the libraries gensim and numpy, and load the dataset, i.e. the text file created. It
should be preprocessed.
b) Tokenize every word from the paragraph. You can call the built-in tokenizer present in
Gensim (a sketch using the Keras Tokenizer imported above is given after step c).
c) Fit the data to the tokenizer.
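A minimal sketch of steps b) and c), assuming the Keras Tokenizer imported above; the sample paragraph is illustrative only:
from keras.preprocessing.text import Tokenizer

# Illustrative paragraph; replace it with your own 5-10 sentence text
paragraph = ("Deep learning models learn representations of data. "
             "Word embeddings map words to dense vectors. "
             "The CBOW model predicts a word from its surrounding context.")

tokenizer = Tokenizer()
tokenizer.fit_on_texts([paragraph])      # fit the data to the tokenizer
word2id = tokenizer.word_index           # word -> integer id
vocab_size = len(word2id) + 1            # ids start at 1, so add 1
sequences = tokenizer.texts_to_sequences([paragraph])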
e.g. cbow_output = gensim.models.KeyedVectors.load_word2vec_format('/content/gdrive/My Drive/vectors.txt', binary=False)
j) Choose a word to get similar types of words:
cbow_output.most_similar(positive=['Your word'])
Conclusion: Explain how a neural network is useful for CBOW text analysis.
Assignment No.6
Aim: Object detection using transfer learning of CNN architectures.
Steps/ Algorithm
1. Dataset link and libraries:
https://data.caltech.edu/records/mzrjq-6wc02
Separate the data into training, validation, and testing sets with a 50%, 25%, 25% split, and
then structure the directories as follows:
/datadir
/train
/class1
/class2
.
.
/valid
/class1
/class2
.
.
/test
/class1
/class2
.
Libraries required:
PyTorch, with the following imports:
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
from torchvision import models
import torch.nn as nn
from torch import optim
Ref: https://towardsdatascience.com/transfer-learning-with-convolutional-neural-networks-in-pytorch-dd09190245ce
m) Prepare the dataset by splitting it into three directories (train, validation and test) with a 50/25/25 split.
n) Pre-process the data with transforms from PyTorch.
Training dataset transformation as follows (stored here as train_transforms for later use):
train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(size=256, scale=(0.8, 1.0)),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(),
    transforms.RandomHorizontalFlip(),
    transforms.CenterCrop(size=224),  # ImageNet standards
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],
                         [0.229, 0.224, 0.225])  # ImageNet standards
])
Validation dataset transform as follows (stored as valid_transforms):
valid_transforms = transforms.Compose([
    transforms.Resize(size=256),
    transforms.CenterCrop(size=224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
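A minimal transfer-learning sketch in the style of the referenced article, using the directory layout and the train_transforms/valid_transforms objects defined above; the batch size and the choice of VGG-16 are illustrative:
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, models

# Datasets and loaders built from the /datadir structure above
train_data = datasets.ImageFolder('/datadir/train', transform=train_transforms)
valid_data = datasets.ImageFolder('/datadir/valid', transform=valid_transforms)
train_loader = DataLoader(train_data, batch_size=128, shuffle=True)
valid_loader = DataLoader(valid_data, batch_size=128)

model = models.vgg16(pretrained=True)    # CNN pretrained on ImageNet
for param in model.parameters():
    param.requires_grad = False          # freeze the pretrained weights

# Replace only the final classifier layer, sized to our number of classes
n_classes = len(train_data.classes)
n_inputs = model.classifier[6].in_features
model.classifier[6] = nn.Linear(n_inputs, n_classes)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.classifier[6].parameters())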
Conclusion: Explain how transfer learning increases the accuracy of object detection.