DL Practical

Practical No 01

Aim – Implement Perceptron Algorithm to Simulate Any Logic Gate

Program –
import pandas as pd
import numpy as np

# Step activation: the neuron fires (outputs 1) when the weighted sum reaches the threshold of 1
def threshold(x):
    return 1 if x >= 1 else 0

# Compute the weighted sum for each input pattern and record the neuron's output
def fire(data, weights, output):
    for x in data:
        weighted_sum = np.inner(x, weights)
        output.append(threshold(weighted_sum))

data = [[0, 0], [0, 1], [1, 0], [1, 1]]
weights = [1, 1]  # with threshold 1, these weights realize the OR gate
output = []
fire(data, weights, output)

# Tabulate the truth table of the simulated gate
t = pd.DataFrame(index=None)
t['X1'] = [0, 0, 1, 1]
t['X2'] = [0, 1, 0, 1]
t['y'] = pd.Series(output)
print(t)

Output –
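For the OR configuration above (weights [1, 1] with a firing threshold of 1), the printed truth table is deterministic:

   X1  X2  y
0   0   0  0
1   0   1  1
2   1   0  1
3   1   1  1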

Conclusion – Successfully implemented the Perceptron Algorithm to simulate an OR gate; other gates follow by adjusting the weights and threshold.


Practical No 02

Aim – Implement Multilayer Perceptron Algorithm to Simulate XOR Logic Gate

Program –
import numpy as np

# Activation function (sigmoid)
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of the activation function (expects the sigmoid output, not the raw input)
def sigmoid_derivative(x):
    return x * (1 - x)

# Multilayer Perceptron with one hidden layer
class MLP:
    def __init__(self, input_dim, hidden_dim, output_dim):
        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim

        # Initialize weights and biases for the hidden and output layers
        self.weights_hidden = np.random.uniform(size=(self.input_dim, self.hidden_dim))
        self.bias_hidden = np.random.uniform(size=(1, self.hidden_dim))
        self.weights_output = np.random.uniform(size=(self.hidden_dim, self.output_dim))
        self.bias_output = np.random.uniform(size=(1, self.output_dim))

    def train(self, X, y, epochs=1000, learning_rate=0.1):
        for epoch in range(epochs):
            # --- Forward Propagation ---
            hidden_layer_input = np.dot(X, self.weights_hidden) + self.bias_hidden
            hidden_layer_output = sigmoid(hidden_layer_input)
            output_layer_input = np.dot(hidden_layer_output, self.weights_output) + self.bias_output
            predicted_output = sigmoid(output_layer_input)

            # --- Backpropagation ---
            error = y - predicted_output
            output_gradient = sigmoid_derivative(predicted_output) * error
            hidden_error = output_gradient.dot(self.weights_output.T)
            hidden_gradient = sigmoid_derivative(hidden_layer_output) * hidden_error

            # --- Update Weights and Biases ---
            self.weights_output += learning_rate * hidden_layer_output.T.dot(output_gradient)
            self.bias_output += learning_rate * np.sum(output_gradient, axis=0, keepdims=True)
            self.weights_hidden += learning_rate * X.T.dot(hidden_gradient)
            self.bias_hidden += learning_rate * np.sum(hidden_gradient, axis=0, keepdims=True)

    def predict(self, X):
        hidden_layer_input = np.dot(X, self.weights_hidden) + self.bias_hidden
        hidden_layer_output = sigmoid(hidden_layer_input)
        output_layer_input = np.dot(hidden_layer_output, self.weights_output) + self.bias_output
        predicted_output = sigmoid(output_layer_input)
        return np.round(predicted_output)

# XOR Gate Input and Output
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# Create and train the MLP for XOR Gate
mlp = MLP(input_dim=2, hidden_dim=2, output_dim=1)
mlp.train(X, y, epochs=10000, learning_rate=0.1)

# Test the XOR Gate
predictions = mlp.predict(X)
print("Predictions:")
print(predictions)

Output –
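Because the weights are initialized randomly, the trained values differ from run to run; a typical successful run, however, rounds to the XOR truth table:

Predictions:
[[0.]
 [1.]
 [1.]
 [0.]]

(A 2-unit hidden layer can occasionally settle in a local minimum; re-running the script usually resolves this.)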

Conclusion – Successfully Implemented the Multilayer Perceptron Algorithm for the XOR Logic Gate.

Practical No 03

Aim – To Explore Python Libraries for Deep Learning, e.g. Theano, TensorFlow, etc.

THEORY:

What are Python libraries?

Python libraries are collections of pre-written functions, modules, and tools that extend the language's
capabilities and make it easier for developers to perform specific tasks. These libraries cover various domains,
including data manipulation, scientific computing, web development, machine learning, and more. Some
popular Python libraries include:

1) NumPy 2) Pandas 3) Matplotlib 4) TensorFlow

5) Theano 6) OpenCV 7) PyTorch 8) Flask and Django

NumPy:

- NumPy (Numerical Python) is a powerful open-source library for numerical computing in Python.

- It provides a versatile n-dimensional array object (ndarray) that allows efficient storage and manipulation of
large datasets.

- NumPy offers a wide range of mathematical functions for element-wise operations, linear algebra, statistical
computations, and random number generation.

- Its fast array processing capabilities, coupled with integration support for other scientific and data-related
libraries, make NumPy a fundamental tool for scientific computing, data analysis, and machine learning
applications in Python.
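A minimal sketch of these capabilities (the array values are arbitrary):

import numpy as np

a = np.array([[1, 2], [3, 4]])  # a 2x2 ndarray
b = a * 2                       # element-wise operation
print(b.mean())                 # statistical computation
print(np.dot(a, b))             # linear algebra (matrix product)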

Pandas:

- Pandas is an open-source Python library that provides high-performance data structures and data analysis
tools. Its two primary data structures, Series (1-dimensional) and DataFrame (2-dimensional), allow easy
handling and manipulation of structured data.

- Pandas offers functionalities for data cleaning, merging, filtering, and grouping, making it indispensable for
data wrangling and preparation tasks. Additionally, Pandas integrates well with other libraries, allowing
seamless data integration and analysis workflows, making it a go-to choice for data scientists, analysts, and
developers working with tabular data in Python.
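A minimal sketch of the Series/DataFrame workflow described above (column names and values are illustrative):

import pandas as pd

df = pd.DataFrame({'name': ['a', 'b', 'c'], 'score': [90, 75, 88]})
high = df[df['score'] > 80]  # filter rows on a condition
print(high)
print(df['score'].mean())    # a simple aggregation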

Matplotlib:

- Matplotlib is a widely used Python library for creating static, interactive, and publication-quality
visualizations. With a flexible and easy-to-use interface, Matplotlib enables the creation of various 2D plots,
including line plots, scatter plots, bar plots, histograms, and more.
- It allows customization of plots with labels, titles, legends, color maps, and various styles. As a core
component of the scientific Python ecosystem, Matplotlib plays a crucial role in data exploration, presentation,
and communication, making it an essential tool for researchers, data scientists, and engineers.
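For instance, a basic labelled line plot needs only core Matplotlib calls (the data here is arbitrary):

import matplotlib.pyplot as plt

x = [1, 2, 3, 4]
plt.plot(x, [v ** 2 for v in x], label='x squared')  # line plot
plt.xlabel('x')
plt.ylabel('y')
plt.title('A simple Matplotlib plot')
plt.legend()
plt.show()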

TensorFlow:

- TensorFlow is an open-source deep learning library developed by Google.

- It provides a flexible and efficient framework for building, training, and deploying various types of artificial
neural networks, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs).

- TensorFlow's computational graph paradigm allows for distributed computing and optimization on both CPUs
and GPUs, making it suitable for large-scale machine learning tasks. Its extensive ecosystem, ease of use, and
support for production deployment have made TensorFlow a popular choice among researchers, developers, and
enterprises for deep learning applications.
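As a small sketch of TensorFlow's high-level Keras API (the layer sizes here are arbitrary, not prescribed):

import tensorflow as tf

# Define and compile a small feed-forward network
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(2,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()  # prints the layer structure and parameter counts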

Theano:

- Theano is an open-source numerical computation library that specializes in optimizing and evaluating
mathematical expressions efficiently, particularly those involving multi-dimensional arrays. Developed by the
Montreal Institute for Learning Algorithms (MILA), Theano is widely used in deep learning research and is
often regarded as a precursor to TensorFlow.

- It can automatically compile mathematical expressions into optimized CPU or GPU code, providing
substantial speed improvements. While Theano is no longer actively maintained, its impact on the deep
learning community and its influence on the development of other libraries, like TensorFlow, remains
significant.
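As a historical illustration, the classic Theano pattern built a symbolic expression and compiled it into a callable; this sketch assumes a legacy Theano installation, which recent Python versions may no longer support:

import theano
import theano.tensor as T

x = T.dscalar('x')           # symbolic scalar variable
y = x ** 2 + 3 * x           # symbolic expression
f = theano.function([x], y)  # compiled into optimized CPU/GPU code
print(f(2.0))                # evaluates to 10.0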

OpenCV:

- OpenCV (Open Source Computer Vision Library) is a popular and open-source computer vision and image
processing library. It provides a vast collection of tools, algorithms, and functionalities to handle image and
video analysis tasks.

- OpenCV enables tasks such as image/video manipulation, object detection and recognition, feature extraction,
camera calibration, and more. With support for multiple programming languages, including Python, C++, and
Java, OpenCV has become a go-to library for computer vision researchers, developers, and enthusiasts due to its
ease of use, efficiency, and wide-ranging capabilities.
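A minimal sketch of the image-processing workflow described above, using OpenCV's Python bindings (the file names are placeholders):

import cv2

img = cv2.imread('input.jpg')                 # load an image from disk
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to grayscale
edges = cv2.Canny(gray, 100, 200)             # Canny edge detection
cv2.imwrite('edges.jpg', edges)               # save the result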

PyTorch:

- PyTorch is an open-source deep learning library developed by Facebook's AI Research lab. It is known for its
dynamic computation graph and ease of use, making it highly popular among researchers and practitioners in
the field of machine learning and artificial intelligence.

- PyTorch provides a flexible and intuitive framework for building, training, and deploying deep neural
networks. Its automatic differentiation capabilities simplify the process of computing gradients during
backpropagation, enabling rapid prototyping and experimentation.
- PyTorch's support for both CPU and GPU computations, along with its vibrant community and extensive set
of pre-trained models, makes it a preferred choice for various machine learning tasks, including image
classification, natural language processing, and more.
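A short sketch of PyTorch's dynamic style: build a model, run a forward pass, and let automatic differentiation compute gradients (the shapes are illustrative):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 1))
x = torch.randn(3, 2)              # a batch of 3 samples with 2 features
loss = model(x).sum()              # forward pass producing a scalar
loss.backward()                    # autograd computes gradients
print(model[0].weight.grad.shape)  # gradients are now populated: torch.Size([4, 2])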

Flask and Django

- Flask is a lightweight and minimalist micro-framework that offers flexibility and simplicity.

- It provides essential tools for building web applications and APIs, allowing developers to have more control
over the project structure and components. Flask's "micro" design philosophy makes it easy to get started, and
its modular nature allows developers to add specific functionalities as needed. It is suitable for small to
medium-sized projects, RESTful APIs, and prototypes.

- On the other hand, Django is a robust and full-featured web framework designed for larger and more complex
projects. It follows the "batteries-included" philosophy, providing a comprehensive set of built-in tools and
functionalities for handling common web development tasks like authentication, the admin interface, ORM
(Object-Relational Mapping), and URL routing. Django promotes the principle of "Don't Repeat Yourself"
(DRY) and encourages best practices, making it ideal for scalable and maintainable applications, content-heavy
websites, and e-commerce platforms.

- Choosing between Flask and Django depends on the project's size, complexity, and specific needs. Flask's
lightweight nature and flexibility make it a great choice for smaller projects and quick prototyping, while
Django's comprehensive features and structure make it well-suited for larger, production-grade applications.
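To make the contrast concrete, the canonical minimal Flask application fits in a few lines (Django, by contrast, starts from a generated project scaffold):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from Flask!'

if __name__ == '__main__':
    app.run(debug=True)  # development server only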

Conclusion: Thus, we studied and explored Python libraries for deep learning.

Practical No 04

Aim – Apply any of the following learning algorithms to learn the parameters of the
supervised single layer feed forward neural network:

a. Stochastic Gradient Descent b. Mini Batch Gradient Descent c. Momentum GD

d. Nesterov GD e. Adagrad GD f. Adam Learning GD

Program –
import numpy as np

# Sample input data and corresponding labels (XOR truth table)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # Input features
y = np.array([[0], [1], [1], [0]])

# Initialize random weights and zero biases
np.random.seed(42)
input_dim = X.shape[1]
output_dim = y.shape[1]
hidden_dim = 2
W1 = np.random.randn(input_dim, hidden_dim)   # Weight matrix for input to hidden layer
W2 = np.random.randn(hidden_dim, output_dim)  # Weight matrix for hidden to output layer
b1 = np.zeros((1, hidden_dim))                # Bias for hidden layer
b2 = np.zeros((1, output_dim))                # Bias for output layer

# Hyperparameters
learning_rate = 0.01
epochs = 1000
batch_size = 1  # batch_size = 1 gives Stochastic Gradient Descent; larger values give Mini Batch GD

# Stochastic Gradient Descent
for epoch in range(epochs):
    # Shuffle the data each epoch
    indices = np.random.permutation(X.shape[0])
    X_shuffled = X[indices]
    y_shuffled = y[indices]
    for i in range(0, X.shape[0], batch_size):
        # Mini-batch training data
        X_batch = X_shuffled[i:i + batch_size]
        y_batch = y_shuffled[i:i + batch_size]

        # Forward pass
        hidden_layer_output = np.dot(X_batch, W1) + b1
        hidden_layer_activation = 1 / (1 + np.exp(-hidden_layer_output))
        output_layer_output = np.dot(hidden_layer_activation, W2) + b2
        predicted_output = 1 / (1 + np.exp(-output_layer_output))

        # Backward pass
        output_error = y_batch - predicted_output
        output_delta = output_error * (predicted_output * (1 - predicted_output))
        hidden_error = np.dot(output_delta, W2.T)
        hidden_delta = hidden_error * (hidden_layer_activation * (1 - hidden_layer_activation))

        # Update weights and biases
        W2 += learning_rate * np.dot(hidden_layer_activation.T, output_delta)
        b2 += learning_rate * np.sum(output_delta, axis=0, keepdims=True)
        W1 += learning_rate * np.dot(X_batch.T, hidden_delta)
        b1 += learning_rate * np.sum(hidden_delta, axis=0, keepdims=True)

# Prediction after training
hidden_layer_output = np.dot(X, W1) + b1
hidden_layer_activation = 1 / (1 + np.exp(-hidden_layer_output))
output_layer_output = np.dot(hidden_layer_activation, W2) + b2
predicted_output = 1 / (1 + np.exp(-output_layer_output))
print("Predicted Output:")
print(predicted_output)
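The loop above implements option (a), plain Stochastic Gradient Descent; setting batch_size > 1 turns it into option (b), Mini Batch Gradient Descent. As a sketch of option (c), Momentum GD keeps a running velocity per parameter and applies that instead of the raw step; the velocity variables below (vW1, vW2, vb1, vb2) are illustrative names, not part of the original program:

momentum = 0.9  # assumed momentum coefficient
vW1, vW2 = np.zeros_like(W1), np.zeros_like(W2)
vb1, vb2 = np.zeros_like(b1), np.zeros_like(b2)

# Inside the inner training loop, the plain updates would be replaced by:
vW2 = momentum * vW2 + learning_rate * np.dot(hidden_layer_activation.T, output_delta)
W2 += vW2
vb2 = momentum * vb2 + learning_rate * np.sum(output_delta, axis=0, keepdims=True)
b2 += vb2
vW1 = momentum * vW1 + learning_rate * np.dot(X_batch.T, hidden_delta)
W1 += vW1
vb1 = momentum * vb1 + learning_rate * np.sum(hidden_delta, axis=0, keepdims=True)
b1 += vb1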

Output –

Conclusion – Successfully implemented a learning algorithm (Stochastic Gradient Descent) for the supervised single layer feed forward neural network.

Practical No 05

Aim – Implement a Back Propagation Algorithm to Train a DNN with at least Two
Hidden Layers.

Program –

Code1 –
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    return x * (1 - x)

# Sample input data and corresponding labels
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# Initialize random weights and biases for hidden layers
np.random.seed(42)
input_dim = X.shape[1]
output_dim = y.shape[1]
hidden_dim1 = 4
hidden_dim2 = 3
W1 = np.random.randn(input_dim, hidden_dim1)
b1 = np.zeros((1, hidden_dim1))
W2 = np.random.randn(hidden_dim1, hidden_dim2)
b2 = np.zeros((1, hidden_dim2))
W3 = np.random.randn(hidden_dim2, output_dim)
b3 = np.zeros((1, output_dim))

# Hyperparameters
learning_rate = 0.1
epochs = 10000

# Training the DNN with backpropagation
for epoch in range(epochs):
    # Forward pass
    hidden_layer1_output = sigmoid(np.dot(X, W1) + b1)
    hidden_layer2_output = sigmoid(np.dot(hidden_layer1_output, W2) + b2)
    output_layer_output = sigmoid(np.dot(hidden_layer2_output, W3) + b3)

    # Backward pass
    output_error = y - output_layer_output
    output_delta = output_error * sigmoid_derivative(output_layer_output)
    hidden_error2 = np.dot(output_delta, W3.T)
    hidden_delta2 = hidden_error2 * sigmoid_derivative(hidden_layer2_output)
    hidden_error1 = np.dot(hidden_delta2, W2.T)
    hidden_delta1 = hidden_error1 * sigmoid_derivative(hidden_layer1_output)

    # Update weights and biases
    W3 += learning_rate * np.dot(hidden_layer2_output.T, output_delta)
    b3 += learning_rate * np.sum(output_delta, axis=0, keepdims=True)
    W2 += learning_rate * np.dot(hidden_layer1_output.T, hidden_delta2)
    b2 += learning_rate * np.sum(hidden_delta2, axis=0, keepdims=True)
    W1 += learning_rate * np.dot(X.T, hidden_delta1)
    b1 += learning_rate * np.sum(hidden_delta1, axis=0, keepdims=True)

# Prediction after training
predicted_output = sigmoid(np.dot(sigmoid(np.dot(sigmoid(np.dot(X, W1) + b1), W2) + b2), W3) + b3)
print("Predicted Output:")
print(predicted_output)

Output –

Code2 –
import numpy as np

X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)
y = np.array(([92], [86], [89]), dtype=float)
X = X / np.amax(X, axis=0)  # normalize each feature by its column maximum
y = y / 100                 # scale targets into [0, 1] to match the sigmoid output range

# Sigmoid Function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of Sigmoid Function (expects the sigmoid output as its argument)
def derivatives_sigmoid(x):
    return x * (1 - x)

# Variable initialization
epoch = 5  # Setting training iterations (kept small so every epoch can be printed; more are needed to converge)
lr = 0.1   # Setting learning rate

inputlayer_neurons = 2   # number of features in data set
hiddenlayer_neurons = 3  # number of hidden layer neurons
output_neurons = 1       # number of neurons at output layer

# Weight and bias initialization: draws a random range of numbers uniformly of dim x*y
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))

for i in range(epoch):
    # Forward Propagation
    hinp1 = np.dot(X, wh)
    hinp = hinp1 + bh
    hlayer_act = sigmoid(hinp)
    outinp1 = np.dot(hlayer_act, wout)
    outinp = outinp1 + bout
    output = sigmoid(outinp)

    # Backpropagation
    EO = y - output
    outgrad = derivatives_sigmoid(output)
    d_output = EO * outgrad
    EH = d_output.dot(wout.T)
    hiddengrad = derivatives_sigmoid(hlayer_act)  # how much hidden layer weights contributed to the error
    d_hiddenlayer = EH * hiddengrad

    # Update weights and biases (dot product of next-layer error and current-layer output)
    wout += hlayer_act.T.dot(d_output) * lr
    bout += np.sum(d_output, axis=0, keepdims=True) * lr  # each bias moves by its summed delta
    wh += X.T.dot(d_hiddenlayer) * lr
    bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * lr

    print("-----------Epoch-", i + 1, "Starts----------")
    print("Input: \n" + str(X))
    print("Actual Output: \n" + str(y))
    print("Predicted Output: \n", output)
    print("-----------Epoch-", i + 1, "Ends----------\n")

print("Input: \n" + str(X))
print("Actual Output: \n" + str(y))
print("Predicted Output: \n", output)
Output –

Conclusion – Successfully Implemented a Back Propagation Algorithm to Train a DNN.
