DL Practical
Program –
import pandas as pd
import numpy as np

def threshold(x):
    # Step activation: fire when the weighted sum reaches the threshold of 2
    # (with weights [1, 1] this realises the AND gate; threshold value assumed)
    return 1 if x >= 2 else 0

data = [(0, 0), (0, 1), (1, 0), (1, 1)]
weights = [1, 1]
output = []
for x in data:
    weighted_sum = np.dot(x, weights)
    output.append(threshold(weighted_sum))

t = pd.DataFrame(index=None)
t['X1'] = [0, 0, 1, 1]
t['X2'] = [0, 1, 0, 1]
t['y'] = pd.Series(output)
print(t)
Output –
Program –
import numpy as np

# Class wrapper assumed; the original listing showed only the initialization
class MLP:
    def __init__(self, input_dim, hidden_dim, output_dim):
        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim
        # Initialize weights and biases for the hidden and output layers
        self.weights_hidden = np.random.uniform(size=(self.input_dim, self.hidden_dim))
        self.bias_hidden = np.random.uniform(size=(1, self.hidden_dim))
        self.weights_output = np.random.uniform(size=(self.hidden_dim, self.output_dim))
        self.bias_output = np.random.uniform(size=(1, self.output_dim))
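The listing above only initializes the parameters. As a hedged sketch of how these weights could be used (the forward pass is not in the original program; the helper forward(), the sigmoid activation, and the 2-2-1 layer sizes are assumptions), a forward pass on the XOR inputs could look like this:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward(mlp, X):
    # Hidden layer: affine transform followed by sigmoid activation
    hidden = sigmoid(np.dot(X, mlp.weights_hidden) + mlp.bias_hidden)
    # Output layer: affine transform followed by sigmoid activation
    return sigmoid(np.dot(hidden, mlp.weights_output) + mlp.bias_output)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # XOR inputs
mlp = MLP(input_dim=2, hidden_dim=2, output_dim=1)
print(forward(mlp, X))   # untrained random weights give outputs near 0.5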
Output –
Conclusion – Successfully implemented the multilayer perceptron algorithm for the XOR logic gate.
Practical No 03
Aim – To explore Python libraries for deep learning, e.g., Theano and TensorFlow.
THEORY:
Python libraries are collections of pre-written functions, modules, and tools that extend the language's
capabilities and make it easier for developers to perform specific tasks. These libraries cover various domains,
including data manipulation, scientific computing, web development, machine learning, and more. Some
popular Python libraries include:
NumPy:
- NumPy (Numerical Python) is a powerful open-source library for numerical computing in Python.
- It provides a versatile n-dimensional array object (ndarray) that allows efficient storage and manipulation of
large datasets.
- NumPy offers a wide range of mathematical functions for element-wise operations, linear algebra, statistical
computations, and random number generation.
- Its fast array processing capabilities, coupled with integration support for other scientific and data-related
libraries, make NumPy a fundamental tool for scientific computing, data analysis, and machine learning
applications in Python.
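A small illustrative NumPy snippet (the array values are arbitrary and chosen only for this example):

import numpy as np

a = np.array([[1, 2], [3, 4]])                   # 2x2 ndarray
print(a.mean())                                  # element-wise statistics: 2.5
print(a @ np.eye(2))                             # linear algebra: matrix product
print(np.random.default_rng(0).normal(size=3))   # random number generation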
Pandas:
- Pandas is an open-source Python library that provides high-performance data structures and data analysis
tools. Its two primary data structures, Series (1-dimensional) and DataFrame (2-dimensional), allow easy
handling and manipulation of structured data.
- Pandas offers functionalities for data cleaning, merging, filtering, and grouping, making it indispensable for
data wrangling and preparation tasks. Additionally, Pandas integrates well with other libraries, allowing
seamless data integration and analysis workflows, making it a go-to choice for data scientists, analysts, and
developers working with tabular data in Python.
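A short illustrative Pandas snippet (the column names and values are made up for this example):

import pandas as pd

df = pd.DataFrame({'name': ['a', 'b', 'c'], 'score': [10, 25, 17]})
print(df[df['score'] > 15])                  # filtering rows
print(df.groupby('name')['score'].mean())    # grouping and aggregation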
Matplotlib:
- Matplotlib is a widely-used Python library for creating static, interactive, and publication-quality
visualizations. With a flexible and easy-to-use interface, Matplotlib enables the creation of various 2D plots,
including line plots, scatter plots, bar plots, histograms, and more.
- It allows customization of plots with labels, titles, legends, color maps, and various styles. As a core
component of the scientific Python ecosystem, Matplotlib plays a crucial role in data exploration, presentation,
and communication, making it an essential tool for researchers, data scientists, and engineers.
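A minimal Matplotlib example (the plotted function and labels are illustrative):

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
plt.plot(x, np.sin(x), label='sin(x)')   # simple line plot
plt.title('Example plot')
plt.xlabel('x')
plt.legend()
plt.show()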
TensorFlow:
- It provides a flexible and efficient framework for building, training, and deploying various types of artificial
neural networks, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
- TensorFlow's computational graph paradigm allows for distributed computing and optimization on both CPUs
and GPUs, making it suitable for large-scale machine learning tasks. Its extensive ecosystem, ease of use, and
support for production deployment have made TensorFlow a popular choice among researchers, developers, and
enterprises for deep learning applications.
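A minimal TensorFlow/Keras sketch (the layer sizes and loss are arbitrary choices for illustration, not a prescribed architecture):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()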
Theano:
- Theano is an open-source numerical computation library that specializes in optimizing and evaluating
mathematical expressions efficiently, particularly those involving multi-dimensional arrays. Developed by the
Montreal Institute for Learning Algorithms (MILA), Theano is widely used in deep learning research and is
often regarded as a precursor to TensorFlow.
- It can automatically compile mathematical expressions into optimized CPU or GPU code, providing
substantial speed improvements. While no longer actively maintained, Theano's impact on the deep learning
community and its influence on the development of other libraries, like TensorFlow, remains significant.
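A tiny Theano sketch, mainly of historical interest since the library is unmaintained (the expression is arbitrary):

import theano
import theano.tensor as T

x = T.dscalar('x')            # symbolic scalar variable
y = x ** 2 + 3 * x            # symbolic expression
f = theano.function([x], y)   # compiled to optimized code
print(f(2.0))                 # 10.0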
OpenCV:
- OpenCV (Open Source Computer Vision Library) is a popular and open-source computer vision and image
processing library. It provides a vast collection of tools, algorithms, and functionalities to handle image and
video analysis tasks.
- OpenCV enables tasks such as image/video manipulation, object detection and recognition, feature extraction,
camera calibration, and more. With support for multiple programming languages, including Python, C++, and
Java, OpenCV has become a go-to library for computer vision researchers, developers, and enthusiasts due to its
ease of use, efficiency, and wide-ranging capabilities.
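A short OpenCV example; a synthetic image is used so the snippet does not depend on an image file:

import cv2
import numpy as np

img = np.zeros((100, 100, 3), dtype=np.uint8)
cv2.rectangle(img, (20, 20), (80, 80), (255, 255, 255), -1)   # draw a filled square
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                  # colour-space conversion
edges = cv2.Canny(gray, 50, 150)                              # edge detection
print(edges.shape)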
PyTorch:
- PyTorch is an open-source deep learning library developed by Facebook's AI Research lab. It is known for its
dynamic computation graph and ease of use, making it highly popular among researchers and practitioners in
the field of machine learning and artificial intelligence.
- PyTorch provides a flexible and intuitive framework for building, training, and deploying deep neural
networks. Its automatic differentiation capabilities simplify the process of computing gradients during
backpropagation, enabling rapid prototyping and experimentation.
- PyTorch's support for both CPU and GPU computations, along with its vibrant community and extensive set
of pre-trained models, makes it a preferred choice for various machine learning tasks, including image
classification, natural language processing, and more.
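A minimal PyTorch sketch showing automatic differentiation (the layer sizes, random input, and loss are illustrative only):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
x = torch.randn(2, 4)
y = model(x)
loss = y.pow(2).mean()
loss.backward()                         # autograd computes gradients for all parameters
print(model[0].weight.grad.shape)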
Flask and Django:
- Flask is a lightweight and minimalist micro-framework that offers flexibility and simplicity.
-It provides essential tools for building web applications and APIs, allowing developers to have more control
over the project structure and components. Flask's "micro" design philosophy makes it easy to get started, and
its modular nature allows developers to add specific functionalities as needed. It is suitable for small to
medium-sized projects, RESTful APIs, and prototypes.
- On the other hand, Django is a robust and full-featured web framework designed for larger and more complex
projects. It follows the "batteries-included" philosophy, providing a comprehensive set of built-in tools and
functionalities for handling common web development tasks like authentication, admin interface, ORM (Object-
Relational Mapping), and URL routing. Django promotes the principle of "Don't Repeat Yourself" (DRY) and
encourages best practices, making it ideal for scalable and maintainable applications, content-heavy websites,
and e-commerce platforms.
- Choosing between Flask and Django depends on the project's size, complexity, and specific needs. Flask's
lightweight nature and flexibility make it a great choice for smaller projects and quick prototyping, while
Django's comprehensive features and structure make it well-suited for larger, production-grade applications.
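For example, a minimal Flask application (the route and message are chosen only for illustration) looks like this:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    # Minimal endpoint returning a plain-text response
    return 'Hello from Flask'

if __name__ == '__main__':
    app.run(debug=True)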
Conclusion: Thus studied and explored Python libraries for deep learning.
Practical No 04
Aim – Apply any of the following learning algorithms to learn the parameters of the
supervised single-layer feed-forward neural network.
Program –
import numpy as np
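The original listing shows only the import. As a hedged sketch (the specific algorithm from the referenced list is not reproduced here, so the perceptron learning rule is an assumed choice), a single-layer feed-forward network could be trained as follows; the AND-gate data, learning rate, and epoch count are assumptions for illustration:

import numpy as np

# Assumed AND-gate training data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate (assumed)

for epoch in range(10):
    for xi, target in zip(X, y):
        pred = 1 if np.dot(xi, w) + b >= 0 else 0
        # Perceptron learning rule: adjust only on misclassification
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print('weights:', w, 'bias:', b)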
Output –
Conclusion – Successfully implemented a learning algorithm for the supervised single-layer feed-forward
neural network.
Practical No 05
Aim – Implement a Back Propagation Algorithm to Train a DNN with at least Two
Hidden Layers.
Program –
Code1 –
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    return x * (1 - x)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # XOR inputs (assumed; not in original listing)
y = np.array([[0], [1], [1], [0]])               # XOR targets (assumed)

# Hyperparameters
learning_rate = 0.1
epochs = 10000

# Weights: 2 inputs -> two hidden layers of 4 neurons -> 1 output (layer sizes assumed)
W1 = np.random.uniform(size=(2, 4))
W2 = np.random.uniform(size=(4, 4))
W3 = np.random.uniform(size=(4, 1))

for epoch in range(epochs):
    # Forward pass
    hidden_layer1_output = sigmoid(np.dot(X, W1))
    hidden_layer2_output = sigmoid(np.dot(hidden_layer1_output, W2))
    output_layer_output = sigmoid(np.dot(hidden_layer2_output, W3))
    # Backward pass
    output_error = y - output_layer_output
    output_delta = output_error * sigmoid_derivative(output_layer_output)
    hidden_error2 = np.dot(output_delta, W3.T)
    hidden_delta2 = hidden_error2 * sigmoid_derivative(hidden_layer2_output)
    hidden_error1 = np.dot(hidden_delta2, W2.T)
    hidden_delta1 = hidden_error1 * sigmoid_derivative(hidden_layer1_output)
    # Weight updates
    W3 += learning_rate * np.dot(hidden_layer2_output.T, output_delta)
    W2 += learning_rate * np.dot(hidden_layer1_output.T, hidden_delta2)
    W1 += learning_rate * np.dot(X.T, hidden_delta1)
Output –
Code2 –
import numpy as np

# Sigmoid Function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of the sigmoid (expects the sigmoid output as input)
def derivatives_sigmoid(x):
    return x * (1 - x)

# Training data (assumed XOR, as in Code1; not in original listing)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# Variable initialization
epoch = 5   # Setting training iterations
lr = 0.1    # Setting learning rate
inputlayer_neurons = X.shape[1]   # number of input features
hiddenlayer_neurons = 3           # neurons in the hidden layer (assumed)
output_neurons = 1                # neurons in the output layer

wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))

for i in range(epoch):
    # Forward propagation
    hlayer_act = sigmoid(np.dot(X, wh) + bh)
    output = sigmoid(np.dot(hlayer_act, wout) + bout)
    # Backpropagation
    EO = y - output
    outgrad = derivatives_sigmoid(output)
    d_output = EO * outgrad
    EH = d_output.dot(wout.T)
    hiddengrad = derivatives_sigmoid(hlayer_act)  # how much hidden layer wts contributed to error
    d_hiddenlayer = EH * hiddengrad
    # Weight and bias updates
    wout += hlayer_act.T.dot(d_output) * lr
    bout += np.sum(d_output, axis=0, keepdims=True) * lr
    wh += X.T.dot(d_hiddenlayer) * lr
    bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * lr
    print("-----------Epoch-", i + 1, "Starts----------")
    print("Input: \n" + str(X))
    print("Actual Output: \n" + str(y))
    print("Predicted Output: \n", output)
    print("-----------Epoch-", i + 1, "Ends----------\n")