
Assignment: Deep Learning Concepts and Implementation

Introduction:
Deep learning is a subset of machine learning built on artificial neural networks, which are loosely inspired by the structure and function of the human brain. These networks learn by adjusting internal parameters (weights and biases) through optimization techniques. This assignment explores fundamental deep learning concepts, including neuron computation, activation functions, and neural network layers, implemented using Python and NumPy.

1. Multi-Neuron Output Using NumPy

Neural networks are made up of multiple neurons organized into layers. Using NumPy, we can compute the outputs of several neurons at once: the dot product performs all of the weighted sums in a single vectorized operation, which is far faster than looping over neurons in pure Python.

import numpy as np

# One input sample with four features
inputs = [1, 2, 3, 2.5]

# One row of weights per neuron (3 neurons, 4 inputs each)
weights = [
    [0.2, 0.8, -0.5, 1.0],
    [0.5, -0.91, 0.26, -0.5],
    [-0.26, -0.27, 0.17, 0.87]
]

# One bias per neuron
biases = [2, 3, 0.5]

output = np.dot(weights, inputs) + biases
print(output)

Explanation: This code passes one four-feature input vector through three neurons at once using matrix multiplication: each row of the weight matrix holds one neuron's weights, and the biases shift each neuron's result. NumPy's dot product replaces three separate weighted sums with a single call, which is how neural networks scale to large workloads such as image recognition and text classification.
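As a quick sanity check, the first neuron's output can be verified by hand:

output[0] = (1)(0.2) + (2)(0.8) + (3)(-0.5) + (2.5)(1.0) + 2
          = 0.2 + 1.6 - 1.5 + 2.5 + 2 = 4.8

Repeating this for the other two weight rows gives [4.8, 1.21, 2.385], which is exactly what the script prints.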

2. Single Neuron Calculation with Multiple Inputs

A single neuron processes inputs by applying corresponding weights and adding a bias. This is
the basic building block of a deep learning network, where multiple neurons form layers.

inputs = [1.2, 5.1, 2.1]
weights = [3.1, 2.1, 8.7]
bias = 3

# Weighted sum of the inputs plus the bias
output = sum(i * w for i, w in zip(inputs, weights)) + bias
print(output)

Explanation: Each input is multiplied by a weight that determines its importance, and the sum of these weighted inputs plus a bias gives the neuron's final output. The bias shifts the neuron's output independently of the inputs, so the neuron can still produce a non-zero result even when every input is zero, which gives the network more flexibility during training.
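The result is again easy to verify by hand:

output = (1.2)(3.1) + (5.1)(2.1) + (2.1)(8.7) + 3
       = 3.72 + 10.71 + 18.27 + 3 = 35.7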

3. Implementing a Two-Layer Neural Network

Deep learning networks often involve multiple layers of neurons, where each layer refines the
output of the previous layer. Below, we implement a two-layer network.

import numpy as np

# Batch of inputs plus the layer 1 parameters (values from Sections 1 and 6)
inputs = [[1, 2, 3, 2.5], [2.0, 5.0, -1.0, 2.0], [-1.5, 2.7, 3.3, -0.8]]
weights = [[0.2, 0.8, -0.5, 1.0], [0.5, -0.91, 0.26, -0.5], [-0.26, -0.27, 0.17, 0.87]]
biases = [2, 3, 0.5]

layer1_outputs = np.dot(inputs, np.array(weights).T) + biases

# Layer 2 takes layer 1's three outputs per sample as its inputs
weights2 = [[0.1, -0.14, 0.5], [-0.5, 0.12, -0.33], [-0.44, 0.73, -0.13]]
biases2 = [-1, 2, -0.5]
layer2_outputs = np.dot(layer1_outputs, np.array(weights2).T) + biases2

print(layer2_outputs)

Explanation: The first layer transforms the raw inputs, and the second layer takes those intermediate outputs as its own inputs and transforms them further. Transposing each weight matrix aligns its shape for the matrix multiplication. This stacking is the fundamental principle behind deep networks, allowing them to learn hierarchical representations of data; a reusable layer class is sketched below.
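To avoid wiring each layer by hand, the same computation can be wrapped in a small reusable class. The sketch below is illustrative: the name Layer_Dense and the random-initialization scheme are assumptions, not something defined earlier in this assignment.

import numpy as np

class Layer_Dense:
    def __init__(self, n_inputs, n_neurons):
        # Illustrative initialization: small random weights, zero biases
        self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
        self.biases = np.zeros((1, n_neurons))

    def forward(self, inputs):
        # Same weighted-sum-plus-bias computation as above; storing
        # weights as (n_inputs, n_neurons) avoids the transpose
        self.output = np.dot(inputs, self.weights) + self.biases

# Chain two layers: layer 1's output becomes layer 2's input
layer1 = Layer_Dense(4, 3)
layer2 = Layer_Dense(3, 3)
layer1.forward(np.array([[1, 2, 3, 2.5]]))
layer2.forward(layer1.output)
print(layer2.output)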

4. Activation Functions: ReLU and Softmax

Activation functions introduce non-linearity, allowing neural networks to learn complex patterns.
Below, we implement the ReLU and Softmax activation functions.

import numpy as np

class Activation_ReLU:
    def forward(self, inputs):
        self.output = np.maximum(0, inputs)

class Activation_Softmax:
    def forward(self, inputs):
        # Subtract the row-wise max before exponentiating to avoid overflow
        exp_values = np.exp(inputs - np.max(inputs, axis=1, keepdims=True))
        self.output = exp_values / np.sum(exp_values, axis=1, keepdims=True)

Explanation: ReLU (Rectified Linear Unit) outputs its input when positive and zero otherwise, providing a cheap non-linearity that works well in hidden layers. Softmax exponentiates the outputs and normalizes them so each row forms a probability distribution, which is what classification tasks need; subtracting the row-wise maximum before exponentiating prevents numerical overflow without changing the result. A short usage example follows.
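A quick usage example for both classes (the sample values are arbitrary, apart from the first row, which reuses the Section 1 output; the classes defined above are assumed to be in scope):

import numpy as np

layer_outputs = np.array([[4.8, 1.21, 2.385],
                          [8.9, -1.81, 0.2]])

relu = Activation_ReLU()
relu.forward(layer_outputs)
print(relu.output)  # negative entries are replaced with 0

softmax = Activation_Softmax()
softmax.forward(layer_outputs)
print(softmax.output)              # each row becomes a probability distribution
print(softmax.output.sum(axis=1))  # each row sums to 1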

5. Loss Function: Categorical Cross-Entropy

Loss functions quantify how well the model's predictions align with actual values. The
categorical cross-entropy loss function is widely used in classification problems.

import math

# Softmax output for one sample and its one-hot target vector
softmax_output = [0.7, 0.1, 0.2]
target_output = [1, 0, 0]

loss = -sum(math.log(softmax_output[i]) * target_output[i]
            for i in range(len(target_output)))
print(loss)  # -log(0.7) ≈ 0.357

Explanation: Cross-entropy loss penalizes incorrect predictions most heavily when the model is confident but wrong: the loss is the negative log of the probability assigned to the correct class, so a confident correct prediction costs almost nothing while a confident wrong one costs a great deal. Here the model assigns 0.7 to the correct class, giving a loss of -log(0.7) ≈ 0.357. This pressure drives the network to adjust its parameters effectively.
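For completeness, here is a minimal batched sketch of the same loss in NumPy. The sample probabilities, the integer class targets, and the clipping bound 1e-7 (used to avoid log(0)) are all illustrative choices:

import numpy as np

softmax_outputs = np.array([[0.7, 0.1, 0.2],
                            [0.1, 0.5, 0.4],
                            [0.02, 0.9, 0.08]])
class_targets = np.array([0, 1, 1])  # correct class index per sample

# Clip to avoid log(0), select each sample's probability for its
# correct class, then average the negative log-likelihoods
clipped = np.clip(softmax_outputs, 1e-7, 1 - 1e-7)
correct_confidences = clipped[np.arange(len(class_targets)), class_targets]
print(np.mean(-np.log(correct_confidences)))  # ~0.385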

6. The Role of Batch Processing in Deep Learning

Batch processing improves efficiency by computing multiple input samples in parallel. This is
crucial for training deep networks on large datasets.

import numpy as np

# A batch of three input samples, four features each
inputs = [[1, 2, 3, 2.5],
          [2.0, 5.0, -1.0, 2.0],
          [-1.5, 2.7, 3.3, -0.8]]

weights = [[0.2, 0.8, -0.5, 1.0],
           [0.5, -0.91, 0.26, -0.5],
           [-0.26, -0.27, 0.17, 0.87]]

biases = [2, 3, 0.5]

output = np.dot(inputs, np.array(weights).T) + biases
print(output)

Explanation: Processing multiple samples at once replaces many small matrix-vector products with a single matrix-matrix product, which NumPy and modern hardware execute far more efficiently. Here the result is a 3x3 array: one row per input sample and one column per neuron. Batch processing is standard practice when training networks for real-world applications such as autonomous driving and voice recognition.
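A minimal shape check, assuming the variables from the snippet above are still in scope:

print(np.array(inputs).shape)   # (3, 4): 3 samples, 4 features each
print(np.array(weights).shape)  # (3, 4): 3 neurons, 4 weights each
print(output.shape)             # (3, 3): 3 samples x 3 neuron outputs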

Conclusion:

This assignment covered essential deep learning concepts, from single neuron computations to
multi-layer networks. By implementing these techniques, we gain deeper insights into how
neural networks process data and make predictions. Understanding these foundations is crucial
for further exploration into advanced AI topics such as convolutional and recurrent neural
networks.
