Group 8 Practical

EXPERIMENT NO 8: NEURAL NETWORK

OBJECTIVES:

❖ BASIC DIFFERENCE BETWEEN BIOLOGICAL NEURAL NETWORK AND ARTIFICIAL NEURAL NETWORK.

The main differences between biological neural networks (BNNs) and artificial neural
networks (ANNs) are:

1. Origin: BNNs are found in living organisms, while ANNs are human-made
models.
2. Hardware: BNNs use biological hardware (neurons), while ANNs use digital or
analog computers.
3. Complexity: BNNs are highly complex and large-scale, while ANNs are simpler
and smaller.
4. Learning: BNNs learn through complex biological processes, while ANNs use
learning algorithms.
5. Speed: BNNs are slow per neuron but massively parallel and highly energy-efficient, while ANNs run individual operations much faster on digital hardware but consume far more energy.
6. Generalization: BNNs can generalize their learning, while ANNs are task-specific.
7. Biological Constraints: BNNs have biological limitations, while ANNs are flexible.
8. Purpose: BNNs support the survival and function of organisms, while ANNs are
designed for specific computational tasks.

❖ To learn the mathematical structure of an Artificial Neural Network.

Artificial neural networks (ANNs) have a specific mathematical structure that defines
how they process and transform data. The basic mathematical structure of an
artificial neural network consists of the following components (a minimal numerical
sketch follows the list):

1. Neurons (Nodes):
- Neurons are the fundamental units of an ANN.
- Each neuron receives one or more input values, processes them, and produces an
output.

2. Weights (W):
- Weights are associated with the connections between neurons.
- They represent the strength of the connection and are used to scale and
transform the input data.
3. Bias (b):
- Each neuron often has an associated bias term.
- Bias helps shift the output of a neuron and is crucial for the flexibility of the
network.

4. Activation Function (f):


- Activation functions introduce non-linearity into the network.
- They determine the output of a neuron based on the weighted sum of its inputs
and the bias term.

5. Layers:
- ANNs are typically organized into layers: input layer, hidden layers, and output
layer.
- The input layer receives the initial data, hidden layers process it, and the output
layer produces the final result.

6. Connections (Synapses):
- Connections represent the pathways through which data flows from one neuron
to another.
- Each connection has an associated weight that multiplies the input data.

7. Feedforward Propagation:
- This process involves the forward flow of data from the input layer through the
hidden layers to the output layer.
- Neurons in each layer calculate their output using weighted inputs and the
activation function.

8. Loss Function (L):


- The loss function quantifies the error between the network's output and the
desired target.
- It is used during training to adjust the network's weights and biases to minimize
the error.

9. Backpropagation:
- Backpropagation is an algorithm used to update the network's weights and biases
during training.
- It calculates the gradients of the loss function with respect to the network's
parameters and adjusts them to minimize the error.

10. Optimization Algorithm:


- Optimization algorithms (e.g., gradient descent) are used to update the weights
and biases during training to minimize the loss function.
11. Architecture:
- The overall architecture of an ANN includes the number of layers, the number of
neurons in each layer, and the connections between them.
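
To make these components concrete, the short NumPy sketch below runs one feedforward pass, computes a loss, backpropagates the gradients, and takes one gradient-descent step on a tiny network. The layer sizes, sigmoid activation, and mean-squared-error loss are illustrative assumptions, not the model used later in this experiment.

import numpy as np

# Illustrative sizes (assumed): 3 inputs, 4 hidden neurons, 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # weights and biases, hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # weights and biases, output layer

def sigmoid(z):                                  # activation function f
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([[0.5, -1.2, 0.3]])                 # one input sample
y = np.array([[1.0]])                            # desired target

# Feedforward propagation: weighted sum plus bias, passed through f, layer by layer
z1 = x @ W1 + b1; a1 = sigmoid(z1)
z2 = a1 @ W2 + b2; a2 = sigmoid(z2)

# Loss function L: mean squared error between the network's output and the target
loss = 0.5 * np.mean((a2 - y) ** 2)

# Backpropagation: gradients of L with respect to every weight and bias (chain rule)
d2 = (a2 - y) * a2 * (1 - a2)
dW2, db2 = a1.T @ d2, d2.sum(axis=0)
d1 = (d2 @ W2.T) * a1 * (1 - a1)
dW1, db1 = x.T @ d1, d1.sum(axis=0)

# Optimization (plain gradient descent): move each parameter against its gradient
lr = 0.1
W1 -= lr * dW1; b1 -= lr * db1
W2 -= lr * dW2; b2 -= lr * db2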

CODE IMPLEMENTATION:

[1]
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
[2]
# Importing the dataset
dataset = pd.read_csv('Churn_Modelling.csv')
X = dataset.iloc[:, 3:13]   # feature columns (indices 3 to 12)
y = dataset.iloc[:, 13]     # target column (Exited)
[3]
#Create dummy variables
geography=pd.get_dummies(X["Geography"],drop_first=True)
gender=pd.get_dummies(X['Gender'],drop_first=True)
[4]
## Concatenate the Data Frames

X=pd.concat([X,geography,gender],axis=1)

## Drop Unnecessary columns


X=X.drop(['Geography','Gender'],axis=1)

# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)

[5]
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

[6]
# Part 2 - Now let's make the ANN!
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LeakyReLU,PReLU,ELU
from tensorflow.keras.layers import Dropout

[7]
# Initialising the ANN
classifier = Sequential()
[8]
# Adding the input layer and the first hidden layer
classifier.add(Dense(units=11,activation='relu'))
[9]
# Adding the second hidden layer
classifier.add(Dense(units=6,activation='relu'))
[10]
# Adding the output layer (sigmoid so the output is a probability for binary classification)
classifier.add(Dense(units=1,activation='sigmoid'))
[11]
classifier.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
[12]
model_history=classifier.fit(X_train,y_train,validation_split=0.33,batch_size=10,epochs=50)
Epoch 1/50
536/536 [==============================] - 2s 5ms/step - loss: 0.5597 - accuracy: 0.7970
- val_loss: 0.5563 - val_accuracy: 0.7974
Epoch 2/50
536/536 [==============================] - 2s 4ms/step - loss: 0.5125 - accuracy: 0.8022
- val_loss: 0.5131 - val_accuracy: 0.7997
Epoch 3/50
536/536 [==============================] - 3s 6ms/step - loss: 0.4929 - accuracy: 0.8057
- val_loss: 0.4987 - val_accuracy: 0.8020
Epoch 4/50
536/536 [==============================] - 2s 4ms/step - loss: 0.4816 - accuracy: 0.8100
- val_loss: 0.4883 - val_accuracy: 0.8080
Epoch 5/50
536/536 [==============================] - 2s 4ms/step - loss: 0.4718 - accuracy: 0.8125
- val_loss: 0.4482 - val_accuracy: 0.8046
Epoch 6/50
536/536 [==============================] - 2s 4ms/step - loss: 0.4447 - accuracy: 0.8127
- val_loss: 0.4484 - val_accuracy: 0.8092
Epoch 7/50
536/536 [==============================] - 3s 5ms/step - loss: 0.4299 - accuracy: 0.8169
- val_loss: 0.4466 - val_accuracy: 0.8084
Epoch 8/50
536/536 [==============================] - 2s 5ms/step - loss: 0.4388 - accuracy: 0.8128
- val_loss: 0.4603 - val_accuracy: 0.8076
Epoch 9/50
536/536 [==============================] - 2s 4ms/step - loss: 0.4248 - accuracy: 0.8197
- val_loss: 0.4468 - val_accuracy: 0.8080
Epoch 10/50
536/536 [==============================] - 2s 4ms/step - loss: 0.4213 - accuracy: 0.8283
- val_loss: 0.4314 - val_accuracy: 0.8107
Epoch 11/50
536/536 [==============================] - 2s 4ms/step - loss: 0.4081 - accuracy: 0.8324
- val_loss: 0.4253 - val_accuracy: 0.8179
Epoch 12/50
536/536 [==============================] - 2s 4ms/step - loss: 0.4035 - accuracy: 0.8304
- val_loss: 0.4237 - val_accuracy: 0.8236
Epoch 13/50
536/536 [==============================] - 2s 4ms/step - loss: 0.3912 - accuracy: 0.8388
- val_loss: 0.4200 - val_accuracy: 0.8258
Epoch 14/50
536/536 [==============================] - 2s 5ms/step - loss: 0.3858 - accuracy: 0.8418
- val_loss: 0.4319 - val_accuracy: 0.8277
Epoch 15/50
536/536 [==============================] - 3s 5ms/step - loss: 0.3804 - accuracy: 0.8425
- val_loss: 0.4199 - val_accuracy: 0.8289
Epoch 16/50
536/536 [==============================] - 2s 4ms/step - loss: 0.3759 - accuracy: 0.8442
- val_loss: 0.3901 - val_accuracy: 0.8326
Epoch 17/50
536/536 [==============================] - 3s 5ms/step - loss: 0.3717 - accuracy: 0.8481
- val_loss: 0.4056 - val_accuracy: 0.8319
Epoch 18/50
536/536 [==============================] - 2s 4ms/step - loss: 0.3721 - accuracy: 0.8477
- val_loss: 0.4032 - val_accuracy: 0.8349
Epoch 19/50
536/536 [==============================] - 1s 3ms/step - loss: 0.3726 - accuracy: 0.8466
- val_loss: 0.3961 - val_accuracy: 0.8364
Epoch 20/50
536/536 [==============================] - 2s 3ms/step - loss: 0.3678 - accuracy: 0.8498
- val_loss: 0.4030 - val_accuracy: 0.8368
Epoch 21/50
536/536 [==============================] - 1s 3ms/step - loss: 0.3695 - accuracy: 0.8498
- val_loss: 0.4010 - val_accuracy: 0.8387
Epoch 22/50
536/536 [==============================] - 2s 3ms/step - loss: 0.3683 - accuracy: 0.8498
- val_loss: 0.3882 - val_accuracy: 0.8395
Epoch 23/50
536/536 [==============================] - 1s 3ms/step - loss: 0.3642 - accuracy: 0.8455
- val_loss: 0.4100 - val_accuracy: 0.8410
Epoch 24/50
536/536 [==============================] - 1s 3ms/step - loss: 0.3632 - accuracy: 0.8472
- val_loss: 0.4010 - val_accuracy: 0.8406
Epoch 25/50
536/536 [==============================] - 1s 3ms/step - loss: 0.3621 - accuracy: 0.8500
- val_loss: 0.4081 - val_accuracy: 0.8417
Epoch 26/50
536/536 [==============================] - 2s 3ms/step - loss: 0.3599 - accuracy: 0.8524
- val_loss: 0.4136 - val_accuracy: 0.8444
Epoch 27/50
536/536 [==============================] - 1s 3ms/step - loss: 0.3617 - accuracy: 0.8520
- val_loss: 0.4347 - val_accuracy: 0.8398
Epoch 28/50
536/536 [==============================] - 1s 2ms/step - loss: 0.3681 - accuracy: 0.8517
- val_loss: 0.4385 - val_accuracy: 0.8413
Epoch 29/50
536/536 [==============================] - 2s 3ms/step - loss: 0.3682 - accuracy: 0.8546
- val_loss: 0.3996 - val_accuracy: 0.8425
Epoch 30/50
536/536 [==============================] - 1s 3ms/step - loss: 0.3674 - accuracy: 0.8548
- val_loss: 0.4219 - val_accuracy: 0.8451
Epoch 31/50
536/536 [==============================] - 2s 3ms/step - loss: 0.3673 - accuracy: 0.8533
- val_loss: 0.4320 - val_accuracy: 0.8425
Epoch 32/50
536/536 [==============================] - 2s 3ms/step - loss: 0.3603 - accuracy: 0.8545
- val_loss: 0.4138 - val_accuracy: 0.8466
Epoch 33/50
536/536 [==============================] - 3s 5ms/step - loss: 0.3560 - accuracy: 0.8556
- val_loss: 0.4125 - val_accuracy: 0.8459
Epoch 34/50
536/536 [==============================] - 3s 5ms/step - loss: 0.3629 - accuracy: 0.8559
- val_loss: 0.4274 - val_accuracy: 0.8410
Epoch 35/50
536/536 [==============================] - 3s 5ms/step - loss: 0.3542 - accuracy: 0.8539
- val_loss: 0.4125 - val_accuracy: 0.8429
Epoch 36/50
536/536 [==============================] - 3s 5ms/step - loss: 0.3537 - accuracy: 0.8558
- val_loss: 0.4278 - val_accuracy: 0.8459
Epoch 37/50
536/536 [==============================] - 3s 5ms/step - loss: 0.3545 - accuracy: 0.8569
- val_loss: 0.4347 - val_accuracy: 0.8470
Epoch 38/50
536/536 [==============================] - 3s 6ms/step - loss: 0.3547 - accuracy: 0.8565
- val_loss: 0.4205 - val_accuracy: 0.8466
Epoch 39/50
536/536 [==============================] - 2s 4ms/step - loss: 0.3552 - accuracy: 0.8569
- val_loss: 0.4494 - val_accuracy: 0.8451
Epoch 40/50
536/536 [==============================] - 3s 5ms/step - loss: 0.3573 - accuracy: 0.8580
- val_loss: 0.4412 - val_accuracy: 0.8497
Epoch 41/50
536/536 [==============================] - 3s 5ms/step - loss: 0.3620 - accuracy: 0.8582
- val_loss: 0.4336 - val_accuracy: 0.8474
Epoch 42/50
536/536 [==============================] - 2s 3ms/step - loss: 0.3557 - accuracy: 0.8571
- val_loss: 0.4443 - val_accuracy: 0.8504
Epoch 43/50
536/536 [==============================] - 1s 3ms/step - loss: 0.3514 - accuracy: 0.8569
- val_loss: 0.4362 - val_accuracy: 0.8512
Epoch 44/50
536/536 [==============================] - 2s 3ms/step - loss: 0.3493 - accuracy: 0.8559
- val_loss: 0.4511 - val_accuracy: 0.8470
Epoch 45/50
536/536 [==============================] - 1s 2ms/step - loss: 0.3498 - accuracy: 0.8615
- val_loss: 0.4405 - val_accuracy: 0.8451
Epoch 46/50
536/536 [==============================] - 1s 3ms/step - loss: 0.3502 - accuracy: 0.8569
- val_loss: 0.4295 - val_accuracy: 0.8501
Epoch 47/50
536/536 [==============================] - 2s 3ms/step - loss: 0.3498 - accuracy: 0.8574
- val_loss: 0.4436 - val_accuracy: 0.8497
Epoch 48/50
536/536 [==============================] - 2s 3ms/step - loss: 0.3505 - accuracy: 0.8580
- val_loss: 0.4218 - val_accuracy: 0.8455
Epoch 49/50
536/536 [==============================] - 2s 3ms/step - loss: 0.3556 - accuracy: 0.8587
- val_loss: 0.4294 - val_accuracy: 0.8470
Epoch 50/50
536/536 [==============================] - 1s 3ms/step - loss: 0.3440 - accuracy: 0.8587
- val_loss: 0.4300 - val_accuracy: 0.8455

[13]
# list all data in history

print(model_history.history.keys())
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
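
Since matplotlib is already imported, these recorded curves can optionally be plotted. The cell below is an added illustration, not part of the original run.

# Optional: visualise training vs validation accuracy and loss over the epochs
plt.figure(figsize=(10, 4))

plt.subplot(1, 2, 1)
plt.plot(model_history.history['accuracy'], label='train accuracy')
plt.plot(model_history.history['val_accuracy'], label='val accuracy')
plt.xlabel('epoch'); plt.ylabel('accuracy'); plt.legend()

plt.subplot(1, 2, 2)
plt.plot(model_history.history['loss'], label='train loss')
plt.plot(model_history.history['val_loss'], label='val loss')
plt.xlabel('epoch'); plt.ylabel('loss'); plt.legend()

plt.tight_layout()
plt.show()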

[14]
# Part 3 - Making the predictions and evaluating the model

# Predicting the Test set results


y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)

[15]
# Calculate the Accuracy
from sklearn.metrics import accuracy_score
score=accuracy_score(y_test,y_pred)
[16]
score
0.858
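
Accuracy alone can hide which class is being misclassified, so a confusion matrix is a natural extra check on the same predictions. The cell below is an added sketch using scikit-learn's confusion_matrix; it was not part of the original run.

# Optional: confusion matrix on the test-set predictions
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_pred)   # rows: actual class, columns: predicted class
print(cm)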

RELATED QUESTIONS:
Ques1) What are the different types of neural networks and their applications?

Ans1) The main types of neural networks and their core applications are:

1. Feedforward Neural Networks (FNN): Mainly used for regression and classification tasks, such as
image and text classification.

2. Convolutional Neural Networks (CNN): Primarily applied to image and video analysis, including object
recognition and segmentation.

3. Recurrent Neural Networks (RNN): Mainly used for natural language processing (NLP) and time
series forecasting.

4. Long Short-Term Memory Networks (LSTM): Commonly used in NLP, machine translation, and
speech synthesis.
5. Autoencoders: Primarily employed for dimensionality reduction, anomaly detection, and image
denoising.

6. Generative Adversarial Networks (GAN): Mainly used for image generation, style transfer, and super-
resolution.

A brief Keras sketch below illustrates how the first three of these types differ in code.
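
None of the layer sizes or input shapes below come from the experiment; they are arbitrary assumptions chosen only to show the structural difference between the architectures.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, SimpleRNN

# 1. Feedforward network: stacked Dense layers over flat feature vectors
fnn = Sequential([Dense(16, activation='relu', input_shape=(10,)),
                  Dense(1, activation='sigmoid')])

# 2. Convolutional network: convolution and pooling over image grids (e.g. 28x28x1)
cnn = Sequential([Conv2D(8, (3, 3), activation='relu', input_shape=(28, 28, 1)),
                  MaxPooling2D((2, 2)),
                  Flatten(),
                  Dense(1, activation='sigmoid')])

# 3. Recurrent network: a recurrent layer over sequences (e.g. 20 timesteps x 4 features)
rnn = Sequential([SimpleRNN(16, input_shape=(20, 4)),
                  Dense(1, activation='sigmoid')])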

Ques2) What is regularization in neural networks and why is it important?

Ans2) Regularization in neural networks is a set of techniques used to prevent overfitting, where a model
fits the training data too closely and doesn't generalize well to new data. Regularization techniques add
constraints during training to encourage the network to learn important patterns while avoiding noise.
Common regularization methods include L1 and L2 regularization, dropout, early stopping, data
augmentation, batch normalization, and weight decay. Regularization is important because it helps neural
networks find a balance between complexity and generalization, resulting in models that perform better on
unseen data and are more robust in practical applications.
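
The sketch below shows how a few of these techniques look in the same Keras Sequential API used above; the layer sizes, the 0.01 L2 factor, the 0.2 dropout rate, and the patience value are illustrative assumptions.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras import regularizers
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([
    # L2 regularization (weight decay): penalises large weights in this layer
    Dense(16, activation='relu', input_shape=(11,),
          kernel_regularizer=regularizers.l2(0.01)),
    # Dropout: randomly disables 20% of these units during each training step
    Dropout(0.2),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Early stopping: halt training once the validation loss stops improving
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)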

Ques3) What are some common activation functions used in neural networks?
Ans3) Activation functions are mathematical functions used in artificial neural networks to introduce non-
linearity and enable the network to learn complex patterns. Common activation functions include the
sigmoid, hyperbolic tangent (tanh), rectified linear unit (ReLU), Leaky ReLU, Parametric ReLU (PReLU),
Exponential Linear Unit (ELU), Swish, and gated functions (used in RNNs like LSTM and GRU). The choice of
activation function depends on the specific problem and architecture, but ReLU variants are popular due to
their simplicity and effectiveness. It's essential to experiment with different functions to determine the most
suitable one for a given task.
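
A few of the functions named above are simple enough to write out directly. The NumPy sketch below is an added illustration; the test values are arbitrary.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))       # squashes any input into (0, 1)

def tanh(z):
    return np.tanh(z)                      # squashes any input into (-1, 1)

def relu(z):
    return np.maximum(0.0, z)              # zero for negative inputs, identity otherwise

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)   # small non-zero slope for negative inputs

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sigmoid(z), tanh(z), relu(z), leaky_relu(z), sep='\n')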

Ques4) What are some real-world applications of neural networks?


Ans4) Neural networks, with their capacity to learn from data, have diverse real-world applications. They
are used in computer vision for image analysis, natural language processing for sentiment analysis and
translation, recommendation systems, healthcare for disease diagnosis, autonomous vehicles, finance for
fraud detection, gaming, manufacturing quality control, energy management, social media, astronomy, art
and music generation, and environmental monitoring. Their adaptability and problem-solving capabilities
make them a vital technology in a wide range of fields.
