ML Exp-7

The document discusses implementing a single layer perceptron learning algorithm in Python. It provides theory on perceptrons and their components, types of perceptrons, and concludes by explaining how to build and train a single layer perceptron model in code.


EXPERIMENT 7

AIM:
Implement Single Layer Perceptron Learning Algorithm in Python

THEORY:

The single-layer feedforward neural network was introduced in the late 1950s by
Frank Rosenblatt, marking the starting point of artificial neural networks and,
eventually, deep learning. At that time, predictions were made with statistical
machine learning or traditional hand-coded programs. The perceptron is one of the
first and most straightforward models of artificial neural networks, and despite
its simplicity it has proven successful at solving specific classification problems.

The perceptron is one of the simplest artificial neural network architectures. Introduced
by Frank Rosenblatt in 1957, it is the simplest type of feedforward neural network,
consisting of a single layer of input nodes fully connected to a layer of output nodes. It
can learn only linearly separable patterns. The perceptron uses a particular kind of
artificial neuron known as the threshold logic unit (TLU), first introduced by McCulloch
and Walter Pitts in the 1940s.

Types of Perceptron

Single-Layer Perceptron: This type of perceptron is limited to learning linearly separable
patterns. It is effective for tasks where the data can be divided into distinct categories
by a straight line.
Multilayer Perceptron: Multilayer perceptrons possess enhanced processing capabilities
because they consist of two or more layers, and can therefore handle more complex patterns
and relationships within the data.
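As a concrete illustration of linear separability (our own example, not from the original text): the AND gate can be separated by the line x1 + x2 = 1.5, so a single-layer perceptron can represent it, whereas XOR cannot be separated by any single straight line. A minimal sketch:

```python
# Illustrative example: the AND gate is linearly separable.
# The line x1 + x2 = 1.5 puts (1, 1) on one side and the other
# three inputs on the other, so a single threshold unit suffices.
def and_gate_separator(x1, x2):
    return 1 if x1 + x2 - 1.5 > 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print((x1, x2), "->", and_gate_separator(x1, x2))
```

No such separating line exists for XOR, which is why a multilayer perceptron is needed there.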

Basic Components of Perceptron

A perceptron, the basic unit of a neural network, comprises essential components that
collaborate in information processing.

Input Features: The perceptron takes multiple input features; each input feature represents
a characteristic or attribute of the input data.
Weights: Each input feature is associated with a weight, determining the significance of each
input feature in influencing the perceptron’s output. During training, these weights are
adjusted to learn the optimal values.
Summation Function: The perceptron combines each input with its respective weight and sums
the results to produce a weighted sum.
Activation Function: The weighted sum is then passed through an activation function. The
perceptron uses the Heaviside step function, which compares the weighted sum against a
threshold and outputs 0 or 1.
Output: The final output of the perceptron is determined by the activation function's result.
For example, in binary classification problems, the output might represent a predicted class
(0 or 1).
Bias: A bias term is often included in the perceptron model. The bias allows the model to
make adjustments that are independent of the input. It is an additional parameter that is
learned during training.
Learning Algorithm (Weight Update Rule): During training, the perceptron learns by adjusting
its weights and bias based on a learning algorithm. A common approach is the perceptron
learning algorithm, which updates weights based on the difference between the predicted
output and the true output.
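The components above can be sketched in plain Python. This is an illustrative sketch of a single update step (the function and variable names are our own, not from the document):

```python
def perceptron_update(weights, bias, x, y_true, lr=0.1):
    """One step of the perceptron learning rule (illustrative sketch)."""
    # Summation function: weighted sum of inputs plus bias
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    # Activation: Heaviside step function (threshold at 0)
    y_pred = 1 if z >= 0 else 0
    # Weight update rule: w <- w + lr * (y - y_hat) * x
    error = y_true - y_pred
    weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    bias = bias + lr * error
    return weights, bias, y_pred
```

When the prediction is correct the error is 0 and nothing changes; otherwise the weights move toward (or away from) the input, which is the essence of the perceptron learning algorithm.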

Implementation

Build the single-layer perceptron model:

Initialize the weights and the learning rate. Here we use (number of inputs + 1) weight
values, i.e. one extra for the bias.
Define the single linear layer.
Define the activation function. Here we use the Heaviside step function.
Define the prediction function.
Define the loss function.
Define the training step, in which the weights and bias are updated accordingly.
Define fitting of the model.
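Putting these steps together, one possible implementation is sketched below. The class and method names are our own choices (the original document does not fix an API); it is trained here on the AND gate as a linearly separable example.

```python
import numpy as np

class SingleLayerPerceptron:
    def __init__(self, n_inputs, lr=0.1):
        # (number of inputs + 1) weights: w[0] is the bias term
        self.w = np.zeros(n_inputs + 1)
        self.lr = lr

    def activation(self, z):
        # Heaviside step function: outputs 1 if z >= 0, else 0
        return np.where(z >= 0, 1, 0)

    def predict(self, X):
        # Linear layer: weighted sum of inputs plus bias, then activation
        z = X @ self.w[1:] + self.w[0]
        return self.activation(z)

    def loss(self, X, y):
        # Loss as the number of misclassified samples
        return int(np.sum(self.predict(X) != y))

    def fit(self, X, y, epochs=20):
        # Training: update weights and bias sample by sample
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                error = yi - self.predict(xi.reshape(1, -1))[0]
                self.w[1:] += self.lr * error * xi
                self.w[0] += self.lr * error
        return self

# Train on the AND gate (linearly separable)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
model = SingleLayerPerceptron(n_inputs=2).fit(X, y)
print(model.predict(X))  # → [0 0 0 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this training loop reaches zero loss in a finite number of updates.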
CONCLUSION:
The Single Layer Perceptron Learning Algorithm facilitates the training of a single-layer
neural network, suitable for linearly separable data classification tasks. It iteratively adjusts
weights to minimize classification errors, offering a simple yet effective approach to binary
classification problems.
