
TEAM-04

DEEP PULSE NETWORK

NEURAL NETWORK (MACHINE LEARNING)

TEAM MEMBERS
DHILOTHI.B
DHIVYADHARSHINI.V
AKSHAYA.G
BHARGAVI RAO.A.B
AKSHITA.S
DIIVYASRI.A
INTRODUCTION TO NEURAL NETWORKS:
• A neural system is a network that processes and transfers information.
It plays a vital role both in biology (how our brain works) and in
machine learning (how computers learn).
• Neural networks handle complex tasks like pattern recognition, language
understanding, and prediction.
• They are capable of learning from large data sets.

WHAT IS MACHINE LEARNING?
• Definition: "Machine Learning is a field of AI that enables computers to
learn from data and improve over time without being explicitly
programmed."

• Categories:

Supervised Learning
Unsupervised Learning
Reinforcement Learning
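
As a concrete illustration of the first category, here is a minimal supervised-learning sketch in Python using scikit-learn; the toy AND-gate dataset and the choice of logistic regression are illustrative assumptions, not part of the slides.

from sklearn.linear_model import LogisticRegression

# Toy labeled data: the logical AND of two inputs (illustrative only)
X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # input features
y = [0, 0, 0, 1]                      # known labels

model = LogisticRegression()
model.fit(X, y)                 # learn from labeled examples
print(model.predict([[1, 1]]))  # typically predicts [1]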
BIOLOGICAL VS ARTIFICIAL NEURONS

Aspect      | Biological Neuron          | Artificial Neuron
Origin      | Human brain                | Inspired by biology, used in ML
Inputs      | Signals from other neurons | Numeric data (features)
Processing  | Electrochemical activity   | Weighted sum + activation function
Output      | Signal to next neuron      | Output to next layer in network
Learning    | Synapse strength adapts    | Weights/biases updated via training
STRUCTURE OF A NEURAL NETWORK
1. Input Layer
• Receives raw data (e.g., pixel values, features)
• One node per input feature
2. Hidden Layers
• Perform intermediate computations
• Multiple layers = deep neural networks
• Each node applies weights, bias, and activation
3. Output Layer
• Produces the final result (e.g., class label, value)
• One node per output category (for classification)
4. Connections
• Each neuron is connected to neurons in the next layer
• Connections have weights that are adjusted during training
5. Bias
• Added to each neuron’s input to improve learning flexibility
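
A minimal Keras sketch of this layer structure; the sizes (4 input features, 8 hidden units, 3 output classes) are illustrative assumptions.

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),               # input layer: one node per feature
    keras.layers.Dense(8, activation="relu"),     # hidden layer: weights, bias, activation
    keras.layers.Dense(3, activation="softmax"),  # output layer: one node per category
])
model.summary()  # each Dense neuron is connected to every neuron in the next layer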
HOW DOES A NEURON WORK?
• Receives Inputs (x₁, x₂, ..., xₙ)
• Applies Weights (w₁, w₂, ..., wₙ) to each
input
• Adds Bias (b)
• Computes Weighted Sum:
z = w₁x₁ + w₂x₂ + ... + wₙxₙ + b
• Passes Through Activation Function:
y = f(z)
• Output is sent to the next layer
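
The same steps in a short NumPy sketch; the input values, weights, and choice of sigmoid activation are illustrative assumptions.

import numpy as np

x = np.array([0.5, -1.2, 3.0])   # inputs x1, x2, x3
w = np.array([0.4, 0.1, -0.6])   # weights w1, w2, w3
b = 0.2                          # bias

z = np.dot(w, x) + b             # weighted sum: w1*x1 + w2*x2 + w3*x3 + b
y = 1 / (1 + np.exp(-z))         # activation function f(z), sigmoid here
print(y)                         # output sent to the next layer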
FORWARD PROPAGATION
• Purpose: Pass input data through the network to generate output
• Steps:
• Input is fed into the network
• Each neuron computes:
z = Σ(wᵢ·xᵢ) + b
a = activation(z)
• Activations flow layer by layer (input → hidden → output)
• Output Layer gives the final prediction
• No learning happens in this step—just computation
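
A small NumPy sketch of one forward pass through a single hidden layer; the shapes and weight values are illustrative assumptions.

import numpy as np

def relu(z):                     # activation used in the hidden layer
    return np.maximum(0, z)

x  = np.array([1.0, 2.0])                             # input
W1 = np.array([[0.1, 0.3], [0.2, 0.4], [-0.5, 0.6]])  # hidden layer: 3 neurons
b1 = np.zeros(3)
W2 = np.array([[0.7, -0.1, 0.2]])                     # output layer: 1 neuron
b2 = np.zeros(1)

a1 = relu(W1 @ x + b1)   # z = Σ(wᵢ·xᵢ) + b, then a = activation(z)
output = W2 @ a1 + b2    # final prediction; no weights change here
print(output)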
LOSS FUNCTIONS
• A loss function measures how far the model's prediction is from the actual result.
• It helps the network learn and improve during training.
• Mean Squared Error (MSE) – Used in regression.
• Cross-Entropy Loss – Used in classification.
• Lower loss = better predictions.
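
Both losses in a short NumPy sketch; the predicted and true values are illustrative assumptions (cross-entropy shown in its binary form).

import numpy as np

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.8])

mse = np.mean((y_true - y_pred) ** 2)               # Mean Squared Error (regression)
ce = -np.mean(y_true * np.log(y_pred)
              + (1 - y_true) * np.log(1 - y_pred))  # binary cross-entropy (classification)
print(mse, ce)                                      # both shrink as predictions improve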
BACKPROPAGATION AND TRAINING
• BACKPROPAGATION: Core algorithm for training neural networks.
• GRADIENT DESCENT: Minimizes the loss function by adjusting weights.
• PROCESS:
1. Calculate output via forward pass.
2. Compare output to target (loss).
3. Propagate error backward.
4. Update weights using gradients.
• EPOCH: One complete pass over the training dataset.
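
A bare-bones gradient descent sketch for a single weight; the toy data, learning rate, and epoch count are illustrative assumptions.

# Fit y = w*x to one training example with squared-error loss
x, y_true = 2.0, 8.0   # the true relationship here is y = 4x
w, lr = 0.0, 0.05      # initial weight and learning rate

for epoch in range(20):               # one pass over the (one-example) dataset
    y_pred = w * x                    # 1. forward pass
    loss = (y_pred - y_true) ** 2     # 2. compare output to target
    grad = 2 * (y_pred - y_true) * x  # 3. error propagated back: dLoss/dw
    w -= lr * grad                    # 4. update weight using the gradient
print(w)                              # approaches 4.0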
TYPES OF NEURAL NETWORKS:
• FEEDFORWARD NEURAL NETWORK (FNN):
No cycles; data moves in one direction.
• CONVOLUTIONAL NEURAL NETWORK (CNN):
Great for image and spatial data.
• RECURRENT NEURAL NETWORK (RNN):
Suitable for sequence data.
• TRANSFORMER:
Attention-based; powerful for NLP.
CNN-CONVOLUTIONAL NEURAL NETWORK

*Used for:
Image classification, object detection, etc.
*Layers:
Convolution: Extracts features using filters.
Pooling: Downsamples to reduce dimensionality.
Flattening: Converts data for fully connected layers.
*Example:
Classifying digits from images (see the sketch below).
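
A minimal Keras sketch matching the layers above; the filter count, kernel size, and output size are illustrative assumptions.

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),          # e.g., 28x28 grayscale digit images
    keras.layers.Conv2D(16, 3, activation="relu"),  # convolution: extracts features with filters
    keras.layers.MaxPooling2D(),                    # pooling: downsamples feature maps
    keras.layers.Flatten(),                         # flattening: prepares for dense layers
    keras.layers.Dense(10, activation="softmax"),   # one output per digit class
])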
RNN-RECURRENT NEURAL NETWORK

*DESIGNED FOR:
Sequence data (e.g., time series, text).
*LOOPING MECHANISM:
Remembers previous steps using hidden states.
*VARIANTS:
LSTM (Long Short-Term Memory), GRU (Gated
Recurrent Unit).
*EXAMPLE:
Predicting next word in a sentence.
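
A minimal Keras sketch of an LSTM for next-word prediction; the vocabulary size, sequence length, and layer sizes are illustrative assumptions.

from tensorflow import keras

vocab_size, seq_len = 5000, 20
model = keras.Sequential([
    keras.layers.Input(shape=(seq_len,)),
    keras.layers.Embedding(vocab_size, 64),                # map word IDs to vectors
    keras.layers.LSTM(128),                                # hidden state remembers previous steps
    keras.layers.Dense(vocab_size, activation="softmax"),  # probability of each next word
])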
TRANSFORMERS
Replaces recurrence with attention.
*KEY CONCEPTS:
Self-attention: Learns relationships between all tokens.
Positional encoding: Adds order info to input.
*IMPACT:
Revolutionized NLP (e.g., ChatGPT, BERT, T5).
*STRENGTH:
Processes entire sequences in parallel (see the sketch below).
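
A bare-bones self-attention sketch in NumPy (single head, random toy weights; a real transformer adds multiple heads, positional encoding, and feedforward layers).

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
tokens = rng.random((4, 8))                          # 4 tokens, 8-dim embeddings (toy values)
Wq, Wk, Wv = (rng.random((8, 8)) for _ in range(3))  # query/key/value projections

Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
scores = softmax(Q @ K.T / np.sqrt(8))  # every token attends to every token, in parallel
attended = scores @ V                   # relationship-weighted mix of token values
print(attended.shape)                   # (4, 8)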
COMPARING NEURAL NETWORK TYPES

Network Type          | Used For                       | Main Features
FNN (Feedforward)     | Classification, regression     | Simple structure, data flows in one direction
CNN (Convolutional)   | Image & video recognition      | Detects spatial features using filters and pooling layers
RNN (Recurrent)       | Time series, speech, language  | Maintains memory of past inputs, good for sequential data
DNN (Deep Neural Net) | Complex patterns, big datasets | Many hidden layers, increased capacity to model deep relationships
TOOLS AND FRAMEWORKS

*TENSORFLOW:
Developed by Google; versatile and scalable.
*PYTORCH:
Facebook’s framework; dynamic and flexible.
*KERAS:
User-friendly API; runs on TensorFlow backend.
*SCIKIT-LEARN:
Traditional ML with some NN support.
EXAMPLE USE CASE: DIGIT CLASSIFICATION

*Dataset:
MNIST – 28x28 pixel images of handwritten digits.
*Model:
Simple ANN with input, hidden, and output layers.
*Code Overview (sketched below):
1. Load and preprocess data.
2. Build and compile the model.
3. Train and evaluate accuracy.
*Goal: Classify digits (0–9).
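
A minimal sketch of the three steps using Keras' built-in MNIST loader; the hidden layer size and epoch count are illustrative assumptions.

from tensorflow import keras

# 1. Load and preprocess data
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# 2. Build and compile the model
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),    # input layer
    keras.layers.Dense(128, activation="relu"),    # hidden layer
    keras.layers.Dense(10, activation="softmax"),  # output layer: digits 0-9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 3. Train and evaluate accuracy
model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))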
CHALLENGES IN NEURAL NETWORKS
*OVERFITTING:
Model performs well on training but poorly on new data.
*LARGE DATA REQUIREMENTS:
Needs lots of labeled data.
*COMPUTATIONAL COST:
High memory and GPU usage.
*INTERPRETABILITY:
Difficult to understand how the model reaches its decisions.
CONCLUSION AND FUTURE DIRECTIONS

*SUMMARY:
Neural networks are powerful ML tools.
Different architectures suit different tasks.
*TRENDS:
Transformer models dominating NLP.
Growing interest in efficient and explainable AI.
*ETHICS:
Bias, transparency, and accountability.
