
LEVERAGING FEDERATED LEARNING FOR EDGE INTELLIGENCE IN RESOURCE-CONSTRAINED ENVIRONMENTS

OUR TEAM:
• Mohniish Gurnani (IIB2021013)
• Harendra Singh (IIB2021015)
• Harsh Tirhekar (IIB2021016)
• Atharv Badgujar (IIB2021017)
• Suhail Khan (IIB2021020)

Under the guidance of:
Dr. Bibhas Ghoshal
FEDERATED LEARNING?
Federated learning operates by training a central model across distributed devices or servers. Rather than transferring all data to a central repository, the model is trained locally on each device, with only model updates transmitted. This decentralized approach enhances privacy, scalability, and efficiency in machine learning systems.
RESEARCH ON FEDERATED LEARNING
PROJECT OVERVIEW

OUR DATASET
This dataset comprises 500 bird images divided into 3 classes. It includes original camera-trap images and internet-sourced images.

OUR MODEL
Our model, MobileNet, excels in federated learning setups due to its design and high efficiency. Built for resource-constrained environments, it delivers strong performance while minimizing computational overhead, making it ideal for distributed learning. A sketch of the client model follows this overview.

DEPLOYMENT
Our clients are deployed on Raspberry Pi 3 B+ boards, while the server runs on a desktop. All are connected over a LAN and communicate via sockets.

ARCHITECTURE
Our setup uses a 1-server, 2-client architecture, enabling distributed learning. The server coordinates model aggregation while the clients conduct local training.
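A minimal sketch of what a MobileNet-based client model for the 3-class bird dataset could look like in tf.keras. The frozen backbone, input size, head, and optimizer here are illustrative assumptions, not necessarily our exact configuration:

```python
import tensorflow as tf

def build_client_model(num_classes=3, input_shape=(224, 224, 3)):
    # MobileNet backbone pretrained on ImageNet, with a small
    # classification head for the bird dataset (assumed setup).
    base = tf.keras.applications.MobileNet(
        input_shape=input_shape, include_top=False, weights="imagenet")
    base.trainable = False  # keep local training light for the Pi
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Freezing the backbone is one way to keep per-round local training within the Pi's 1 GB of RAM; only the small head is updated on-device.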
OUR DATASET
NECESSARY OPTIMIZATIONS FOR DEPLOYMENT ON RASPBERRY PI
Specs:
• Broadcom BCM2837B0, Cortex-A53 (ARMv8) 64-bit SoC @ 1.4 GHz
• 1 GB LPDDR2 SDRAM
• Operating System: We installed Raspberry Pi OS Lite (64-bit) with no desktop environment, completely removing GUI overhead from the operating system.
• Swap size: Given these specs, it was extremely resource-intensive even to install TensorFlow on the clients. For smooth training we increased the default swap size from 100 MB to 1024 MB, which gave a large performance boost and smooth training of our model.
OPTIMIZATIONS FOR REDUCED LATENCY
• Gradient compression takes the approach of delaying the synchronization of weight updates that are small. Although small weight updates might not be sent for a given batch, this information is not discarded: once the updates for a location accumulate to a larger value, they are propagated, as sketched below.
• Since there is no information loss, only delayed updates, this does not lead to a significant loss in accuracy or convergence rate.
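A minimal sketch of this delayed-synchronization idea, assuming NumPy arrays for the updates; the threshold value and function names are illustrative assumptions rather than our exact implementation:

```python
import numpy as np

def sparsify_update(grad, residual, threshold=1e-3):
    # Accumulate this batch's gradient into the residual buffer.
    residual += grad
    # Send only entries whose accumulated magnitude is large enough.
    mask = np.abs(residual) >= threshold
    to_send = np.where(mask, residual, 0.0)
    # Sent values leave the buffer; small values stay and keep accumulating.
    residual[mask] = 0.0
    return to_send, residual
```

Each client keeps one residual buffer per weight tensor (initialized to zeros), so no information is lost, only deferred.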

• Combining quantization and gradient compression offers significant efficiency gains in federated learning with minimal accuracy trade-off. Quantization reduces model size, whereas compression slashes communication bandwidth. Together they cut both model size and bandwidth use, improving scalability and performance. These techniques are essential for making federated learning work well on devices with limited resources. A quantization sketch follows.
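A minimal sketch of one common quantization scheme (linear 8-bit) applied to an update tensor before transmission, assuming NumPy; the exact scheme in our setup may differ:

```python
import numpy as np

def quantize_int8(update):
    # Map float32 values onto int8 with a per-tensor scale,
    # cutting the payload roughly 4x.
    scale = float(np.max(np.abs(update))) / 127.0 + 1e-12
    q = np.round(update / scale).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    # Server-side reconstruction of the approximate update.
    return q.astype(np.float32) * scale
```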
WHY FEDERATED LEARNING?
• SCALABILITY: Training models on multiple devices enables efficient use of computational power in edge computing environments that facilitate on-device training and inference.
• PRIVACY: Federated learning allows training models on decentralized data, avoiding the need to centralize sensitive information. This preserves user privacy by keeping data local and only sharing model updates.
• COLLABORATION: Collaborative federated learning breaks down data silos, enabling decentralized model training across industries. It preserves privacy, facilitates innovation, and complies with regulations, unlocking new possibilities for AI applications.
HOW? FEDERATED AVERAGING (FEDAVG)
• Initialize: The server initializes the model parameters and distributes them to the participating clients.
• Training Rounds: Each client trains on its local data, computes gradients, and sends these gradients back to the server.
• Aggregation: The server aggregates the received model updates, averaging the gradients from participating clients weighted by the size of their datasets, and updates the global model parameters with the result (see the sketch after this list).
• Evaluate: Clients evaluate the global model's performance on a test dataset, and the server collects these evaluation metrics to monitor the model's progress.
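A minimal sketch of the weighted aggregation step, assuming each client's parameters arrive as a list of NumPy arrays; function and variable names are illustrative, not our exact implementation:

```python
import numpy as np

def fed_average(client_weights, client_sizes):
    """FedAvg aggregation: average each layer across clients,
    weighted by each client's local dataset size.
    client_weights: one list of np.ndarray layers per client
    client_sizes:   number of local training examples per client
    """
    total = float(sum(client_sizes))
    fractions = [n / total for n in client_sizes]
    num_layers = len(client_weights[0])
    return [
        sum(frac * weights[layer]
            for weights, frac in zip(client_weights, fractions))
        for layer in range(num_layers)
    ]
```

In our 1-server, 2-client setup, `client_sizes` would simply be the two clients' local image counts.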
CHALLENGES OF FEDERATED LEARNING
• Statistical Heterogeneity: Devices frequently generate and collect data in a non-identically distributed manner across the network. The number of data points across devices may vary significantly, and there may be an underlying structure present that captures the relationship amongst devices and their associated distributions.
• Expensive Communication: Federated networks are potentially comprised of a massive number of devices (e.g., millions of smartphones), and communication in the network can be slower than local computation by many orders of magnitude. In order to fit a model to data generated by the devices in a federated network, it is therefore necessary to develop communication-efficient methods that iteratively send small messages or model updates as part of the training process, as opposed to sending the entire dataset over the network.
WHY RESOURCE-CONSTRAINED DEVICES?
• They enhance data privacy by processing sensitive information locally, minimizing the need for data transmission over networks. Deploying on such devices also promotes edge computing, distributing computation tasks closer to data sources, which is crucial for applications requiring low latency or offline functionality. Overall, this optimizes efficiency and reliability in resource-limited environments.
• Deploying machine learning models on resource-constrained devices is a cost-effective alternative to high-end hardware. By leveraging sensors and small chipsets with lower computational capacity, the deployment process becomes more economically viable.
USE CASES
• ENHANCED LANGUAGE MODELS IN MOBILE DEVICES
• HEALTHCARE ANALYTICS
• FRAUD PREVENTION IN FINANCIAL SERVICES
• AUTONOMOUS VEHICLES
OUR FOCUS
Camera traps deployed in remote locations provide an effective
method for ecologists to monitor and study wildlife in a non-
invasive way. However, current camera traps suffer from two
problems. First, the images are manually classified and
counted, which is expensive. Second, due to manual coding, the
results are often stale by the time they get to the ecologists.
Using the Internet of Things (IoT) combined with federated
learning represents a good solution for both these problems, as
the images can be classified automatically, and the results
immediately made available to ecologists.
OUR WORK
METRICS: We tabulated our model's accuracy across multiple federated learning count values, reaching a highest accuracy of 78% at a count of 6. We also compared it against VGG16 trained on the same dataset as a classical deep-learning baseline (a tabulation sketch follows).
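A minimal sketch of how per-count accuracy could be tabulated, reusing the build_client_model helper sketched earlier; the argument names and the assumed tf.data test set are illustrative:

```python
def tabulate_accuracy(model, per_round_weights, test_ds):
    """Record global-model accuracy after each federated round.
    per_round_weights: list of weight lists, one per round (assumed)
    test_ds:           held-out tf.data test dataset (assumed)
    """
    results = []
    for rnd, weights in enumerate(per_round_weights, start=1):
        model.set_weights(weights)            # load that round's global model
        _, acc = model.evaluate(test_ds, verbose=0)
        results.append((rnd, acc))
    return results
```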
THANK YOU
