Slide: Next Steps
Bullets
● New Tasks
● Stronger Privacy
Speaker Notes
“We will move from our single-computer tests to real hospital servers, each keeping
its own images private. We’ll try different federated methods—FedAvg, FedProx,
FedBN, and a personalized variant—to see which works best. Then we’ll expand
beyond simply classifying images by training the model to outline tumors and catch
rare cases. To protect privacy, we’ll add techniques so no one can reverse-engineer
patient data from updates. Finally, we’ll bring in basic patient info alongside the
images to train on both kinds of data together.”
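The notes above name FedAvg as the baseline aggregation method and mention protecting the updates hospitals send. As a rough, illustrative sketch only, the snippet below shows the FedAvg weighted-averaging step with optional Gaussian noise added to each client's parameters as a stand-in for the update-protection idea; the function and variable names (aggregate_fedavg, client_states, n_samples) are hypothetical and not taken from the project code.

```python
# Minimal FedAvg sketch: weighted average of client model parameters,
# with optional noise on each client's update. Illustrative only; all
# names are hypothetical, not from the project code.
import numpy as np

def aggregate_fedavg(client_states, n_samples, noise_std=0.0, rng=None):
    """client_states: list of dicts mapping layer name -> np.ndarray
    n_samples:     number of training images held by each client
    noise_std:     if > 0, Gaussian noise added to each client's parameters
                   (a crude stand-in for privacy-preserving noise)"""
    rng = rng or np.random.default_rng()
    total = float(sum(n_samples))
    global_state = {}
    for key in client_states[0]:
        acc = np.zeros_like(client_states[0][key], dtype=np.float64)
        for state, n in zip(client_states, n_samples):
            update = state[key].astype(np.float64)
            if noise_std > 0:
                update = update + rng.normal(0.0, noise_std, size=update.shape)
            acc += (n / total) * update  # weight by local dataset size
        global_state[key] = acc
    return global_state

# Toy usage: three "hospitals" holding different numbers of images.
clients = [{"conv1.weight": np.random.randn(4, 3)} for _ in range(3)]
sizes = [500, 1200, 800]
new_global = aggregate_fedavg(clients, sizes, noise_std=0.01)
```

FedProx, FedBN, and personalized variants modify this step (e.g. by adding a proximal term locally or keeping batch-norm layers client-specific), which is what the planned comparison would evaluate.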
Slide: Limitations
Bullets
● Limited Variety
○ 25 000 images all come from just 1 250 originals, so they look very similar
● One-Box Testing
● Only Classification
● Scaling Unknown
Speaker Notes
“We recognize that our images are all variations of 1 250 originals, so we may miss
some tissue differences. All our experiments ran on one GPU, not on real-world
hospital networks, so we didn’t face network delays or servers dropping out. We
only used MobileNetV2 and focused on classifying images—we haven’t yet done
tumor outlining or rare-case spotting. Finally, we don’t yet know how well this will
scale if dozens of hospitals join.”
Slide: Conclusion
Bullets
● High Accuracy
Speaker Notes
“In summary, we trained MobileNetV2 on the LC25000 dataset and achieved nearly
99.7 % accuracy across five tissue types. We built a full pipeline with a 70/15/15
split, early stopping, and learning-rate scheduling. A literature review found little
prior federated-learning work on this dataset, which is why we chose this direction. We
also laid out clear next steps for real multi-site FL, model personalization, fair
performance across hospitals, and strong privacy protections.”
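For reference, the pipeline elements named in these notes (MobileNetV2 backbone, 70/15/15 split, early stopping, learning-rate scheduling) map onto standard Keras components roughly as sketched below. The image size, optimizer, and other hyperparameters shown are assumptions for illustration, not the project's actual settings.

```python
# Sketch of the described pipeline: MobileNetV2 classifier, 70/15/15 split,
# early stopping, and learning-rate scheduling. Hyperparameters are assumed.
import tensorflow as tf
from sklearn.model_selection import train_test_split

def build_model(num_classes=5, input_shape=(224, 224, 3)):
    base = tf.keras.applications.MobileNetV2(
        weights="imagenet", include_top=False, input_shape=input_shape)
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def split_70_15_15(images, labels, seed=42):
    # Hold out 30%, then split that portion in half for validation and test.
    x_train, x_rest, y_train, y_rest = train_test_split(
        images, labels, test_size=0.30, stratify=labels, random_state=seed)
    x_val, x_test, y_val, y_test = train_test_split(
        x_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)

callbacks = [
    tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=5, restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss", factor=0.5, patience=2),
]
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=50, batch_size=32, callbacks=callbacks)
```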