
EXPLORING DEEP LEARNING ARCHITECTURES FOR

EYE FUNDUS DISEASE SEGMENTATION

ABSTRACT:

Eye fundus diseases are critical conditions that can lead to severe vision
impairment and even permanent blindness if not diagnosed and treated
promptly. Manual diagnosis of these diseases is time-consuming and heavily
reliant on the expertise of ophthalmologists. This research aims to develop an
efficient and accurate diagnostic system for eye fundus disease classification
and segmentation using artificial intelligence techniques. The study involves
the compilation of a comprehensive dataset of eye fundus images,
encompassing various types of diseases, including diabetic retinopathy, age-
related macular degeneration, glaucoma, and others. Each image is
accompanied by corresponding ground-truth annotations provided by expert
ophthalmologists for segmentation. The experimental results demonstrate the
effectiveness and reliability of the proposed system in accurately classifying
eye fundus diseases and segmenting affected regions within the images. The
AI models achieve high accuracy and provide valuable insights into the
presence and extent of various fundus diseases.

Keywords: Eye Fundus Disease, Artificial Intelligence, Deep Learning,


Disease Classification, Segmentation, Convolutional Neural Networks, U-Net,
Ophthalmology, Medical Imaging, Diagnostic System.
Existing System:

The existing system predicts physician fixations on ophthalmology optical coherence
tomography (OCT) reports from eye-tracking data, using CNN-based saliency
prediction methods to aid in the education of ophthalmologists and
ophthalmologists-in-training. Methods: Fifteen ophthalmologists were
recruited to examine 20 randomly selected OCT reports and rate the
likelihood of glaucoma for each report on a scale of 0-100. Eye movements
were collected using a Pupil Labs Core eye tracker, and fixation heat maps
were generated from the fixation data. Results: A model trained with
traditional saliency mapping achieved a correlation coefficient (CC) of
0.208, a Normalized Scanpath Saliency (NSS) of 0.8172, a Kullback-Leibler
divergence (KLD) of 2.573, and a Structural Similarity Index (SSIM) of 0.169.
Conclusions: The TranSalNet model was able to predict fixations within
certain regions of the OCT report with reasonable accuracy, but more data are
needed to improve model accuracy. Future steps include expanding data
collection, improving data quality, and modifying the model architecture.

Drawback:

• The existing system's performance is hindered by a lack of sufficient data, which indicates the need for more extensive data collection efforts.
• The quality of the data used by the existing system needs to be improved to enhance model performance and reliability.
• The current model architecture may not be optimized for capturing the intricate details and nuances present in OCT reports.
• The existing system's CNN-based saliency prediction method yields relatively low correlation coefficient (CC) and Structural Similarity Index (SSIM) values, indicating limited accuracy in predicting fixations.

INTRODUCTION
The diagnosis of eye fundus diseases plays a crucial role in the early
detection and management of various ocular conditions, such as diabetic
retinopathy, macular degeneration, and glaucoma. Timely and accurate
identification of these diseases is essential to prevent vision loss and improve
patient outcomes. In recent years, the integration of artificial intelligence (AI)
techniques has revolutionized the field of ophthalmology, offering advanced
tools for the classification and segmentation of eye fundus images. Artificial
intelligence, particularly machine learning algorithms, has shown remarkable
capabilities in analyzing large datasets of eye fundus images. These techniques
enable automated identification and classification of subtle pathological
changes that might be challenging for human observers to detect. The
development of AI models for eye fundus disease diagnosis involves training
algorithms on diverse datasets, encompassing a wide range of retinal
pathologies and normal variations. This training allows the AI models to learn
patterns and features indicative of specific diseases, paving the way for robust
and accurate automated diagnostics.

PROPOSED SYSTEM:

The proposed system aims to develop an advanced diagnostic tool for


eye fundus disease classification and segmentation using cutting-edge artificial
intelligence techniques. Leveraging the power of deep learning algorithms, the
system offers an efficient and accurate solution to assist ophthalmologists in
diagnosing and treating various eye fundus diseases promptly. A
comprehensive dataset of eye fundus images is collected, comprising diverse
cases of different eye fundus diseases, including diabetic retinopathy, age-
related macular degeneration, glaucoma, and others. The dataset is annotated
by expert ophthalmologists to provide ground-truth segmentation masks for
each image. The proposed system employs Convolutional Neural Networks
(CNNs) for eye fundus disease classification. The CNN architecture is trained
on the pre-processed dataset, enabling it to learn distinctive disease patterns
and features from the images. The dataset is split into training, validation, and
testing sets. During the training process, both the disease classification CNN
and the segmentation U-Net are optimized using backpropagation and gradient
descent techniques to minimize the loss and maximize accuracy.
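As a rough illustration of the train/validation/test split described above, the sketch below uses scikit-learn's train_test_split on placeholder arrays; the array sizes and split ratios are illustrative assumptions, not the exact configuration used in this project.

# Illustrative sketch of splitting the dataset into training, validation,
# and testing sets (assumes NumPy and scikit-learn; the random arrays below
# are stand-ins for the real fundus images and one-hot disease labels).
import numpy as np
from sklearn.model_selection import train_test_split

images = np.random.rand(200, 64, 64, 3).astype("float32")   # placeholder images
labels = np.eye(4)[np.random.randint(0, 4, size=200)]       # placeholder one-hot labels

# First hold out a test set, then carve a validation set out of the remainder.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    images, labels, test_size=0.15, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.15, random_state=42)

print(X_train.shape, X_val.shape, X_test.shape)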

Advantages:

• Leveraging AI techniques such as deep learning, the proposed system offers advanced diagnostic tools for eye fundus disease classification and segmentation, promising more accurate and efficient diagnoses.
• Utilizes a comprehensive dataset of annotated eye fundus images, encompassing diverse cases of various eye fundus diseases.
• Deep learning models are developed using Keras and served through Django framework applications.
• Employs Convolutional Neural Networks (CNNs) for disease classification and U-Net for segmentation.

PREPARING THE DATASET:


This dataset contains approximately 1,200 training and 200 test fundus images, from which features are extracted and classified into 4 classes:

1) Cataract
2) Diabetic retinopathy
3) Glaucoma
4) Normal

8. LITERATURE SURVEY

General
A literature review is a body of text that aims to review the critical
points of current knowledge on, and/or methodological approaches to, a
particular topic. It draws on secondary sources and discusses published
information in a particular subject area, sometimes restricted to a certain
time period.
Its ultimate goal is to bring the reader up to date with the current
literature on a topic and to form the basis for another goal, such as future
research that may be needed in the area; it often precedes a research proposal
and may be just a simple summary of sources. Usually, it has an organizational
pattern and combines both summary and synthesis.
A summary is a recap of important information about the source, whereas a
synthesis is a re-organization, a reshuffling, of that information. It might
give a new interpretation of old material, combine new with old
interpretations, or trace the intellectual progression of the field, including
major debates. Depending on the situation, the literature review may evaluate
the sources and advise the reader on the most pertinent or relevant of them.
The following surveys help to understand past and present perspectives on
automated eye fundus disease detection.

Review of Literature Survey:

Title: Ocular Diseases Diagnosis in Fundus Images using a Deep Learning:


Approaches, tools and Performance evaluation

Author: Yaroub Elloumi, Mohamed Akil, Henda Boudegga

Year : 2023

Ocular pathology detection from fundus images presents an important challenge
in health care. In fact, each pathology has different severity stages that may
be deduced by verifying the existence of specific lesions. Each lesion is
characterized by morphological features, and several lesions of different
pathologies have similar features. We note that a patient may be affected
simultaneously by several pathologies. Consequently, ocular pathology
detection presents a multiclass classification problem with a complex
resolution principle. Several methods for detecting ocular pathologies from
fundus images have been proposed. Methods based on deep learning are
distinguished by higher detection performance, due to their capability to
configure the network with respect to the detection objective. This work
proposes a survey of ocular pathology detection methods based on deep
learning. First, we study the existing methods for either lesion segmentation
or pathology classification. Afterwards, we extract the principal processing
steps and analyze the proposed neural network structures. Subsequently, we
identify the hardware and software environment required to employ the deep
learning architectures. Thereafter, we investigate the experimentation
principles involved in evaluating the methods and the databases used for the
training and testing phases. The detection performance ratios and execution
times are also reported and discussed.

Title : Automatic Detection of Diabetic Eye Disease Through Deep


Learning Using Fundus Images

Author : Sarki, Rubina, Ahmed, Khandakar, Wang, Hua and Zhang

Year : 2020

Diabetes Mellitus, or Diabetes, is a disease in which a person’s body fails to


respond to insulin released by their pancreas, or it does not produce sufficient
insulin. People suffering from diabetes are at high risk of developing various
eye diseases over time. As a result of advances in machine learning techniques,
early detection of diabetic eye disease using an automated system brings
substantial benefits over manual detection. A variety of advanced studies
relating to the detection of diabetic eye disease have recently been published.
This article presents a systematic survey of automated approaches to diabetic
eye disease detection from several aspects, namely: i) available datasets, ii)
image preprocessing techniques, iii) deep learning models, and iv) performance
evaluation metrics. The survey provides a comprehensive synopsis of diabetic
eye disease detection approaches, including state-of-the-art approaches in the
field, and aims to provide valuable insight for research communities,
healthcare professionals, and patients with diabetes.

Title : Data Driven Approach for Eye Disease Classification with Machine
Learning
Author: Sadaf Malik , Nadia Kanwal , Mamoona Naveed Asghar

Year : 2019

Medical health systems have been concentrating on artificial intelligence


techniques for speedy diagnosis. However, the recording of health data in a
standard form still requires attention so that machine learning can be more
accurate and reliable by considering multiple features. The aim of this study is
to develop a general framework for recording diagnostic data in an
international standard format to facilitate prediction of disease diagnosis based
on symptoms using machine learning algorithms. Efforts were made to ensure
error-free data entry by developing a user-friendly interface. Furthermore,
multiple machine learning algorithms including Decision Tree, Random
Forest, Naive Bayes and Neural Network algorithms were used to analyze
patient data based on multiple features, including age, illness history and
clinical observations. This data was formatted according to structured
hierarchies designed by medical experts, whereas diagnosis was made as per
the ICD-10 coding developed by the American Academy of Ophthalmology.
Furthermore, the system is designed to evolve through self-learning by adding
new classifications for both diagnosis and symptoms. The classification results
from tree-based methods demonstrated that the proposed framework performs
satisfactorily, given a sufficient amount of data. Owing to a structured data
arrangement, the random forest and decision tree algorithms’ prediction rate is
more than 90% as compared to more complex methods such as neural
networks and the naïve Bayes algorithm.

Title : Detection of glaucoma using artificial intelligence in fundus image: A


narrative review

Author: Eman Hassan Hagar

Year : 2023
Glaucoma is a serious disease usually called the "silent thief of sight". The
disease develops with no observable signs or symptoms, leading to blindness if
it is not kept under control and observed in the early stages. A lot of work
has been done over the years to increase the accuracy of detecting and
predicting glaucomatous changes within the eyes. Artificial intelligence
models using fundus imaging modalities are among the most promising tools to
detect and predict glaucoma with high accuracy.

Title : OCULAR EYE DISEASE PREDICTION USING MACHINE


LEARNING

Author: Rachana Devanaboina, Sreeja Badri, Madhuri Reddy Depa, Dr. Sunil


Bhutad

Year : 2021

The eye is the most important sense organ, enabling us to see the world.
Ocular eye diseases are among the major causes of vision problems, and one of
the most common of these is cataract. A cataract is a clouding of the lens
that affects the vision of the eye and causes blurriness. It is mostly found
in elderly people due to their age. Computer-aided diagnosis of ocular eye
diseases is a complicated task. In the present work, we predict ocular eye
diseases using machine learning techniques, including Convolutional Neural
Networks (CNNs) and image preprocessing. The accuracy of the outcome is
displayed through the confusion matrix.

9. SYSTEM STUDY

9.1 Project Goal:


The goal in the diagnosis of eye fundus disease classification and segmentation
using artificial intelligence (AI) techniques is to enhance the efficiency,
accuracy, and speed of identifying and categorizing various pathological
conditions affecting the retina. The eye fundus, or the back of the eye, contains
critical information about the health of the retina and is instrumental in
diagnosing conditions such as diabetic retinopathy, macular degeneration, and
glaucoma. Leveraging AI techniques in this domain aims to revolutionize the
traditional methods of diagnosis, providing a more sophisticated and
automated approach to the analysis of fundus images.

9.2 Objectives:
1. Develop a robust artificial intelligence (AI) model for automated diagnosis
of eye fundus diseases with a focus on accurate disease classification.

2. Investigate and implement advanced machine learning algorithms to


enhance the efficiency and reliability of eye fundus disease detection in
medical images.

3. Explore innovative AI techniques for the segmentation of distinct regions


within the eye fundus images, enabling precise localization of pathological
features.

4. Evaluate the performance of the developed AI model in terms of sensitivity,


specificity, and overall diagnostic accuracy to ensure its clinical utility.

5. Investigate the integration of deep learning architectures to improve the


depth of feature extraction for more nuanced detection and differentiation of
eye fundus diseases.
9.3 Scope:

The scope of utilizing artificial intelligence (AI) techniques in the diagnosis of


eye fundus diseases encompasses a comprehensive approach to disease
classification and segmentation. By leveraging advanced machine learning
algorithms, deep learning models, and image processing techniques, AI
systems can analyze intricate features present in fundus images, aiding in the
accurate identification and categorization of various eye conditions. This scope
involves the development of intelligent systems capable of discerning subtle
abnormalities, such as diabetic retinopathy, macular degeneration, and
glaucoma, from fundus images. Additionally, segmentation techniques play a
crucial role in precisely delineating anatomical structures within the eye,
facilitating targeted analysis and localized disease detection. The application of
AI in this domain holds the potential to enhance diagnostic accuracy,
streamline healthcare workflows, and contribute to early intervention strategies
for improved patient outcomes.
DESIGN ARCHITECTURE
Workflow diagram:

Data Collection -> Pre-processing -> Training Dataset / Testing Dataset -> U-NET Architecture Model -> Segmentation

Workflow Diagram
USECASE DIAGRAM:
CLASS DIAGRAM:


A class diagram is basically a graphical representation of the static view of
the system and represents different aspects of the application, so a
collection of class diagrams represents the whole system. The name of the
class diagram should be meaningful and describe the aspect of the system it
covers. Each element and its relationships should be identified in advance.
The responsibility (attributes and methods) of each class should be clearly
identified, and for each class a minimum number of properties should be
specified, because unnecessary properties will make the diagram complicated.
Use notes whenever required to describe some aspect of the diagram, and at the
end of the drawing it should be understandable to the developer/coder.
Finally, before making the final version, the diagram should be drawn on plain
paper and reworked as many times as possible to make it correct.

ACTIVITY DIAGRAM:

An activity is a particular operation of the system. Activity diagrams are not
only used for visualizing the dynamic nature of a system, but they are also
used to construct the executable system by using forward and reverse
engineering techniques. The only thing missing in an activity diagram is the
message part: it does not show any message flow from one activity to another.
An activity diagram is sometimes considered a flow chart; although the diagram
looks like a flow chart, it is not. It shows different flows such as parallel,
branched, concurrent, and single.
SEQUENCE DIAGRAM:

Sequence diagrams model the flow of logic within your system in a visual
manner, enabling you both to document and validate your logic, and are
commonly used for both analysis and design purposes. Sequence diagrams are
the most popular UML artifact for dynamic modelling, which focuses on
identifying the behaviour within your system. Other dynamic modelling
techniques include activity diagramming, communication diagramming, timing
diagramming, and interaction overview diagramming. Sequence diagrams, along
with class diagrams and physical data models, are arguably the most important
design-level models for modern business application development.
ER DIAGRAM:

An entity relationship diagram (ERD), also known as an entity


relationship model, is a graphical representation of an information system that
depicts the relationships among people, objects, places, concepts or events
within that system. An ERD is a data modeling technique that can help define
business processes and be used as the foundation for a relational database.
Entity relationship diagrams provide a visual starting point for database design
that can also be used to help determine information system requirements
throughout an organization. After a relational database is rolled out, an ERD
can still serve as a reference point, should any debugging or business process re-
engineering be needed later.
COLLABORATION DIAGRAM:

A collaboration diagram shows the objects and relationships involved in an
interaction, and the sequence of messages exchanged among the objects during
the interaction.

The collaboration diagram can be a decomposition of a class, a class diagram,
or part of a class diagram. It can also be the decomposition of a use case, a
use case diagram, or part of a use case diagram.

The collaboration diagram shows messages being sent between classes and
objects (instances). A diagram is created for each system operation that
relates to the current development cycle (iteration).
METHODOLOGY

Preprocessing and Training the Model (CNN): The dataset is preprocessed
through steps such as image reshaping, resizing, and conversion to array form.
Similar processing is also done on the test image. The dataset covers 4
different classes (Cataract, Diabetic retinopathy, Glaucoma, Normal), and any
image from it can be used as a test image for the software.

Raw image -> Build a sequential CNN model -> Train (CNN weights) -> Eye disease segmentation

The training dataset is used to train the model (CNN) so that it can identify
the test image and the disease it shows. The CNN has different layers: Dense,
Dropout, Activation, Flatten, Convolution2D, and MaxPooling2D. After the model
is trained successfully, the software can identify the Cataract, Diabetic
retinopathy, Glaucoma, and Normal classes of images contained in the dataset.
After successful training and preprocessing, the test image is compared
against the trained model to predict the disease class.
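A minimal sketch of such a sequential classification CNN is shown below, assuming Keras/TensorFlow, 224x224x3 input images, and the four classes listed earlier; the filter counts and layer sizes are illustrative and not necessarily those used in this project.

# Illustrative sequential CNN using the layer types named above
# (Conv2D, MaxPooling2D, Flatten, Dense, Dropout); hyperparameters are
# assumptions for this sketch, not the project's exact settings.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(224, 224, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(256, activation="relu"),
    Dropout(0.5),                     # regularization against overfitting
    Dense(4, activation="softmax"),   # Cataract, Diabetic retinopathy, Glaucoma, Normal
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()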
LIST OF MODULES

1. Data Analysis
2. Manual Architecture
3. LeNet Architecture
4. U-Net Architecture
5. Deployment

ARCHITECTURE COMPARISON:

COMPARISON OF LENET AND U-NET:


FEATURE | LENET | U-NET
Model Type | Classification (traditional CNN) | Segmentation (encoder-decoder architecture)
Input Shape | (224, 224, 3) | Variable, e.g., (256, 256, 3)
Number of Layers | 7 (including convolutional, pooling, and dense layers) | 15 (including convolutional, pooling, and transpose convolution layers)
Convolutional Layers | 2 (with different filters) | 8 (in encoder and decoder blocks)
Pooling Layers | 2 (MaxPooling2D) | 4 (MaxPooling2D in encoder blocks)
Dense Layers | 2 (one hidden with 256 units, one output with 4 units) | None; uses convolutional layers for both encoding and decoding
Activation Functions | ReLU, Softmax | ReLU, Sigmoid
Optimizer | Adam (learning rate = 0.001) | Adam
Loss Function | Categorical Crossentropy | Binary Crossentropy
Metrics | Accuracy, Precision | (Not specified)
Epochs | 50 | 20
Accuracy | 0.89688 | 0.94158 to 0.94325


MODULE DESCRIPTION

IMPORTING THE IMAGES FROM THE DATASET:

We import our dataset using the Keras preprocessing ImageDataGenerator
function, in which we set the image size, rescaling, shear range, zoom range,
and horizontal flip. We then load the image dataset from its folder through
the data generator function, creating train, test, and validation generators
and setting the target size, batch size, and class mode. Using these
generators, we train our own network, built by adding CNN layers.

TRAINING THE MODEL ON THE GIVEN DATASET

1. Data Analysis

Data analysis is the process of cleaning, transforming, and processing raw
data and extracting actionable, relevant information that helps in making
informed decisions. The procedure helps reduce the risks inherent in
decision-making by providing useful insights. The data analysis process
involves gathering all the information, processing it, exploring the data, and
using it to find patterns and other insights.
In our data analysis we examine how the image data is organized: we count how
many images are available and check whether each normal (fundus) image has a
corresponding mask image.
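A minimal sketch of this data-analysis step is shown below: counting the images available per class and checking that each fundus image has a matching mask file. The directory names and file extensions are assumptions for illustration only.

# Illustrative data-analysis sketch: per-class image counts and image/mask
# correspondence check. The folder layout below is assumed, not the project's.
import os

data_dir = "dataset/train"
for class_name in sorted(os.listdir(data_dir)):
    class_path = os.path.join(data_dir, class_name)
    if os.path.isdir(class_path):
        print(class_name, len(os.listdir(class_path)), "images")

image_dir, mask_dir = "dataset/images", "dataset/masks"
missing = [f for f in os.listdir(image_dir)
           if not os.path.exists(os.path.join(mask_dir, f))]
print("images without a matching mask:", len(missing))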
Manual Architecture :

Creating a manual architecture for image segmentation typically involves


designing a neural network or algorithm that can process an input image and
produce pixel-level segmentation masks or regions of interest. Here's a
simplified manual architecture for image segmentation:

1. Input Image: The architecture takes an input image as its primary input. This
can be a grayscale or color image depending on your application.

2. Preprocessing: Preprocess the input image to enhance features and reduce


noise. Common preprocessing steps include resizing, normalization, and data
augmentation.

3. Convolutional Neural Network (CNN): Use a convolutional neural network


as the backbone of your segmentation architecture. CNNs are highly effective
at capturing local patterns and spatial features in images.
- You can use a pre-trained CNN architecture like VGG, ResNet, or U-Net as
a starting point, or design a custom architecture.

4. Encoder-Decoder Architecture: Many segmentation architectures follow an


encoder-decoder structure:

- Encoder: The encoder part of the network extracts features from the input
image through a series of convolutional layers. These layers reduce spatial
dimensions while increasing the number of feature maps.

- Decoder: The decoder part of the network upsamples the feature maps to
the original image size while reducing the number of channels. This process
helps generate the segmentation mask.

5. Skip Connections: To improve segmentation accuracy, consider adding skip


connections that concatenate feature maps from the encoder to the
corresponding layers in the decoder. This allows the network to capture both
high-level and low-level features.

6. Convolutional Transpose (Deconvolution) Layers: Use convolutional


transpose layers (sometimes called deconvolution layers) to upsample feature
maps in the decoder. These layers expand the spatial resolution of the feature
maps.

7. Activation Function: Apply an activation function, typically a softmax or


sigmoid, to the final layer of the decoder to produce the segmentation mask.
For binary segmentation, sigmoid is often used; for multi-class segmentation,
softmax is common.

8. Loss Function: Define an appropriate loss function to measure the difference


between the predicted segmentation mask and the ground truth mask. Common
loss functions for segmentation include binary cross-entropy and categorical
cross-entropy.
9. Optimization Algorithm: Use an optimization algorithm like stochastic
gradient descent (SGD), Adam, or RMSprop to update the network's weights
and minimize the loss function.

10. Training Data: Train the network using a dataset of annotated images. The
dataset should include input images and corresponding segmentation masks.

11. Post-processing: Apply post-processing techniques if needed, such as morphological operations (erosion, dilation) to refine the segmentation mask.

12. Inference: During inference, feed an unseen image through the trained
network to obtain the segmentation mask.

13. Evaluation: Evaluate the segmentation accuracy using metrics like Intersection over Union (IoU), Dice coefficient, or pixel accuracy.

This manual architecture provides a high-level overview of the components and steps involved in image segmentation. Depending on your specific task and dataset, you may need to customize and fine-tune the architecture for optimal results. Additionally, modern architectures may incorporate more advanced techniques, such as attention mechanisms or conditional random fields, to further enhance segmentation performance.
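To make steps 1-13 concrete, the following is a minimal Keras sketch of such an encoder-decoder segmenter with a single skip connection. The 256x256 input size, filter counts, Adam optimizer, and binary cross-entropy loss are illustrative assumptions rather than the configuration used in this project.

# Minimal sketch of a manual encoder-decoder segmentation network (assumed sizes).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_simple_segmenter(input_shape=(256, 256, 3)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: convolutional layers extract features, pooling halves the resolution.
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    skip = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2)(skip)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)

    # Decoder: a transposed convolution upsamples back to the input resolution.
    x = layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same")(x)
    # Skip connection: concatenate encoder features with decoder features.
    x = layers.Concatenate()([x, skip])
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)

    # Sigmoid output gives a per-pixel probability for binary segmentation.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_simple_segmenter()
model.summary()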
LeNet Architecture:

LeNet, short for "LeNet-5," is a classic convolutional neural network (CNN) architecture developed by Yann LeCun and colleagues in the 1990s. While LeNet is renowned for its role in image classification tasks, it can be adapted for image segmentation, including applications such as eye fundus disease segmentation. Here's a high-level overview of how LeNet can be modified for this purpose:

1. Input Image: The input to the network is an eye fundus image, typically in grayscale or color, depending on your dataset and requirements.

2. Preprocessing: Preprocess the input images as needed. Common preprocessing steps include resizing to a consistent input size, normalization, and data augmentation.

3. LeNet Architecture: LeNet consists of a series of convolutional and pooling layers followed by fully connected layers. For eye fundus disease segmentation, you will need to modify the architecture to produce pixel-wise segmentation instead of classification.

4. Convolutional Layers: Retain the convolutional layers from the original LeNet architecture. These layers are responsible for learning hierarchical features from the input image.

5. Pooling Layers: Keep the max-pooling layers from the original LeNet.
Pooling helps reduce the spatial dimensions of feature maps.

6. Encoder-Decoder Modification: Modify the fully connected layers of the original LeNet into an encoder-decoder architecture for segmentation. Remove the final fully connected layers that were used for classification.
7. Decoder: Design a decoder portion that mirrors the encoder. It consists of
transposed convolutional (also known as deconvolutional) layers that upsample
the feature maps back to the original image size.

8. Activation Function: Apply an appropriate activation function, such as sigmoid (for binary segmentation) or softmax (for multi-class segmentation), to the output layer of the decoder to obtain pixel-wise segmentation masks.

9. Loss Function: Define a suitable loss function for segmentation tasks, such
as binary cross-entropy or categorical cross-entropy, depending on the nature
of your data.

10. Optimization Algorithm: Utilize an optimization algorithm, like stochastic gradient descent (SGD), Adam, or RMSprop, to train the network by minimizing the defined loss function.

11. Training Data: Train the network using a labeled dataset of eye fundus disease images and their corresponding pixel-wise segmentation masks.

12. Post-processing (Optional): Depending on the quality of the segmentation masks, you may apply post-processing techniques such as morphological operations (erosion, dilation) or connected component analysis to refine the segmentations.

13. Evaluation: Evaluate the segmentation performance using standard metrics like Intersection over Union (IoU), Dice coefficient, or pixel accuracy, as shown in the sketch below.
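Below is a small NumPy sketch of the IoU and Dice metrics named in the evaluation step; both inputs are assumed to be binary masks of the same shape, and the toy 4x4 masks are purely illustrative.

# Sketch of IoU and Dice coefficient for binary segmentation masks.
import numpy as np

def iou_score(pred, target, eps=1e-7):
    # Intersection over Union between two binary masks.
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

def dice_score(pred, target, eps=1e-7):
    # Dice coefficient: 2*|A intersect B| / (|A| + |B|).
    intersection = np.logical_and(pred, target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example with 4x4 masks.
pred = np.array([[1, 1, 0, 0]] * 4)
target = np.array([[1, 0, 0, 0]] * 4)
print("IoU :", iou_score(pred, target))   # ~0.50
print("Dice:", dice_score(pred, target))  # ~0.67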

It's essential to note that while LeNet can serve as a starting point for
segmentation tasks, modern architectures, such as U-Net or FCN (Fully
Convolutional Network), are generally more suitable for segmentation due to
their specialized design for pixel-wise predictions. These architectures often
yield better results and are more commonly used in contemporary computer
vision tasks, including medical image segmentation.
LeNet Architecture:

U-Net Architecture:
The U-Net architecture is a popular deep learning architecture for image segmentation tasks, including eye fundus disease segmentation. It was originally developed for biomedical image segmentation and has since found applications in various medical image analysis tasks. The name "U-Net" is derived from the U-shaped architecture of the network.

Here's an overview of the U-Net architecture for eye fundus disease segmentation:

Encoder-Decoder Structure:

U-Net follows an encoder-decoder structure. It consists of two main parts: the encoder and the decoder.

The encoder captures features from the input image at multiple scales by using
convolutional and pooling layers. It gradually reduces the spatial dimensions
while increasing the number of feature maps.

The decoder then takes these features and upsamples them to the original
image size while reducing the number of feature maps. This helps generate a
detailed segmentation mask.

Skip Connections:

One of the key innovations of U-Net is the use of skip connections that connect
corresponding layers between the encoder and decoder.
These skip connections allow the network to capture both high-level and low-
level features, which is crucial for accurate segmentation.

Skip connections concatenate feature maps from the encoder to the corresponding layers in the decoder.

Contracting and Expansive Paths:

The encoder path is often referred to as the contracting path because it reduces
spatial dimensions.

The decoder path is called the expansive path because it increases the spatial
dimensions.

Skip connections connect the contracting and expansive paths, facilitating the
flow of information between them.

Final Layer:

The final layer of the U-Net architecture typically consists of a convolutional layer with a softmax activation function for multi-class segmentation or a sigmoid activation function for binary segmentation.

The output of this layer is the segmentation mask, where each pixel is
classified into the desired classes (e.g., tumor or background).

Loss Function:
Common loss functions for U-Net-based segmentation tasks include binary
cross-entropy loss for binary segmentation or categorical cross-entropy loss for
multi-class segmentation.

The loss function measures the difference between the predicted segmentation
mask and the ground truth mask.
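As an illustration of such a loss, here is a hedged TensorFlow sketch of a Dice loss, which is commonly combined with or substituted for cross-entropy in segmentation training; the smoothing constant is an illustrative choice, not a value taken from this project.

# Sketch of a Dice loss usable as a Keras loss function.
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1.0):
    # 1 - Dice coefficient, computed over flattened binary masks.
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    dice = (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)
    return 1.0 - dice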

Training Data:

To train a U-Net model for eye fundus disease segmentation, you need a dataset of annotated fundus images. The dataset should include the input fundus photographs and corresponding pixel-level segmentation masks indicating the regions of interest (e.g., lesions).

Inference:

During inference, you feed an unseen fundus image through the trained U-Net model to obtain the segmentation mask, which highlights the areas of interest, such as lesions or other affected regions.

Post-processing:

Post-processing steps, such as morphological operations (e.g., erosion, dilation) or connected component analysis, can be applied to refine the segmentation mask and remove any artifacts.

U-Net has been widely adopted in medical image segmentation due to its
ability to capture fine details and its effectiveness in handling limited training
data. Researchers and practitioners often customize U-Net architectures by
adjusting the number of layers, filter sizes, and skip connections to suit the
specific requirements of their eye fundus disease segmentation tasks.
Additionally, data augmentation techniques are commonly used to increase the
diversity of training data and improve model generalization.
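To tie the pieces above together, the following is a compact Keras sketch of a U-Net-style network with a contracting path, a bottleneck, an expansive path, and skip connections. The depth, filter counts, and 256x256 input resolution are illustrative assumptions, not the exact configuration used in this work.

# Compact sketch of a U-Net-style segmentation network (assumed sizes).
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    # Two successive 3x3 convolutions, as in the standard U-Net building block.
    x = layers.Conv2D(filters, 3, activation="relu", padding="same")(x)
    return layers.Conv2D(filters, 3, activation="relu", padding="same")(x)

def build_unet(input_shape=(256, 256, 3), num_classes=1):
    inputs = layers.Input(shape=input_shape)

    # Contracting path (encoder).
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D(2)(c2)

    # Bottleneck.
    b = conv_block(p2, 128)

    # Expansive path (decoder) with skip connections to the encoder.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    u2 = layers.Concatenate()([u2, c2])
    c3 = conv_block(u2, 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    u1 = layers.Concatenate()([u1, c1])
    c4 = conv_block(u1, 32)

    # Final 1x1 convolution: sigmoid for binary masks, softmax for multi-class.
    activation = "sigmoid" if num_classes == 1 else "softmax"
    outputs = layers.Conv2D(num_classes, 1, activation=activation)(c4)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

unet = build_unet()
unet.summary()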
U-Net Architecture:

DEPLOY

Deploying the model in the Django framework and predicting the output

In this module, the trained deep learning model is saved to a Hierarchical Data Format file (an .h5 file), which is then deployed in our Django framework to provide a better user interface and to predict the output.
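The following is a minimal sketch of that deployment flow under stated assumptions: the trained model is saved as an .h5 file and loaded inside a hypothetical Django view that segments an uploaded fundus image. The file path, view name, upload field name, and 256x256 input size are illustrative, not taken from the actual project.

# After training, the model can be exported once, e.g.:
#     model.save("models/fundus_unet.h5")
# The Django view below then loads and reuses that .h5 file.
import numpy as np
import tensorflow as tf
from django.http import JsonResponse
from PIL import Image

# Load the .h5 model once at import time so every request reuses it.
MODEL = tf.keras.models.load_model("models/fundus_unet.h5")  # assumed path

def segment_fundus(request):
    # Accept an uploaded fundus image and return the predicted lesion area.
    uploaded = request.FILES["image"]                 # assumed form field name
    img = Image.open(uploaded).convert("RGB").resize((256, 256))
    batch = np.expand_dims(np.asarray(img, dtype="float32") / 255.0, axis=0)

    mask = MODEL.predict(batch)[0, :, :, 0]           # per-pixel probabilities
    lesion_pixels = int((mask > 0.5).sum())           # thresholded lesion size
    return JsonResponse({"lesion_pixels": lesion_pixels})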

Django

Django is a high-level Python web framework that enables rapid development of secure and maintainable websites. Built by experienced developers, Django takes care of much of the hassle of web development, so you can focus on writing your app without needing to reinvent the wheel. It is free and open source, has a thriving and active community, great documentation, and many options for free and paid-for support.
Django helps you write software that is:

Complete

Django follows the "Batteries included" philosophy and provides almost everything developers might want to do "out of the box". Because everything you need is part of the one "product", it all works seamlessly together, follows consistent design principles, and has extensive and up-to-date documentation.

Output Screenshot:
Conclusion:

In conclusion, the application of artificial intelligence techniques in the diagnosis of eye fundus diseases through classification and segmentation has
shown promising results in enhancing the efficiency and accuracy of medical
assessments. The use of advanced algorithms and machine learning models has
facilitated the automatic identification and categorization of various eye fundus
pathologies, providing valuable support to healthcare professionals. These AI-
driven approaches have the potential to streamline the diagnostic process,
enabling early detection and intervention, thereby improving patient outcomes.
However, ongoing research and validation studies are crucial to ensuring the
robustness and reliability of these techniques in real-world clinical settings,
and collaborative efforts between the medical and AI communities are
essential for the continued development and implementation of effective tools
for eye fundus disease diagnosis.
FUTURE WORK:

In the realm of diagnosing eye fundus diseases, the integration of advanced artificial intelligence (AI) techniques holds great promise for future
developments. The ongoing exploration of deep learning models, particularly
convolutional neural networks (CNNs), offers an avenue for enhancing the
accuracy and efficiency of disease classification and segmentation in eye
fundus images. Future work could focus on refining existing models to handle
a broader spectrum of pathologies and ensuring robustness across diverse
datasets. Additionally, exploring novel architectures that incorporate multi-
modal information, such as combining imaging and patient clinical data, may
further improve diagnostic capabilities. Addressing challenges related to
interpretability and the integration of AI systems into clinical workflows will
be crucial for facilitating the adoption of these technologies in real-world
medical settings. Furthermore, continuous efforts in collecting large and
diverse datasets, along with collaborative initiatives between clinicians and AI
researchers, will be essential for training models that generalize well and
contribute to the evolution of reliable and accessible tools for the early
detection and management of eye fundus diseases.
