RETINAL
Base Paper
1. https://www.researchgate.net/publication/342285658_Deep_Learning-Based_Detection_of_Pigment_Signs_for_Analysis_and_Diagnosis_of_Retinitis_Pigmentosa
2. https://www.researchgate.net/publication/327294998_Machine_Learning_and_Deep_Learning_approaches_for_Retinal_Disease_Diagnosis
Algorithm Description
Convolutional Neural Network: Deep learning and transfer learning have transformed how we handle many kinds of data, thanks to their capacity to learn efficiently. We apply the same idea here by choosing a deep learning model, the convolutional neural network (CNN), which works on the principle of learned filters. Each convolutional layer holds a set of filters that identify and extract features from the input image and pass them on to deeper layers for further processing; a layer can contain as many filters as the data demands. Filters are, in effect, feature detectors over the input. Alongside the convolutional layers, the model uses supporting layers such as max pooling, activation functions, batch normalization, and dropout. Together with a flatten layer and an output layer, these make up the CNN. Flattening is needed so the convolutional feature maps can be fed to the dense layer, which outputs the probability of the predicted class.
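To illustrate how a filter acts as a feature detector, and how max pooling summarizes its responses, here is a minimal NumPy sketch using a single hand-made vertical-edge filter. This is only a toy demonstration: in a real CNN the filter weights are learned during training rather than fixed by hand.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNN layers)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2d(fmap, size=2):
    """Non-overlapping max pooling: keep the strongest response per window."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A toy 6x6 "image": left half dark (0), right half bright (1).
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A vertical-edge filter: responds where brightness jumps left-to-right.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

features = conv2d(image, kernel)  # 5x5 feature map, peaks along the edge column
pooled = maxpool2d(features)      # 2x2 summary after max pooling
```

The feature map is zero everywhere except at the column where the dark-to-bright edge sits, which is exactly the "feature detector" behaviour described above; max pooling then keeps only the strongest response in each window.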
https://techieyantechnologies.com/2022/07/how-to-install-anaconda/
https://techieyantechnologies.com/2022/06/get-started-with-creating-new-environment-in-anaconda-configuring-jupyter-notebook-and-installing-libraries-using-requirements-txt-2/
Dataset Description
The dataset was downloaded from a private data repository which may no longer be available. It is split into train and test sets, each of which is further divided into negative and positive folders: training_negative contains 386 images and training_positive contains 134 images, while test_negative contains 96 images and test_positive contains 34 images. All images share the same resolution of 3072 x 2048 pixels.
Figure: sample negative and positive images from the dataset.
Issues Faced
1. Some libraries may fail to install automatically; in that case, install them manually with pip install "module_name/library", e.g. pip install pandas.
2. Make sure you have the latest (or the specific required) version of Python, since otherwise you may run into version mismatches.
3. Add the Python and Anaconda paths to your environment variables so that Python files and the Anaconda environment run correctly inside your code editor.
4. Make sure to update the paths in the code to wherever your dataset and model are saved.
Note:
All the required data has been provided here. Please feel free to contact me for the model weights or if you face any issues.
https://www.linkedin.com/in/abhinay-lingala-5a3ab7205/
Evaluation Metrics
Evaluation metrics are one of the most important steps in any machine learning or deep learning project: they tell us how well the model performs on new, unseen data. Many metrics could be used to assess model performance; since this is a binary classification problem solved with a neural network, we use binary_cross_entropy/log_loss, which compares the actual class with the predicted probabilities. First a corrected probability is computed: the probability the model assigned to the datapoint's actual class. For example, if datapoint ID8 actually belongs to class 0 but the predicted probability of class 1 is 0.56, the corrected probability is 1 - 0.56 = 0.44. Log loss is then calculated by applying a log transformation to each corrected probability and taking the average of the negative log values, which gives us the log_loss/binary_cross_entropy; the lower the value, the better our model is performing.
Reference:
https://www.analyticsvidhya.com/blog/2021/03/binary-cross-entropy-log-loss-for-binary-classification/
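The calculation above can be sketched in NumPy. The 0.56 / 0.44 example is the one from the text; the helper name binary_cross_entropy is just illustrative:

```python
import numpy as np

def binary_cross_entropy(y_true, y_prob, eps=1e-15):
    """Log loss: average negative log of the probability assigned to the true class."""
    y_prob = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)
    y_true = np.asarray(y_true, dtype=float)
    # "Corrected" probability: the probability the model gave to the actual class.
    corrected = np.where(y_true == 1, y_prob, 1 - y_prob)
    return -np.mean(np.log(corrected))

# The text's example: true class is 0 but predicted P(class 1) = 0.56,
# so the corrected probability is 1 - 0.56 = 0.44.
loss = binary_cross_entropy([0], [0.56])  # -log(0.44), about 0.821
```

A confident correct prediction (corrected probability near 1) contributes a loss near 0, while a confident wrong one is penalized heavily, which is why lower log loss means a better model.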
Results