Batch 18 MP Report
Submitted by
VARINILAKSHMI S 113022205110
BACHELOR OF TECHNOLOGY
In
INFORMATION TECHNOLOGY
APRIL 2025
VEL TECH HIGH TECH
Dr. RANGARAJAN Dr. SAKUNTHALA ENGINEERING COLLEGE
An Autonomous Institution
BONAFIDE CERTIFICATE
SIGNATURE SIGNATURE
Mrs. R. LAVANYA M.E., Dr. M. MALLESWARI M.E., Ph.D.,
SUPERVISOR HEAD OF THE DEPARTMENT
ASSISTANT PROFESSOR PROFESSOR
Department of Information Technology, Department of Information Technology,
Vel Tech High Tech Dr. Rangarajan Vel Tech High Tech Dr. Rangarajan
Dr. Sakunthala Engineering College Dr. Sakunthala Engineering College.
CERTIFICATE OF EVALUATION
S.No.   Name of the Students
01      SHREE SHAINIHA JS
02      TANUSHRI S
03      VARINI LAKSHMI S

Title of the Project: Advanced Convolutional Neural Network-Based Multi-Disease Detection System for Comprehensive Health Analysis

Name, Designation & Department of the Supervisor: Mrs. R. LAVANYA M.E., Assistant Professor, Department of Information Technology
The report of the project work submitted by the above students in partial
fulfillment of the requirements for the award of the degree of Bachelor of
Technology in Information Technology for the viva voce examination held at
Vel Tech High Tech Dr. Rangarajan Dr. Sakunthala Engineering College on
________ has been evaluated and confirmed to be a report of the work done by
the above students.
ACKNOWLEDGEMENT
We wish to express our sincere thanks and gratitude to our chairman Col.
Prof. Dr. R. RANGARAJAN B.E. (Elec.), B.E. (Mech.), M.S. (Auto.), D.Sc., and
vice-chairman Dr. SAKUNTHALA RANGARAJAN M.B.B.S., for providing us
with a supportive environment for carrying out this project work. We express our
thanks to our principal, Professor Dr. E. KAMALANABAN B.E., M.E., Ph.D.,
for offering us all the facilities needed to complete the project.
ABSTRACT
This project presents an advanced Convolutional Neural Network (CNN)-based system designed to
identify and classify multiple diseases from medical imaging data, offering a
robust and scalable solution for early diagnosis and efficient patient management.
The model learns by extracting features from diverse datasets, ensuring high accuracy across various
conditions. It recognizes complex patterns in medical images such as X-rays, CT scans, and MRIs,
and is intended to support comprehensive health analysis and clinical decision-making.
TABLE OF CONTENTS
ABSTRACT v
LIST OF FIGURES x
LIST OF ABBREVIATIONS xi
1 INTRODUCTION 1
1.1 OVERVIEW OF THE PROJECT 1
1.2 STATEMENT OF THE PROBLEM 1
1.3 WHY THE PROBLEM STATEMENT IS OF INTEREST 2
2 LITERATURE SURVEY 4
2.5 AN EFFICIENT MULTI-DISEASE PREDICTION MODEL USING ADVANCED OPTIMIZATION AIDED WEIGHTED CONVOLUTIONAL NEURAL NETWORK 8
3 SYSTEM ANALYSIS 9
3.1 EXISTING SYSTEM 9
3.1.1 Disadvantages 10
3.2 PROPOSED SYSTEM 11
3.2.1 Advantages 12
4 REQUIREMENTS SPECIFICATION 13
4.1 INTRODUCTION 13
4.3.1 PYTHON 18
4.3.2 HTML 23
4.3.3 CSS 23
4.3.4 ANACONDA 24
5 SYSTEM DESIGN 25
5.1 ARCHITECTURE DIAGRAM 25
5.2.2 DESCRIPTION 30
5.3 MODULES 32
5.3.6 DEPLOYMENT 33
6 METHODOLOGY 34
6.5 MODEL EVALUATION AND UPDATES 35
7.1 CONCLUSION 36
7.2 FUTURE WORKS 37
8 APPENDICES 38
9 REFERENCES 56
LIST OF FIGURES
LIST OF ABBREVIATIONS
ABBREVIATIONS DESCRIPTION
CHAPTER 1
INTRODUCTION
This approach is time-consuming, costly, and inefficient, especially when multiple
diseases coexist. The reliance on manual interpretation of medical imaging also
introduces variability and errors, influenced by clinician fatigue and differences in
expertise, which can delay critical treatment decisions and compromise patient
outcomes. This project addresses these challenges by proposing a Convolutional
Neural Network (CNN)-based multi-disease detection system. It aims to deliver
accurate, simultaneous diagnosis for multiple diseases using medical imaging,
while ensuring interpretability through explainable AI techniques. This innovative
solution seeks to improve diagnostic efficiency, accessibility, and reliability,
ultimately enhancing patient care and outcomes.
1.4 OBJECTIVE OF THE STUDY
CHAPTER 2
LITERATURE SURVEY
2.1 LITERATURE SURVEY 01
PUBLISHER: IEEE
YEAR: 2022
DESCRIPTION:
Tongue image analysis for disease diagnosis is an ancient, traditional, non-invasive
diagnostic technique widely used by traditional medicine practitioners.
Deep learning-based multi-label disease detection models have tremendous
potential for clinical decision support systems because they facilitate preliminary
diagnosis. Methods: In this work, we propose a multi-label disease detection
pipeline where observation and analysis of tongue images captured and received
via smartphones assist in predicting the health status of an individual. Subjects,
who consult collaborating physicians, voluntarily provide all images. Images
thus acquired are first and foremost classified either into a diseased or a normal
category by a 5-fold cross-validation algorithm using a convolutional neural
network (MobileNetV2) model for binary classification. Once it predicts the
diseased label, the disease prediction algorithm based on DenseNet-121 uses the
image to diagnose single or multiple disease labels. Results: The MobileNetV2
architecture-based disease detection model achieved an average accuracy of 93%
in distinguishing between diseased and normal, healthy tongues, whereas the
multilabel disease classification model produced more than 90% accuracy.
2.2 LITERATURE SURVEY 02
PUBLISHER: IEEE
YEAR: 2021
DESCRIPTION:
The diagnosis of heart disease has become a difficult task in present-day
medical research. This diagnosis depends on a detailed and precise analysis of
the patient's clinical test data and individual health history. The enormous
developments in the field of deep learning seek to create intelligent automated
systems that help doctors both to predict and to determine the disease with the
internet of things (IoT) assistance. Therefore, the Enhanced Deep learning assisted
Convolutional Neural Network (EDCNN) has been proposed to assist and
improve patient prognostics of heart disease. The EDCNN model is focused on a
deeper architecture that covers a multi-layer perceptron model with
regularization learning approaches. Furthermore, the system performance is
validated with full features and with a reduced feature set. Hence, the effect of
feature reduction on classifier efficiency, in terms of processing time and
accuracy, has been mathematically analyzed with test results.
2.3 LITERATURE SURVEY 03
PUBLISHER: IEEE
YEAR: 2020
DESCRIPTION:
The development of artificial intelligence (AI) and the gradual beginning of AI's
research in the medical field have allowed people to see the excellent prospects of the
integration of AI and healthcare. Among them, the hot deep learning field has shown
greater potential in applications such as disease prediction and drug response
prediction. From the initial logistic regression model to the machine learning model,
and then to the deep learning model today, the accuracy of medical disease prediction
has been continuously improved, and the performance in all aspects has also been
significantly improved. This article introduces some basic deep learning frameworks
and some common diseases, and summarizes the deep learning prediction methods
corresponding to different diseases. It also points out a series of problems in current
disease prediction and offers an outlook on future development. It aims to clarify
the effectiveness of deep learning in disease prediction, and demonstrates the high
correlation between deep learning and the medical field in future development. The
unique feature extraction methods of deep learning methods can still play an
important role in future medical research.
2.4 LITERATURE SURVEY 04
PUBLISHER: IEEE
YEAR: 2018
DESCRIPTION:
Electronic health records are used to extract patient’s information instantly and
remotely, which can help to keep track of patients’ due dates for checkups,
immunizations, and to monitor health performance. The Health Insurance Portability
and Accountability Act (HIPAA) in the USA protects the patient data confidentiality,
but the data can be used if it is de-identified using the ‘HIPAA Safe Harbor’ technique. Usually,
this de-identification is performed manually, which is a very laborious and time-consuming
exercise. Various techniques have been proposed for the automatic extraction of
useful information, and accurate diagnosis of diseases. Most of these methods are based
on Machine Learning and Deep Learning Methods, while the auxiliary diagnosis is
performed using Rule-based methods. This review focuses on recently published papers,
which are categorized into Rule-Based Methods, Machine Learning (ML) Methods, and
Deep Learning (DL) Methods. Particularly, ML methods are further categorized into
Support Vector Machine Methods (SVM), Bayes Methods, and Decision Tree Methods
(DT). DL methods are decomposed into Convolutional Neural Networks (CNN),
Recurrent Neural Networks (RNN), Deep Belief Network (DBN) and Autoencoders
(AE) methods. The objective of this survey paper is to highlight both the strong and
weak points of various proposed techniques in disease diagnosis. Moreover, we
present the advantages, disadvantages, targeted diseases, datasets employed, and publication
year of each category.
2.5 LITERATURE SURVEY 05
PUBLISHER: IEEE
YEAR: 2023
DESCRIPTION:
This work proposes an efficient multi-disease prediction model that uses a weighted
convolutional neural network aided by an advanced optimization technique,
the Enhanced Kookaburra Optimization Algorithm (EKOA).
CHAPTER 3
SYSTEM ANALYSIS
3.1.1 DISADVANTAGES
3.2 PROPOSED SYSTEM
3.2.1 ADVANTAGES
CHAPTER 4
REQUIREMENTS SPECIFICATION
4.1 INTRODUCTION
Requirements are the basic constraints that are required to develop a system.
Requirements are collected while designing the system. The following are
the requirements to be discussed:
1. Functional requirements
2. Non-Functional requirements
3. System requirements
A. Hardware requirements
B. Software requirements
The functional requirements outline the essential features and capabilities the
proposed multi-disease detection system must possess to operate effectively in a
clinical environment. These include:
1. Multi-Disease Detection: The system must be able to identify and classify
multiple diseases from medical images (e.g., X-rays, CT scans, MRIs)
simultaneously. It should handle different diseases such as pneumonia,
tuberculosis, cardiovascular conditions, and diabetes-related complications,
without the need for separate analyses.
2. Medical Imaging Integration: The system should seamlessly integrate with
existing medical imaging technologies, accepting a variety of image formats
and resolutions. It must be capable of processing images in real-time or
batch modes, depending on the clinical setting’s needs.
3. Accuracy and Precision: The system should offer high accuracy, with the
ability to detect diseases with minimal false positives and false negatives.
The deep learning model must be trained and optimized for performance to
ensure reliability in clinical diagnoses.
4. Explainable AI (XAI): The system must provide clear, interpretable
outputs, explaining the reasoning behind its diagnostic predictions.
Visualizations and decision rationales should be accessible to healthcare
professionals to support informed clinical decision-making and enhance trust
in the system’s recommendations.
5. Scalability and Adaptability: The system should be scalable to handle
large datasets and adaptable to new diseases and imaging modalities. It must
be easy to update with new disease models, keeping the system relevant as
healthcare needs evolve.
6. User-Friendly Interface: A simple, intuitive user interface is necessary for
healthcare professionals to interact with the system efficiently. It should
allow clinicians to upload images, view diagnostic results, and access
interpretability outputs with minimal training required.
7. Performance and Speed: The system should be capable of processing
medical images and delivering diagnostic results in a timely manner,
ensuring that it can be used in high-pressure clinical environments.
Diagnosis should be completed within a clinically acceptable time frame to
facilitate quick patient management.
8. Data Security and Privacy: The system must comply with healthcare data
privacy regulations (such as HIPAA or GDPR) to ensure the secure handling
of patient data. All medical images and diagnostic results should be
encrypted and stored securely to maintain confidentiality.
9. Integration with Healthcare Systems: The system must be able to
integrate with existing Electronic Health Records (EHR) or Picture
Archiving and Communication Systems (PACS) for seamless data flow,
ensuring that
diagnostic results can be easily accessed by medical personnel for further
treatment planning.
10. Model Training and Updates: The system should support continuous
learning and model updates. As new diseases or imaging modalities emerge,
the system should be able to integrate new training data and retrain the model
to maintain high performance. This feature ensures the system remains
relevant over time and adapts to evolving medical knowledge and
technology.
11. Multi-Language Support: The system should provide multi-language
support to cater to healthcare professionals from diverse linguistic
backgrounds. This feature is particularly important in global healthcare
settings, ensuring that clinicians from different regions can easily understand
and utilize the system, improving its accessibility and adoption worldwide.
12. Real-time Feedback and Alerts:
The system should be capable of providing real-time feedback or alerts if a critical
condition is detected, helping healthcare providers prioritize cases and take
immediate action in urgent situations.
The non-functional requirements include:
Scalability
Reliability
Performance
Security
Usability
Availability
Maintainability
Compatibility
Interoperability
4.1.3 HARDWARE AND SOFTWARE REQUIREMENTS
WINDOWS
4.2 SOFTWARE DESCRIPTION
4.3.1 Python
4.3.2 FEATURES OF PYTHON
intensive tasks.
High-Level Language: Python abstracts many low-level details, allowing
developers to focus on solving problems rather than managing system
resources like memory and hardware.
Plotly, Matplotlib is essential for any data scientist or researcher needing data
visualization capabilities.
Django: Django is a high-level web framework for Python, known for rapid
development and clean, pragmatic design. It follows the “batteries included”
philosophy, providing a comprehensive set of tools and libraries for web
development, such as ORM (Object-Relational Mapping), authentication, and
admin panels. Django’s security features include protections against common
web vulnerabilities like SQL injection and cross-site scripting. It is ideal for
building robust, scalable web applications and follows the Model-View-
Template (MVT) architecture. Django is widely used for developing complex
web applications and content management systems (CMS), supporting projects
of all sizes.
libraries like NumPy for numerical operations and Matplotlib for visualizing the
results of image processing.
4.3.2 HTML
HTML (Hypertext Markup Language) is the standard language used to structure and
present content on the web. It defines the elements that make up a web page, such as
text, images, links, and multimedia. HTML uses tags, like <p> for paragraphs, and
attributes to provide additional information about elements, such as the class
attribute for CSS styling. These tags and attributes tell web browsers how to display
the content. HTML is often used alongside other technologies, such as CSS
(Cascading Style Sheets) for design and JavaScript for interactivity. HTML5, the
latest version of HTML, introduced several new features, including native support
for video and audio playback, improved form elements, and new semantic tags
like <article> and
<section>, which help structure content more meaningfully. HTML plays a crucial role
in web development, ensuring that websites are correctly rendered and accessible
across different devices and platforms.
4.3.3 CSS
Cascading Style Sheets (CSS) is a stylesheet language used to control the visual
presentation of HTML or XML documents, allowing web developers to separate
content from design. It defines aspects like layout, colors, fonts, and positioning,
ensuring a consistent look across webpages and devices. CSS can be applied in three
ways: inline (directly within HTML elements), internal (within a <style> block in
the HTML document), and external (via a linked CSS file). The "cascading" nature
of CSS means that styles are applied in a hierarchical order, with more specific rules
overriding general ones. Advanced features such as Flexbox, Grid Layout, and media
queries enable responsive web design, allowing websites to adapt to different screen
sizes. Overall, CSS plays a crucial role in creating visually appealing, user-friendly,
and consistent web experiences.
4.3.4 ANACONDA:
CHAPTER 5
SYSTEM DESIGN
5.1 ARCHITECTURE DIAGRAM
The architecture diagram traces the flow of medical imaging data through the system: input
images are acquired and preprocessed, CNN layers extract features and classify the findings,
and the results, together with the underlying data, are stored securely. A training pipeline
continually refines the model so that accurate diagnostic results can be delivered to clinicians
for timely intervention.
Input Image:
The first step in the disease detection system involves acquiring medical
images, such as X-rays, MRIs, CT scans, or images from wearable sensors.
These images serve as the raw data for analysis, often representing various
health conditions. The quality and diversity of these images are critical for
training an effective model. Input images should be properly labeled, with
information such as disease type and severity, allowing the system to learn to
recognize patterns associated with different conditions. The images are then
preprocessed and fed into the model for further analysis, classification, and
prediction.
Preprocessing:
Image preprocessing is essential for preparing raw data for model input. This
step includes resizing the images to a consistent size, normalizing pixel values
to a standard range (e.g., 0 to 1), and augmenting the dataset through
transformations like rotation, flipping, and cropping. Augmentation helps
increase dataset diversity, improving the model’s ability to generalize.
Additionally, noise reduction techniques are applied to improve image quality
by eliminating unwanted artifacts. Preprocessing ensures the input images are
standardized and ready for feature extraction, contributing to better performance
in the CNN model.
Feature Extraction (CNN Layers):
Convolutional Neural Networks (CNNs) are designed to automatically extract
hierarchical features from input images. The first layers of the CNN, known as
convolutional layers, apply filters to detect simple features like edges, textures,
and corners. As the data progresses through the layers, the network extracts
increasingly complex patterns, such as shapes and structures, crucial for
distinguishing between diseases. Pooling layers downsample the data, reducing
dimensionality while preserving essential information. This feature extraction
process enables the CNN to learn relevant patterns from medical images, which
are later used for classification.
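As a rough illustration of this convolution-and-pooling stack, the following Keras sketch shows how successive layers extract progressively more abstract features; the filter counts, kernel sizes, and input shape are illustrative assumptions, not the exact architecture used in this project.

from tensorflow.keras import layers, models

# Minimal feature-extraction stack; all sizes below are illustrative only.
feature_extractor = models.Sequential([
    # Early convolutions respond to simple features such as edges and textures.
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)),
    layers.MaxPooling2D((2, 2)),   # downsampling that keeps the strongest activations
    # Deeper convolutions combine earlier responses into shapes and structures.
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
])
feature_extractor.summary()   # shows the spatial size shrinking while channel depth grows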
Classification:
After feature extraction, the CNN passes the extracted features through fully
connected layers to perform classification. These layers interpret the learned
features and assign the image to a disease category. The classification process
uses activation functions like softmax (for multi-class classification) or sigmoid
(for binary classification) to produce output labels. Each output corresponds to
a specific disease or condition, with a confidence score representing the
model’s certainty. The model is trained using labeled datasets, allowing it to
recognize patterns and classify new, unseen images accurately, supporting
diagnostic decision-making in healthcare.
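A classification head of this kind might be sketched as below; the number of disease classes, the flattened feature-map shape, and the layer sizes are assumptions made purely for illustration.

from tensorflow.keras import layers, models

num_classes = 4   # assumed number of disease categories, for illustration only

classifier_head = models.Sequential([
    layers.Flatten(input_shape=(26, 26, 128)),   # flattened CNN feature maps (shape assumed)
    layers.Dense(128, activation='relu'),        # fully connected layer interpreting the features
    # softmax yields one confidence score per disease; a single sigmoid unit suits binary tasks
    layers.Dense(num_classes, activation='softmax'),
])
classifier_head.compile(optimizer='adam',
                        loss='categorical_crossentropy',   # pairs with the softmax output
                        metrics=['accuracy'])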
Data Storage:
Data storage is crucial for managing the large volumes of data generated during
the disease detection process. This includes input images, intermediate model
outputs, results, and patient-specific data. The storage system must be secure,
compliant with healthcare regulations (e.g., HIPAA), and scalable to
accommodate growing data needs. Structured data like disease labels and
medical records are stored in relational databases, while unstructured data, such
as images, are typically stored in file systems or cloud storage. Efficient data
retrieval and backup systems ensure that all patient data and model results are
accessible for future analysis and monitoring.
Training Models:
The model training process involves feeding the preprocessed and labeled data
into the CNN and adjusting its internal parameters (weights and biases) through
backpropagation. During training, the model learns to map input images to
corresponding disease labels by minimizing the loss function, which quantifies
the error in predictions. The model’s performance is optimized through
techniques like gradient descent, which iteratively updates the parameters.
Hyperparameter tuning, including adjustments to learning rate and batch size,
further improves performance. Training continues until the model converges to
an optimal state,
capable of making accurate disease predictions.
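A heavily simplified sketch of this training loop is given below; the tiny network, the randomly generated stand-in data, and the epoch and batch-size values are placeholders rather than the project's actual settings.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Placeholder model and data, used only to show the shape of the training loop.
model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(3, activation='softmax'),        # three hypothetical disease labels
])
model.compile(optimizer='adam',                   # gradient-descent-based optimizer
              loss='categorical_crossentropy',    # loss function quantifying prediction error
              metrics=['accuracy'])

x = np.random.rand(20, 64, 64, 3).astype('float32')                             # stand-in images
y = tf.keras.utils.to_categorical(np.random.randint(0, 3, 20), num_classes=3)   # stand-in labels

# Each epoch performs a forward pass, computes the loss, backpropagates the error, and
# updates the weights and biases; validation_split holds out data to monitor convergence.
history = model.fit(x, y, validation_split=0.2, epochs=2, batch_size=4)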
Results:
After the model processes and classifies an image, it generates results indicating
the presence of specific diseases, along with a confidence score for each
diagnosis. These results assist healthcare professionals by providing quick and
accurate disease identification. The system may also highlight areas of the
image relevant to the diagnosis, using techniques like Class Activation Mapping
(CAM) for interpretability. The final output can be integrated into patient health
records, providing clinicians with actionable insights for treatment planning.
Results help prioritize cases, facilitate early detection, and ultimately support
better clinical decision-making.
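The highlighted-region idea can be realised with a Grad-CAM style class activation map. The sketch below assumes a trained Keras model and a known convolutional layer name; both are placeholders, and the interpretability method used in practice may differ.

import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=None):
    # Model that returns both the chosen conv layer's feature maps and the predictions.
    grad_model = tf.keras.models.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = tf.argmax(preds[0])          # explain the top predicted disease
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)       # sensitivity of the score to each map
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))    # per-channel importance
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)      # weighted sum of feature maps
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)      # keep positive evidence, scale to [0, 1]
    return cam.numpy()   # upsample and overlay on the scan to highlight regions of concern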
Fig 5.2.1-FLOW DIAGRAM
5.2.2 DESCRIPTION
Upload Image: The disease detection system begins when the user uploads a
medical image, such as an X-ray, MRI, or CT scan, or provides necessary
information, such as patient demographics, medical history, or symptoms. This step
is essential as it serves as the input data that the system will analyze. The uploaded
image or information is often critical in diagnosing various conditions and diseases.
The user-friendly interface of the system ensures that clinicians or healthcare
providers can easily upload images or enter patient data for accurate analysis and
diagnosis in a streamlined workflow.
Preprocessing: Preprocessing is a crucial step where the raw input image is
prepared for analysis. This involves several techniques, including resizing the image
to a standard dimension, normalizing pixel values to a consistent range, and
applying augmentation (like rotation, flipping, and cropping) to increase dataset
variability and improve model robustness. Noise reduction methods are applied to
enhance image clarity, ensuring the model works efficiently and effectively.
Preprocessing ensures that the system is working with high-quality, standardized
data, which improves the accuracy of disease detection and helps the model
generalize better across various cases.
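A compact sketch of these preprocessing steps is shown below; the OpenCV-based approach mirrors the style of the appendix code, while the target size and blur settings are assumptions made for illustration.

import cv2
import numpy as np

def preprocess_image(path, size=(224, 224)):
    img = cv2.imread(path)                    # read the uploaded scan from disk
    img = cv2.resize(img, size)               # standardise the image dimensions
    img = cv2.GaussianBlur(img, (3, 3), 0)    # simple noise reduction
    img = img.astype(np.float32) / 255.0      # normalise pixel values to [0, 1]
    return img.reshape(1, *size, 3)           # add a batch dimension for the model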
View Result: After the system processes the image and detects potential diseases,
the healthcare provider views the results in an intuitive display. The results can
include diagnostic information, disease type, and confidence levels, along with any
necessary visualizations like annotated images highlighting areas of concern. The
user can review these results to confirm the diagnosis or proceed with additional
medical tests. This step enhances the decision-making process by providing clear,
evidence-based insights, enabling clinicians to act quickly. It also helps in tracking
patient progress and making timely decisions regarding treatment and care.
5.3 MODULES
This initial step involves gathering relevant data from reliable sources like hospital
databases, Kaggle, or public medical datasets. For disease detection, this can
include medical images (X-rays, CT scans) or structured data (patient
demographics, health records). High-quality, diverse datasets are essential for
training a robust model that can accurately detect and classify diseases across
various conditions and patient profiles.
In this stage, raw data is prepared for analysis. For image data, this involves
resizing images to a standard size, normalizing pixel values, and augmenting the
dataset with transformations like rotations or flips to increase variability. For
structured data, scaling numeric values, encoding categorical variables, and
splitting the data into training, validation, and test sets are crucial steps to ensure
proper model training and prevent overfitting.
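For the structured-data path, the preparation described above might look like the following sketch; the tiny DataFrame, column names, and split ratios are invented stand-ins for real hospital records.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Toy patient records standing in for a real dataset.
records = pd.DataFrame({
    'age':     [54, 61, 47, 70, 58, 39, 66, 52],
    'glucose': [148, 183, 89, 137, 116, 110, 160, 101],
    'gender':  ['M', 'F', 'F', 'M', 'F', 'M', 'M', 'F'],
    'disease': [1, 1, 0, 1, 0, 0, 1, 0],
})
records = pd.get_dummies(records, columns=['gender'])     # encode categorical variables
X, y = records.drop(columns=['disease']), records['disease']

# Split into training, validation, and test sets (50/25/25 for this tiny example).
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.5, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=42)

scaler = StandardScaler().fit(X_train)                    # fit scaling on training data only
X_train, X_val, X_test = (scaler.transform(s) for s in (X_train, X_val, X_test))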
Feature extraction involves extracting key patterns and information from data. In the
case of medical images, CNN layers automatically extract relevant features such as
edges, textures, and shapes, which help identify specific diseases. For structured data,
manual feature engineering techniques may be applied, such as creating new features
based on existing data or using domain knowledge to highlight critical factors like
age, gender, or previous medical conditions.
5.3.4 MODEL TRAINING:
During model training, algorithms such as CNNs for image data or ensemble
methods like Random Forest or XGBoost for structured data are employed. The
model learns to classify diseases by adjusting internal weights using optimization
techniques. The loss function guides the model toward accurate predictions, and
optimizers like gradient descent help minimize the prediction errors, ensuring the
model’s ability to generalize effectively to new data.
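Continuing the toy records prepared in the preprocessing sketch above, an ensemble model for the structured-data path could be trained as follows; the Random Forest settings are illustrative, not tuned values from this project.

from sklearn.ensemble import RandomForestClassifier

# Ensemble model for structured data (hyperparameters are illustrative).
rf_model = RandomForestClassifier(n_estimators=200, random_state=42)
rf_model.fit(X_train, y_train)                 # learn patterns from the prepared records
print('Validation accuracy:', rf_model.score(X_val, y_val))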
5.3.6 DEPLOYMENT:
After the model has been trained and evaluated, it is saved for deployment. The
trained model is then deployed as an API using frameworks like Flask or FastAPI,
allowing users (clinicians, healthcare providers) to input new data and receive
predictions in real-time. This deployment step makes the model accessible for use in
clinical settings, enabling quick and efficient disease detection, diagnosis, and
decision-making within the healthcare workflow.
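A stripped-down version of such an endpoint is sketched below; the model path, route name, and JSON payload format are placeholders, and the project's full Flask application appears in the appendices of Chapter 8.

import numpy as np
from flask import Flask, request, jsonify
from tensorflow.keras.models import load_model

app = Flask(__name__)
model = load_model('models/example.h5')        # placeholder path to a trained CNN

@app.route('/predict', methods=['POST'])
def predict():
    # For simplicity, expect an already preprocessed image array serialised as JSON.
    img = np.array(request.get_json()['image'], dtype=np.float32)
    pred = model.predict(img.reshape((1,) + img.shape))
    return jsonify({'prediction': pred.tolist()})

if __name__ == '__main__':
    app.run()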
CHAPTER 6
METHODOLOGY
This step involves gathering diverse datasets from medical images (X-rays, MRIs,
CT scans), wearable sensors (e.g., ECG, blood pressure), and structured data (e.g.,
electronic health records). The data should be comprehensive, capturing a wide
range of diseases to train the model effectively. The collected data may also include
patient demographics and medical history for risk factor analysis. Proper data
labeling and annotation are essential for supervised learning tasks, ensuring the
model can accurately detect and classify multiple diseases. Data diversity and
quality are crucial for building a robust disease detection system.
6.2 DATA PREPROCESSING:
Data preprocessing involves cleaning and transforming raw data into a usable
format for the model. For medical images, preprocessing techniques like resizing,
normalization, and augmentation (rotation, flipping, cropping) are applied to
increase dataset size and variability, preventing overfitting. Image feature
extraction techniques such as edge detection or texture analysis help the model
focus on key regions of interest. For structured data like patient records, encoding
categorical variables (e.g., one-hot encoding) and normalizing numerical values
ensure consistency and compatibility. Proper preprocessing improves the model's
efficiency and accuracy in detecting diseases.
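The image-side augmentation can be sketched with Keras' built-in utilities; the transform ranges and the one-folder-per-class directory layout assumed below are illustrative choices, not the project's exact configuration.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rescale=1.0 / 255,      # normalise pixel values to [0, 1]
    rotation_range=15,      # small random rotations
    horizontal_flip=True,   # random flipping
    zoom_range=0.1,         # mild zooming, similar in spirit to cropping
)
# Assuming images are organised one folder per class, batches could be drawn with:
# train_gen = augmenter.flow_from_directory('data/train', target_size=(224, 224),
#                                           batch_size=32, class_mode='categorical')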
Model Training and Validation for the Advanced Convolutional Neural Network-
Based Multi-Disease Detection System involve optimizing the CNN model to
accurately detect and classify diseases from medical data. During training, the
system uses labeled datasets, such as medical images, to learn features and
patterns indicative of specific conditions. The model’s parameters are adjusted
using optimization techniques like stochastic gradient descent and a loss function
that quantifies prediction errors. Validation is performed on a separate dataset to
evaluate the model’s generalization and prevent overfitting. Key metrics like
accuracy, precision, recall, and F1-score assess performance, while cross-validation
ensures robustness by testing across multiple data splits. This iterative
process refines the CNN, ensuring it delivers high accuracy and reliability for
comprehensive health analysis and early disease detection in clinical applications.
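The metrics and cross-validation mentioned above can be computed as in the sketch below; the synthetic dataset and the Random Forest stand-in are placeholders used only to keep the example self-contained.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split

X, y = make_classification(n_samples=200, n_features=10, random_state=0)   # stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

print('Accuracy :', accuracy_score(y_te, y_pred))
print('Precision:', precision_score(y_te, y_pred, average='macro'))
print('Recall   :', recall_score(y_te, y_pred, average='macro'))
print('F1-score :', f1_score(y_te, y_pred, average='macro'))

# Robustness check across multiple data splits (5 folds here, as an example).
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5), scoring='f1_macro')
print('Cross-validated F1:', scores.mean())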
The Decision Support System (DSS) provides actionable insights based on the
model’s predictions, helping healthcare professionals make informed decisions.
The DSS assesses the detected diseases in the context of the patient's medical
history, risk factors, and demographic information. It generates personalized
recommendations, such as further tests, treatments, or lifestyle changes.
Additionally, the DSS calculates a risk score, prioritizing patients based on the
severity of detected conditions. Automated alerts notify clinicians of high-risk
cases, enabling timely interventions. The DSS enhances clinical efficiency and
ensures that appropriate healthcare steps are taken.
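A purely illustrative sketch of the risk-scoring and alerting idea is given below; the weights, threshold, and inputs are invented for demonstration and carry no clinical meaning.

def risk_score(prediction_confidence, num_detected_conditions, high_risk_history):
    # Combine model confidence, number of detected conditions, and a history flag.
    score = 0.6 * prediction_confidence + 0.1 * min(num_detected_conditions, 3)
    if high_risk_history:                  # e.g. relevant prior conditions in the record
        score += 0.2
    return round(score, 2)

def triage(score, alert_threshold=0.8):
    # Flag high-risk cases so clinicians can prioritise them for immediate review.
    return 'ALERT: review immediately' if score >= alert_threshold else 'Routine follow-up'

print(triage(risk_score(0.92, 2, True)))   # -> 'ALERT: review immediately'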
6.5 MODEL EVALUATION AND UPDATES
Model evaluation ensures that the disease detection system remains accurate and
relevant over time. The model is evaluated using performance metrics like
accuracy, precision, recall, and F1-score. Continuous learning techniques are
applied, retraining the model with new data to improve its accuracy and
adaptability to emerging disease patterns. Feedback from clinicians is
incorporated to refine the model’s predictions and enhance its clinical utility.
Regular updates ensure the model adapts to evolving medical knowledge and
diagnostic techniques.
CHAPTER 7
7.1 CONCLUSION
7.2 FUTURE WORKS
CHAPTER 8
APPENDICES
PYTHON:
# Imports assumed from the usage in the listing below
import os
import pickle
import joblib
import numpy as np
import cv2
from flask import Flask, flash, redirect, render_template, request
from werkzeug.utils import secure_filename
from tensorflow.keras.models import load_model

# Loading Models
covid_model = load_model('models/covid.h5')
braintumor_model = load_model('models/braintumor.h5')
alzheimer_model = load_model('models/alzheimer_model.h5')
diabetes_model = pickle.load(open('models/diabetes.sav', 'rb'))
heart_model = pickle.load(open('models/heart_disease.pickle.dat', "rb"))
pneumonia_model = load_model('models/pneumonia_model.h5')
breastcancer_model = joblib.load('models/cancer_model.pkl')
# Configuring Flask
UPLOAD_FOLDER = 'static/uploads'
ALLOWED_EXTENSIONS = set(['png', 'jpg', 'jpeg'])
app = Flask(__name__)
app.config['SEND_FILE_MAX_AGE_DEFAULT'] = 0
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
app.secret_key = "secret key"
def allowed_file(filename):
return '.' in filename and filename.rsplit('.', 1)[1] in ALLOWED_EXTENSIONS
# Tail of the crop_imgs() helper (its opening lines are truncated in this listing):
# each MRI is cropped to the extreme points of the detected brain contour.
ADD_PIXELS = add_pixels_value
new_img = img[extTop[1]-ADD_PIXELS:extBot[1]+ADD_PIXELS,
              extLeft[0]-ADD_PIXELS:extRight[0]+ADD_PIXELS].copy()
set_new.append(new_img)
return np.array(set_new)
# Routing Functions
@app.route('/')
def home():
return render_template('homepage.html')
@app.route('/covid')
def covid():
return render_template('covid.html')
@app.route('/breastcancer')
def breast_cancer():
return render_template('breastcancer.html')
@app.route('/braintumor')
def brain_tumor():
return render_template('braintumor.html')
@app.route('/diabetes')
def diabetes():
return render_template('diabetes.html')
@app.route('/alzheimer')
def alzheimer():
return render_template('alzheimer.html')
@app.route('/pneumonia')
def pneumonia():
return render_template('pneumonia.html')
@app.route('/heartdisease')
def heartdisease():
return render_template('heartdisease.html')
# Result Functions
@app.route('/resultc', methods=['POST'])
def resultc():
if request.method == 'POST':
firstname = request.form['firstname']
lastname = request.form['lastname']
email = request.form['email']
phone = request.form['phone']
gender = request.form['gender']
age = request.form['age']
file = request.files['file']
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
flash('Image successfully uploaded and displayed below')
img = cv2.imread('static/uploads/'+filename)
img = cv2.resize(img, (224, 224))
img = img.reshape(1, 224, 224, 3)
img = img/255.0
pred = covid_model.predict(img)
if pred < 0.5:
pred = 0
else:
pred = 1
# pb.push_sms(pb.devices[0], str(phone), 'Hello {},\nYour COVID-19 test results are ready.\nRESULT: {}'.format(firstname, ['POSITIVE','NEGATIVE'][pred]))
return render_template('resultc.html', filename=filename, fn=firstname, ln=lastname,
age=age, r=pred, gender=gender)
else:
flash('Allowed image types are - png, jpg, jpeg')
return redirect(request.url)
@app.route('/resultbt', methods=['POST'])
def resultbt():
if request.method == 'POST':
firstname = request.form['firstname']
lastname = request.form['lastname']
email = request.form['email']
phone = request.form['phone']
gender = request.form['gender']
age = request.form['age']
file = request.files['file']
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
flash('Image successfully uploaded and displayed below')
img = cv2.imread('static/uploads/'+filename)
img = crop_imgs([img])
img = img.reshape(img.shape[1:])
img = preprocess_imgs([img], (224, 224))
pred = braintumor_model.predict(img)
if pred < 0.5:
pred = 0
else:
pred = 1
# pb.push_sms(pb.devices[0], str(phone), 'Hello {},\nYour Brain Tumor test results are ready.\nRESULT: {}'.format(firstname, ['NEGATIVE','POSITIVE'][pred]))
return render_template('resultbt.html', filename=filename, fn=firstname, ln=lastname,
age=age, r=pred, gender=gender)
else:
flash('Allowed image types are - png, jpg, jpeg')
return redirect(request.url)
@app.route('/resultd', methods=['POST'])
def resultd():
if request.method == 'POST':
firstname = request.form['firstname']
lastname = request.form['lastname']
email = request.form['email']
phone = request.form['phone']
gender = request.form['gender']
pregnancies = request.form['pregnancies']
glucose = request.form['glucose']
bloodpressure = request.form['bloodpressure']
insulin = request.form['insulin']
bmi = request.form['bmi']
diabetespedigree = request.form['diabetespedigree']
age = request.form['age']
skinthickness = request.form['skin']
pred = diabetes_model.predict(
[[pregnancies, glucose, bloodpressure, skinthickness, insulin, bmi, diabetespedigree,
age]])
# pb.push_sms(pb.devices[0], str(phone), 'Hello {},\nYour Diabetes test results are ready.\nRESULT: {}'.format(firstname, ['NEGATIVE','POSITIVE'][pred]))
return render_template('resultd.html', fn=firstname, ln=lastname, age=age, r=pred,
gender=gender)
@app.route('/resultbc', methods=['POST'])
def resultbc():
if request.method == 'POST':
firstname = request.form['firstname']
lastname = request.form['lastname']
email = request.form['email']
phone = request.form['phone']
gender = request.form['gender']
age = request.form['age']
cpm = request.form['concave_points_mean']
am = request.form['area_mean']
rm = request.form['radius_mean']
pm = request.form['perimeter_mean']
cm = request.form['concavity_mean']
pred = breastcancer_model.predict(
    np.array([cpm, am, rm, pm, cm]).reshape(1, -1))
# pb.push_sms(pb.devices[0], str(phone), 'Hello {},\nYour Breast Cancer test results are ready.\nRESULT: {}'.format(firstname, ['NEGATIVE','POSITIVE'][pred]))
return render_template('resultbc.html', fn=firstname, ln=lastname, age=age, r=pred,
gender=gender)
# Alzheimer's detection result route (its decorator, function header, and form
# handling are truncated in this listing).
file = request.files['file']
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
flash('Image successfully uploaded and displayed below')
img = cv2.imread('static/uploads/'+filename)
img = cv2.resize(img, (176, 176))
img = img.reshape(1, 176, 176, 3)
img = img/255.0
pred = alzheimer_model.predict(img)
pred = pred[0].argmax()
print(pred)
else:
flash('Allowed image types are - png, jpg, jpeg')
return redirect('/')
@app.route('/resultp', methods=['POST'])
def resultp():
if request.method == 'POST':
firstname = request.form['firstname']
lastname = request.form['lastname']
email = request.form['email']
phone = request.form['phone']
gender = request.form['gender']
age = request.form['age']
file = request.files['file']
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
flash('Image successfully uploaded and displayed below')
img = cv2.imread('static/uploads/'+filename)
img = cv2.resize(img, (150, 150))
img = img.reshape(1, 150, 150, 3)
img = img/255.0
pred = pneumonia_model.predict(img)
if pred < 0.5:
pred = 0
else:
pred = 1
else:
flash('Allowed image types are - png, jpg, jpeg')
return redirect(request.url)
@app.route('/resulth', methods=['POST'])
def resulth():
if request.method == 'POST':
firstname = request.form['firstname']
lastname = request.form['lastname']
email = request.form['email']
phone = request.form['phone']
gender = request.form['gender']
nmv = float(request.form['nmv'])
tcp = float(request.form['tcp'])
eia = float(request.form['eia'])
thal = float(request.form['thal'])
op = float(request.form['op'])
mhra = float(request.form['mhra'])
age = float(request.form['age'])
print(np.array([nmv, tcp, eia, thal, op, mhra, age]).reshape(1, -1))
pred = heart_model.predict(
np.array([nmv, tcp, eia, thal, op, mhra, age]).reshape(1, -1))
# pb.push_sms(pb.devices[0], str(phone), 'Hello {},\nYour Heart Disease test results are ready.\nRESULT: {}'.format(firstname, ['NEGATIVE','POSITIVE'][pred]))
return render_template('resulth.html', fn=firstname, ln=lastname, age=age, r=pred,
gender=gender)
HOMEPAGE:
<head>
<!-- Required meta tags -->
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Bootstrap CSS -->
<link href="https://cdn.jsdelivr.net/npm/[email protected]beta3/dist/css/bootstrap.min.css" rel="stylesheet"
integrity="sha384-eOJMYsd53ii+scO/bJGFsiCZc+5NDVN2yr8+0RDqr0Ql0h+rP48ckxlpbzKgwra6"
crossorigin="anonymous">
<title>HealthCure</title>
</head>
<body>
<nav class="navbar navbar-expand-lg navbar-dark bg-dark">
<div class="container-fluid">
<a class="navbar-brand" href="/">HealthCure</a>
<button class="navbar-toggler" type="button" data-bs-toggle="collapse"
data-bs-target="#navbarSupportedContent" aria-controls="navbarSupportedContent"
aria-expanded="false"
aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav ms-auto mb-2 mb-lg-0">
<li class="nav-item">
<a class="nav-link " aria-current="page" href="/covid">Covid</a>
</li>
<li class="nav-item">
<a class="nav-link " aria-current="page" href="/braintumor">Brain Tumor</a>
</li>
<li class="nav-item">
<a class="nav-link " aria-current="page" href="/breastcancer">Breast Cancer</a>
</li>
<li class="nav-item">
<a class="nav-link " aria-current="page" href="/alzheimer">Alzheimer</a>
</li>
<li class="nav-item">
<a class="nav-link " aria-current="page" href="/diabetes">Diabetes</a>
</li>
<li class="nav-item">
<a class="nav-link " aria-current="page" href="/pneumonia">Pneumonia</a>
</li>
<li class="nav-item">
<a class="nav-link " aria-current="page" href="/heartdisease">Heart Disease</a>
</li>
</ul>
</div>
</div>
</nav>
<h1 class='text-center py-3'
style="font-variant: petite-caps;margin-bottom:0px">
<b><i>HealthCure - an all in one medical solution</i></b>
</h1>
<div class="row" style="font-size: 20px;padding: 0px 50px 50px 50px;">
<p><b>HealthCure</b> is an all in one medical solution app which brings 7 Disease
Detections like Covid Detection, Brain Tumor Detection, Breast Cancer Detection,
Alzheimer Detection,
Diabetes Detection, Pneumonia Detection, and Heart Disease Detection under one
platform.</p>
<h2 class='text-center py-3'
style="color: rgb(255, 255, 255);font-variant: petite-caps;background-color: rgb(0, 0,
0);margin-bottom:0px">
<b><i>7 Disease Detections</i></b>
</h2>
<h3 class='text-center py-3'
style="color: rgb(255, 255, 255);font-variant: petite-caps;background-color:
rgb(0, 0, 0);margin-bottom:0px">
<b><i>AI in HealthCare</i></b>
</h3>
<div class="row py-3"
style='margin-bottom: 30px;'>
<div class="col">
<p class="text-left" style='font-size:18px'>
Artificial intelligence (AI) technologies that are becoming ever-present in modern business and
everyday life are also steadily being applied to healthcare. The use of artificial intelligence in
healthcare has the potential to assist healthcare providers in many aspects of patient care and
administrative processes. Most AI and healthcare technologies have strong relevance to the
healthcare field, but the tactics they support can vary significantly. And while some articles on
artificial intelligence in healthcare suggest that it can perform just as well as or better than
humans at certain procedures, such as diagnosing disease, it will be a significant number of
years before AI in healthcare replaces humans for a broad range of medical tasks.
</p>
</div>
<div class="col">
<img src="../static/healthcure.png" class="img-fluid rounded mx-auto d-
block" alt="...">
</div>
</div>
<h3 class='text-center py-3'
style="color: rgb(255, 255, 255);font-variant: petite-caps;background-color:
rgb(0, 0, 0);margin-bottom:0px">
<b><i>Machine Learning</i></b>
</h3>
<div class="row py-3" style='margin-bottom: 30px'>
<div class="col">
<p class="text-left" style='font-size:18px'>
</p>
</div>
<div class="col">
<img src="../static/ml.png" class="img-fluid rounded mx-auto d-block"
alt="...">
</div>
</div>
</p>
</footer>
</body>
</html>
8.2 APPENDIX B–SCREENSHOT
8.2.1 - Homepage
8.2.2 COVID-19 DETECTION: The Covid-19 Detection page allows users to enter their
personal details and upload a chest scan for analysis. The system processes the uploaded
image and generates a test result, displaying details such as name, age, gender, and
diagnosis outcome. The results page provides a clear and professional UI, ensuring an
efficient and user-friendly experience.
FIG 8.2.2 - Covid-19 Detection
8.2.4 BRAIN TUMOR DETECTION: The Brain Tumor Detection System enables
users to enter their details and upload MRI scans for analysis. Using dataset-based
processing, the system examines the scan to detect the presence of a brain tumor. The
results page then displays the patient's details, MRI image, and diagnosis, such as "No
Tumor."
FIG 8.2.4 - Brain Tumor Detection
using medical imaging.
8.2.8 BREAST CANCER DETECTION: The Breast Cancer Detection system
allows users to input personal details and specific tumor attributes to assess the
likelihood of breast cancer. Upon submission, the system processes the data and
provides a diagnosis, as shown in the test results indicating whether the tumor is
benign or malignant. The interface incorporates a pink ribbon theme for awareness,
reinforcing its focus on breast cancer detection and early intervention.
CHAPTER 9
REFERENCES
Book Series, Aug. 2014.
10. G. Hrovat, G. Stiglic, P. Kokol and M. Ojsteršek, "Contrasting temporal trend discovery for large healthcare databases", Comput. Methods Programs Biomed., vol. 113, no. 1, pp. 251-257, Jan. 2014.
11. E. Choi, A. Schuetz, W. F. Stewart and J. Sun, "Using recurrent neural network models for early detection of heart failure onset", J. Amer. Med. Informat. Assoc., vol. 24, no. 2, pp. 361-370, Mar. 2017.
12. G. Luo, G. Sun, K. Wang, S. Dong and H. Zhang, "A novel left ventricular volumes prediction method based on deep learning network in cardiac MRI", Proc. Comput. Cardiol. Conf. (CinC), pp. 89-92, Sep. 2016.
13. H. Liang, B. Y. Tsui, H. Ni, et al., "Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence", Nat. Med., 2019.
14. R. Pitchai, K. Praveena, P. Murugeswari, Ashok Kumar, M. K. Mariam Bee, Nouf M. Alyami, et al., "Region Convolutional Neural Network for Brain Tumor Segmentation", Computational Intelligence and Neuroscience, pp. 1-9, 2022.
15. Alhussein Mohammed Ahmed, Gais Alhadi Babikir and Salma Mohammed Osman, "Classification of Pneumonia Using Deep Convolutional Neural Network", American Journal of Computer Science and Technology, vol. 5, no. 2, pp. 26-26, 2022.