
Multiple Disease Prediction System Using Machine Learning

A Project Report
Submitted for partial fulfillment of the requirements for the degree of
BACHELOR OF TECHNOLOGY
in
Information Technology
Submitted by
Utkarsh Srivastava (2000320130184)
Shambhavi Verma (2000320130150)
Sachin Singh (2000320130138)
Utkarsh Kumar Srivastava (2000320130183)
Under the supervision of
Ms. Tanaya Gupta
Assistant Professor
Department of Information Technology

ABES Engineering College


19th Km stone, NH-09, Ghaziabad (U.P)

May, 2024

Multiple Disease Prediction System Using Machine Learning
by
Utkarsh Srivastava (2000320130184)
Shambhavi Verma (2000320130150)
Sachin Singh (2000320130138)
Utkarsh Kumar Srivastava (2000320130183)

Submitted to the Department of Information Technology


in partial fulfillment of the requirements
for the degree of
Bachelor of Technology
in
Information Technology

ABES Engineering College, Ghaziabad


Dr. A.P.J. Abdul Kalam Technical University, Lucknow, Uttar Pradesh

May, 2024

DECLARATION

We hereby declare that this submission is our own work and that, to the best of our knowledge and belief, it contains no material previously published or written by another person, nor material which to a substantial extent has been accepted for the award of any other degree or diploma of the university or other institute of higher learning, except where due acknowledgment has been made in the text.

Signature:
Name: Utkarsh Srivastava.
Roll number: 2000320130184.
Date:

Signature:
Name: Shambhavi Verma.
Roll number: 2000320130150.
Date:

Signature:
Name: Sachin Singh.
Roll number: 2000320130138.
Date:

Signature:
Name: Utkarsh Kumar Srivastava.
Roll number: 2000320130183.
Date:

CERTIFICATE

This is to certify that the project report entitled “Multiple Disease Prediction System Using Machine Learning”, which is submitted by Utkarsh Srivastava, Shambhavi Verma, Sachin Singh, and Utkarsh Kumar Srivastava in partial fulfillment of the requirements for the award of the degree of B.Tech. in the Department of Information Technology of Dr. A.P.J. Abdul Kalam Technical University, is a record of the candidates' own work carried out by them under my supervision. The matter embodied in this thesis is original and has not been submitted for the award of any other degree.

(Supervisor Signature)

Date:
Name:
Designation:
Department: Information Technology.
ABES Engineering College, Ghaziabad.

ACKNOWLEDGEMENT
It gives us a great sense of pleasure to present the report of the B.Tech. project undertaken during the final year of the B.Tech. program. We owe a special debt of gratitude to Ms. Tanaya Gupta, Department of Information Technology, ABES Engineering College, Ghaziabad, for her constant support and guidance throughout the course of our work. Her sincerity, thoroughness, and perseverance have been a constant source of inspiration for us. It is only because of her cognizant efforts that our endeavors have seen the light of day.
We also take the opportunity to acknowledge the contribution of Professor (Dr.) Rakesh Ranjan, Head of the Department of Information Technology, ABES Engineering College, Ghaziabad, for his full support and assistance during the development of the project.
We would also like to acknowledge the contribution of all faculty members of the department for their kind assistance and cooperation during the development of our project. Last but not least, we acknowledge our friends for their contributions to the completion of the project.

Signature:
Name: Utkarsh Srivastava.
Roll number: 2000320130184.
Date:

Signature:
Name: Shambhavi Verma.
Roll number: 2000320130150.
Date:

Signature:
Name: Sachin Singh.
Roll number: 2000320130138.
Date:

Signature:
Name: Utkarsh Kumar Srivastava.
Roll number: 2000320130183.
Date:

ABSTRACT

Healthcare has been transformed by the development of intelligent computer systems that can identify diseases more accurately than people. This report examines the role played by machine learning algorithms in predicting several diseases. We discuss these algorithms, their limitations, and their potential practical applications. It is hard for doctors to rapidly and closely examine large amounts of data on a patient's symptoms and determine a particular sickness. We propose a software-based method that aims to make this process quicker and simpler. We apply several algorithms, such as KNN, SVM, Decision Tree, and Logistic Regression, to improve the system's efficiency. Our system stands out because it consists of a single easy-to-use application that requires only minimal information from the user to predict multiple diseases. It streamlines processes for physicians and enables them to quickly identify problems in patients. Additionally, we discuss matters such as selecting appropriate data, checking whether the models perform effectively, and using different forms of data at once. This approach enhances predictive accuracy and also supports general good health in the community. As our research shows, the use of computer programs in disease risk prediction can help make healthcare better, cheaper, and more accessible.

TABLE OF CONTENTS

Title Page No.

Declaration iii

Certificate iv

Acknowledgement v

Abstract vi

List of Figures ix

List of Tables x

CHAPTER 1 INTRODUCTION 11 - 17

1.1 Need To Study 11


1.2 Motivation 12
1.3 Project Objectives 14
1.4 Scope Of The Project 15
CHAPTER 2 LITERATURE REVIEW 18-20

CHAPTER 3 METHODOLOGY 21-39

3.1 System Design 21


3.2 Algorithm 24
3.3 Detailed Discussion Of The Dataset 30
3.4 Tools And Technology Used In The Project 32

CHAPTER 4 RESULT AND DISCUSSION 40-46

CHAPTER 5 CONCLUSION AND FUTURE SCOPE 47-50

5.1 Conclusion 47
5.2 Future Scope 49

REFERENCES 51-53

PUBLICATION DETAILS 54

APPENDIX 55-57

PLAGIARISM REPORT 58

LIST OF FIGURES

No. Title Page No.

1 System Design 22

2 Flow Chart 23

3 Dashboard 42

4 Cancer Classification 43

5 Result Of Cancer Module 43

6 History Section 44

7 Bar Chart Of Cancer Dataset 44

LIST OF TABLES
No. Title Page No.
1 Model Accuracy 45

CHAPTER 1
INTRODUCTION
1.1 NEED TO STUDY
The burden of disease in contemporary healthcare is enormous, spanning a wide range of ailments from minor injuries to life-threatening illnesses. A precise and practical disease prognosis is essential for a good outcome and for treatment planning. With the advent of machine learning (ML) techniques, there is a significant opportunity to transform the healthcare industry by using enormous amounts of data to predict the onset of various diseases. This section examines the need for a comprehensive disease prediction system built on machine learning algorithms.

● Epidemiological Trends
The epidemiological geography of illnesses must be well understood by healthcare
professionals as well as lawmakers in order to allocate resources and implement
preventive interventions appropriately. Analyzing trends in sickness prevalence, rates of
incidence, and socioeconomic factors might help comprehend the dynamics of various
health conditions across different populations. These sorts of findings are essential for
developing targeted intervention strategies and mitigating the detrimental consequences
of diseases on public health.

● Limitations of Traditional Approaches


Traditional illness prediction techniques often rely on clinical judgment and manual interpretation of symptoms and test results. Although these methods have proven useful, subjectivity, human error, and the slowness of handling massive amounts of data remain persistent limitations. Moreover, traditional approaches may not be sustainable and can be difficult to modify in response to evolving disease patterns and risk factors. As a result, it is becoming increasingly clear that using computational techniques like machine learning (ML) to complement traditional diagnostic tools and increase prediction accuracy is essential.

● Potential Benefits of Machine Learning


Machine learning methods provide a viable alternative for illness prediction because they
can search through massive volumes of data and identify intricate patterns and
correlations that might not be apparent to human observers. By using a range of variables,
such as medical, environmental, genetic, and lifestyle characteristics, machine learning
models may be trained to provide precise forecasting algorithms that can accurately
ascertain an individual's risk of acquiring certain illnesses. Furthermore, the ability of machine learning (ML) models to learn and evolve over time enhances their capability
for adaptability and prediction in changing healthcare situations.

● Addressing Healthcare Challenges


A comprehensive illness prediction system developed using machine learning might
address many important healthcare problems. By simplifying early detection and risk
categorization, such a system may enable proactive therapy aimed at avoiding disease
development and decreasing rates of morbidity and death. Furthermore, ML-based
prediction models may help optimize resource allocation by identifying high-risk people
who would benefit most from focused screening, monitoring, and preventive therapies.
By leveraging technological advancements in data collection, integration, and analysis,
ML-driven disease prediction systems may also support evidence-based decision-making,
enhance clinical outcomes, and elevate the bar for healthcare delivery.

In conclusion, the development of a machine learning-based multiple illness prediction system represents a significant advancement toward preventive healthcare management and tailored
treatment. By using data-driven methodologies, physicians may enhance risk categorization, get
valuable insights into the dynamics of illnesses, and ultimately enhance patient treatment and
outcomes.

1.2 Motivation
The motivations behind the creation of the Multiple Disease Prediction System Using Machine
Learning are described in this section. We highlight the difficulties caused by the wide range of
medical disorders and the shortcomings of conventional diagnostic techniques as we talk about
the urgent need for precise and effective illness prediction models in contemporary healthcare
systems. We also discuss the possible advantages of using machine learning approaches to
improve disease prediction, such as increased precision, early detection, customized treatment
regimens, and efficient use of resources in healthcare institutions. The context for
comprehending the importance and applicability of the suggested effort in tackling pressing
healthcare issues is established by this conversation.

● Addressing the Complexities of Disease Diagnosis


Traditional methods of diagnosing sickness often rely on a physician's expertise and are subject to biases and human error. Furthermore, the ever-expanding amount of healthcare knowledge and the intricate relationships between the many factors that influence the development of disease may make it challenging to identify illnesses accurately.
Machine learning is a workable solution that will assist medical professionals in making
well-informed decisions by using algorithms to analyze massive amounts of medical data,
identify patterns, and provide accurate projections.

● Enhancing Early Detection and Intervention
Early diagnosis is crucial for the effective treatment of many illnesses, since it may greatly improve outcomes for patients and reduce the costs associated with medical care. To support early diagnosis, the Multiple Disease Prediction System uses machine learning to analyze many patient data sets, including demographics, medical history, symptoms, and diagnostic test results. By promptly directing healthcare practitioners toward suitable actions and treatment procedures, the system may enhance both the prognosis and quality of life of patients.

● Mitigating the Burden on Healthcare Systems


An increasing number of chronic illnesses, an aging population, and rising healthcare expenses are only a few of the many challenges confronting healthcare systems worldwide. These problems necessitate innovative solutions to enhance healthcare provision and optimize resource management. By automating some
aspects of illness prediction and treatment, the Multiple Disease Prediction System aims
to reduce the burden on healthcare systems, improve workflow, and increase the overall
effectiveness of healthcare delivery.

● Empowering Patients and Healthcare Providers


The Multiple Disease Prediction System not only helps medical professionals but also
gives individuals important information about their health state, empowering them.
Patients may take proactive steps to protect their health by using the system's
individualized risk evaluations and recommendations, which are generated by using
machine learning algorithms to evaluate individual health data. In addition, healthcare
professionals may use the system as a tool for decision-making to supplement their
clinical knowledge with data-driven recommendations and provide patients with more
effective and individualized treatment.

● Contributing to Medical Research and Knowledge Discovery


Furthermore, the creation of the Multiple Disease Prediction System enhances medical
research and information gathering. By analyzing large-scale healthcare records, the
system may uncover novel relationships between various risk factors and diseases,
expanding our understanding of the origins and progression of diseases. These findings
direct the development of more robust prediction models and pave the way for the
discovery of new therapeutic targets and therapies, all of which will ultimately progress
medical research and improve patient care.

● Promoting Equity and Accessibility in Healthcare
Finally, the Multiple Disease Prediction System contributes to the primary goal of
improving fairness and accessibility in healthcare. By using technology to offer more
precise and effective healthcare services, particularly in disadvantaged groups and areas
with limited resources, the system may narrow the gaps that currently exist in access to
healthcare and delivery. The system's goal is to democratize access to modern facilities
and healthcare technology so that everyone, regardless of socioeconomic status or
geographic location, may get early and effective illness predictions and treatment.

To sum up, the Multiple Disease Prediction System's creation marks a major advancement in the use of machine learning to transform the provision of healthcare. The system has the potential to significantly improve patient outcomes and change the way healthcare is delivered in the future by addressing the challenges associated with disease diagnosis, improving early detection and treatment, reducing the load on healthcare organizations, empowering both patients and healthcare professionals, promoting equal opportunity in healthcare, and contributing to medical research.

1.3 Project Objectives


The fast-changing nature of healthcare today makes technology integration essential to ensure effective and efficient disease management. Machine learning (ML) presents a major opportunity to change the way healthcare is delivered, especially when it comes to the diagnosis and prediction of illness. The main goal of our project, "Multiple Disease Prediction System Using
Machine Learning," is to use ML algorithms to create a robust and all-encompassing system that
can predict a wide range of illnesses based on user-inputted symptoms.

This project's main goal is to meet the urgent demand for precise, rapid, and individualized
illness prediction technologies that can support healthcare practitioners' decision-making. We
want to develop a system that uses cutting-edge machine learning algorithms to not only forecast
the possibility of different illnesses with high accuracy, but also provide insightful information
about possible risk factors and the best course of treatment.

The main goal of our work is to use the large amount of medical data available to improve the
efficiency and accuracy of illness prediction. By means of the methodical examination of
extensive datasets that include symptoms, medical records, and diagnostic results, our system
will identify complex patterns and relationships that could be invisible to the human eye. By
doing this, we hope to greatly increase our system's predictive power, which will support
diagnosis accuracy and enhance patient outcomes.

Moreover, creating a user-friendly and intuitive interface that enables smooth communication
between medical experts and the medical prediction system is a key goal of our project.

Carefully designed, this interface will let doctors enter patient symptoms with ease and quickly
get precise illness prognosis. Our goal is to guarantee that our system can be easily integrated
into current healthcare processes, reducing disturbance and optimizing usefulness, by placing a
high priority on usability and accessibility.

Apart from its practical use in hospital environments, our initiative aims to enable people to take
an active role in their own medical care process. We want to promote a proactive health
management culture by offering an intuitive dashboard that enables people to enter their
symptoms and obtain individualized illness forecasts. We see our technology being essential in
preventing the course of illness and improving general well-being by facilitating early
identification and intervention.

Furthermore, our initiative aims to respect the values of equality and inclusion by guaranteeing
the dependability and resilience of our prediction models across a variety of demographic groups
and geographical areas. In order to do this, we will implement a thorough validation system that
meticulously assesses our models' performance across a range of demographic subgroups. We
work to reduce biases and guarantee the ability to generalize our prediction models by adding a
variety of relevant datasets to our training pipeline.

Our research aims to promote openness and cooperation in the scientific community, going
beyond its immediate use. In order to promote knowledge sharing and multidisciplinary
cooperation, we are dedicated to making our data sets and ML models publicly available. Our
goal is to spur innovation and propel improvements in illness forecasting and healthcare delivery
by making advanced machine learning tools and methodology more accessible to a wider
audience.

To put it briefly, the main goal of our project, "Multiple Disease Prediction System Using
Machine Learning," is to create an advanced tool that is also user-friendly, using machine
learning to allow precise and customized illness prediction. Our goal is to usher in an age of proactive, data-driven healthcare by revolutionizing illness management and equipping people and healthcare professionals with actionable information.

1.4 Scope of the Project


The scope of the Multiple Disease Prediction System Using Machine Learning is broad and multifaceted, covering a number of important areas that contribute to its significance and its potential influence on patient outcomes and healthcare delivery. In this section we examine the many facets of the project's scope, emphasizing its consequences, difficulties, and potential directions.

1. Early Detection and Disease Prediction:
One of the project's main goals is to use machine learning algorithms to reliably forecast the likelihood of various illnesses based on input factors such as signs, medical history, demographic data, and genetic predispositions. By evaluating massive datasets of clinical and patient data, the technology may identify correlations and trends that are not visible to human observers and thereby facilitate early diagnosis and intervention.

In order to improve treatment results and lower healthcare expenditures related to extended
sickness and disease progression, early identification is essential. Healthcare practitioners may
reduce the risk of illness onset or progression by identifying high-risk people or communities,
putting preventative measures into place, starting screenings on time, and recommending
targeted therapies or lifestyle adjustments.

2. Extensive Disease Coverage:


The Multiple Disease Prediction System seeks to include a broad spectrum of illnesses that fall
under different disease categories and organ systems. The system is intended to provide thorough
coverage and predictive skills across a variety of healthcare situations, ranging from infectious
illnesses like dengue, malaria, and TB to chronic ailments like diabetes, cancer, and heart
disease.

The diagnosis, prognosis, and therapeutic planning of each illness pose different problems. The
system can accommodate the varied requirements of patients as well as physicians by integrating
a wide range of illnesses into the prediction model. This allows the system to give individualized
solutions and knowledge for the best possible clinical decision-making.

3. Treatment Optimization and Personalized Medicine:


Because machine learning-based illness prediction algorithms may customize treatment plans to
specific patient histories and disease features, they hold great promise for revolutionizing the
field of personalized medicine. The technology may provide customized risk assessments and treatment suggestions based on the individual requirements and preferences of each patient by evaluating patient-specific data such as genetic markers, biomarkers, and treatment-response factors.

There is great potential for personalized medicine to enhance treatment effectiveness, minimize
side effects, and maximize resource use in the field of health care. Clinicians may improve
patient outcomes and quality of life by using predictive machine learning and analytics
algorithms to help them make better educated choices about medication selection, dose
optimization, and therapeutic monitoring.

4. Data Integration and Interoperability:
The construction of compatible frameworks for smooth data interchange and cooperation
amongst healthcare stakeholders, as well as the integration of diverse data sources, are crucial
components of the project scope. Patient data is created in the digital health age from a variety of
sources, such as genetic testing platforms, wearable technology, electronic health records
(EHRs), and public health databases.

There are several technological and administrative obstacles to overcome in the process of
integrating and harmonizing these disparate data sources, including data governance, privacy
protection, and standards. To fully realize the promise of machine learning-based illness
prediction systems and understand their influence on the medical profession and public health,
these obstacles must be addressed.

5. Regulation and Ethical Issues:


The creation and implementation of a machine learning-based multiple disease prediction system
raises significant ethical and legal issues that need to be properly addressed to guarantee patient
safety, privacy, and equality, as with any healthcare breakthrough. These factors take into
account things like informed consent, algorithm transparency and accountability, data privacy
and security, and fair access to medical care.

Collaboration amongst multidisciplinary stakeholders, such as data scientists, policymakers, ethicists, patient advocates, and healthcare practitioners, is necessary to ensure the ethical and
responsible use of predictive analytics. Stakeholders may create standards, best practices, and
guidelines to direct the moral development, use, and assessment of machine learning-based
medical devices by encouraging candid communication and teamwork.

In summary:
To sum up, the Multiple Disease Prediction System employing Machine Learning has a wide
range of applications, including personalized medicine, extensive disease coverage, data
exchange and interoperability, ethical and legal issues, early disease detection, and disease
prediction. The initiative has the potential to transform healthcare delivery and enhance patient
outcomes in a variety of clinical settings by using big data analytics and machine learning
techniques. To fully realize this promise, however, stakeholders must work together to overcome
legal, moral, and technological obstacles as well as to guarantee the fair and responsible use of
analytics for prediction in the healthcare industry.

CHAPTER 2
LITERATURE REVIEW

1 - The motivation behind this paper is the crucial role the heart plays in the human body. Since heart-related illnesses are on the rise, it is crucial that they are accurately diagnosed and predicted, because they can result in fatal complications. Recent developments in AI and ML enable the creation of systems that forecast disease accurately and quickly. Using datasets acquired from the well-known website Kaggle, the authors analyze the accuracy of machine learning for predicting heart disease using logistic regression, and diabetes and Parkinson's disease using SVM. They also contrasted the methods using accuracy as a benchmark: SVM (81%) and Logistic Regression (82%) [2].

2 - According to the research, diabetes is a chronic illness associated with high blood sugar (glucose) levels, and it can lead to many other conditions, including blindness. The proposed study uses ML techniques to assess diabetic illness, since they make it easy and flexible to forecast whether a person has the illness or not. The method is largely motivated by the need to correctly identify people with diabetes. The prediction system uses two algorithms, SVM (Support Vector Machine) and Logistic Regression, whose accuracy scores are 78% and 75%, respectively; the accuracy of the two models was compared [1].

3 - Numerous research works have looked at the prediction of heart disease using machine learning algorithms. For example, Kaur and Singh (2020) proposed a heart disease
prediction model based on the SVM and K-nearest neighbor algorithms. The accuracy of the
SVM method was found to be higher than that of the K-nearest neighbor approach [9][10].

4 - Another illness that has been thoroughly researched using machine learning algorithms is
diabetes. A study by Patil (2021) used the K-nearest neighbor algorithm and logistic regression
to create a diabetes prediction model. The results demonstrated that, in terms of accuracy, the
logistic regression approach outperformed the K-nearest neighbor technique [11].

5 - S. Leoni Sharmila, C. Dharuman, and P. Venkatesan compare a number of machine learning approaches, such as fuzzy logic, fuzzy neural networks, and decision trees, in their paper [12].

6 - They developed a system that can accurately and quickly identify diabetes using the random forest approach. The dataset used in this study was provided by the UCI machine learning repository. First, the authors used conventional methods for data preparation, such as integration, reduction, and cleaning. Using the random forest approach, the accuracy was 90%, considerably higher than that of the other algorithms analyzed [14].

7 - To confirm the accuracy, they performed classification on the standard Cleveland heart disease database. SVM, KNN, and ANN (artificial neural network) were used to estimate the accuracy of the computerized prediction approach; KNN achieved 82.963% and ANN 73.3333%. They suggested SVM as the best classification method, with the highest degree of accuracy for heart disease prediction [15].

8 - With a success rate of almost 97.13%, Support Vector Machine (SVM) achieves the best
results in terms of precision and low error rates, confirming its usefulness in cancer prediction
and diagnosis [16].

9 - With the help of the Pima diabetes dataset, they used the SVM algorithm to assess and predict diabetes. This study used four different types of kernels—polynomial, linear, RBF, and sigmoid—on a machine learning platform to predict diabetes. Across the kernels, the authors obtained accuracies between 0.69 and 0.82; the SVM algorithm produced its highest accuracy of 0.82 with the radial basis kernel function [17].

10 - They used machine learning techniques to identify diabetes. The aim of this research was to develop a technique that might enable a person to accurately diagnose the user's diabetes. They used three main algorithms—Decision Tree, Naïve Bayes, and SVM—whose accuracies were determined to be 85%, 77%, and 77.3%, respectively. After the training phase, they also used an ANN algorithm to observe the network's responses and determine whether or not the sickness was correctly classified. They compared each model's accuracy, precision, recall, and F1 score [18].

11 - Using the UCI repository dataset for both training and validation, they used k-nearest neighbor, decision tree, linear regression, and SVM to determine the accuracy of ML in predicting cardiovascular illness. The reported accuracies were SVM 83%, k-nearest neighbor 87%, decision tree 79%, and linear regression 78% [19].

12 - They collected individual information, including medical histories, and used online technologies to gather lifestyle data and save it in a data repository. The user inputs their health conditions each day; the entered data is interpreted, and using natural language processing (NLP), the individual's illness may then be anticipated [20].

13 - The study discusses how diabetes is one of the most hazardous illnesses in the world and how it may lead to a wide range of ailments, including blindness. Because machine learning methods make it simple and adaptable to predict whether a patient is unwell or not, they were utilized in this work to determine the presence of diabetic disease. The purpose of this investigation was to develop a method that would enable the patient to accurately diagnose their diabetes. Here, they examined the accuracy of three key algorithms—Decision Tree, Naïve Bayes, and SVM—which achieved 85%, 77%, and 77.3%, respectively. Following the training phase, they also used an ANN algorithm to observe the network's responses, which indicate whether the illness has been correctly identified or not. They contrasted each model's accuracy, precision, recall, and F1 score [19].

14 - The paper's primary goal is to demonstrate how vital the heart is to all living things. Because heart-related diseases may be fatal, it is essential that their diagnosis and prognosis be precise and accurate, and artificial intelligence and machine learning can aid such prediction tasks. Using the UCI repository dataset for training and testing, the authors of this study compute the accuracy of machine learning for predicting heart disease using k-nearest neighbor, decision tree, linear regression, and SVM. They also contrasted the algorithms' accuracy: k-nearest neighbor (87%), decision tree (79%), linear regression (78%), and SVM (83%) [18].

15 - The heart is incredibly vital in humans. Since heart-related illness can be fatal, its prediction needs to be precise and accurate. This work therefore describes the accuracy of machine learning algorithms for heart disease prediction; the authors employed SVM, linear regression, decision trees, and k-nearest neighbors [18].

16 - The goal of this research is to comprehend support vector machines and use them to forecast
lifestyle disorders to which a person may be vulnerable [23].

17 - In this study, two supervised data mining algorithms—the Naïve Bayes classifier and decision tree classification—were used to analyze the dataset and forecast the likelihood that a patient will have heart disease. The decision tree model correctly predicted 91% of the patients with heart disease, while the Naïve Bayes classifier predicted 87% [4].

18 - The performance of two distinct data mining classification algorithms was evaluated in
order to determine which classifier performed the best when it came to the prediction of different
illnesses. In the fields of data mining and machine learning, developing accurate and
computationally effective classifications for medical applications is a significant problem [5].

19 - In 2020, Ramik Rawal conducted a study across three domains: the first predicts the presence of cancer before a diagnosis is made, the second predicts the diagnosis and course of therapy, and the third focuses on the course of treatment. The research also compares the performance of four classifiers—Random Forest, SVM, kNN, and logistic regression—based on accuracy. A tenfold cross-validation approach is used to assess and analyze the data in terms of efficacy and efficiency [6].

CHAPTER 3
METHODOLOGY

3.1 SYSTEM DESIGN:


Designing a machine learning-based multiple disease prediction system involves many steps. The following is a high-level summary of the procedure:

1. Determining the Problem and Gathering Information:


● Clearly state the issue you want to resolve, along with the illnesses you hope to forecast.
● Gather pertinent datasets including demographics, lab results, medical records,
symptoms, and other important data.

2. Preparing data:
● Deal with data irregularities, outliers, and missing values.
● To guarantee consistency, scale or normalize the characteristics.
● Encode categorical variables as numerical codes (see the sketch below).
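
As an illustration, here is a minimal preprocessing sketch using pandas and scikit-learn. The file name and column names are hypothetical placeholders, not the project's actual datasets:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# "records.csv" and all column names below are hypothetical placeholders.
df = pd.read_csv("records.csv")

numeric_cols = ["age", "trestbps", "chol"]   # numerical features
categorical_cols = ["sex", "cp"]             # categorical features

preprocess = ColumnTransformer([
    # Impute missing numeric values with the median, then standardize.
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    # Encode categorical variables as numeric indicator columns.
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])

X = preprocess.fit_transform(df[numeric_cols + categorical_cols])
y = df["target"]
```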

3. Feature Engineering:
● Extract pertinent, disease-predictive features from the data.
● Take domain expertise into account while choosing significant features.
● If required, use methods such as dimensionality reduction.

4. Model Choice:
Select the best machine learning techniques for making predictions. When predicting
several diseases, you may want to think about:
● Ensemble techniques such as AdaBoost, Gradient Boosting, and Random Forest.
● Deep learning models, such as Transformers, Recurrent Neural Networks (RNNs), and
Convolutional Neural Networks (CNNs).
● Try out several models and assess how well they work using relevant measures such as area under the ROC curve (AUC-ROC), accuracy, precision, recall, and F1-score.

5. Training Models:
● Divide the data into training, validation, and test sets.
● Use the training data to train the chosen models.
● To maximize model performance, fine-tune hyperparameters using methods like grid search or random search (a sketch of this step follows).
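
Continuing from the preprocessing sketch above (with X and y as produced there), a hedged sketch of the split-and-tune step might look like this; the split proportions and grid values are illustrative assumptions:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# 70% train, 15% validation, 15% test (proportions are illustrative).
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=42)

# Grid search with 5-fold cross-validation over a small hyperparameter grid.
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    scoring="f1",
    cv=5,
)
grid.fit(X_train, y_train)
print(grid.best_params_)
print("validation F1:", grid.score(X_val, y_val))
```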

6. Model Evaluation:
● Assess the performance of the trained models using the validation set.
● Adjust models or test alternative strategies in light of assessment outcomes.
● Decide which model (or models) will perform best when deployed.

7. Deployment:
● Deploy the chosen model or models in a production environment.
● Create a stand-alone application or incorporate the model(s) into the current healthcare system.
● Put in place appropriate privacy and security safeguards to protect patient data.

8. Monitoring and Maintenance:


● Keep an eye out for drift or performance loss in the deployed model(s).
● To keep the model current and accurate, retrain it with fresh data on a regular basis.
● Take user comments into account and update the system as necessary.

Figure 3.1: System Design

Figure 3.2: Flow Chart

3.2 ALGORITHMS:
1. Random Forest:
Random Forest is a potent ensemble learning technique for classification and regression. During training it builds a large number of decision trees and outputs the mode of the classes predicted by the individual trees (classification) or their mean prediction (regression).
The Random Forest algorithm's operation is broken down as follows:

● Random Sampling:
➔ To construct each decision tree, randomly choose a portion of training data (with
replacement). We refer to this procedure as bootstrapping.
● Decision Tree Construction:
➔ For each decision tree, a random subset of features is considered at each node to find the optimal split. This randomness helps decorrelate the trees.
● Tree Growth:
➔ Every tree is grown to its full extent or until a predetermined stopping criterion is reached, such as a maximum depth or a minimum number of samples per leaf.
● Voting (Classification) / Averaging (Regression):
➔ In classification tasks, every tree in the forest predicts the class of an input data point, and the class that receives the most votes across all the trees is selected as the final forecast.
➔ In regression tasks, the final prediction is determined by taking the average of the
numerical values predicted by each tree.
● Ensemble Output:
➔ By averaging or polling on several independent predictions, the ensemble of
decision trees decreases overfitting and increases generalization.

Random Forest offers a number of benefits:


➔ Averaging many trees makes it resistant to overfitting.
➔ It is capable of handling big, highly dimensional datasets.
➔ It offers feature significance estimations, which are helpful for feature selection.
➔ It requires little adjustment and is comparatively insensitive to hyperparameters.

There are a few restrictions, though:
➔ Random Forest models, particularly when dealing with an enormous amount of
trees and data, may be computationally and memory-intensive.
➔ If improperly handled, they might not perform effectively on skewed datasets.
➔ Random Forests may not be as good as certain other algorithms, such as Gradient
Boosting Machines (GBMs), at capturing intricate correlations in the data.
➔ To maximize performance for the particular problem at hand, it is crucial to fine-tune Random Forest settings such as the number of trees, the maximum depth of the trees, and the number of features to consider at each split. Furthermore, cross-validation may be used to choose the optimal hyperparameters and assess model performance (a short sketch follows this list).
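
As a hedged illustration of these ideas, the minimal sketch below trains a Random Forest on scikit-learn's built-in breast-cancer dataset (standing in for the project's cancer data) and prints the feature-importance estimates mentioned above:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# The built-in breast-cancer data stands in for the project's cancer dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 200 bootstrapped trees; each split considers a random subset of features.
model = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)
model.fit(X_train, y_train)

print("test accuracy:", model.score(X_test, y_test))
# Feature-importance estimates, useful for feature selection.
print("most important feature indices:", model.feature_importances_.argsort()[-5:])
```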

2. Logistic Regression:
Logistic regression is a classification procedure used to estimate the likelihood of a binary outcome. It works by fitting the data to a sigmoid-shaped curve that connects the input features to the likelihood of the positive class. During training, the model learns how the input features relate to the binary outcome by adjusting its parameters to reduce the difference between the predicted probabilities and the real class labels. Once trained, the model may make predictions by estimating the likelihood that a particular input belongs to the positive class and then classifying it against a threshold, often 0.5. Because of its effectiveness, interpretability, and simplicity, logistic regression is used extensively.
This is an explanation of how logistic regression functions:

● Preparing Data:
➔ Be careful to clean and preprocess the data before using logistic regression. This
includes coding category variables, managing missing values, and scaling
numerical characteristics as needed.
● Model Representation:
➔ A sigmoid (logistic) function is used in logistic regression to predict the
connection between the input characteristics and the binary result. With the use of
this function, probabilities ranging from 0 to 1 are created from a linear
arrangement of the input characteristics and model coefficients.

● Linear Combination:
➔ The dependent variable's log-odds (the binary outcome) and the independent
variables are assumed to have a linear relationship in logistic regression. The
logarithm of the chances of the event happening is the log-odds, also referred to
as the logit function.
● Sigmoid Function:
➔ The sigmoid function used in logistic regression resembles an extended 'S' curve. It maps any input value to a value in the range 0 to 1, and is used to translate the result of a linear combination of the input features and coefficients into probabilities. The output of the sigmoid function approaches 1 when the input is large and approaches 0 when the input is small or negative. Because of this, it may be used to express probabilities in binary classification tasks.
● Training Models:
➔ Logistic regression determines the model coefficients, or weights, that best match
the training set during the training phase. Typically, to decrease a cost function,
such as the binary cross-entropy loss, optimization methods like gradient descent
or the Newton-Raphson algorithm are used.
➔ The link between the input characteristics and the binary outcome's log-odds is
represented by the model coefficients. A positive coefficient means that the
likelihood of the positive class grows as the feature value increases, while
negative coefficients means the reverse.
● Boundary of Decision:
➔ In logistic regression, the decision boundary is a cutoff point that divides cases
into two groups. It is the line or hyperplane where the estimated likelihood of
belonging to one class meets the threshold value (often 0.5), and is determined by
coefficients learnt during training.
● Prediction:
➔ To make predictions, logistic regression uses the trained model to determine the likelihood that a given input is a member of the positive class. The input is classified as positive if the probability is greater than a threshold, typically 0.5; otherwise, it is classified as negative.
● Model Evaluation:
➔ Metrics like accuracy and precision are used to assess the model's performance after training, on a separate validation or test dataset. These measures evaluate the model's ability to generalize to new data.
➔ These procedures may be used to model binary classification issues and provide
predictions based on input characteristics using logistic regression.

The benefits of logistic regression are numerous:
➔ It is simple to implement and computationally efficient.
➔ The findings are comprehensible since the coefficients show how each parameter
affects the probability of the result.
➔ It is less likely to overfit, particularly when dealing with a limited number of features, and performs well with data that is linearly separable.

Logistic regression has drawbacks as well:


➔ It assumes a linear relationship between the log-odds of the outcome and the input attributes, which might not always hold in practice.
➔ Non-linear decision boundary issues are not a good fit for it.
➔ It can only be used for binary classification problems; multi-class classification
requires adjustments such as one-versus-rest or softmax regression.

It's critical to manage categorical variables, preprocess the data, and, if needed, execute
feature scaling while using logistic regression. Furthermore, regularization parameter
tweaking may enhance model generalization and reduce overfitting.
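
The toy sketch below illustrates these points with scikit-learn; the tiny one-feature dataset is an invented example, not project data, and shows how the sigmoid turns a learned linear combination into a probability:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    """Map a linear combination of features to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Tiny invented dataset: one feature, binary label.
X = np.array([[0.5], [1.5], [2.5], [3.5]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression(C=1.0)  # C is the inverse regularization strength
clf.fit(X, y)

# predict_proba applies the sigmoid to the learned linear combination;
# predict uses the default 0.5 probability threshold.
z = clf.intercept_ + clf.coef_ @ np.array([2.0])
print(sigmoid(z), clf.predict_proba([[2.0]]), clf.predict([[2.0]]))
```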

3. AdaBoost:
AdaBoost is an ensemble learning technique mostly used for classification tasks. It works by combining the outputs of many weak classifiers to produce a strong, effective model. The following is a thorough description of AdaBoost's operation:

● Initialization:
➔ The training set's data points are originally assigned identical weight.
● Base Model Training:
➔ AdaBoost begins by using the training data to train a weak learner, often a decision tree. A classifier that does just marginally better than random guessing is considered a weak learner.
➔ Taking the data-point weights into account, the weak learner is trained to minimize the error rate on the training set.
● Weight Update:
➔ Based on the accuracy of the trained weak learner, AdaBoost gives the classifier a weight. Classifiers with greater accuracy are assigned larger weights.
➔ Misclassified data points are given higher weights, while correctly classified data points are down-weighted.

● Iterative Training:
➔ AdaBoost performs the training procedure repeatedly using the modified weights.
➔ A fresh weak learner undergoes training on the reweighted data for every
iteration.
➔ The procedure continues until a specified number of weak learners have been trained or until a sufficient performance level is attained.
● Final Model:
➔ A weighted mixture of all the weak learners makes up the final AdaBoost model.
➔ Each weak learner's contribution to the final prediction is weighted by its accuracy during training.
➔ More accurate weak learners usually have a greater impact on the final prediction.
● Prediction:
➔ AdaBoost uses the weights assigned to each weak learner to combine their
predictions in order to generate predictions.
➔ A weighted majority vote (classification) or the weighted mean (regression) of the weak learners' predictions determines the final forecast.

AdaBoost offers a number of benefits:


➔ It is adaptive, which means that as training goes on, it concentrates more on cases
that are challenging to categorize.
➔ Even with very basic, weak learners, it may obtain excellent accuracy.
➔ When compared with training a single complicated model, it is less prone to
overfitting.

AdaBoost has drawbacks as well:


➔ It is sensitive to noisy data and outliers, which can adversely affect its performance.
➔ AdaBoost training may be computationally costly, particularly if a lot of weak
learners are used.
➔ To avoid overfitting and get peak performance, it's critical to fine-tune
hyperparameters like the learning rate and the number of weak learners.
➔ All things considered, AdaBoost is an effective method for classification
problems, especially when paired with inadequate learners that focus on distinct
data features.
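
A hedged sketch of AdaBoost with decision stumps, again using the built-in breast-cancer dataset as a stand-in for the project's data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# scikit-learn's default weak learner is a depth-1 decision tree ("stump");
# learning_rate shrinks each learner's contribution to the ensemble.
ada = AdaBoostClassifier(n_estimators=100, learning_rate=0.5, random_state=0)
print("mean CV accuracy:", cross_val_score(ada, X, y, cv=5).mean())
```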

4. SVM(Support Vector Machine):
For problems involving regression and classification, the supervised learning method
Support Vector Machine (SVM) is used. It operates by determining which hyperplane in a
space of high-dimensional features best divides data points into various classes. This is a
thorough description of SVM:

● Maximum Margin:
➔ The goal of SVM is to find the hyperplane that maximizes the margin between the classes. The margin is the distance between the hyperplane and the closest data points in each class; those closest points are called the support vectors.
● Linear Separability:
➔ For linearly separable data, SVM determines the hyperplane that divides the classes with the greatest margin.
➔ For data that is not linearly separable, SVM maps it to a higher-dimensional space in which it becomes linearly separable, using a method known as the kernel trick.
● Optimization:
➔ SVM frames the problem as a convex optimization task, with the goal of maximizing the margin while minimizing the classification error.
➔ The optimization objective is solved as a quadratic programming problem, often with the use of quadratic programming solvers or gradient descent methods.
● Kernel Trick:
➔ SVM can handle data that is not linearly separable by using kernel functions like
polynomial, sigmoid, or radial basis function (RBF) to map the data to a
higher-dimensional space.
➔ SVM is able to capture intricate decision limits in the converted feature space
because of these kernels.
● Regularization:
➔ SVM balances the trade-off between decreasing classification error and
maximizing the margin by incorporating a regularization parameter (C).
➔ Smaller margins and possibly higher training accuracy are achieved with higher
values of C, but overfitting may become more likely.
● Prediction:
➔ Once trained, SVM classifies new data points by determining which side of the hyperplane they fall on.
➔ SVM handles multiple classes in multi-class classification by utilizing techniques
like one-vs-one or one-vs-all.

SVM offers a number of benefits:
➔ It functions well with both linearly as well as non-linearly separable data, and it is
efficient in high-dimensional spaces.
➔ Because it only employs a portion of training rewards as support vectors, it is
memory-efficient.
➔ It resists overfitting well, particularly when the right regularization is applied.

SVM is not without limitations, though:


➔ It may incur significant computational costs, especially when working with large
databases or non-linear kernels.
➔ It can be difficult to select the ideal kernel and adjust hyperparameters like the
parameter for regularization.
➔ When there is a lot of imbalance or noise in the data, SVM performance may
suffer.
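
A minimal sketch of an SVM with an RBF kernel, assuming the built-in breast-cancer dataset as a stand-in; scaling is included because SVMs are sensitive to feature magnitudes:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Feature scaling matters for SVMs; the RBF kernel captures non-linear
# boundaries, and C trades margin width against training error.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print("mean CV accuracy:", cross_val_score(svm, X, y, cv=5).mean())
```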

3.3 DETAILED DISCUSSION OF THE DATASET:


1. Cancer:
● Radius (Mean, SE, Worst):
➔ Mean Radius: The tumor's average size.
➔ SE Radius: Variation in tumor size.
➔ Worst Radius: The largest tumor size.
● Texture (Mean, SE, Worst):
➔ Mean Texture: Tumor surface roughness or smoothness.
➔ SE Texture: Consistency of the texture.
➔ Worst Texture: The roughest texture observed.
● Perimeter (Mean, SE, Worst):
➔ Mean Perimeter: The tumor's entire boundary length.
➔ SE Perimeter: Variations in perimeter length.
➔ Worst Perimeter: The longest measured perimeter.
● Area (Mean, SE, Worst):
➔ Mean Area: The average area that the tumor covers.
➔ SE Area: Variations in tumor area.
➔ Worst Area: The largest area a tumor has covered.
● Smoothness (Mean, SE, Worst):
➔ Mean Smoothness: A measure of tumor-edge irregularity.
➔ SE Smoothness: Consistency of the smoothness.
➔ Worst Smoothness: The most uneven edges observed.
● Compactness (Mean, SE, Worst):
➔ Mean Compactness: Tumor cell density in relation to the perimeter.
➔ SE Compactness: Consistency of the compactness.
➔ Worst Compactness: The highest degree of compactness.
● Concavity (Mean, SE, Worst):
➔ Mean Concavity: The degree to which the tumor's surface has concave areas.
➔ SE Concavity: Uniformity in the degree of concavity.
➔ Worst Concavity: The most severely concave areas.
● Concave Points (Mean, SE, Worst):
➔ Mean Concave Points: The number of segments on the tumor edge that point inward.
➔ SE Concave Points: Consistency of these counts.
➔ Worst Concave Points: The highest count observed.
● Symmetry (Mean, SE, Worst):
➔ Mean Symmetry: The tumor's symmetry or balance.
➔ SE Symmetry: Consistency of the symmetry.
➔ Worst Symmetry: The least amount of symmetry.
● Fractal Dimension (Mean, SE, Worst):
➔ Mean Fractal Dimension: Tumor shape complexity.
➔ SE Fractal Dimension: Consistency of the complexity.
➔ Worst Fractal Dimension: The highest observed complexity.

2. Heart Disease:
● Age: The individual's age.
● Sex: The person's gender.
● CP: The type of chest pain:
➔ 0: An ordinary (typical) chest pain.
➔ 1: An atypical type of chest pain.
➔ 2: A non-cardiac chest pain.
➔ 3: No chest pain (asymptomatic).
● Trestbps: Resting blood pressure.
● Chol: The amount of blood fat, or cholesterol, that is present.
● Fbs: Whether they have elevated blood sugar following a period of fasting.
● Restecg: The resting electrocardiogram (cardiac electrical activity test) findings reveal:
➔ 0: Typical.
➔ 1: A little irregularity.
➔ 2: An indication of an enlarged heart.
● Thalach: The heart rate attained during the test.
● Exang: If people have chest discomfort while working out.

● Oldpeak: The degree to which an individual's heart activity varies between
exercise and rest.
● Slope: The pattern of their heart rate during physical activity:
➔ 0: Going up.
➔ 1: Staying flat.
➔ 2: Going down.
● Ca: The number of major blood vessels in the heart that may be seen on a certain kind of X-ray (fluoroscopy).
● Thal: A specific kind of blood disorder (thalassemia):
➔ 3: Normal.
➔ 6: A fixed defect.
➔ 7: A reversible defect.
● Target: Whether or not they suffer from heart disease
➔ 1: Yes.
➔ 0: No.
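
As a quick illustration of inspecting this dataset with pandas, the sketch below loads and summarizes it; "heart.csv" is a hypothetical placeholder filename, with columns mirroring the features described above:

```python
import pandas as pd

# "heart.csv" is a placeholder filename; the columns mirror the features
# described above (age, sex, cp, trestbps, chol, fbs, restecg, thalach,
# exang, oldpeak, slope, ca, thal, target).
heart = pd.read_csv("heart.csv")

print(heart.shape)                      # rows x columns
print(heart["target"].value_counts())  # class balance: 1 = disease, 0 = none
print(heart.describe())                # per-feature summary statistics
```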

3.4 TOOLS AND TECHNOLOGY USED IN THE PROJECT:

1. React JavaScript:
ReactJS is a JavaScript library created by Facebook to help in building user interfaces, particularly for web applications. It enables programmers to design interactive user-interface components that update quickly in reaction to changes in data. React's declarative approach makes code simpler to understand and maintain. Because of its performance, versatility, and strong community support, it is widely used in building modern web applications.

2. Python:
For each of these stages, Python provides a robust ecosystem of libraries, such as pandas for data processing, scikit-learn for machine learning algorithms, and matplotlib for data visualization. Additionally, web applications may be developed and deployed using frameworks like Flask or Django.

● NumPY:
A Python library called NumPy was created specifically for numerical computation,
especially for jobs requiring big matrices and arrays. It offers a robust and adaptable
numerical data manipulation interface with a plethora of features and functionalities.
Here are some of NumPy's salient features:

➔ Arrays: The n-dimensional array (ndarray), a homogeneous collection of items with fixed-size dimensions, is the fundamental data structure in NumPy. Because of their contiguous memory layout, these arrays perform numerical computations more efficiently than Python lists.
➔ Mathematical Functions: For executing operations on arrays, NumPy comes
with an extensive library of mathematical functions. This covers exponentials,
logarithms, trigonometric functions, fundamental arithmetic operations, and more.
➔ Array Manipulation: NumPy has an extensive collection of functions for working with arrays, including indexing, slicing, splitting, concatenating, and reshaping. These operations make efficient data extraction and manipulation possible.
➔ Broadcasting: NumPy facilitates broadcasting, which enables the combination of
arrays of various forms in mathematical processes. This enhances the clarity and
performance of code by enabling element-wise actions between arrays of various
forms.
➔ Linear Algebra: NumPy has functions for solving linear equations and carrying
out a range of linear algebra operations, including matrix multiplication, matrix
inversion, eigenvalue decomposition, and singular value decomposition.
➔ Random Number Generation: NumPy offers functions for producing random numbers from various probability distributions. This is helpful for activities like sampling, simulation, and producing random data for testing and experimentation.
➔ Python Integration: NumPy integrates easily with SciPy, Matplotlib, Pandas, and scikit-learn, among other Python scientific-computing libraries and tools. This makes it possible to build intricate machine learning and data-analysis workflows by combining different frameworks.
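
A tiny sketch of the features listed above (element-wise arithmetic, linear algebra, and random number generation):

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])

print(a * 10)                 # element-wise arithmetic (broadcasting a scalar)
print(a.mean(axis=0))         # column means
print(a @ np.linalg.inv(a))   # matrix multiplication and inversion -> identity
print(np.random.default_rng(0).normal(size=3))  # random number generation
```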

● Pandas:
Pandas is a Python package designed for data analysis and manipulation. Its two main data structures are DataFrame and Series. A DataFrame is a two-dimensional labeled data structure with rows and columns that resembles a table or spreadsheet, while a Series is a one-dimensional array-like object that may hold data of any type.

Pandas' salient characteristics include:


➔ Data Structures: Pandas offers effective data structures, including DataFrame and Series, for managing structured data.
➔ Data Manipulation: Indexing, slicing, filtering, sorting, merging, joining, and
reshaping are just a few of the many methods that Pandas provides for working
with data.
➔ Missing Data Handling: To enable thorough data cleaning and preprocessing,
Pandas offers techniques for identifying, eliminating, and filling in missing data.

➔ Aggregation and Grouping: Pandas allows for the grouping of data based on one
or more keys, which makes it possible to perform aggregation functions like
count, mean, sum, and custom functions.
➔ Time Series Data: Pandas has tools to work with time series data, such as time
zone handling, frequency conversion, resampling, and date/time indexing.
➔ Input/Output: Pandas can read and write data to and from a variety of file formats,
such as HTML, HDF5, CSV, Excel, and SQL databases.
➔ Integration: NumPy, Matplotlib, SciPy, and scikit-learn are just a few of the
Python libraries that Pandas easily integrates with to provide a robust
environment for machine learning, data analysis, and visualization.
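
A short sketch of the missing-data handling, filtering, and grouping features listed above, using an invented toy table:

```python
import pandas as pd

df = pd.DataFrame({
    "age":  [52, 61, None, 45],
    "sex":  ["M", "F", "F", "M"],
    "chol": [212, 280, 199, 245],
})

df["age"] = df["age"].fillna(df["age"].median())  # missing-data handling
print(df[df["chol"] > 200])                       # filtering rows
print(df.groupby("sex")["chol"].mean())           # grouping and aggregation
```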

● Pickle:
Pickle is a Python standard-library module used to serialize and deserialize Python objects. With its help, complex Python objects may be transformed into byte streams, which can then be sent over a network or saved in a file. Pickle is often used to save the state of Python objects on disk for eventual loading back into memory.

Essential elements of Pickle comprise:


➔ Serialization: Pickle makes it possible to serialize Python objects, which enables
them to be represented as a byte stream.
➔ Data Preservation: Pickle maintains Python objects' properties, methods, and
associations while also maintaining their structure and state.
➔ Flexibility: Pickle is capable of serializing a large number of Python objects, such
as nested data structures, custom classes, functions, and more.
➔ Binary Format: Pickle converts items into a small, effective binary format for
transport and storage.
➔ Cross-Version Compatibility: Pickle offers the ability to deserialize objects
serialized in one version of Python into another because it is designed to function
with multiple versions of the language.
➔ Security Considerations: Pickle might run arbitrary code during deserialization,
which could result in security vulnerabilities. Therefore, it should be used
carefully when importing data from untrusted sources.
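
A minimal usage sketch follows. The dictionary standing in for a trained model's state
and the model.pkl file name are purely illustrative.

```python
import pickle

# Any picklable Python object can be serialized; a dict stands in
# here for a trained model's state (illustrative values only)
model_state = {"algorithm": "SVC", "accuracy": 0.98,
               "features": ["age", "glucose", "bmi"]}

# Serialize to a compact binary file
with open("model.pkl", "wb") as f:
    pickle.dump(model_state, f)

# Deserialize later -- only ever unpickle data from trusted sources
with open("model.pkl", "rb") as f:
    restored = pickle.load(f)

assert restored == model_state
```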

● SkLearn:
Scikit-learn, often written simply as sklearn, is a well-known Python machine learning
package. It offers a straightforward and effective toolkit for data mining and analysis
tasks, particularly supervised and unsupervised learning.

This is a thorough rundown of scikit-learn:
➔ Machine Learning Algorithms: Scikit-learn implements a large number of
supervised and unsupervised learning algorithms, including regression (linear,
polynomial, ridge, Lasso); classification (logistic regression, decision trees,
random forests, SVM, KNN); clustering (K-means, hierarchical, DBSCAN);
dimensionality reduction (PCA, t-SNE); and model selection and evaluation
techniques (cross-validation, grid search, and metrics such as accuracy,
precision, recall, and F1-score).
➔ Consistent Interface: Scikit-learn offers a uniform interface for a variety of
methods, which simplifies the process of experimenting with various models and
evaluating their effectiveness.
➔ Data Integration: Scikit-learn connects easily with other Python libraries for
data analysis and manipulation, notably NumPy and Pandas, and accepts data
as NumPy arrays or Pandas DataFrames.
➔ Preprocessing and Feature Extraction: To prepare data for machine
learning algorithms, Scikit-learn offers tools for preprocessing (such as scaling,
normalization, and imputation of missing values) and feature extraction
(such as text feature extraction using TF-IDF).
➔ Pipeline: Scikit-learn's Pipeline class chains multiple data processing steps
and machine learning models into a single workflow, making pipelines simple
to build, reproduce, and deploy.
➔ Ease of Use: Scikit-learn's well-designed APIs, comprehensive documentation,
and uniform conventions all contribute to its simplicity and ease of use. This
makes it approachable for both novice and seasoned machine learning
practitioners.
➔ Community & Ecosystem: Scikit-learn has a thriving user and developer
community, frequent updates, and active development. It also provides a wealth of
online resources, such as user manuals, examples, and tutorials. A short usage
sketch follows this list.
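
As an illustration of the consistent interface, preprocessing tools, and Pipeline class
described above, the sketch below trains a support vector classifier on scikit-learn's
bundled breast-cancer dataset. It is a simplified, hypothetical stand-in, not the
project's actual training script.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Bundled toy dataset, used here purely for illustration
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Pipeline chains preprocessing (scaling) with a classifier
clf = Pipeline([("scale", StandardScaler()),
                ("svc", SVC(kernel="rbf"))])
clf.fit(X_train, y_train)

print(accuracy_score(y_test, clf.predict(X_test)))
```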

● Scipy:
Scipy is a Python package used for technical and scientific computing. It extends
NumPy's capabilities by adding features for signal processing, linear algebra,
interpolation, optimization, integration, statistics, and other areas.

This is a thorough synopsis of Scipy:


➔ Numerical Integration: The numerical integration functions offered by Scipy
include quad, which integrates a function over a finite or infinite interval;
dblquad, which performs double integration; and nquad, which integrates over
n dimensions.

➔ Optimization: Scipy provides optimization routines to find a function's
minimum (or maximum). These include minimize_scalar for scalar univariate
functions and minimize for multivariate optimization, covering both
unconstrained and constrained problems.
➔ Interpolation: Scipy has interpolation functions that let you estimate unknown
values between known data points. It contains interp1d for one-dimensional
interpolation and griddata for multidimensional interpolation.
➔ Signal Processing: Wavelet transforms, windowing, Fourier analysis, and filtering
are just a few of the many functions available in Scipy for signal processing. It
has functions for spectrum analysis (fft), correlation (correlate), convolution
(convolve), and more.
➔ Linear Algebra: Scipy has functions for a range of linear algebraic operations,
including matrix inversion (inv), eigenvalue decomposition (eig), singular
value decomposition (svd), and solving linear equations (solve). It also
supports sparse matrices.
➔ Statistics: Scipy includes statistical functions for common tasks such as
probability distributions, hypothesis testing, and descriptive statistics. It offers
functions to run statistical tests (t-test, chi-square test), draw random variables
from probability distributions, and compute summary statistics.
➔ Interoperability with Other Libraries: Scipy integrates easily with other
Python libraries, especially NumPy and Matplotlib. Because it takes NumPy
arrays as inputs and outputs, incorporating Scipy functions into existing
workflows is straightforward.
➔ Rich Documentation: Scipy's rich documentation, which includes tutorials and
examples, makes it simple for users to understand and make efficient use of its
features.
➔ Overall, Scipy is a powerful, general-purpose Python scientific computing
toolkit that offers a variety of tools and functions for mathematical and
scientific tasks, as the sketch following this list illustrates. It is extensively
used in data analysis, engineering, academic research, and other fields.
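
The sketch below touches three of the areas listed: integration, optimization, and
statistics. The functions shown (quad, minimize_scalar, ttest_ind) are standard SciPy
calls; the data are synthetic and chosen only for illustration.

```python
import numpy as np
from scipy import integrate, optimize, stats

# Numerical integration: area under sin(x) from 0 to pi (~2.0)
area, error = integrate.quad(np.sin, 0, np.pi)

# Optimization: minimum of a simple scalar function (at x = 2)
result = optimize.minimize_scalar(lambda x: (x - 2.0) ** 2)
print(result.x)

# Statistics: two-sample t-test on synthetic samples
a = stats.norm.rvs(loc=0.0, size=100, random_state=0)
b = stats.norm.rvs(loc=0.5, size=100, random_state=1)
t_stat, p_value = stats.ttest_ind(a, b)
print(t_stat, p_value)
```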

● Regex:
Regular expressions, or "regex," are character sequences used to specify search
patterns. They are powerful tools for searching within text and manipulating strings.

This is a thorough explanation:


➔ Pattern Matching: With Regex, you can create patterns that will match certain
character sequences in a string. Within a text block, for instance, you may look up
phone numbers, email addresses, or URLs.

➔ Metacharacters: Regex defines patterns using metacharacters. These include
".", which matches any character; "*", which matches zero or more occurrences
of the preceding character; "+", which matches one or more occurrences;
"^", which matches the beginning of a string; "$", which matches the end of
a string; and more.
➔ Character Classes: Regex lets you use square brackets to construct character
sets. For instance, "[a-z]" matches any lowercase letter, "[0-9]" any digit, and
"[A-Za-z0-9]" any alphanumeric character.
➔ Quantifiers: Regex has quantifiers to indicate how many times a character or
group should appear. For instance, "{3}" matches exactly three occurrences,
"{3,}" three or more, and "{3,5}" between three and five occurrences.
➔ Boundaries and Anchors: Regex provides anchors such as "^" and "$" to
match the beginning and end of a string, respectively. Word boundaries may
be matched using "\b".
➔ Grouping and Capturing: Using parentheses, Regex enables you to group
together elements of a pattern. This is helpful for generating subpatterns or adding
quantifiers to many characters. You may utilize capturing groups to retrieve
certain characters from a matching string.
➔ Greedy versus Non-Greedy Matching: By default, regex quantifiers are
greedy: they match as much of the string as possible. Appending "?" makes a
quantifier non-greedy, so it matches as little of the string as possible.
➔ Escape Characters: Certain characters (metacharacters) have special meanings
in regex. To match them literally, you must escape them with a backslash "\".
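
The short sketch below, using Python's built-in re module, demonstrates character
classes, quantifiers, grouping, and greedy versus non-greedy matching. The sample
strings are invented for illustration.

```python
import re

text = "Contact: john.doe@example.com, backup: jane_99@mail.org"

# Character classes, quantifiers, and capturing groups together:
# group 1 captures the user part, group 2 the domain
pattern = r"([A-Za-z0-9._%+-]+)@([A-Za-z0-9.-]+\.[A-Za-z]{2,})"
for user, domain in re.findall(pattern, text):
    print(user, domain)

# Greedy vs non-greedy: ".*" matches as much as possible,
# while ".*?" matches as little as possible
s = "<b>bold</b> and <i>italic</i>"
print(re.findall(r"<.*>", s))     # ['<b>bold</b> and <i>italic</i>']
print(re.findall(r"<.*?>", s))    # ['<b>', '</b>', '<i>', '</i>']
```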

● TensorFlow:
TensorFlow is an open-source machine learning framework created by Google. It offers
an extensive ecosystem of tools, libraries, and community resources for building and
deploying machine learning models.

This is a thorough explanation:


➔ Computational Graphs: TensorFlow uses graphs to describe computations,
with nodes standing for operations and edges for the data flowing between
them. This makes it possible to optimize and run sophisticated machine
learning models quickly.
➔ Automatic Differentiation: TensorFlow can compute gradients of
mathematical expressions automatically. This is crucial when training deep
learning models with methods such as gradient descent.
➔ High-Level APIs: TensorFlow provides high-level APIs, such as Keras
(tf.keras) and TensorFlow Estimators, that simplify neural network
construction and training. These APIs offer easy-to-use interfaces for model
creation, training, and deployment.
➔ Flexible Design: TensorFlow's highly modular and adaptable design supports
deep neural networks, convolutional neural networks (CNNs), recurrent
neural networks (RNNs), reinforcement learning, and other machine learning
models. It works with conventional machine learning techniques as well as
deep learning.
➔ Hardware Acceleration: TensorFlow supports GPU and TPU computation,
letting you take advantage of hardware acceleration for training and inference.
In particular, it works with Google's Tensor Processing Units (TPUs),
dedicated hardware accelerators designed for deep learning workloads.
➔ Distributed Computing: TensorFlow facilitates distributed computing, which
lets you train models across many machines and devices at once. This makes it
possible to train large models on large datasets in a scalable manner.
➔ Model Deployment: TensorFlow offers tools to export trained models and
serve them in production settings. TensorFlow Serving and TensorFlow Lite
are companion frameworks for serving models in production and for running
them on mobile and embedded devices, respectively.
➔ Community & Ecosystem: TensorFlow's ecosystem comprises libraries,
tutorials, pre-trained models, tools, and an active development and research
community. This dynamic network encourages cooperation and creativity in
the machine learning space.
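
A minimal tf.keras sketch follows, showing the high-level API described above. The
layer sizes and the random stand-in data are arbitrary choices for illustration, not
the project's actual architecture.

```python
import numpy as np
import tensorflow as tf

# A tiny binary classifier built with the high-level Keras API
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Random synthetic data stands in for real patient features
X = np.random.rand(100, 8).astype("float32")
y = np.random.randint(0, 2, size=(100, 1))
model.fit(X, y, epochs=3, batch_size=16, verbose=0)
print(model.predict(X[:1], verbose=0))  # probability for one sample
```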

● Torch:
Facebook's AI Research lab (FAIR) created the open-source machine learning framework
PyTorch, also referred to as Torch. It offers a versatile and efficient framework for
building and training deep learning models.

This is a thorough explanation:


➔ Dynamic Computational Graphs: PyTorch employs dynamic computational
graphs, in which the graph topology is determined on the fly as the program
runs. This allows easier debugging and more adaptable model structures than
the static computational graphs used in some other frameworks.
➔ Automatic Differentiation: PyTorch's autograd module provides automatic
differentiation. It computes the gradients of mathematical expressions
automatically, which makes it easier to train sophisticated neural network
architectures with gradient-based optimization techniques.

➔ Tensor Operations: Tensors are the basic data structure used by PyTorch to
represent multi-dimensional arrays. Similar to NumPy arrays, tensors have extra
properties that are tailored for deep learning applications. A comprehensive
collection of tensor operations is provided by PyTorch for effective data handling
and processing.
➔ Neural Network Modules: To construct neural network topologies, PyTorch
offers a versatile and modular API. Model building and customisation are made
simple using neural network modules, which are just subclasses of
torch.nn.Module and encapsulate each layer and operation of a neural network.
➔ GPU Acceleration: PyTorch integrates seamlessly with CUDA, NVIDIA's
parallel computing platform, enabling GPU acceleration for training and
inference. On compatible hardware, this allows faster computation and faster
deep learning model training.
➔ Model Deployment: PyTorch provides tools and utilities for exporting trained
models to a variety of contexts, including mobile devices, web servers, and
production systems. TorchScript can translate PyTorch models into a
lightweight format that runs in environments without Python.
➔ Robust Ecosystem: PyTorch has an extensive ecosystem of libraries, tools,
and community resources, such as torchtext for natural language processing,
torchaudio for audio processing, and torchvision for computer vision
applications. PyTorch Hub also provides a collection of pre-trained models
and components for easy integration into your applications.
➔ Research and Development: PyTorch is extensively used for machine learning
and artificial intelligence research and development in both academia and
industry. Because of its adaptability, simplicity, and dynamic quality, it's
especially well-suited for testing out novel concepts and innovative methods.
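
The sketch below defines a small torch.nn.Module and performs one gradient-descent
step, illustrating tensors, the modular API, and autograd. The network shape and the
synthetic data are arbitrary and purely illustrative.

```python
import torch
import torch.nn as nn

# A minimal feed-forward network as a torch.nn.Module subclass
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(8, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, x):
        return self.layers(x)

net = Net()
X = torch.rand(100, 8)                     # synthetic features
y = torch.randint(0, 2, (100, 1)).float()  # synthetic labels

# One optimization step: autograd computes the gradients
loss_fn = nn.BCELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
optimizer.zero_grad()
loss = loss_fn(net(X), y)
loss.backward()
optimizer.step()
print(loss.item())
```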

CHAPTER 4
RESULT AND DISCUSSION

● Introduction to Intelligent Disease Prediction Systems:


The advent of intelligent computer systems has revolutionized healthcare by offering
unprecedented accuracy and efficiency in disease identification. In this section, we delve
into the results and discussions surrounding the role of machine learning algorithms in
predicting diseases. We explore the practical applications of these algorithms, their
challenges, and the proposed method for streamlining the diagnostic process.
● Machine Learning Algorithms in Disease Prediction:
Machine learning algorithms play a pivotal role in disease prediction, enabling healthcare
professionals to analyze vast amounts of patient data with remarkable precision. Various
algorithms, including K-Nearest Neighbors (KNN), Support Vector Machines (SVM),
Decision Trees, and Logistic Regression, have been employed to enhance the functioning
efficiency of predictive systems.
➔ Comparative Analysis of Algorithms:
A comparative analysis of different machine learning algorithms reveals their
distinct advantages and limitations in disease prediction. KNN, for instance, relies
on proximity-based classification, making it suitable for identifying diseases with
clear clusters in feature space. SVM, on the other hand, excels in handling
high-dimensional data and nonlinear relationships, making it ideal for complex
disease prediction tasks. Decision Trees offer interpretability and ease of
understanding, while Logistic Regression provides probabilistic outputs, aiding in
risk assessment. A brief cross-validated comparison of these four algorithms is
sketched after this list.
➔ Advantages of Unified Application Approach:
Our proposed method distinguishes itself by utilizing a unified application that
requires minimal user input yet offers comprehensive disease predictions. Unlike
traditional approaches that may necessitate the use of multiple programs for
different tasks, our system streamlines the diagnostic process, facilitating rapid
and accurate disease identification. This unified approach not only enhances
efficiency but also empowers healthcare professionals to make informed decisions
promptly.
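
To ground the comparative analysis above, the hedged sketch below cross-validates the
four algorithms on scikit-learn's bundled breast-cancer dataset. It is illustrative
only; the project's actual feature sets and hyperparameter tuning are not reproduced
here.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=5000),
}

# 5-fold cross-validated accuracy for each candidate algorithm
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```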

● Practical Applications and Implications:
The practical applications of intelligent disease prediction systems are far-reaching, with
profound implications for healthcare delivery and patient outcomes. By automating the
diagnostic process and leveraging machine learning algorithms, these systems offer
several benefits:
➔ Enhanced Efficiency and Accuracy:
Intelligent disease prediction systems expedite the diagnostic process, enabling
healthcare professionals to swiftly identify and address patient concerns. By
analyzing symptoms and historical data, these systems provide accurate
predictions, leading to timely interventions and improved health outcomes.
➔ Cost-Efficiency and Accessibility:
The adoption of computer programs in disease prediction contributes to
cost-efficiency in healthcare delivery. By reducing the need for extensive manual
analysis and invasive diagnostic procedures, these systems lower healthcare costs
while improving accessibility, particularly in underserved communities and rural
areas.
➔ Proactive Healthcare Management:
Early disease detection facilitated by predictive systems allows for proactive
healthcare management, leading to better disease management and prevention. By
identifying risk factors and warning signs at an early stage, healthcare
professionals can implement targeted interventions, ultimately reducing the
burden of chronic diseases and improving overall public health.
● Challenges and Considerations:
While intelligent disease prediction systems offer promising solutions to healthcare
challenges, several challenges and considerations must be addressed:
➔ Data Quality and Integration:
The quality and integration of data are paramount to the success of predictive
systems. Ensuring the accuracy, reliability, and privacy of patient data is essential
for generating meaningful insights and predictions. Moreover, integrating diverse
datasets from various sources poses challenges related to data compatibility and
standardization.
➔ Model Evaluation and Validation:
The evaluation and validation of predictive models are critical to ensuring their
reliability and effectiveness. Rigorous testing and validation procedures, including
cross-validation and independent validation, are essential to assess model
performance and generalizability across diverse patient populations (a brief
validation sketch follows this list).

➔ Ethical and Regulatory Considerations:
Ethical considerations, including patient consent, data privacy, and algorithmic
bias, must be carefully addressed to uphold patient rights and ensure equitable
healthcare delivery. Regulatory frameworks governing the use of predictive
healthcare technologies play a crucial role in safeguarding patient interests and
maintaining ethical standards.
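
As one concrete validation pattern consistent with the practices above, the sketch
below combines a held-out test split (independent validation) with grid-searched
cross-validation. The dataset and parameter values are illustrative assumptions, not
the project's exact configuration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Cross-validated hyperparameter search on the training fold only
grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}
search = GridSearchCV(SVC(), grid,
                      cv=StratifiedKFold(n_splits=5),
                      scoring="accuracy")
search.fit(X_train, y_train)

# Independent validation on the untouched test split
print(search.best_params_, search.score(X_test, y_test))
```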

Figure 4.1: Dashboard

Figure 4.2: Cancer Classification

Figure 4.3: Result of Cancer Module

Figure 4.4: History Section

Figure 4.5: Bar Chart of Cancer Dataset

TABLE 4.1: Model Accuracy

S.No.  Model                          Accuracy
1      naive_bayes                    0.936170
2      linear_discriminant_analysis   0.968085
3      logistic_regression            0.962766
4      SVC                            0.982456
5      kneighbors_classifier          0.941489
6      sgd_classifier                 0.856383
7      random_forest_classifier       0.962766
8      gradient_boosting_classifier   0.957447
9      xgboost_classifier             0.973404
10     adaboost_classifier            0.941489
11     lgbm_classifier                0.968085
12     etc_classifier                 0.968085

Based on the accuracy scores provided for various classifiers, here's a summary and a
recommendation for which classifiers are performing the best in this context:

1. Naive Bayes: 0.936170
2. Linear Discriminant Analysis (LDA): 0.968085
3. Logistic Regression: 0.962766
4. Support Vector Classifier (SVC): 0.982456
5. k-Nearest Neighbors (kNN): 0.941489
6. Stochastic Gradient Descent (SGD): 0.856383
7. Random Forest: 0.962766
8. Gradient Boosting: 0.957447
9. XGBoost: 0.973404
10. AdaBoost: 0.941489
11. LightGBM (LGBM): 0.968085
12. Extra Trees Classifier (ETC): 0.968085

Top Performing Classifiers:

1. Support Vector Classifier (SVC): 0.982456
2. XGBoost: 0.973404
3. Linear Discriminant Analysis (LDA): 0.968085
4. LightGBM (LGBM): 0.968085
5. Extra Trees Classifier (ETC): 0.968085
6. Logistic Regression: 0.962766
7. Random Forest: 0.962766

Analysis and Recommendation:

● SVC has the highest accuracy (0.982456), making it the best performing classifier in this
list.
● XGBoost also performs very well with a high accuracy of 0.973404.
● LDA, LightGBM, and ETC all have the same accuracy of 0.968085, showing strong
performance as well.
● Logistic Regression and Random Forest are tied with an accuracy of 0.962766, slightly
lower but still very competitive.

Given these results, the Support Vector Classifier (SVC) would be the recommended choice for
the highest accuracy. However, XGBoost is also a strong contender if you are looking for a
tree-based method. The choice between these might depend on other factors like training time,
interpretability, and computational resources.
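
For transparency, the small snippet below simply reproduces the ranking discussed
above by sorting the accuracies reported in Table 4.1; the numbers are taken from the
table, not recomputed.

```python
# Accuracies as reported in Table 4.1
accuracies = {
    "naive_bayes": 0.936170,
    "linear_discriminant_analysis": 0.968085,
    "logistic_regression": 0.962766,
    "SVC": 0.982456,
    "kneighbors_classifier": 0.941489,
    "sgd_classifier": 0.856383,
    "random_forest_classifier": 0.962766,
    "gradient_boosting_classifier": 0.957447,
    "xgboost_classifier": 0.973404,
    "adaboost_classifier": 0.941489,
    "lgbm_classifier": 0.968085,
    "etc_classifier": 0.968085,
}

# Rank models from best to worst by accuracy
for rank, (name, acc) in enumerate(
        sorted(accuracies.items(), key=lambda kv: kv[1], reverse=True), 1):
    print(f"{rank:2d}. {name:30s} {acc:.6f}")
```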

CHAPTER 5
CONCLUSION AND FUTURE SCOPE

5.1 CONCLUSION:
Revolutionizing Healthcare with Intelligent Disease Prediction Systems:
● Recapitulation of the Problem and Solution:
In this paper, we have explored the transformative potential of intelligent computer
systems in revolutionizing healthcare, particularly in disease prediction. We began by
recognizing the pressing need for improved healthcare accessibility, cost-efficiency, and
time-saving measures, especially in underserved communities. Traditional healthcare
systems often fall short in timely disease identification, leading to compromised health
outcomes and increased healthcare costs. To address these challenges, we proposed the
development of a "Multiple Disease Prediction System" powered by machine learning
algorithms.
● Contributions and Methodology:
Our study has made significant contributions across various stages, starting from data
collection and compilation to the deployment and accessibility of the predictive model.
By leveraging diverse datasets, selecting appropriate machine learning models, and
fine-tuning hyperparameters, we have laid the groundwork for a robust and accurate
disease prediction system. The integration of this model into a user-friendly web platform
ensures accessibility and real-time predictions, thereby bridging the gap between patients
and healthcare providers.
● Implications and Benefits:
The implications of our research extend beyond mere technological advancements. By
facilitating early disease detection and timely interventions, our predictive system has the
potential to alleviate the burden of chronic diseases and improve overall health outcomes.
Moreover, the cost-effectiveness and efficiency of our approach make healthcare more
accessible to a wider population, regardless of geographical or socioeconomic barriers.
This democratization of healthcare not only enhances individual well-being but also
fosters a healthier community at large.

● Future Directions and Challenges:
While our study represents a significant step forward in predictive healthcare, several
challenges and opportunities lie ahead. Continuous monitoring and model updating are
essential to maintaining the accuracy and relevance of our predictive system in the face of
evolving health trends and patient demographics. Additionally, addressing ethical and
privacy concerns surrounding the collection and utilization of sensitive health data
remains paramount. Collaborative efforts between researchers, healthcare providers,
policymakers, and technology developers will be crucial in overcoming these challenges
and realizing the full potential of intelligent disease prediction systems.
● Precision in Disease Identification:
In this examination of the transformative potential of intelligent computer systems in
healthcare, we've highlighted the critical role of machine learning algorithms in disease
prediction. Traditional methods of diagnosis often struggle to swiftly analyze vast
amounts of patient data, leading to delayed treatments and compromised outcomes. Our
proposed solution aims at overcoming this limitation by employing sophisticated
algorithms to expedite and enhance disease identification processes.
● Streamlined Approach for Improved Healthcare:
By focusing on a singular, user-friendly application that requires minimal input, our
system streamlines the diagnostic process for healthcare professionals. Utilizing machine
learning techniques such as KNN, SVM, Decision Tree, and Logistic Regression, we
ensure efficient and accurate predictions. This approach not only saves time and
resources but also empowers physicians to make informed decisions promptly, ultimately
improving patient care.
● Contributions and Future Directions:
Throughout our study, we've outlined significant contributions ranging from data
collection and model selection to deployment and continuous monitoring. Continuous
refinement and adaptation are essential to ensure the ongoing effectiveness and relevance
of our predictive system. Addressing challenges such as data privacy concerns and
algorithmic biases will be pivotal in shaping the future of predictive healthcare.
● Implications for Public Health:
The implications of our research extend beyond individual patient care to broader public
health outcomes. Early disease detection facilitated by our system has the potential to
mitigate the burden of chronic illnesses and reduce healthcare costs. By democratizing
access to predictive healthcare solutions, we pave the way for a more equitable and
resilient healthcare system, benefiting communities worldwide.

In conclusion, the development and implementation of intelligent disease prediction systems
mark a paradigm shift in healthcare delivery. By harnessing the power of machine learning and
data analytics, we have the opportunity to transform reactive healthcare models into proactive,
preventive measures. Our research underscores the importance of leveraging technology to
promote health equity, affordability, and efficiency. As we navigate the complexities of modern
healthcare, let us embrace innovation as a catalyst for building a healthier and more resilient
future for all. These systems herald a new era of precision healthcare: by leveraging
cutting-edge technology, we can revolutionize the way diseases are identified, treated,
and prevented. Our commitment to innovation and collaboration underscores our
collective endeavor to advance human health and well-being.

5.2 FUTURE SCOPE:

● Integration of Advanced Technologies: The future of disease prediction lies in the
integration of advanced technologies such as artificial intelligence (AI), deep learning,
and big data analytics. These technologies offer greater scalability, flexibility, and
predictive accuracy, enabling more robust disease prediction models.
● Personalized Medicine: There is a growing emphasis on personalized medicine, where
disease prediction models are tailored to individual patient profiles, genetic factors, and
lifestyle habits. By leveraging patient-specific data, including genetic markers and
biomarkers, predictive models can provide more targeted and effective interventions.
● Real-Time Monitoring: The development of wearable devices and mobile health
applications allows for real-time monitoring of patient health data, including vital signs,
activity levels, and physiological parameters. Integrating real-time data streams with
predictive algorithms enables continuous monitoring and early detection of health
abnormalities.
● Interdisciplinary Collaboration: Collaboration between healthcare professionals, data
scientists, engineers, and policymakers is essential for advancing the field of intelligent
disease prediction. Interdisciplinary research initiatives can facilitate the development of
innovative solutions, address complex healthcare challenges, and ensure the ethical and
responsible deployment of predictive technologies.
● Enhanced Data Governance: The establishment of robust data governance frameworks
is crucial for ensuring the quality, integrity, and privacy of patient data used in predictive
modeling. Clear guidelines and standards for data collection, storage, and sharing can
mitigate risks associated with data breaches, algorithmic biases, and ethical concerns.

● Global Health Initiatives: Addressing global health disparities requires collaborative
efforts to deploy intelligent disease prediction systems in underserved regions and
low-resource settings. Mobile health technologies, telemedicine platforms, and
community health interventions can extend the reach of predictive healthcare services to
remote populations, improving access to timely diagnosis and treatment.
● Longitudinal Studies and Predictive Modeling: Longitudinal studies that track patient
health over extended periods provide valuable insights into disease progression, risk
factors, and treatment outcomes. Integrating longitudinal data into predictive modeling
enhances the accuracy and reliability of disease predictions, enabling proactive healthcare
interventions.
● Continuous Model Optimization: Continuous model optimization through feedback
loops and iterative learning processes ensures the adaptability and relevance of predictive
models over time. By analyzing real-world outcomes and refining model parameters,
healthcare providers can enhance the effectiveness of disease prediction algorithms and
improve patient outcomes.
● Public Health Surveillance: Intelligent disease prediction systems can serve as powerful
tools for public health surveillance, monitoring disease trends, and outbreaks in real-time.
Early detection of emerging threats enables proactive response measures, including
vaccination campaigns, quarantine protocols, and targeted interventions to mitigate the
spread of infectious diseases.
● Patient Empowerment and Education: Empowering patients with access to their health
data, predictive insights, and personalized recommendations fosters greater engagement
and collaboration in disease prevention and management. Educational initiatives that
promote health literacy and self-care empower individuals to make informed decisions
about their health and well-being.
Looking ahead, the continued advancement of intelligent disease prediction systems holds
immense promise for transforming healthcare delivery and improving patient outcomes. Future
research efforts should focus on addressing the aforementioned challenges, refining predictive
models, and integrating innovative technologies such as artificial intelligence and big data
analytics. By harnessing the power of machine learning and data-driven insights, we can pave the
way for a future where healthcare is not only more precise and efficient but also more accessible
and equitable for all.

REFERENCES
[1] Laxmi Deepthi Gopisetti, Srinivas Karthik Lambavai Kummera, Sai Rohan Pattamsetti,
Sneha Kuna, Niharika Parsi, Hari Priya Kodali, "Multiple Disease Prediction Model by using
Machine Learning and Streamlit," 2023 IEEE, 5th International Conference on Smart Systems
and Inventive Technology (ICSSIT).

[2] Akkem Yaganteeswarudu, “Multi Disease Prediction Model by using Machine Learning”
2020 IEEE, 5th International Conference on Communication and Electronics Systems (ICCES).

[3] Elsevier B.V,” Diabetes Prediction Using Machine Learning” 2019, International Conference
on Recent Trends in Advanced Computing.

[4] “Prediction of Heart Disease using Machine Learning Algorithm” Mr. Santhana Krishnan J,
Dr. Geetha. S. (2018).

[5] “Multi Disease Prediction using Data Mining Techniques” K. Gomathi, Dr. D. Shanmuga
Priyaa (2017).

[6] Reza Rabiei, Seyed Mohammad Ayyoubzadeh, Solmaz Sohrabei, Marzieh Esmaeili, and
Alireza Atashi. "Prediction of Cancer Using Machine Learning Approaches." Journal of
Biomedical Physics and Engineering, 2021.

[7] Chaimaa Boukhatem, Heba Yahia Youssef, Ali Bou Nassif. February 2022 IEEE, Advances
in Science and Engineering Technology International Conferences (ASET).

[8] Supriya Kamoji, Dipali Koshti, Valiant Vincent D'mello, Alrich Agnel Kudel, Nash Rajesh
Vaz, Prediction of Parkinson's Disease using Machine Learning and Deep Transfer Learning
from different Feature Sets, July 2021 IEEE, 6th International Conference on Communication
and Electronics Systems (ICCES).

[9] Acharya, D. P., Adeli, H., & Nguyen, T. K. (2020). Application of machine learning in
Parkinson's disease diagnosis.

[10] Wang, H., Ding, Y., Tang, H., Wang, L., & Xia, J. (2018). Prediction of hepatitis B
infection among chronic hepatitis B carriers using machine learning algorithms. Frontiers
in Public Health.

[11] Patil, D., Khatri, R., & Saha, S. (2021). A comparative analysis of machine learning
algorithms for diabetes prediction. Journal of Ambient Intelligence and Humanized Computing.

[12] Sharmila, Leoni. (2022). A Comparative Study of Neural Network and Fuzzy Neural
Network for Classification. Volume 10, pp. 1371–78.

[13] Deo, R. C. (2015). Machine learning in medicine. Circulation, 132(20), 1920–1930.

[14] VijayaKumar, K., Lavanya, B., Nirmala, I., Caroline, S.S.: Random forest algorithm for
the prediction of diabetes. In: International Conference on System, Computation, Automation
and Networking.

[15] M. F. Rabbi, M. P. Uddin, M. A. Ali et al., "Performance evaluation of data mining
classification techniques for heart disease prediction," American Journal of Engineering
Research, vol. 7.

[16] H. Asri, H. Mousannif, H. A. Moatassim, and T. Noel, "Using Machine Learning
Algorithms for Breast Cancer Risk Prediction and Diagnosis," Procedia Computer Science,
vol. 83, pp. 1064–1069, 2016, doi: 10.1016/j.procs.2016.04.224.

[17] Mohan, N., Jain, V.: Performance analysis of support vector machines in diabetes
prediction. In: International Conference on Electronics, Communication and Aerospace
Technology.

[18] Priyanka Sonar, Prof. K. Jaya Malini, "Diabetes Prediction Using Different Machine
Learning Approaches," 2019 IEEE, 3rd International Conference on Computing Methodologies
and Communication (ICCMC).

[19] Archana Singh, Rakesh Kumar, "Heart Disease Prediction Using Machine Learning
Algorithms," 2020 IEEE, International Conference on Electrical and Electronics
Engineering (ICE3).

[20] "Prediction Support System for Multiple Disease Prediction Using Naive Bayes
Classifier." Selvaraj A, Mithra MK, Keerthana S, Deepika M. International Journal of
Engineering and Techniques, Volume 4, Issue 2, Mar–Apr 2021.

[21] "A Proposed Model for Lifestyle Disease Prediction Using Support Vector Machine."
Mrunmayi Patil, Vivian Brian Lobo, Pranav Puranik, Aditi Pawaskar, Adarsh Pai, Rupesh
Mishra, U.G. Student, Assistant Professor (2018).

[22] Albarqouni, S., Baur, C., Achilles, F., Belagiannis, V., Demirci, S., Navab, N. (2016).
AggNet: Mitosis Detection via Deep Learning from Crowds. IEEE Transactions on Medical
Imaging, 35(5), 1313–1321.

[23] Ardila, D., Kiraly, A. P., Bharadwaj, S., Choi, B., Reicher, J. J., Peng, L., ... &
Lungren, M. P. (2019). End-to-end lung cancer screening with three-dimensional deep
learning on low-dose chest computed tomography. Nature Medicine, 25(6), 954–961.

[24] Baldi, P., & Sadowski, P. (2014). The dropout learning algorithm. arXiv preprint
arXiv:1312.6197.

[25] Choi, H., Jin, K. H., & Ye, J. C. (2018). Deep learning-based image conversion of CT
reconstruction kernels improves radiomics reproducibility for pulmonary nodules or masses.
Radiology, 290(3), 771–781.

[26] Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S.
(2017). Dermatologist-level classification of skin cancer with deep neural networks.
Nature, 542(7639), 115–118.

[27] Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., ... & Kim, R.
(2016). Development and validation of a deep learning algorithm for detection of diabetic
retinopathy in retinal fundus photographs. JAMA, 316(22), 2402–2410.

[28] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image
recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition (pp. 770–778).

[29] Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. R.
(2012). Improving neural networks by preventing co-adaptation of feature detectors.
arXiv preprint arXiv:1207.0580.

[30] Guyon, I., Weston, J., Barnhill, S., & Vapnik, V. (2002). Gene selection for cancer
classification using support vector machines. Machine Learning, 46, 389–422.

[31] Hastie, T., Tibshirani, R., & Friedman, J. (2009). The Elements of Statistical
Learning: Data Mining, Inference, and Prediction. Springer Science & Business Media.

[32] James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An Introduction to
Statistical Learning (Vol. 112). Springer, New York.

[33] Mitchell, T. (1997). Machine Learning. McGraw-Hill, New York.

[34] Strobl, C., Boulesteix, A. L., Zeileis, A., & Hothorn, T. (2007). Bias in random forest
variable importance measures: Illustrations, sources and a solution. BMC Bioinformatics,
8(1), 25.

PUBLICATION DETAILS

APPENDIX

PLAGIARISM REPORT
