Liver Disease Prediction Using Machine Learning and Deep Learning

ABSTRACT

Liver disease has recently become one of the most lethal disorders in many countries. The number of patients with liver disorders is rising because of alcohol intake, inhalation of harmful gases, and consumption of contaminated food and drugs. Liver patient datasets are being studied to develop classification models that predict liver disorders; such prediction and classification algorithms can in turn reduce the workload on doctors. The dataset used in this work is the Liver Patient dataset from the UCI Repository (a supervised-learning task). Hospitals hold plenty of data on patients undergoing medical examination, and records of liver patients can be extracted from it and used to improve their future condition. In other words, historical, labelled patient input and output data is fed into various algorithms or classifiers to predict outcomes for new patients. In this work, we apply machine learning and deep learning algorithms to detect liver disorder in patients. The algorithms used here for predicting liver disease are Decision Tree, K-Nearest Neighbour, and Artificial Neural Network. Based on the analysis and result calculations, these algorithms obtained good accuracy after feature selection: the Decision Tree leads with a remarkable 99.96% accuracy in liver disease prediction, K-Nearest Neighbour follows closely at 97.42%, and the Artificial Neural Network achieves 71.55%, offering diverse options for predictive analysis.
1.INTRODUCTION

The liver is one of the most vital organs in the human body. It breaks down insulin, and it conjugates bilirubin through glucuronidation, which aids its excretion into bile [1]. It is also responsible for breaking down and excreting many waste products, and it plays a significant role in neutralizing toxic substances and in breaking down medicinal products, a process known as drug metabolism. The liver weighs about 1.3 kg and consists of two large lobes, the right lobe and the left lobe. The gallbladder is located below the liver, near the pancreas; together with these organs, the liver helps digest food and deliver nutrition. One of its jobs is to filter harmful materials from the blood arriving from the stomach before passing it on to the rest of the body. Liver diseases are triggered when the functioning of the liver is impaired or it suffers an injury [2]. The development of liver disorders [3] is complicated and varied in character, influenced by a number of variables that determine disease susceptibility: sex, ethnicity, genetics, environmental exposures (viruses, alcohol, nutrition, and chemicals), body mass index (BMI), and coexisting diseases such as diabetes. Liver problems are life-threatening diseases associated with a high mortality rate. Routine urine and blood tests are the first step in the prognosis of liver disorders; based on the symptoms seen, a liver function test (LFT) is recommended for the patient [4]. Liver disease is a significant health issue affecting millions of people globally, and early detection and accurate classification can lead to better patient outcomes and reduce the burden on the healthcare system. One-third of adults, and an increasing proportion of children, in affluent nations suffer from non-alcoholic fatty liver disease (NAFLD) [5], a growing health issue. The condition begins with an abnormal buildup of triglycerides in the liver, which in some people causes an inflammatory reaction that can lead to cirrhosis and liver cancer. While there is a significant correlation between obesity, insulin resistance, and NAFLD, its pathophysiology remains poorly understood and treatment options are limited. However, machine learning techniques have demonstrated encouraging results in predicting and categorizing liver diseases based on patient data. By utilizing sophisticated algorithms to analyze and learn from large datasets, these techniques can identify patterns and anticipate outcomes. The employment of machine learning in liver disease prediction and classification is a dynamic area of research, with continual advancements being made to enhance accuracy and decrease healthcare costs.

Chemical Compounds in the Liver

Chemicals such as bilirubin, albumin, alkaline phosphatase, aspartate aminotransferase, and globulin are present in the liver and perform a vital role in the daily operation of a healthy liver.

1) Bilirubin: Bilirubin is a yellowish compound produced in the normal catabolic pathway that breaks down heme in vertebrates. It is excreted in bile and urine, and raised levels of bilirubin in the body cause disease. Bilirubin is responsible for the yellow color of bruises and for the yellow staining seen in jaundice. Its subsequent breakdown product stercobilin is responsible for the brown color of feces, while another breakdown product, urobilin, is the key constituent of the straw-yellow color of urine.
2) Alkaline phosphatase: In humans, alkaline phosphatase is present in all tissues throughout the body but is mainly concentrated in the liver, intestinal mucosa, bile duct, bone, kidney, and placenta. In the serum, two kinds of alkaline phosphatase isozymes prevail: skeletal and liver. In childhood, most of the alkaline phosphatase is of skeletal origin. Most mammals, including humans, have these types of alkaline phosphatases:

• ALPI: It is intestinal, with a molecular mass of 150 kDa.

• ALPL: It is tissue-nonspecific, mainly present in the liver, kidney, and bone.

• ALPP: It is placental and is also known as the Regan isozyme.

• GCAP: It is the germ-cell form.

3) Aspartate aminotransferase: AST is an enzyme whose levels are highest in the heart and liver; it is also found in the kidneys and muscles, although in smaller amounts, and is very low in healthy human blood. When muscle or liver cells are injured, AST is released into the bloodstream, so the AST test is useful for tracking or identifying liver damage or dysfunction.

4) Albumin: Albumins are globular proteins. Serum albumin is the most abundant and most important protein of the blood. It binds thyroxine (T4), water, cations such as Ca2+ and Na+, hormones, fatty acids, bilirubin, and pharmaceuticals. Its core role is to regulate and normalize the oncotic pressure of the blood.

5) Globulin: Globulins are globular proteins that are heavier than albumin at the molecular level; they do not dissolve in pure water but do dissolve in dilute salt solutions. The liver produces some globulins. The globulin concentration in healthy human blood is around 2.6-3.5 g/dL. There are several different types of globulins, including alpha 1, alpha 2, beta, and gamma globulins. Any abnormal amount of these chemicals can cause an imbalance and indicate liver disease. These chemicals are treated as features: there are many kinds of liver illness, and they are distinguished based on the proportions of these chemicals secreted.
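Assuming the dataset referred to is the UCI Indian Liver Patient Dataset (ILPD), its feature columns correspond closely to the chemicals above. The sketch below lists those columns (names as commonly distributed, including the dataset's original spelling "Total_Protiens") and shows how the albumin/globulin ratio relates to the measured values; the example concentrations are hypothetical.

```python
# Feature columns of the UCI Indian Liver Patient Dataset (ILPD), as
# commonly distributed; most map onto the chemicals described above.
ILPD_FEATURES = [
    "Age", "Gender", "Total_Bilirubin", "Direct_Bilirubin",
    "Alkaline_Phosphotase", "Alamine_Aminotransferase",
    "Aspartate_Aminotransferase", "Total_Protiens",
    "Albumin", "Albumin_and_Globulin_Ratio",
]

def albumin_globulin_ratio(total_protein, albumin):
    """Globulin is not reported directly in the dataset: it is total
    protein minus albumin, so the A/G ratio column can be recovered
    from the two reported values."""
    globulin = total_protein - albumin
    return round(albumin / globulin, 2)

# Hypothetical patient: total protein 6.8 g/dL, albumin 3.3 g/dL
# -> globulin 3.5 g/dL, so A/G ratio ~ 0.94
ratio = albumin_globulin_ratio(6.8, 3.3)
```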

2.LITERATURE SURVEY
2.1 Deep learning in liver biopsies using convolutional neural networks

AUTHORS: R. Forlano, P. Manousou, and N. Giannakeas,

ABSTRACT: Nonalcoholic fatty liver disease (NAFLD) presents a wide range


of pathological conditions, varying from nonalcoholic steatohepatitis (NASH)
to cirrhosis and hepatocellular carcinoma (HCC). These conditions are
characterized by increased fat accumulation and hepatocellular ballooning.
They have become a cause of concern among physicians and engineers, as
significant implications tend to occur regarding their accurate diagnosis and
treatment. Although magnetic resonance, ultrasonography and other
noninvasive methods can reveal the presence of NAFLD, image quantitative
interpretation through histology has become the gold standard in clinical
examinations. The proposed work introduces a fully automated diagnostic tool,
taking into account the high discrimination capability of histological findings in
liver biopsy images. The developed methodology is based on deep supervised
learning and image analysis techniques, with the determination of an efficient
convolutional neural network (CNN) architecture, eventually achieving a
classification accuracy of 95%.

2.2 Accuracy prediction using machine learning techniques for indian patient
liver disease

AUTHORS: L. A. Auxilia,

ABSTRACT: The utilization of medical datasets has attracted the attention of
researchers worldwide. Machine learning methods have been widely used to build
decision support systems for disease prediction from collections of medical
datasets, and ensembles of classifiers have been widely used in the medical
field to obtain more accurate classification than an individual classifier.
Liver disease (also called hepatic disease) is a type of damage to, or illness
of, the liver, and there are more than a hundred different kinds of it. In this
work, I have taken the records of general Indian liver disease patients to
support decision making. Results on the Indian Liver Patient dataset show that
the proposed technique greatly improves disease prediction accuracy.

2.3 Applying machine learning in liver disease and transplantation: a


comprehensive review

AUTHORS: B. Wang, A. Goldenberg, and M. Bhat

ABSTRACT: Machine learning (ML) utilizes artificial intelligence to generate


predictive models efficiently and more effectively than conventional methods
through detection of hidden patterns within large data sets. With this in mind,
there are several areas within hepatology where these methods can be applied.
In this review, we examine the literature pertaining to machine learning in
hepatology and liver transplant medicine. We provide an overview of the
strengths and limitations of ML tools and their potential applications to both
clinical and molecular data in hepatology. ML has been applied to various types
of data in liver disease research, including clinical, demographic, molecular,
radiological, and pathological data. We anticipate that use of ML tools to
generate predictive algorithms will change the face of clinical practice in
hepatology and transplantation. This review will provide readers with the
opportunity to learn about the ML tools available and potential applications to
questions of interest in hepatology.

2.4 Diagnosis of liver diseases using machine learning

AUTHORS: S. Sontakke, J. Lohokare, and R. Dani,


ABSTRACT: Liver diseases account for over 2.4% of Indian deaths per
annum [14]. Liver disease is also difficult to diagnose in the early stages
owing to subtle symptoms; often the symptoms become apparent when it is too
late [1]. This paper aims to improve the diagnosis of liver diseases by
exploring two methods of identification: patient parameters and genome
expression. The paper also discusses the computational algorithms that can be
used in the aforementioned methodology, lists their demerits, and proposes
methods to improve the efficiency of these algorithms.

2.5 Human fatty liver disease: old questions and new insights

AUTHORS: J. C. Cohen, J. D. Horton, and H. H. Hobbs,

ABSTRACT: Nonalcoholic fatty liver disease (NAFLD) is a burgeoning health


problem that affects one-third of adults and an increasing number of children in
developed countries. The disease begins with the aberrant accumulation of
triglyceride in the liver, which in some individuals elicits an inflammatory
response that can progress to cirrhosis and liver cancer. Although NAFLD is
strongly associated with obesity and insulin resistance, its pathogenesis remains
poorly understood, and therapeutic options are limited. Here, we discuss recent
mechanistic insights into NAFLD, focusing primarily on those that have
emerged from human genetic and metabolic studies.

3.SYSTEM ANALYSIS
3.1 EXISTING SYSTEM:
In the existing system, different classifiers were implemented on a liver patient
disease dataset to predict liver diseases using developed software. The dataset
was processed and the classifiers were run in the WEKA tool, using feature
selection techniques with the 10-fold cross-validation testing option. After the
different classifiers had been implemented, the results were compared with and
without feature selection in terms of execution time and accuracy. During the
research work, other parameters such as the kappa statistic, correctly classified
instances, and mean absolute error were also compared on the liver patient
disease dataset.
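The 10-fold cross-validation testing option mentioned above partitions the records into ten folds, holding each fold out once as the test set. A minimal sketch of the index bookkeeping in pure Python (shuffling and stratification are omitted for brevity; 583 is the record count of the Indian Liver Patient dataset):

```python
def kfold_indices(n_samples, k=10):
    """Split sample indices into k nearly equal folds. Each fold serves
    once as the held-out test set while the remaining folds form the
    training set; every sample is tested exactly once."""
    indices = list(range(n_samples))
    # Distribute the remainder so fold sizes differ by at most one.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    # Pair each test fold with the concatenation of the other folds.
    return [([j for f in folds[:i] + folds[i + 1:] for j in f], folds[i])
            for i in range(k)]

splits = kfold_indices(583, k=10)  # the ILPD dataset has 583 records
```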

3.1.1 DISADVANTAGES OF EXISTING SYSTEM:

 Accuracy is low when using the WEKA tool.

 It takes more time to compare and classify diseases.

3.2 PROPOSED SYSTEM:

In the proposed system, we first import the liver patient dataset (.csv). The
dataset is then pre-processed: anomalies are removed and empty cells are filled
in, which further improves the effectiveness of liver disease prediction. Next,
we construct a confusion matrix to obtain a clearer view of the number of
correct and incorrect predictions. Finally, several classification and
prediction procedures, and where possible combinations of different algorithms,
are implemented and their accuracy is checked.
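The pre-processing and confusion-matrix steps described above can be sketched in a few lines of pure Python; the column values and labels below are hypothetical stand-ins for dataset entries.

```python
def impute_mean(column):
    """Pre-processing step: fill empty cells (None) in a numeric column
    with the mean of the values that are present."""
    present = [v for v in column if v is not None]
    mean = sum(present) / len(present)
    return [mean if v is None else v for v in column]

def confusion_matrix(y_true, y_pred):
    """2x2 matrix [[TN, FP], [FN, TP]] for binary labels 0/1, giving a
    clear view of the correct and incorrect predictions."""
    m = [[0, 0], [0, 0]]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

# Hypothetical column with one empty cell: mean of 2.0 and 4.0 fills it.
col = impute_mean([2.0, None, 4.0])               # -> [2.0, 3.0, 4.0]

# Hypothetical true vs predicted labels for five patients.
cm = confusion_matrix([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
accuracy = (cm[0][0] + cm[1][1]) / 5              # correct / total
```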

3.2.1 ADVANTAGES OF PROPOSED SYSTEM:

The advantages are improved classification, early prediction of risks, and
improved accuracy. The Decision Tree boasts remarkable accuracy at 99.96%,
leading the pack in precise liver disease predictions.

3.3 SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

• System : Pentium IV, 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 15" VGA Colour
• Mouse : Logitech
• RAM : 512 MB

SOFTWARE REQUIREMENTS:

• Operating System: Windows

• Coding Language: Python 3.7

3.4 SYSTEM STUDY


FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase and business


proposal is put forth with a very general plan for the project and some cost
estimates. During system analysis the feasibility study of the proposed system is
to be carried out. This is to ensure that the proposed system is not a burden to
the company. For feasibility analysis, some understanding of the major
requirements for the system is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system will
have on the organization. The amount of funding that the company can pour into
the research and development of the system is limited, so the expenditure must
be justified. The developed system is well within the budget, which was
achieved because most of the technologies used are freely available; only the
customized products had to be purchased.
TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the
technical requirements of the system. Any system developed must not place a
high demand on the available technical resources, as this would in turn place
high demands on the client. The developed system must therefore have modest
requirements; only minimal or no changes are needed to implement this system.

SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the
user. This includes the process of training the user to use the system
efficiently. The user must not feel threatened by the system, but must instead
accept it as a necessity. The level of acceptance by the users depends on the
methods employed to educate them about the system and make them familiar with
it. Their confidence must be raised so that they can also offer constructive
criticism, which is welcomed, as they are the final users of the system.

4.SYSTEM DESIGN

4.1 SYSTEM ARCHITECTURE:


4.2 DATA FLOW DIAGRAM:

1. The DFD is also called a bubble chart. It is a simple graphical formalism
that can be used to represent a system in terms of the input data to the
system, the various processing carried out on this data, and the output data
generated by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It
is used to model the system components: the system processes, the data used by
the processes, the external entities that interact with the system, and the
information flows in the system.
3. A DFD shows how information moves through the system and how it is modified
by a series of transformations. It is a graphical technique that depicts
information flow and the transformations applied as data moves from input to
output.
4. A DFD may be used to represent a system at any level of abstraction, and it
may be partitioned into levels that represent increasing information flow and
functional detail.

The DFD for this system proceeds as follows: the user is checked for
authorization (unauthorized users are rejected); the liver dataset is uploaded;
data preprocessing, data analysis, and model generation are performed, and a
comparison graph is produced; a Flask object is created and the trained model
is loaded; the user then registers and logs in, uploads patient details, and
receives the predicted result, after which the process ends.
4.3 UML DIAGRAMS

UML stands for Unified Modeling Language. UML is a standardized
general-purpose modeling language in the field of object-oriented software
engineering. The standard is managed, and was created, by the Object
Management Group.
The goal is for UML to become a common language for creating models of
object-oriented computer software. In its current form, UML comprises two
major components: a meta-model and a notation. In the future, some form of
method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying,
visualizing, constructing, and documenting the artifacts of a software system,
as well as for business modeling and other non-software systems.
The UML represents a collection of best engineering practices that have
proven successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software
and the software development process. The UML uses mostly graphical notations
to express the design of software projects.

GOALS:
The primary goals in the design of the UML are as follows:
1. Provide users with a ready-to-use, expressive visual modeling language so
that they can develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core
concepts.
3. Be independent of particular programming languages and development
processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations,
frameworks, patterns, and components.
7. Integrate best practices.

Use case diagram:

A use case diagram in the Unified Modeling Language (UML) is a type of


behavioral diagram defined by and created from a Use-case analysis. Its purpose is
to present a graphical overview of the functionality provided by a system in terms
of actors, their goals (represented as use cases), and any dependencies between
those use cases. The main purpose of a use case diagram is to show what system
functions are performed for which actor. Roles of the actors in the system can be
depicted.
Class diagram:

The class diagram is used to refine the use case diagram and define a detailed design of
the system. The class diagram classifies the actors defined in the use case diagram into a set of
interrelated classes. The relationship or association between the classes can be either an "is-a" or
"has-a" relationship. Each class in the class diagram may be capable of providing certain
functionalities. These functionalities provided by the class are termed "methods" of the class.
Apart from this, each class may have certain "attributes" that uniquely identify the class.
Object diagram:

The object diagram is a special kind of class diagram. An object is an instance of a class.
This essentially means that an object represents the state of a class at a given point of time while
the system is running. The object diagram captures the state of different classes in the system and
their relationships or associations at a given point of time.
State diagram:

A state diagram, as the name suggests, represents the different states that objects in the
system undergo during their life cycle. Objects in the system change states in response to events.
In addition to this, a state diagram also captures the transition of the object's state from an initial
state to a final state in response to events affecting the system.
Activity diagram:

The process flows in the system are captured in the activity diagram. Similar to a state
diagram, an activity diagram also consists of activities, actions, transitions, initial and final
states, and guard conditions.
Sequence diagram:

A sequence diagram represents the interaction between different objects in the system. The
important aspect of a sequence diagram is that it is time-ordered. This means that the exact
sequence of the interactions between the objects is represented step by step. Different objects in
the sequence diagram interact with each other by passing "messages".
Collaboration diagram:

A collaboration diagram groups together the interactions between different objects. The
interactions are listed as numbered interactions that help to trace the sequence of the interactions.
The collaboration diagram helps to identify all the possible interactions that each object has with
other objects.

4.4 IMPLEMENTATION:
MODULES:
Admin:

Login: In this module, the admin logs in with a username and password.

Upload Dataset: The liver disease dataset (.csv) is uploaded.

Data Preprocess: The liver disease dataset is preprocessed and made ready for analysis.

Decision Tree: The model is built and achieves 99.96% accuracy.

KNearestNeighbour: The model is built and achieves 97.42% accuracy.

Artificial Neural Network: The model is built and achieves 71.55% accuracy.

Comparison: View the accuracy results of the algorithms.

Logout: Log out of the admin account.

User:

Register: Sign up for a new account.

Login: In this module, the user logs in with a username and password.

View Profile: In this module, the user can view their profile information.

Predict Liver Disease: In this module, the user can predict the result using patient data.

Logout: Log out of the account.
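The DFD describes creating a Flask object and loading the trained model before serving predictions. A minimal sketch of what the Predict Liver Disease endpoint could look like; the route name, JSON field name, and the stand-in threshold rule are all hypothetical, and the real project would load its trained classifier in place of the toy rule.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_liver_disease(features):
    """Hypothetical stand-in for the trained model loaded at startup.
    Toy rule: flag a raised total bilirubin (> 1.2 mg/dL) as disease (1)."""
    return 1 if features.get("Total_Bilirubin", 0) > 1.2 else 0

@app.route("/predict", methods=["POST"])
def predict():
    # The user's uploaded patient details arrive as a JSON object.
    features = request.get_json()
    return jsonify({"liver_disease": predict_liver_disease(features)})
```

In the full system this handler would sit behind the login check described in the user module, and `predict_liver_disease` would call the pickled classifier's `predict` method.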

4.5 ALGORITHMS:
Decision tree classifiers

Decision tree classifiers are used successfully in many diverse areas. Their most
important feature is the capability of capturing descriptive decision-making
knowledge from the supplied data. A decision tree can be generated from a
training set. The procedure for such generation, based on a set of objects S,
each belonging to one of the classes C1, C2, ..., Ck, is as follows:

Step 1. If all the objects in S belong to the same class, say Ci, the decision
tree for S consists of a leaf labeled with this class.
Step 2. Otherwise, let T be some test with possible outcomes O1, O2, ..., On. Each
object in S has one outcome for T, so the test partitions S into subsets S1, S2, ..., Sn,
where each object in Si has outcome Oi for T. T becomes the root of the decision
tree, and for each outcome Oi we build a subsidiary decision tree by invoking the
same procedure recursively on the set Si.
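The two steps above translate almost directly into a recursive function. A minimal sketch on hypothetical categorical data follows; note that the choice of test T here is simply the first unused attribute, whereas practical implementations pick the most informative test (e.g. by information gain).

```python
from collections import Counter

def build_tree(objects, attributes):
    """objects: list of (features_dict, class_label) pairs.
    Step 1: if every object in S has the same class, return a leaf
    labeled with that class.
    Step 2: otherwise pick a test T (here: the first unused attribute),
    partition S into subsets by T's outcomes, and recurse on each."""
    classes = [c for _, c in objects]
    if len(set(classes)) == 1:
        return classes[0]                        # Step 1: leaf node
    if not attributes:                           # no test left: majority leaf
        return Counter(classes).most_common(1)[0][0]
    test = attributes[0]                         # T becomes the root
    subsets = {}
    for feats, c in objects:                     # partition S into S1..Sn
        subsets.setdefault(feats[test], []).append((feats, c))
    return {test: {outcome: build_tree(subset, attributes[1:])
                   for outcome, subset in subsets.items()}}

def classify(tree, feats):
    """Follow the branches matching the object's outcomes down to a leaf."""
    while isinstance(tree, dict):
        attr, branches = next(iter(tree.items()))
        tree = branches[feats[attr]]
    return tree

# Hypothetical toy data: raised bilirubin ("high") indicates disease (1).
data = [({"bilirubin": "high"}, 1), ({"bilirubin": "normal"}, 0),
        ({"bilirubin": "high"}, 1)]
tree = build_tree(data, ["bilirubin"])
```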

K-Nearest Neighbors (KNN)

 Simple, but a very powerful classification algorithm

 Classifies based on a similarity measure


 Non-parametric
 Lazy learning
 Does not “learn” until the test example is given

 Whenever we have a new data point to classify, we find its K nearest
neighbors in the training data

Example

 Prediction uses the k closest examples to the query in feature space

 Feature space here means the space spanned by the feature variables

 Learning is instance-based and therefore lazy: because no model is built in
advance, finding the training instances close to a test or prediction input
vector happens only at query time, which may take time over a large training
dataset
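The points above can be sketched in a few lines of pure Python: measure Euclidean distance from the query to every training example, keep the k nearest, and take a majority vote. The feature vectors and labels below are hypothetical.

```python
from collections import Counter

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs. Being lazy, no model
    is built in advance: at query time we sort the whole training set by
    distance to the query, take the k nearest, and return the majority
    label among them."""
    nearest = sorted(train, key=lambda item: euclidean(item[0], query))[:k]
    votes = [label for _, label in nearest]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical, already-scaled 2-D feature vectors (e.g. bilirubin, albumin):
train = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 0.8), 1),
         ((0.8, 0.9), 1), ((0.85, 0.8), 1)]
label = knn_predict(train, (0.9, 0.85), k=3)  # all 3 nearest are labelled 1
```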

Multi-layer ANN
Deep learning deals with training multi-layer artificial neural networks, also
called deep neural networks. After the Rosenblatt perceptron was developed in
the 1950s, there was little interest in neural networks until 1986, when
Dr. Hinton and his colleagues developed the backpropagation algorithm to train
a multilayer neural network. Today it is a hot topic, with many leading firms
such as Google, Facebook, and Microsoft investing heavily in applications that
use deep neural networks.

A fully connected multi-layer neural network is called a Multilayer Perceptron
(MLP). The simplest MLP has three layers, including one hidden layer; a network
with more than one hidden layer is called a deep ANN. An MLP is a typical
example of a feedforward artificial neural network, in which the ith activation
unit in the lth layer is denoted ai(l).

The number of layers and the number of neurons are referred to as the
hyperparameters of a neural network, and these need tuning; cross-validation
techniques must be used to find good values for them.

The weight-adjustment training is done via backpropagation. Deeper neural
networks are better at processing complex data; however, deeper layers can lead
to the vanishing-gradient problem, and special algorithms are required to
address this issue.
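As a concrete illustration of the forward pass and backpropagation weight adjustment described above, here is a minimal NumPy sketch of a 2-4-1 MLP with sigmoid activations trained on the XOR problem, a toy stand-in for the liver dataset (the layer sizes, learning rate, and epoch count are arbitrary choices, not the project's settings).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set (XOR): not linearly separable, so the hidden layer is needed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units; weights and biases are the trainable parameters.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

lr = 1.0
losses = []
for _ in range(5000):
    # Forward pass: hidden activations a(1), then the output activation.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: propagate the squared-error gradient through sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Weight adjustment (gradient descent step).
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```

The recorded losses fall as training proceeds, which is exactly the behaviour the backpropagation step is meant to produce.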
5.SOFTWARE ENVIRONMENT

What is Python :-
Below are some facts about Python.

Python is currently one of the most widely used multi-purpose, high-level programming languages.

Python allows programming in both object-oriented and procedural paradigms. Python
programs are generally smaller than equivalent programs in other languages like Java.

Programmers have to type relatively little, and the indentation requirement of the
language keeps programs readable at all times.

The Python language is used by almost all tech giants, such as Google,
Amazon, Facebook, Instagram, Dropbox, and Uber.

The biggest strength of Python is its huge collection of standard libraries, which
can be used for the following:

 Machine Learning
 GUI Applications (like Kivy, Tkinter, PyQt etc. )
 Web frameworks like Django (used by YouTube, Instagram, Dropbox)
 Image processing (like Opencv, Pillow)
 Web scraping (like Scrapy, BeautifulSoup, Selenium)
 Test frameworks
 Multimedia

Advantages of Python :-

Let’s see how Python dominates over other languages.


1. Extensive Libraries

Python downloads with an extensive library and it contain code for various purposes like
regular expressions, documentation-generation, unit-testing, web browsers, threading,
databases, CGI, email, image manipulation, and more. So, we don’t have to write the
complete code for that manually.

2. Extensible

As we have seen earlier, Python can be extended to other languages. You can write some
of your code in languages like C++ or C. This comes in handy, especially in projects.

3. Embeddable

Complimentary to extensibility, Python is embeddable as well. You can put your Python
code in your source code of a different language, like C++. This lets us add scripting
capabilities to our code in the other language.

4. Improved Productivity

The language's simplicity and extensive libraries render programmers more productive
than languages like Java and C++ do: you write less and get more done.

5. IOT Opportunities

Since Python forms the basis of new platforms like Raspberry Pi, it finds the future bright for
the Internet Of Things. This is a way to connect the language with the real world.

6. Simple and Easy

When working with Java, you may have to create a class to print ‘Hello World’. But in
Python, just a print statement will do. It is also quite easy to learn, understand, and code.
This is why when people pick up Python, they have a hard time adjusting to other more
verbose languages like Java.
7. Readable

Because it is not such a verbose language, reading Python is much like reading English.
This is the reason why it is so easy to learn, understand, and code. It also does not need
curly braces to define blocks, and indentation is mandatory. This further aids the
readability of the code.

8. Object-Oriented

This language supports both the procedural and object-oriented programming paradigms.
While functions help us with code reusability, classes and objects let us model the real
world. A class allows the encapsulation of data and functions into one.

9. Free and Open-Source

Like we said earlier, Python is freely available. But not only can you download Python for
free, but you can also download its source code, make changes to it, and even distribute it. It
downloads with an extensive collection of libraries to help you with your tasks.

10. Portable

When you code your project in a language like C++, you may need to make some changes
to it if you want to run it on another platform. But it isn’t the same with Python. Here, you
need to code only once, and you can run it anywhere. This is called Write Once Run
Anywhere (WORA). However, you need to be careful enough not to include any system-
dependent features.

11. Interpreted

Lastly, we will say that it is an interpreted language. Since statements are executed one by
one, debugging is easier than in compiled languages.
Advantages of Python Over Other Languages

1. Less Coding

Almost every task done in Python requires less coding than the same task in
other languages. Python also has awesome standard library support, so you don't
have to search for third-party libraries to get your job done. This is the
reason many people suggest learning Python to beginners.

2. Affordable

Python is free, so individuals, small companies, and big organizations can
leverage the freely available resources to build applications. Python is
popular and widely used, which gives you better community support.

The 2019 Github annual survey showed us that Python has overtaken Java in the most
popular programming language category.

3. Python is for Everyone

Python code can run on any machine whether it is Linux, Mac or Windows. Programmers
need to learn different languages for different jobs but with Python, you can professionally
build web apps, perform data analysis and machine learning, automate things, do web
scraping and also build games and powerful visualizations. It is an all-rounder programming
language.

Disadvantages of Python

So far, we’ve seen why Python is a great choice for your project. But if you choose it, you
should be aware of its consequences as well. Let’s now see the downsides of choosing
Python over another language.
1. Speed Limitations

We have seen that Python code is executed line by line. But since Python is interpreted, it
often results in slow execution. This, however, isn’t a problem unless speed is a focal point
for the project. In other words, unless high speed is a requirement, the benefits offered by
Python are enough to distract us from its speed limitations.

2. Weak in Mobile Computing and Browsers

While it serves as an excellent server-side language, Python is rarely seen on the
client side. Beyond that, it is seldom used to implement smartphone-based
applications; one such application is called Carbonnelle.
The reason browser-side Python is not popular, despite the existence of Brython, is that it isn’t
considered secure enough.

3. Design Restrictions

As you know, Python is dynamically typed. This means that you don’t need to declare the
type of a variable while writing the code. It uses duck typing. But wait, what’s that? Well, it
just means that if it looks like a duck, it must be a duck. While this makes coding easy for
programmers, it can raise run-time errors.
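The duck-typing trade-off described above can be seen in a small sketch (the class and function names here are purely illustrative):

```python
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    def quack(self):
        return "I'm quacking!"

def make_it_quack(thing):
    # No type declaration needed: anything with a quack() method works.
    return thing.quack()

print(make_it_quack(Duck()))    # Quack!
print(make_it_quack(Person()))  # I'm quacking!

# But an object without quack() fails only at run time, not before:
try:
    make_it_quack(42)
except AttributeError as error:
    print("Run-time error:", error)
```

Nothing warned us that `42` was a bad argument until the program actually ran — exactly the kind of run-time error the paragraph above refers to.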

4. Underdeveloped Database Access Layers

Compared to more widely used technologies like JDBC (Java Database Connectivity) and
ODBC (Open Database Connectivity), Python’s database access layers are a bit
underdeveloped. Consequently, it is less often applied in huge enterprises.

5. Simple

No, we’re not kidding. Python’s simplicity can indeed be a problem. Take my example. I
don’t do Java, I’m more of a Python person. To me, its syntax is so simple that the verbosity
of Java code seems unnecessary.

This was all about the Advantages and Disadvantages of Python Programming Language.

History of Python : -
What do the alphabet and the programming language Python have in common? Right, both
start with ABC. If we are talking about ABC in the Python context, it's clear that the
programming language ABC is meant. ABC is a general-purpose programming language and
programming environment, which had been developed in the Netherlands, Amsterdam, at the
CWI (Centrum Wiskunde & Informatica). The greatest achievement of ABC was to influence
the design of Python. Python was conceptualized in the late 1980s. Guido van Rossum
worked at that time at the CWI on a project called Amoeba, a distributed operating system. In
an interview with Bill Venners, Guido van Rossum said: "In the early 1980s, I worked as an
implementer on a team building a language called ABC at Centrum voor Wiskunde en
Informatica (CWI). I don't know how well people know ABC's influence on Python. I try to
mention ABC's influence because I'm indebted to everything I learned during that project
and to the people who worked on it." Later in the same interview, Guido van Rossum
continued: "I remembered all my experience and some of my frustration with ABC. I
decided to try to design a simple scripting language that possessed some of ABC's better
properties, but without its problems. So I started typing. I created a simple virtual machine, a
simple parser, and a simple runtime. I made my own version of the various ABC parts that I
liked. I created a basic syntax, used indentation for statement grouping instead of curly
braces or begin-end blocks, and developed a small number of powerful data types: a hash
table (or dictionary, as we call it), a list, strings, and numbers."

What is Machine Learning : -

Before we take a look at the details of various machine learning methods, let's start by
looking at what machine learning is, and what it isn't. Machine learning is often categorized
as a subfield of artificial intelligence, but I find that categorization can often be misleading at
first brush. The study of machine learning certainly arose from research in this context, but
in the data science application of machine learning methods, it's more helpful to think of
machine learning as a means of building models of data.

Fundamentally, machine learning involves building mathematical models to help understand


data. "Learning" enters the fray when we give these models tunable parameters that can be
adapted to observed data; in this way the program can be considered to be "learning" from
the data. Once these models have been fit to previously seen data, they can be used to predict
and understand aspects of newly observed data. I'll leave to the reader the more philosophical
digression regarding the extent to which this type of mathematical, model-based "learning" is
similar to the "learning" exhibited by the human brain. Understanding the problem setting in
machine learning is essential to using these tools effectively, and so we will start with some
broad categorizations of the types of approaches we'll discuss here.

Categories of Machine Learning :-

At the most fundamental level, machine learning can be categorized into two main types:
supervised learning and unsupervised learning.

Supervised learning involves somehow modeling the relationship between measured features
of data and some label associated with the data; once this model is determined, it can be used
to apply labels to new, unknown data. This is further subdivided into classification tasks
and regression tasks: in classification, the labels are discrete categories, while in regression,
the labels are continuous quantities. We will see examples of both types of supervised
learning in the following section.
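As a small sketch of the two supervised tasks, assuming scikit-learn is available (the feature values and labels below are made up for illustration):

```python
from sklearn.linear_model import LinearRegression, LogisticRegression

# One measured feature per sample (a single-column feature matrix).
X = [[10], [20], [30], [40]]

# Classification: the labels are discrete categories (e.g. 0 / 1).
y_class = [0, 0, 1, 1]
clf = LogisticRegression().fit(X, y_class)
print(clf.predict([[35]]))    # predicts a category for the new sample

# Regression: the labels are continuous quantities.
y_reg = [1.0, 2.1, 2.9, 4.2]
reg = LinearRegression().fit(X, y_reg)
print(reg.predict([[35]]))    # predicts a continuous value
```

The same fit/predict pattern applies to both tasks; only the nature of the label changes.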

Unsupervised learning involves modeling the features of a dataset without reference to any
label, and is often described as "letting the dataset speak for itself." These models include
tasks such as clustering and dimensionality reduction. Clustering algorithms identify distinct
groups of data, while dimensionality reduction algorithms search for more succinct
representations of the data. We will see examples of both types of unsupervised learning in
the following section.
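A minimal unsupervised sketch, again assuming scikit-learn, with made-up two-dimensional points:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Two obvious groups of 2-D points; no labels are supplied.
X = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
              [8.0, 8.0], [8.1, 7.9], [7.9, 8.2]])

# Clustering: let the dataset "speak for itself" and find two groups.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)            # e.g. three points in one cluster, three in the other

# Dimensionality reduction: a more succinct 1-D representation.
X_1d = PCA(n_components=1).fit_transform(X)
print(X_1d.shape)            # (6, 1)
```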

Need for Machine Learning

Human beings are, at this moment, the most intelligent and advanced species on earth
because they can think, evaluate and solve complex problems. AI, on the other hand, is still in
its initial stage and hasn’t surpassed human intelligence in many aspects. The question, then,
is: what is the need to make machines learn? The most suitable reason for doing this is,
“to make decisions, based on data, with efficiency and scale”.

Lately, organizations are investing heavily in newer technologies like Artificial Intelligence,
Machine Learning and Deep Learning to get the key information from data to perform
several real-world tasks and solve problems. We can call these data-driven decisions taken by
machines, particularly to automate processes. These data-driven decisions can be used,
instead of programming logic, in problems that cannot be programmed inherently.
The fact is that we can’t do without human intelligence, but the other aspect is that we all need
to solve real-world problems with efficiency at a huge scale. That is why the need for
machine learning arises.

Challenges in Machine Learning :-

While Machine Learning is rapidly evolving, making significant strides in cybersecurity
and autonomous cars, this segment of AI as a whole still has a long way to go. The reason
is that ML has not yet been able to overcome a number of challenges. The challenges that
ML is facing currently are −

Quality of data − Having good-quality data for ML algorithms is one of the biggest
challenges. Use of low-quality data leads to the problems related to data preprocessing and
feature extraction.

Time-consuming task − Another challenge faced by ML models is the consumption of time,
especially for data acquisition, feature extraction and retrieval.

Lack of specialist persons − As ML technology is still in its infancy stage, expert
resources are hard to find.

No clear objective for formulating business problems − Having no clear objective and
well-defined goal for business problems is another key challenge for ML because this
technology is not that mature yet.
Issue of overfitting & underfitting − If the model is overfitting or underfitting, it cannot
represent the problem well.

Curse of dimensionality − Another challenge ML model faces is too many features of data
points. This can be a real hindrance.

Difficulty in deployment − Complexity of the ML model makes it quite difficult to deploy
in real life.

Applications of Machine Learning :-

Machine Learning is the most rapidly growing technology and, according to researchers, we
are in the golden years of AI and ML. It is used to solve many real-world complex problems
which cannot be solved with a traditional approach. Following are some real-world applications
of ML −

 Emotion analysis

 Sentiment analysis

 Error detection and prevention

 Weather forecasting and prediction

 Stock market analysis and forecasting

 Speech synthesis

 Speech recognition

 Customer segmentation

 Object recognition

 Fraud detection

 Fraud prevention

 Recommendation of products to customers in online shopping


How to Start Learning Machine Learning?

Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as a “Field of
study that gives computers the capability to learn without being explicitly
programmed”.
And that was the beginning of Machine Learning! In modern times, Machine Learning is one
of the most popular (if not the most!) career choices. According to Indeed, Machine Learning
Engineer Is The Best Job of 2019 with a 344% growth and an average base salary
of $146,085 per year.
But there is still a lot of doubt about what exactly Machine Learning is and how to start
learning it. So this article deals with the basics of Machine Learning and also the path you
can follow to eventually become a full-fledged Machine Learning Engineer. Now let’s get
started!!!

How to start learning ML?

This is a rough roadmap you can follow on your way to becoming an insanely talented
Machine Learning Engineer. Of course, you can always modify the steps according to your
needs to reach your desired end-goal!

Step 1 – Understand the Prerequisites

In case you are a genius, you could start ML directly but normally, there are some
prerequisites that you need to know which include Linear Algebra, Multivariate Calculus,
Statistics, and Python. And if you don’t know these, never fear! You don’t need a Ph.D.
degree in these topics to get started but you do need a basic understanding.

(a) Learn Linear Algebra and Multivariate Calculus

Both Linear Algebra and Multivariate Calculus are important in Machine Learning. However,
the extent to which you need them depends on your role as a data scientist. If you are more
focused on application heavy machine learning, then you will not be that heavily focused on
maths as there are many common libraries available. But if you want to focus on R&D in
Machine Learning, then mastery of Linear Algebra and Multivariate Calculus is very
important as you will have to implement many ML algorithms from scratch.

(b) Learn Statistics

Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML
expert will be spent collecting and cleaning data. And statistics is a field that handles the
collection, analysis, and presentation of data. So it is no surprise that you need to learn it!!!
Some of the key concepts in statistics that are important are Statistical Significance,
Probability Distributions, Hypothesis Testing, Regression, etc. Bayesian Thinking is
also a very important part of ML; it deals with various concepts like Conditional
Probability, Priors and Posteriors, Maximum Likelihood, etc.
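A tiny worked example of the Bayesian ideas just mentioned — prior, likelihood, and posterior via Bayes’ rule (all probabilities below are made-up illustration values, not real diagnostic figures):

```python
# Prior: how common the disease is before seeing any test result.
p_disease = 0.02

# Likelihoods: how the test behaves for sick and healthy people.
p_pos_given_disease = 0.95   # sensitivity
p_pos_given_healthy = 0.10   # false-positive rate

# Total probability of a positive test result.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior: probability of disease given a positive test (Bayes' rule).
posterior = p_pos_given_disease * p_disease / p_pos
print(round(posterior, 3))   # 0.162
```

Even with a sensitive test, the low prior keeps the posterior modest — a classic consequence of Bayes’ rule worth internalizing before doing ML.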

(c) Learn Python

Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics and learn
them as they go along with trial and error. But the one thing that you absolutely cannot skip
is Python! While there are other languages you can use for Machine Learning like R, Scala,
etc. Python is currently the most popular language for ML. In fact, there are many Python
libraries that are specifically useful for Artificial Intelligence and Machine Learning such
as Keras, TensorFlow, Scikit-learn, etc.
So if you want to learn ML, it’s best if you learn Python! You can do that using various
online resources and courses such as Fork Python available Free on GeeksforGeeks.

Step 2 – Learn Various ML Concepts

Now that you are done with the prerequisites, you can move on to actually learning ML
(Which is the fun part!!!) It’s best to start with the basics and then move on to the more
complicated stuff. Some of the basic concepts in ML are:
(a) Terminologies of Machine Learning

 Model – A model is a specific representation learned from data by applying some machine
learning algorithm. A model is also called a hypothesis.
 Feature – A feature is an individual measurable property of the data. A set of numeric
features can be conveniently described by a feature vector. Feature vectors are fed as input to
the model. For example, in order to predict a fruit, there may be features like color, smell,
taste, etc.
 Target (Label) – A target variable or label is the value to be predicted by our model. For the
fruit example discussed in the feature section, the label with each set of input would be the
name of the fruit like apple, orange, banana, etc.
 Training – The idea is to give a set of inputs (features) and their expected outputs (labels), so
after training, we will have a model (hypothesis) that will then map new data to one of the
categories trained on.
 Prediction – Once our model is ready, it can be fed a set of inputs to which it will provide a
predicted output(label).
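The vocabulary above maps directly onto code. A minimal sketch, assuming scikit-learn, with made-up fruit measurements:

```python
from sklearn.tree import DecisionTreeClassifier

# Each row is a feature vector: [weight_in_grams, smoothness_0_to_10].
# (Hypothetical numbers, chosen only to illustrate the terminology.)
features = [[150, 8], [160, 9], [120, 3], [110, 2]]
labels = ["apple", "apple", "orange", "orange"]   # the target for each row

# Training: fit a model (hypothesis) to the labeled examples.
model = DecisionTreeClassifier().fit(features, labels)

# Prediction: map a new, unseen feature vector to a label.
print(model.predict([[155, 7]]))
```

Here `features` are the measurable properties, `labels` the targets, `fit` is the training step, and `predict` produces the output label for new data.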

(b) Types of Machine Learning

 Supervised Learning – This involves learning from a training dataset with labeled data using
classification and regression models. This learning process continues until the required level of
performance is achieved.
 Unsupervised Learning – This involves using unlabelled data and then finding the underlying
structure in the data in order to learn more and more about the data itself using factor and
cluster analysis models.
 Semi-supervised Learning – This involves using unlabelled data like Unsupervised Learning
with a small amount of labeled data. Using labeled data vastly increases the learning accuracy
and is also more cost-effective than Supervised Learning.
 Reinforcement Learning – This involves learning optimal actions through trial and error. So
the next action is decided by learning behaviors that are based on the current state and that will
maximize the reward in the future.
Advantages of Machine learning :-

1. Easily identifies trends and patterns -

Machine Learning can review large volumes of data and discover specific trends and patterns
that would not be apparent to humans. For instance, for an e-commerce website like Amazon, it
serves to understand the browsing behaviors and purchase histories of its users to help cater to
the right products, deals, and reminders relevant to them. It uses the results to reveal relevant
advertisements to them.

2. No human intervention needed (automation)

With ML, you don’t need to babysit your project every step of the way. Since it means giving
machines the ability to learn, it lets them make predictions and also improve the algorithms on
their own. A common example of this is antivirus software, which learns to filter new threats as
they are recognized. ML is also good at recognizing spam.

3. Continuous Improvement

As ML algorithms gain experience, they keep improving in accuracy and efficiency. This lets
them make better decisions. Say you need to make a weather forecast model. As the amount of
data you have keeps growing, your algorithms learn to make more accurate predictions faster.

4. Handling multi-dimensional and multi-variety data

Machine Learning algorithms are good at handling data that are multi-dimensional and multi-
variety, and they can do this in dynamic or uncertain environments.

5. Wide Applications

You could be an e-tailer or a healthcare provider and make ML work for you. Where it does
apply, it holds the capability to help deliver a much more personal experience to customers
while also targeting the right customers.
Disadvantages of Machine Learning :-

1. Data Acquisition

Machine Learning requires massive data sets to train on, and these should be
inclusive/unbiased, and of good quality. There can also be times where they must wait for new
data to be generated.

2. Time and Resources

ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose
with a considerable amount of accuracy and relevancy. It also needs massive resources to
function. This can mean additional requirements of computer power for you.

3. Interpretation of Results

Another major challenge is the ability to accurately interpret results generated by the
algorithms. You must also carefully choose the algorithms for your purpose.

4. High error-susceptibility

Machine Learning is autonomous but highly susceptible to errors. Suppose you train an
algorithm with data sets too small to be inclusive. You end up with biased predictions
coming from a biased training set. This leads to irrelevant advertisements being displayed to
customers. In the case of ML, such blunders can set off a chain of errors that can go undetected
for long periods of time. And when they do get noticed, it takes quite some time to recognize
the source of the issue, and even longer to correct it.

Python Development Steps : -

Guido van Rossum published the first version of Python code (version 0.9.0) at alt.sources in
February 1991. This release already included exception handling, functions, and the core data
types list, dict, str and others. It was also object-oriented and had a module system.
Python version 1.0 was released in January 1994. The major new features included in this
release were the functional programming tools lambda, map, filter and reduce, which Guido
van Rossum never liked. Six and a half years later, in October 2000, Python 2.0 was
introduced. This release included list comprehensions, a full garbage collector, and support for
Unicode. Python flourished for another 8 years in the versions 2.x before the next
major release, Python 3.0 (also known as "Python 3000" and "Py3K"), appeared. Python
3 is not backwards compatible with Python 2.x. The emphasis in Python 3 was on the
removal of duplicate programming constructs and modules, thus fulfilling or coming close to
fulfilling the 13th law of the Zen of Python: "There should be one -- and preferably only one --
obvious way to do it." Some changes in Python 3.0:

 Print is now a function
 Views and iterators instead of lists
 The rules for ordering comparisons have been simplified. E.g. a heterogeneous list cannot be
sorted, because all the elements of a list must be comparable to each other.
 There is only one integer type left, i.e. int; the old long type has been merged into int.
 The division of two integers returns a float instead of an integer. "//" can be used to get the
"old" behaviour.
 Text vs. data instead of Unicode vs. 8-bit
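Several of the Python 3 changes listed above can be seen directly at the interpreter:

```python
# print is a function, not a statement.
print("hello")

# True division returns a float; // gives the old floor behaviour.
print(7 / 2)     # 3.5
print(7 // 2)    # 3

# Only one integer type: int handles arbitrarily large values.
print(2 ** 100)

# Heterogeneous lists can no longer be sorted, because ints and
# strings are not comparable to each other.
try:
    sorted([3, "two", 1])
except TypeError as error:
    print("TypeError:", error)
```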


Python

Python is an interpreted high-level programming language for general-purpose
programming. Created by Guido van Rossum and first released in 1991, Python has a design
philosophy that emphasizes code readability, notably using significant whitespace.
Python features a dynamic type system and automatic memory management. It supports
multiple programming paradigms, including object-oriented, imperative, functional and
procedural, and has a large and comprehensive standard library.

 Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to
compile your program before executing it. This is similar to PERL and PHP.
 Python is Interactive − you can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.
Python also acknowledges that speed of development is important. Readable and terse
code is part of this, and so is access to powerful constructs that avoid tedious repetition of
code. Maintainability also ties into this. Line count may be an all but useless metric, but it
does say something about how much code you have to scan, read and/or understand to
troubleshoot problems or tweak behaviors. This speed of development, the ease with
which a programmer of other languages can pick up basic Python skills, and the huge
standard library are key to another area where Python excels. All its tools have been quick to
implement, have saved a lot of time, and several of them have later been patched and updated
by people with no Python background - without breaking.

Modules Used in Project :-

Tensorflow

TensorFlow is a free and open-source software library for dataflow and differentiable
programming across a range of tasks. It is a symbolic math library, and is also used
for machine learning applications such as neural networks. It is used for both research and
production at Google.

TensorFlow was developed by the Google Brain team for internal Google use. It was
released under the Apache 2.0 open-source license on November 9, 2015.

Numpy

Numpy is a general-purpose array-processing package. It provides a high-performance
multidimensional array object, and tools for working with these arrays.
It is the fundamental package for scientific computing with Python. It contains various
features including these important ones:

 A powerful N-dimensional array object
 Sophisticated (broadcasting) functions
 Tools for integrating C/C++ and Fortran code
 Useful linear algebra, Fourier transform, and random number capabilities
Besides its obvious scientific uses, Numpy can also be used as an efficient multi-dimensional
container of generic data. Arbitrary data-types can be defined using Numpy which allows
Numpy to seamlessly and speedily integrate with a wide variety of databases.
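The features listed above can be sketched in a few lines (array values here are arbitrary):

```python
import numpy as np

# A 2-D array: the N-dimensional array object.
a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
print(a.shape)               # (2, 3)

# Broadcasting: the 1-D row is stretched across both rows of `a`.
row = np.array([10.0, 20.0, 30.0])
print(a + row)

# Linear algebra: matrix product of a 3x2 and a 2x3 array -> 3x3.
print(a.T @ a)

# Reproducible random-number capabilities.
rng = np.random.default_rng(seed=0)
print(rng.normal(size=3))
```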

Pandas

Pandas is an open-source Python library providing high-performance data manipulation and
analysis tools built on its powerful data structures. Before Pandas, Python was mostly used for
data munging and preparation; it had very little to offer for data analysis. Pandas solved this
problem. Using Pandas, we can accomplish five typical steps in the processing and analysis
of data, regardless of the origin of the data: load, prepare, manipulate, model, and analyze.
Python with Pandas is used in a wide range of fields including academic and commercial
domains including finance, economics, Statistics, analytics, etc.
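A short prepare-and-analyze sketch in the spirit of the steps above, using a small, entirely made-up liver-patient table (column names and values are hypothetical, not from the real dataset):

```python
import pandas as pd

# Load: a tiny, hypothetical patient table with one missing lab value.
df = pd.DataFrame({
    "age": [45, 60, 32, 51],
    "total_bilirubin": [0.9, 3.2, 0.7, None],
    "disease": [0, 1, 0, 1],
})

# Prepare: fill the missing lab value with the column median.
median = df["total_bilirubin"].median()
df["total_bilirubin"] = df["total_bilirubin"].fillna(median)

# Manipulate / analyze: average bilirubin per outcome group.
print(df.groupby("disease")["total_bilirubin"].mean())
```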

Matplotlib

Matplotlib is a Python 2D plotting library which produces publication-quality figures in a
variety of hardcopy formats and interactive environments across platforms. Matplotlib can
be used in Python scripts, the Python and IPython shells, the Jupyter Notebook, web
application servers, and four graphical user interface toolkits. Matplotlib tries to make easy
things easy and hard things possible. You can generate plots, histograms, power spectra, bar
charts, error charts, scatter plots, etc., with just a few lines of code. For examples, see
the sample plots and thumbnail gallery.

For simple plotting the pyplot module provides a MATLAB-like interface, particularly when
combined with IPython. For the power user, you have full control of line styles, font
properties, axes properties, etc, via an object oriented interface or via a set of functions
familiar to MATLAB users.
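A minimal pyplot sketch that saves a figure to a hardcopy format; the class counts plotted are illustrative stand-ins, and the Agg backend is selected so the script also runs on servers with no display:

```python
import matplotlib
matplotlib.use("Agg")           # non-interactive backend (no display needed)
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Illustrative class counts for a two-class patient dataset.
ax.bar(["healthy", "disorder"], [167, 416])
ax.set_title("Liver patient dataset class balance")
ax.set_ylabel("number of records")
fig.savefig("class_balance.png")  # one of many supported hardcopy formats
```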
Scikit – learn

Scikit-learn provides a range of supervised and unsupervised learning algorithms via a
consistent interface in Python. It is licensed under a permissive simplified BSD license and is
distributed under many Linux distributions, encouraging academic and commercial use.

Install Python Step-by-Step in Windows and Mac :

Python, a versatile programming language, doesn’t come pre-installed on your computer.
Python was first released in the year 1991 and is still a very popular high-level
programming language today. Its design philosophy emphasizes code readability with its
notable use of significant whitespace.
The object-oriented approach and language construct provided by Python enables
programmers to write both clear and logical code for projects. This software does not come
pre-packaged with Windows.

How to Install Python on Windows and Mac :

There have been several updates in the Python version over the years. The question is how to
install Python? It might be confusing for the beginner who is willing to start learning Python but
this tutorial will solve your query. The latest or the newest version of Python is version 3.7.4 or
in other words, it is Python 3.
Note: The python version 3.7.4 cannot be used on Windows XP or earlier devices.

Before you start with the installation process of Python, you first need to know
your system requirements. Based on your system type, i.e. operating system and
processor, you must download the matching Python version. My system type is a Windows 64-bit
operating system, so the steps below are to install Python version 3.7.4 (that is, Python 3) on a
Windows 7 device. The steps on how to install Python on Windows 10, 8 and 7 are divided into
4 parts to help understand better.

Download the Correct version into the system

Step 1: Go to the official site to download and install Python using Google Chrome or any other
web browser, or click on the following link: https://www.python.org
Now, check for the latest and the correct version for your operating system.

Step 2: Click on the Download Tab.

Step 3: You can either select the yellow Download Python 3.7.4 for Windows button,
or you can scroll further down and click on the download for your specific version. Here,
we are downloading the most recent Python version for Windows, 3.7.4.
Step 4: Scroll down the page until you find the Files option.

Step 5: Here you see a different version of python along with the operating system.

• To download Windows 32-bit Python, you can select any one of the three options:
Windows x86 embeddable zip file, Windows x86 executable installer or Windows x86 web-
based installer.
• To download Windows 64-bit Python, you can select any one of the three options: Windows
x86-64 embeddable zip file, Windows x86-64 executable installer or Windows x86-64 web-
based installer.
Here we will install the Windows x86-64 web-based installer. With this, the first part, deciding
which version of Python to download, is complete. Now we move ahead with the second part:
installation.
Note: To know the changes or updates that are made in the version you can click on the Release
Note Option.
Installation of Python
Step 1: Go to Download and Open the downloaded python version to carry out the installation
process.

Step 2: Before you click on Install Now, make sure to put a tick on Add Python 3.7 to PATH.
Step 3: Click on Install Now. After the installation is successful, click on Close.

With the above three steps, you have successfully and correctly
installed Python. Now it is time to verify the installation.
Note: The installation process might take a couple of minutes.

Verify the Python Installation


Step 1: Click on Start
Step 2: In the Windows Run Command, type “cmd”.

Step 3: Open the Command prompt option.


Step 4: Let us test whether Python is correctly installed. Type python -V and press Enter.

Step 5: You will get the answer as Python 3.7.4


Note: If you have any earlier version of Python already installed, you must first
uninstall it and then install the new one.

Check how the Python IDLE works


Step 1: Click on Start
Step 2: In the Windows Run command, type “python idle”.

Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program
Step 4: To go ahead with working in IDLE you must first save the file. Click on File > Click
on Save

Step 5: Name the file; the save-as type should be Python files. Click on SAVE. Here I have
named the file Hey World.
Step 6: Now, for example, enter print("Hey World") and run the file.
6. SYSTEM TEST

The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality of
components, subassemblies, assemblies and/or a finished product. It is the process of exercising
software with the intent of ensuring that the software system meets its requirements and user
expectations and does not fail in an unacceptable manner. There are various types of test. Each
test type addresses a specific testing requirement.

TYPES OF TESTS

Unit testing
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid outputs. All
decision branches and internal code flow should be validated. It is the testing of individual
software units of the application. It is done after the completion of an individual unit before
integration. This is structural testing that relies on knowledge of its construction and is
invasive. Unit tests perform basic tests at component level and test a specific business process,
application, and/or system configuration. Unit tests ensure that each unique path of a business
process performs accurately to the documented specifications and contains clearly defined inputs
and expected results.
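A unit test in this sense can be sketched with Python's built-in unittest framework; the function under test and its inputs are hypothetical, chosen only to show defined inputs and expected results:

```python
import unittest

def bmi(weight_kg, height_m):
    """Unit under test: body-mass index; invalid inputs are rejected."""
    if weight_kg <= 0 or height_m <= 0:
        raise ValueError("weight and height must be positive")
    return weight_kg / height_m ** 2

class BmiTest(unittest.TestCase):
    def test_valid_input_produces_valid_output(self):
        self.assertAlmostEqual(bmi(80, 2.0), 20.0)

    def test_invalid_input_is_rejected(self):
        with self.assertRaises(ValueError):
            bmi(80, 0)

if __name__ == "__main__":
    # exit=False keeps the interpreter running after the test report.
    unittest.main(argv=["bmi_test"], exit=False)
```

Each test method exercises one unique path of the unit, with clearly defined inputs and expected results, exactly as described above.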

Integration testing
Integration tests are designed to test integrated software components to
determine if they actually run as one program. Testing is event driven and is more concerned
with the basic outcome of screens or fields. Integration tests demonstrate that although the
components were individually satisfaction, as shown by successfully unit testing, the
combination of components is correct and consistent. Integration testing is specifically aimed at
exposing the problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system documentation, and
user manuals.
Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key
functions, or special test cases. In addition, systematic coverage pertaining to identified business
process flows, data fields, predefined processes, and successive processes must be considered for
testing. Before functional testing is complete, additional tests are identified and the effective
value of current tests is determined.
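The valid/invalid input items above can be sketched as a small functional check. Here validate_record and the field names (Age, Total_Bilirubin) are illustrative assumptions modelled on common liver-dataset attributes, not the project's actual validator:

```python
# Functional-style check: identified classes of valid input must be
# accepted, identified classes of invalid input must be rejected.
# validate_record is a hypothetical validator for one patient row.

def validate_record(record):
    """Return True only when every required field is present and in range."""
    try:
        age = float(record["Age"])
        bilirubin = float(record["Total_Bilirubin"])
    except (KeyError, ValueError):
        # missing field or non-numeric entry -> invalid input rejected
        return False
    return 0 < age < 120 and bilirubin >= 0

# Valid input: accepted.
assert validate_record({"Age": "45", "Total_Bilirubin": "0.7"})
# Invalid input: non-numeric value rejected.
assert not validate_record({"Age": "abc", "Total_Bilirubin": "0.7"})
# Invalid input: missing field rejected.
assert not validate_record({"Age": "45"})
```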

System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An example of
system testing is the configuration oriented system integration test. System testing is based on
process descriptions and flows, emphasizing pre-driven process links and integration points.

White Box Testing

White Box Testing is testing in which the software tester has
knowledge of the inner workings, structure and language of the software, or at least its purpose.
It is used to test areas that cannot be reached from a black box level.
Black Box Testing
Black Box Testing is testing the software without any knowledge of the inner
workings, structure or language of the module being tested. Black box tests, like most other kinds
of tests, must be written from a definitive source document, such as a specification or requirements
document. It is testing in which the software under test is treated as a black box: you cannot “see”
into it. The test provides inputs and responds to outputs without considering how the software works.
Unit Testing

Unit testing is usually conducted as part of a combined code and unit test phase
of the software lifecycle, although it is not uncommon for coding and unit testing to be
conducted as two distinct phases.

Test strategy and approach

Field testing will be performed manually and functional tests will be written in
detail.
Test objectives
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.

Features to be tested
 Verify that the entries are of the correct format
 No duplicate entries should be allowed
 All links should take the user to the correct page.
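The duplicate-entry feature above, for example, can be exercised with a minimal check; register is a hypothetical stand-in for the project's user-registration handler:

```python
# "No duplicate entries should be allowed": reject a username that is
# already present. The in-memory set stands in for the user table.

def register(users, username):
    """Add username unless it already exists; return a success flag."""
    if username in users:
        return False
    users.add(username)
    return True

users = set()
assert register(users, "alice")       # first entry accepted
assert not register(users, "alice")   # duplicate rejected
```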
Integration Testing
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects.
The task of the integration test is to check that components or software applications, e.g.
components in a software system or – one step up – software applications at the company level –
interact without error.
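An integration-style sketch of this idea, under the assumption that the project's pipeline chains a median-imputation step into a nearest-neighbour classifier (all function names here are illustrative): a defect at the interface, such as an unimputed missing value reaching the distance computation, would only surface when the two components run together:

```python
from statistics import median

def impute(rows):
    """Column-wise median imputation of None entries (illustrative)."""
    cols = list(zip(*rows))
    meds = [median(v for v in c if v is not None) for c in cols]
    return [[meds[j] if v is None else v for j, v in enumerate(r)]
            for r in rows]

def predict_1nn(train_rows, labels, query):
    """Minimal 1-nearest-neighbour classifier (squared Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(zip(train_rows, labels), key=lambda p: dist(p[0], query))[1]

def pipeline(raw_rows, labels, query):
    """Integrated call path: preprocessing feeds directly into the classifier."""
    return predict_1nn(impute(raw_rows), labels, query)

# End-to-end check: a None in the training data must not break prediction.
raw = [[1.0, None], [1.0, 2.0], [9.0, 9.0]]
print(pipeline(raw, ["disease", "disease", "healthy"], [1.0, 2.0]))  # disease
```

If impute were skipped or returned a shape the classifier does not expect, the subtraction inside the distance function would fail — exactly the kind of interface defect integration testing is meant to expose.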

Test Results: All the test cases mentioned above passed successfully. No defects encountered.

Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation
by the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.

Test case 1:

Test case for Login form:

FUNCTION: LOGIN

EXPECTED RESULTS: Should validate the user and check his existence in the
database

ACTUAL RESULTS: Validating the user and checking the user against the
database

LOW PRIORITY No

HIGH PRIORITY Yes
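A minimal executable version of this login check might look as follows; the in-memory dictionary stands in for the project's user database (an assumption for illustration):

```python
# Illustrative credential store standing in for the database table.
USERS = {"admin": "secret"}

def authenticate(username, password):
    """Validate the user and check his existence against the store."""
    return USERS.get(username) == password

assert authenticate("admin", "secret")      # existing user validated
assert not authenticate("admin", "wrong")   # wrong password rejected
assert not authenticate("ghost", "secret")  # unknown user rejected
```

In a real deployment the comparison would be against a salted password hash rather than the plain string shown here.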


Test case 2:

Test case for User Registration form:

FUNCTION: USER REGISTRATION

EXPECTED RESULTS: Should check if all the fields are filled by the user
and save the user to the database.

ACTUAL RESULTS: Checking whether all the fields are filled by the user or
not through validations and saving the user.

LOW PRIORITY No

HIGH PRIORITY Yes

Test case 3:

Test case for Change Password:

When the old password does not match the new password, this results in
displaying an error message as “OLD PASSWORD DOES NOT MATCH WITH THE
NEW PASSWORD”.

FUNCTION: Change Password

EXPECTED RESULTS: Should check if the old password and new password
fields are filled by the user and save the user to the
database.

ACTUAL RESULTS: Checking whether all the fields are filled by the user or
not through validations and saving the user.

LOW PRIORITY No

HIGH PRIORITY Yes

Test case 4:
Test case for Forget Password:

When a user forgets his password, he is asked to enter his login name, ZIP code and mobile number. If
these match the already stored values, the user will get his original password.
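This recovery rule can be sketched as below; the record layout is an assumption, not the project's actual schema:

```python
# Illustrative account store; in the project this would be a database row.
ACCOUNTS = {
    "alice": {"zip": "500001", "mobile": "9999900000", "password": "pw123"},
}

def recover_password(login, zip_code, mobile):
    """Return the stored password only if all three fields match."""
    acc = ACCOUNTS.get(login)
    if acc and acc["zip"] == zip_code and acc["mobile"] == mobile:
        return acc["password"]
    return None

assert recover_password("alice", "500001", "9999900000") == "pw123"
assert recover_password("alice", "000000", "9999900000") is None
```

Returning the stored plaintext password, as the document describes, is shown only to mirror the test case; production systems should store hashes and issue reset links instead.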

Module: User Login
Functionality: Validate the username and password in the database and, if they are correct, show the main page.

Test Case 1:
1. Navigate to Www.Sample.Com.
2. Click on the Submit button without entering a username and password.
Expected Result: A validation should be shown as “Please Enter Valid Username & Password”.
Actual Result: The validation has been populated as expected.
Result: Pass    Priority: High

Test Case 2:
1. Navigate to Www.Sample.Com.
2. Click on the Submit button without filling the password field and with a valid username.
Expected Result: A validation should be shown as “Please Enter Valid Password Or Password Field Can Not Be Empty”.
Actual Result: The validation is shown as expected.
Result: Pass    Priority: High

Test Case 3:
1. Navigate to Www.Sample.Com.
2. Enter both the username and password wrong and hit Enter.
Expected Result: A validation is shown as “The Username Entered Is Wrong”.
Actual Result: The validation is shown as expected.
Result: Pass    Priority: High

Test Case 4:
1. Navigate to Www.Sample.Com.
2. Enter a valid username and password and click on Submit.
Expected Result: The main page/home page should be displayed.
Actual Result: The main page/home page has been displayed.
Result: Pass    Priority: High
7. SCREENSHOTS
INDEX

Admin login

Admin home
Upload dataset

Preprocess
Decision tree accuracy

Knn accuracy
Ann accuracy

Comparison graph
User registration

Login
Userhome

View profile
Predict disease
8. CONCLUSION AND FUTURE ENHANCEMENT
CONCLUSION

The prediction of liver illness in patients has been examined and analysed in this
paper. The data has been cleaned by imputation of missing values with the median;
dummy encoding was then applied, followed by outlier elimination to improve
performance. In this research paper, various classification algorithms have been
applied, such as Decision Tree, K-Nearest Neighbor and Artificial Neural Network.
Based on the algorithms applied, it is observed that the Decision Tree and
K-Nearest Neighbor models give better accuracy than the other classification
algorithms, leading to the conclusion that the Decision Tree is appropriate for
predicting liver disease. When a training data set is available, our proposed
classification schemes can significantly enhance classification performance. Then,
using a machine learning classifier, good and bad values are classified. Thus, the
outputs of the proposed classification model show accuracy in predicting the result.
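The model-comparison step described above can be sketched as follows; the label and prediction vectors are illustrative stand-ins, not the paper's actual results:

```python
# Sketch of selecting the best classifier by plain accuracy.
# The prediction lists below are fabricated for illustration only.

def accuracy(y_true, y_pred):
    """Fraction of labels predicted correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1, 1, 0, 1, 0, 0]
predictions = {
    "Decision Tree": [1, 1, 0, 1, 0, 1],
    "KNN":           [1, 0, 0, 1, 0, 1],
    "ANN":           [1, 0, 0, 1, 1, 1],
}
scores = {name: accuracy(y_true, p) for name, p in predictions.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

With these illustrative vectors the Decision Tree scores highest, mirroring the comparison reported in the paper; in practice the scores would come from held-out test data.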

FUTURE ENHANCEMENT:

The scope of our future work is to apply deep learning techniques to predict
liver disease. One future direction for improving the accuracy of liver
disease prediction and classification models is to include more diverse data
sources; another is to combine multiple machine learning techniques, so that
models can be trained to predict the likelihood of liver disease in individuals
based on their unique characteristics. A further important direction in liver
disease prediction and classification using machine learning is to develop
models that are explainable, meaning that the models should provide clear and
interpretable insights into the factors that contribute to liver disease.
Explainable models can help healthcare professionals make better decisions and
provide better care for patients.
9. REFERENCES
[1] M. Sameer and B. Gupta, “Beta Band as a Biomarker for Classification
between Interictal and Ictal States of Epileptical Patients,” in 2020 7th
International Conference on Signal Processing and Integrated Networks (SPIN),
2020, pp. 567–570, doi: 10.1109/SPIN48934.2020.9071343.

[2] S. K. B. Sangeetha, N. Afreen, and G. Ahmad, “A Combined Image
Segmentation and Classification Approach for COVID-19 Infected Lungs,”
journal homepage: http://iieta.org/journals/rces, vol. 8, no. 3, pp. 71–76, 2021.

[3] M. Sameer, A. K. Gupta, C. Chakraborty, and B. Gupta, “Epileptical Seizure
Detection: Performance analysis of gamma band in EEG signal Using Short-Time
Fourier Transform,” in 2019 22nd International Symposium on Wireless Personal
Multimedia Communications (WPMC), 2019, pp. 1–6, doi:
10.1109/WPMC48795.2019.9096119.

[4] A. Mahajan, K. Somaraj, and M. Sameer, “Adopting Artificial Intelligence
Powered ConvNet To Detect Epileptic Seizures,” in 2020 IEEE-EMBS Conference
on Biomedical Engineering and Sciences (IECBES), 2021, pp. 427–432, doi:
10.1109/IECBES48179.2021.9398832.

[5] N. Nasir, N. Afreen, R. Patel, S. Kaur, and M. Sameer, “A Transfer Learning
Approach for Diabetic Retinopathy and Diabetic Macular Edema Severity
Grading,” Rev. d’Intelligence Artif., vol. 35, pp. 497–502, Dec. 2021, doi:
10.18280/ria.350608.

[6] M. Sameer and B. Gupta, “ROC Analysis of EEG Subbands for Epileptic
Seizure Detection using Naive Bayes Classifier,” J. Mob. Multimed., pp. 299–310,
2021.

[7] M. Sameer and B. Gupta, “Time–Frequency Statistical Features of Delta Band
for Detection of Epileptic Seizures,” Wirel. Pers. Commun., 2021, doi:
10.1007/s11277-021-08909-y.

[8] S. M. Beeraka, A. Kumar, M. Sameer, S. Ghosh, and B. Gupta, “Accuracy
Enhancement of Epileptic Seizure Detection: A Deep Learning Approach with
Hardware Realization of STFT,” Circuits, Syst. Signal Process., 2021, doi:
10.1007/s00034-021-01789-4.

[9] S. Gupta, M. Sameer, and N. Mohan, “Detection of Epileptic Seizures using
Convolutional Neural Network,” in 2021 International Conference on Emerging
Smart Computing and Informatics (ESCI), 2021, pp. 786–790, doi:
10.1109/ESCI50559.2021.9396983.

[10] P. Porwal et al., “Indian Diabetic Retinopathy Image Dataset (IDRiD): A
Database for Diabetic Retinopathy Screening Research,” Data, vol. 3, no. 3, 2018,
doi: 10.3390/data3030025.

[11] M. Sameer and P. Agarwal, “Coplanar waveguide microwave sensor for label-
free real-time glucose detection,” Radioengineering, vol. 28, no. 2, p. 491, 2019.

[12] M. Sameer and B. Gupta, “Detection of epileptical seizures based on alpha
band statistical features,” Wirel. Pers. Commun., vol. 115, no. 2, pp. 909–925,
2020, doi: 10.1007/s11277-020-07542-5.

[13] M. Sameer, A. K. Gupta, C. Chakraborty, and B. Gupta, “ROC Analysis for
detection of Epileptical Seizures using Haralick features of Gamma band,” in 2020
National Conference on Communications (NCC), 2020, pp. 1–5, doi:
10.1109/NCC48643.2020.9056027.

[14] N. Afreen, R. Patel, M. Ahmed, and M. Sameer, “A Novel Machine
Learning Approach Using Boosting Algorithm for Liver Disease Classification,” in
2021 5th International Conference on Information Systems and Computer
Networks (ISCON), 2021, pp. 1–5.

[15] N. Jiwani, K. Gupta, and P. Whig, “Novel HealthCare Framework for
Cardiac Arrest With the Application of AI Using ANN,” in 2021 5th International
Conference on Information Systems and Computer Networks (ISCON), 2021,
pp. 1–5, doi: 10.1109/ISCON52037.2021.9702493.
