Implementation of Web Application For Disease Prediction Using AI
Abstract. The Internet is the largest source of information created by humanity. It contains a variety of materials
available in various formats such as text, audio, and video. Web scraping is one way of obtaining this information:
a set of techniques for extracting data from websites automatically instead of copying it manually. Many web-based
data extraction methods are designed to solve specific problems and work on ad-hoc domains. A variety of tools and
technologies have been created to enable web scraping. Regrettably, the propriety and ethics of employing these
web scraping programmes are frequently neglected. There are hundreds of web scraping applications available
today, most of which are written in Java, Python, or Ruby, as both commercial and open-source software. For novices
in web scraping, web-based applications such as Yahoo Pipes, Google Web Scrapers, and OutWit Firefox plugins are
the finest options. Web scraping essentially removes this manual extraction and editing process and provides an easy
and better way to collect data from a web page, convert it into the desired format, and save it to a local or archive
directory. In this paper, among the kinds of scraping, we focus on those techniques that extract the content of a web
page. In particular, we use scraping techniques to gather a variety of diseases with their symptoms and precautions.
Keywords: Web Scraping, Disease, Legality, Software, Symptoms.
measures that the patient needs to take care of in order to overcome them and to treat the infection.

OVERVIEW OF WEB SCRAPING

Web scraping is an excellent method for extracting data from websites and organising it so that it can be saved and examined in a database. Web scraping is also known as web data extraction, web harvesting, or screen scraping, and is a type of data mining. The goal is to collect information from websites and transform it into a usable format, such as spreadsheets, databases, or comma-separated values (CSV) files, as illustrated in Figure 1. With web scraping, data such as item pricing, stock prices, reports, market prices, and product details may be gathered. Extracting information from websites allows you to make more informed business decisions.

Figure 1. Web scraping structure.

PRACTICES OF WEB SCRAPING

• Data scraping
• Research
• Web mash-up: integrating data from multiple sources
• Extracting business details from business directory websites such as Yelp and Yellow Pages
• Collecting government data
• Market analysis

A web data scraper is a software agent, also known as a web robot, that mimics the browsing interaction between web servers and a person using a normal web browser. Step by step, the robot accesses as many websites as it needs, parses their content to find interesting data, extracts that data, and structures it as desired. The following text describes how scraping APIs and frameworks address the retrieval goals most frequently pursued by online data scrapers:

Hypertext Transfer Protocol (HTTP)

This approach extracts data from both static and dynamic web pages. Data may be obtained by utilising a socket connection to send HTTP requests to a remote web server.

Hyper Text Markup Language (HTML)

Data query languages, such as XQuery and Hyper Text Query Language (HTQL), can be used to scan HTML pages and to obtain and alter material on the page.

Release Structure

The main purpose is to convert the extracted content into a formal representation for further analysis and storage. Although this final stage falls on the web scraping side, some technologies also handle post-processing of results, including in-memory data formats and text-based outputs such as strings or files (XML or CSV).
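As a minimal sketch of how these three practices fit together in Python (the URL, table id, and column layout below are hypothetical), one can fetch a page over HTTP, query the HTML, and release the result in a structured form:

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/diseases"  # hypothetical source page

response = requests.get(URL, timeout=10)   # HTTP practice: fetch the raw page
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")  # HTML practice: parse the markup

# Release-structure practice: convert the extracted content
# into a formal representation (here, a list of dictionaries).
records = []
for row in soup.select("table#disease-list tr"):
    cells = [c.get_text(strip=True) for c in row.find_all("td")]
    if len(cells) >= 2:
        records.append({"disease": cells[0], "symptoms": cells[1]})

print(records)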
LITERATURE SURVEY

Python has a rich set of libraries available for downloading digital content online. Among the libraries available, the following three are the most popular: BeautifulSoup, LXml and RegEx. Statistical research performed on the available data sets indicates that RegEx was able to deliver the requested information in an average time of 153.6 ms. However, RegEx has limitations when extracting data from web pages with nested internal HTML tags, and because of this demerit it is not suited to complex data extraction. Libraries such as BeautifulSoup and LXml are able to extract content from web pages in such complex environments, with average response times of 457.66 ms and 203 ms respectively.

The main purpose of data analysis is to obtain useful information from data and to make decisions based on that analysis. Web scraping, also known as data scraping, refers to the collection of data from the web. For the purpose of data analysis it can be divided into several steps, such as cleaning, editing, etc. Scrapy is the most widely used tool for obtaining the information a user needs. The main purpose of using Scrapy is to extract data from its sources: Scrapy, a web crawler based on the Python programming language, is very helpful in finding the data we need by following the URLs that lead to it. A web scraper is a useful API for retrieving data from a website. Scrapy provides all the necessary tools to extract data from a website, process the data according to user needs, and store it in a specific format defined by the user.

The Internet consists largely of web pages that include a large number of descriptive elements, including text, audio, graphics, video, etc. Web scraping is the process mainly responsible for collecting raw data from such websites; it automates extraction and is very fast, enabling us to extract the specific data requested by the user. The most popular approach is to build an individual web data extractor using any known language.
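As a minimal sketch of the kind of Scrapy spider described above (the start URL and CSS selectors are hypothetical), the crawler yields one structured item per listed disease; running it with "scrapy runspider spider.py -o diseases.csv" stores the data in a format defined by the user:

import scrapy

class DiseaseSpider(scrapy.Spider):
    name = "diseases"
    start_urls = ["https://example.com/diseases"]  # hypothetical source

    def parse(self, response):
        # Yield one structured item per table row on the page.
        for row in response.css("table#disease-list tr"):
            yield {
                "disease": row.css("td:nth-child(1)::text").get(),
                "symptoms": row.css("td:nth-child(2)::text").get(),
            }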
Scrapy can also be used to extract data through APIs (for example, Amazon AWS) or as a general-purpose web crawler; Scrapy itself is written in Python. Let us take an example, drawn from a wiki, of one of the problems such crawlers face. A simple online photo gallery may offer three options to users, specified through HTTP GET parameters in the URL. If there are four ways to sort images, three thumbnail-size options, two file formats, and an option to disable user-provided content, then the same set of content can be accessed at 4 × 3 × 2 × 2 = 48 different URLs, all of which may be linked on the site. This carefully crafted combination creates a problem for crawlers, as they must work through an endless combination of minor scripted changes in order to retrieve unique content.
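To make the arithmetic concrete, the following sketch enumerates the 48 gallery URLs (the parameter names and values are hypothetical):

from itertools import product

sorts = ["date", "name", "size", "rating"]   # four ways to sort
thumbs = ["small", "medium", "large"]        # three thumbnail sizes
formats = ["jpg", "png"]                     # two file formats
user_content = ["on", "off"]                 # toggle user-provided content

urls = [
    f"https://example.com/gallery?sort={s}&thumb={t}&fmt={f}&user={u}"
    for s, t, f, u in product(sorts, thumbs, formats, user_content)
]
print(len(urls))  # 48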
Methodology

The method used by the project to collect all the required data is to extract it from various sources, such as the CDC's databases and Kaggle resources. The extracted data is then analyzed using scripts written in the Python language according to project requirements. Pandas and NumPy are widely used to perform various functions on the data.

After sorting the data according to each need, it is uploaded to the database. For the database we have used Cloud Firestore, as it is a real-time NoSQL database with extensive API support.
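A minimal sketch of this cleaning-and-upload step, assuming a CSV export of symptom data and configured Google Cloud credentials (the file name, collection name, and field names are hypothetical):

import pandas as pd
from google.cloud import firestore

df = pd.read_csv("disease_symptoms.csv")    # e.g. a Kaggle export
df = df.dropna().drop_duplicates()          # basic cleaning with pandas

db = firestore.Client()                     # uses default Google Cloud credentials
for _, row in df.iterrows():
    # One document per disease, holding its list of symptoms.
    db.collection("diseases").document(row["disease"]).set(
        {"symptoms": row["symptoms"].split(";")}
    )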
Further, TensorFlow is used in the project to train our model according to our needs. In this project we predict the disease from the given symptoms.

Training data set – 70%

Test data set – 30%

TensorFlow supports Linear Regression, which is used to predict diseases based on the given indicators, as sketched below.
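A minimal sketch of such a TensorFlow linear regression, matching the hypothesis h(x) = wx + b and the mean-squared-error cost given in the Algorithm Used section below; the feature matrix and targets here are synthetic placeholder data, and the 70/30 split matches the text:

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.random((200, 5)).astype("float32")                 # placeholder features
true_w = np.array([1.5, -2.0, 0.5, 3.0, -1.0], dtype="float32")
y = X @ true_w + 0.7                                       # synthetic linear target

split = int(0.7 * len(X))                                  # 70% train / 30% test
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])    # h(x) = w.x + b
model.compile(optimizer=tf.keras.optimizers.SGD(0.1),      # gradient descent
              loss="mse")                                  # mean squared error cost
model.fit(X_train, y_train, epochs=100, verbose=0)
print("test MSE:", model.evaluate(X_test, y_test, verbose=0))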
Coding

The project frontend is written using ReactJS & TypeScript, and we have used the Material-UI kit, which implements Google's Material Design for React, to speed up our development process.

To package our app, Electron is used; our system supports macOS and Windows. Many of today's desktop web apps are written with the help of Electron JS.
Testing

The project is tested using Spectron, a test framework for Electron apps. The project runs in the browser; the generated output turns out to be completely consistent, and the generated analysis is approximate. Electron's standard workflow with Spectron can involve engineers writing unit tests in the standard TDD format and then writing integration tests to ensure that acceptance criteria are met before a feature is approved for use. Continuous integration servers can ensure that all these tests pass before changes are incorporated into production.

Algorithm Used

Linear Regression is a standard mathematical method that allows us to learn a function or relationship from a given set of continuous data. For example, we are given corresponding x and y data points and need to learn the relationship between them, called the hypothesis.

In the case of Linear Regression, the hypothesis is a straight line, i.e.

h(x) = wx + b

where the vector w is called the Weights and the scalar b is called the Bias; together, the Weights and Bias are called the model parameters.

All we need to do is estimate the values of w and b from the given data set such that the resulting hypothesis produces the minimum cost J, defined by the following cost function:

J(w, b) = (1 / 2m) Σ_{i=1..m} (h(x_i) − y_i)²

where m is the number of data points provided. This cost function is also called the Mean Squared Error.

To find the values of the parameters for which J is minimal, we use the widely used optimizer algorithm called Gradient Descent. A sketch of Gradient Descent follows.
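This minimal NumPy sketch assumes the standard simultaneous update rule with a fixed learning rate alpha; the toy data and hyperparameters are illustrative only.

import numpy as np

def gradient_descent(x, y, alpha=0.05, iterations=2000):
    # Minimise J(w, b) = (1 / 2m) * sum((w * x_i + b - y_i)^2).
    m = len(x)
    w, b = 0.0, 0.0
    for _ in range(iterations):
        error = w * x + b - y        # h(x_i) - y_i for every point
        dw = (error * x).sum() / m   # dJ/dw
        db = error.sum() / m         # dJ/db
        w -= alpha * dw              # simultaneous parameter update
        b -= alpha * db
    return w, b

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                    # noiseless toy data
print(gradient_descent(x, y))        # converges towards (2.0, 1.0)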
RESULT DISCUSSION

The overall results of the project are useful in predicting diseases from the given symptoms. The script that was written to extract data can be reused later to compile and format data according to need. Users can enter symptoms by typing them in themselves or by selecting them from the given options, and the trained model will predict the disease accordingly. Users are able to create their own medical profile, where they can submit their medical records and prescribed medication; this greatly helps us to feed our database and better predict disease over time, as some of these diseases occur seasonally. Moreover, the analysis performed identified closely similar diseases, but the training model is limited by the size of the database.

CONCLUSIONS AND FUTURE SCOPE

The use of the Python program also emphasizes understanding the use of pattern matching and regular expressions for web extraction. The database is compiled from factual reports, drawn directly from government media outlets or from local media where they are considered reliable. A site backed by a team of experts and analysts who validate information against a continuously updated list of more than 5,000 items is likely to collect data effectively. User-provided inputs are analyzed, the website is scraped, and the output is extracted and shown as the user types in the user interface. Output is generated in the form of text. This method is simple and straightforward for identifying the disease and provides vigilance against it. For future work, we plan tests that aim to suggest the medication that a patient can take for treatment. Also, we are looking to link this website to various hospitals and pharmacies for easy use.