Mali
SUBMITTED BY
P. Pranitha Priyanka
ROLL NO: 20502012
B.VOC SOFTWARE DEVELOPMENT – IV Semester
GUNTUR
DEPARTMENT OF COMPUTER SCIENCE
This is certified to be the bonafide work of the student in the B.Voc
IV Sem (Software Development) Project Work entitled
Date:                                External Examiner
Declaration
P. PRANITHA PRIYANKA
20502012
ACKNOWLEDGEMENT
Sr   Name
1    Abstract
2    Objectives of the Project and Modules
3    Project Category and Tools
4    Software Requirement Specification (SRS)
5    Project Scheduling: PERT and GANTT Chart
6    ER and DFD
7    Database Design
8    Security and Code Improvement
9    Future Scope
10   Source Code
11   Reference
Abstract
In today's colleges, there should be a platform where people can communicate and
exchange information. There is often a large gap between the different departments of a college, and
students do not receive information on time. A centralized platform is needed where students can check
information and notices. Online Notice Board is a service that provides this platform.
Teachers can register on this website and receive special privileges. They can
post notices and information, and the website sorts this information by department.
Students can view the information posted by teachers. An admin panel controls
the teachers' accounts, with options to delete users, delete their posts, add new
users, etc.
The site uses core PHP to manage the database. PHP is used to create new users, perform
CRUD operations, and build the admin panel that manages registered users.
Considerable effort has been taken to keep the website simple and clean, to maximize loading
speed and minimize response time.
Institute Vision and Mission
INSTITUTION VISION
To produce eminent and ethical Engineers and Managers for society by
imparting quality professional education with emphasis on human values and
holistic excellence.
INSTITUTION MISSION
o To incorporate benchmarked teaching and learning pedagogies into the curriculum.
Department MISSION
UNIT II - ANALYSIS
2.1 Requirements Analysis
2.1.1 Functional Requirements Analysis
2.1.2 User Requirements
2.1.3 Non-Functional Requirements
2.1.4 System Requirements
2.2 Modules Description
2.3 Feasibility Study
2.3.1 Technical Feasibility
2.3.2 Operational Feasibility
2.3.3 Behavioral Feasibility
2.4 Process Model Used
2.5 Hardware and Software Requirements
2.6 SRS Specification
UNIT- 5 TESTING
UNIT- 6 IMPLEMENTATION
6.1 Implementation Process
6.2 Implementation Steps
6.3 Implementation Procedure
6.4 User Manual
UNIT- 7 CONCLUSION AND FUTURE ENHANCEMENTS
7.1 Conclusion
7.2 Future Enhancements
UNIT- 8 BIBLIOGRAPHY
8.1 Books Referred
The project will involve collecting and analyzing data on various factors
that are known to influence heart disease, such as age, gender, blood
pressure, cholesterol levels, smoking history, and family history. The
collected data will be preprocessed and cleaned to ensure that it is suitable
for use in the machine learning model.
Once the data is ready, the project will involve using various machine
learning algorithms such as logistic regression, decision trees, and random
forests to train and test the model. The performance of the model will be
evaluated using various metrics such as accuracy, precision, recall, and F1
score.
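The train-and-evaluate loop described above can be sketched as follows. This is a minimal sketch assuming scikit-learn; the synthetic data and three-feature layout are placeholders for the project's real dataset.

```python
# Sketch: train a logistic regression model and report the four metrics
# named above (accuracy, precision, recall, F1). The data here is
# synthetic and only stands in for the real patient dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
# Illustrative features: e.g. age, blood pressure, cholesterol (scaled).
X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
pred = model.predict(X_test)

print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
print("f1       :", f1_score(y_test, pred))
```

Decision trees and random forests slot into the same loop by swapping the estimator class, which is why these metrics make the models directly comparable.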
There are already several existing systems for heart disease prediction using
machine learning. Here are some examples:
1. The Cleveland Clinic Foundation Heart Disease Dataset: This dataset is a popular
benchmark dataset used for heart disease prediction. It contains data on 303 patients
and 14 attributes including age, sex, blood pressure, serum cholesterol levels, and
electrocardiographic readings. Machine learning models can be trained on this
dataset to predict the likelihood of heart disease.
2. Heart disease prediction using machine learning: This is a research paper that
proposes a machine learning-based system for heart disease prediction. The system
uses a combination of logistic regression and artificial neural networks to analyze
data from the Framingham Heart Study dataset. The system achieved an accuracy
of 85% in predicting the likelihood of heart disease.
These are just a few examples of existing systems for heart disease prediction using
machine learning. There are many other systems and research studies in this field,
and the technology is constantly evolving to improve the accuracy and effectiveness
of heart disease prediction.
There are several benefits of heart disease prediction using machine learning. Here
are a few of them:
1. Early detection: One of the primary benefits of heart disease prediction is early
detection. Machine learning algorithms can analyze large amounts of data to
identify patterns and predict the likelihood of heart disease in individuals. Early
detection allows healthcare professionals to take preventive measures and provide
timely treatment to prevent the progression of the disease.
2. Personalized treatment: Heart disease prediction using machine learning can help
healthcare professionals provide personalized treatment to individuals. By
analyzing data on various risk factors such as age, sex, blood pressure, cholesterol
levels, and family history, machine learning algorithms can identify the most
effective treatment options for each individual.
3. Cost savings: Heart disease is a major healthcare cost, and early detection and
prevention can help reduce healthcare costs. By predicting the likelihood of heart
disease in individuals and providing preventive measures, healthcare professionals
can reduce the need for costly treatments and hospitalizations.
4. Public health benefits: Heart disease prediction using machine learning can
also have broader public health benefits. By identifying patterns and trends in
heart disease risk factors, healthcare professionals can develop effective public
health strategies to prevent heart disease at the population level.
Overall, heart disease prediction using machine learning can lead to improved
patient outcomes, cost savings, and public health benefits. It is a promising area of
research that has the potential to make a significant impact on healthcare.
Requirement analysis:
Requirement analysis is a critical step in the development of any machine learning
project, including heart disease prediction. Here are some key requirements to
consider when conducting requirement analysis for a heart disease prediction
system:
1. Data collection: The first requirement is to collect relevant data. The data should
include various risk factors for heart disease, such as age, sex, blood pressure,
cholesterol levels, smoking history, and family history. The data should be large
enough and diverse enough to train and test the machine learning model
effectively.
2. User interface: The heart disease prediction system should have a user interface
that is easy to use and provides clear and concise information about the predicted
likelihood of heart disease. The user interface should also be secure and protect the
privacy of patient data.
Types of heart disease prediction:
There are several types of heart disease prediction that can be performed using
machine learning. Here are some examples:
These are just a few examples of the types of heart disease prediction that can be
performed using machine learning. The specific type of prediction that is used will
depend on the goals of the project and the data that is available.
Overall, these studies demonstrate that machine learning algorithms have high
accuracy in predicting heart disease and cardiovascular disease risk factors.
However, further research is needed to develop and validate these algorithms in
larger and more diverse populations.
TECHNIQUE:
1. Age
2. Gender
3. Blood pressure
4. Cholesterol levels
5. Diabetes status
6. Smoking status
The registration phase of a heart disease prediction project involves collecting and
organizing the data that will be used to develop the prediction model. Here are some
steps involved in the registration phase:
1. Define the research question: The first step is to define the research question or
hypothesis. For example, the research question may be to predict the likelihood of
heart disease in a specific population or to identify risk factors for heart disease.
2. Identify the study population: The next step is to identify the population that
will be included in the study. This may include patients from a specific clinic,
hospital, or geographic region.
3. Collect data: Data can be collected from various sources such as medical
records, laboratory results, imaging studies, and patient interviews. It is important
to ensure that data is collected in a standardized and consistent manner.
4. Preprocess data: Before the data can be used to develop a prediction model, it
may need to be cleaned, organized, and transformed. This may include
removing missing values, scaling variables, and encoding categorical variables.
5. Split data: The dataset is then split into a training set and a testing set. The
training set is used to develop the prediction model, while the testing set is used
to evaluate the performance of the model.
6. Store data: The data should be stored in a secure and organized manner to
ensure data privacy and facilitate analysis.
By completing the registration phase, researchers can ensure that the data used in
the heart disease prediction project is reliable and of high quality. This phase
lays the groundwork for developing an accurate and effective heart disease
prediction model.
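The clean-encode-split steps above can be sketched in a few lines of pandas. This is a minimal sketch: the column names and values are illustrative, not the project's actual schema.

```python
# Sketch of the data-preparation phase: drop missing values, encode a
# categorical variable, then split into training and testing sets.
# The tiny DataFrame stands in for the real collected data.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "age":    [63, 45, None, 52, 61, 39],
    "sex":    ["M", "F", "M", "F", "M", "F"],
    "chol":   [233, 250, 204, None, 236, 199],
    "target": [1, 0, 1, 0, 1, 0],
})

df = df.dropna()                          # remove rows with missing values
df = pd.get_dummies(df, columns=["sex"])  # encode the categorical variable

X = df.drop(columns="target")
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)
print(len(X_train), "training rows,", len(X_test), "testing rows")
```

On real data the same three calls (dropna, get_dummies, train_test_split) scale unchanged; only the DataFrame source differs.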
A login page for a heart disease prediction project would typically be used to
authenticate users and grant access to the system. Here are some key components
that may be included in a login page for a heart disease prediction system:
1. Username and password fields: Users would enter their unique username and
password to access the system. These credentials would be verified against a
database of registered users to ensure that only authorized users are able to
access the system.
2. Forgot password link: A "forgot password" link would allow users who have
forgotten their password to reset it using their registered email address or
phone number.
3. Login button: The login button would submit the user's credentials to the
system for authentication.
Overall, the login page of a heart disease prediction system would play a crucial
role in ensuring that only authorized users are able to access sensitive patient data
and that the system remains secure.
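The credential check described above can be sketched as follows. This is a simplification under stated assumptions: a real system would store salted password hashes (e.g. bcrypt) in a database, whereas this sketch uses an in-memory dict and plain SHA-256.

```python
# Minimal sketch of verifying a username/password against registered users.
# The dict stands in for the registered-users table; sha256 stands in for
# a proper salted password-hashing scheme.
import hashlib

def hash_password(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

# Hypothetical registered user for illustration.
users = {"doctor1": hash_password("s3cret")}

def login(username: str, password: str) -> bool:
    stored = users.get(username)
    return stored is not None and stored == hash_password(password)

print(login("doctor1", "s3cret"))  # correct credentials -> True
print(login("doctor1", "wrong"))   # wrong password -> False
```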
If the user selects the images in an incorrect sequence, he will be unable to log in
or access the system. To resist shoulder surfing, the images are shown in random
order; on each login, the order of the images differs from the previous login.
Figure 3 shows the login phase of our system; after a successful login, the system
redirects to the user's dashboard.
Sample coding:
[Screenshots of the "Heart Disease Prediction.ipynb" notebook open in Visual Studio Code; the recoverable notebook text follows.]

In this machine learning project, I have collected the dataset from Kaggle (https://www.kaggle.com/ronitf/heart-disease-uci) and I will be using Machine Learning to make predictions on whether a person is suffering from Heart Disease or not.

Import libraries
Let's first import all the necessary libraries. I'll use numpy and pandas to start with. For visualization, I will use the pyplot subpackage of matplotlib, rcParams to add styling to the plots, and rainbow for colors. For implementing Machine Learning models and processing of data, I will use the sklearn library.
For processing the data, I'll import a few libraries. To split the available dataset for testing and training, I'll use the train_test_split method. To scale the features, I am using StandardScaler.
Next, I'll import all the Machine Learning algorithms I will be using:
1. K Neighbors Classifier
2. Support Vector Classifier
3. Decision Tree Classifier
4. Random Forest Classifier

Import dataset
Now that we have all the libraries we will need, I can import the dataset and take a look at it. The dataset is stored in the file dataset.csv. I'll use the pandas read_csv method to read the dataset.
The dataset is now loaded into the variable dataset. I'll just take a glimpse of the data using the describe() and info() methods before I actually start processing and visualizing it.
Looks like the dataset has a total of 303 rows and there are no missing values. There are a total of 13 features along with one target value which we wish to find.
The scale of each feature column is different and quite varied as well. While the maximum for age reaches 77, the maximum of chol (serum cholesterol) is 564.
Taking a look at the correlation matrix above, it's easy to see that a few features have a negative correlation with the target value while some have a positive one. Next, I'll take a look at the histograms for each variable.

Data Processing
After exploring the dataset, I observed that I need to convert some categorical variables into dummy variables and scale all the values before training the Machine Learning models. First, I'll use the get_dummies method to create dummy columns for categorical variables.

dataset = pd.get_dummies(dataset, columns=['sex', 'cp', 'fbs', 'restecg', 'exang', 'slope', 'ca', 'thal'])

Now, I will use the StandardScaler from sklearn to scale my dataset.

Machine Learning
I'll now import train_test_split to split our dataset into training and testing datasets. Then, I'll import all Machine Learning models I'll be using to train and test the data.

K Neighbors Classifier
The classification score varies based on different values of neighbors that we choose. Thus, I'll plot a score graph for different values of K (neighbors) and check when I achieve the best score.
I have the scores for different neighbor values in the array knn_scores. I'll now plot it and see for which value of K I get the best scores.
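The notebook's scale-split-score loop can be sketched end to end as follows. Synthetic data stands in here for the real dataset.csv, but the shapes (303 rows, 13 features) and the K sweep match the notebook's description.

```python
# Sketch of the notebook pipeline: scale the features, split the data,
# then score KNeighborsClassifier for K = 1..20 and pick the best K.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(303, 13))            # synthetic stand-in: 303 rows, 13 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic target

X = StandardScaler().fit_transform(X)     # scale all features
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

knn_scores = []
for k in range(1, 21):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    knn_scores.append(knn.score(X_test, y_test))

best_k = int(np.argmax(knn_scores)) + 1
print("best K:", best_k, "score:", max(knn_scores))
```

Plotting knn_scores against K (as the notebook does with matplotlib) makes the same comparison visual; the loop above is the computation behind that plot.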
FRONT END:
Output screens:
[Screenshots of the web front end: the "Heart Disease Predictor" form with input fields such as Thallium Test and Max. Heart Rate, a PREDICTION button, and a "Get In Touch" section.]
Heart disease prediction testing refers to the process of evaluating the performance
of a heart disease prediction model. The purpose of testing is to determine the
accuracy, reliability, and effectiveness of the model in predicting the presence or
risk of heart disease.
There are several types of testing that can be conducted to evaluate a heart disease
prediction model, including:
Overall, heart disease prediction testing is an important step in the development and
validation of a heart disease prediction model. It helps ensure that the model is
accurate, reliable, and effective in predicting the presence or risk of heart disease,
and can ultimately help improve patient outcomes.
TYPES OF TESTING:
There are several types of testing that can be used to evaluate the performance of a
heart disease prediction model:
1. Internal validation: This involves testing the model on the same dataset that
was used to develop the model. Internal validation can provide an estimate of the
model's performance on new data. Techniques such as bootstrapping and k-fold
cross-validation can be used for internal validation.
2. External validation: This involves testing the model on a new dataset that was
not used to develop the model. External validation can provide an estimate of the
model's performance in a real-world setting.
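The internal-validation technique named above, k-fold cross-validation, can be sketched in a few lines. This is a minimal sketch assuming scikit-learn; the synthetic data stands in for the project's dataset.

```python
# Sketch of internal validation with 5-fold cross-validation: the data is
# split into 5 folds, and the model is trained on 4 folds and tested on
# the held-out fold, rotating through all 5.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)

scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print("fold accuracies:", scores)
print("mean accuracy  :", scores.mean())
```

External validation uses the same scoring call but on a dataset the model has never seen, which is why it is the stronger test of real-world performance.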
To create test cases and test reports for a graphical authentication password project,
you will need to consider the following:
⮚ Understand the project requirements: Before you can start creating test cases,
you need to understand the requirements of the graphical authentication
password project. This will help you determine the expected behavior of the
system and what needs to be tested.
⮚ Identify the use cases: Once you have a good understanding of the project
requirements, you can start to identify the use cases that need to be tested.
These are the scenarios that a user might encounter when using the system.
⮚ Create test cases: For each use case, you should create a set of test cases that
cover all possible scenarios. These test cases should include inputs,
expected outputs, and any conditions that need to be met for the test to be
considered successful.
⮚ Prioritize the test cases: You may not be able to test every scenario, so it's
important to prioritize the test cases based on their importance and potential
impact on the system.
⮚ Conduct the tests: Once you have created the test cases, you can start
conducting the tests. Make sure to record the results of each test, including
any issues that were encountered.
⮚ Create a test report: Once all the tests have been completed, you can create a
test report that summarizes the results. This report should include a summary
of the test cases, the results of each test, and any issues that were encountered.
Test Case 1: Login using correct graphical password
Inputs: User enters the correct graphical password.
Expected Output: User should be logged into the system.

Test Case 2: Login using incorrect graphical password
Inputs: User enters an incorrect graphical password.
Expected Output: User should not be logged into the system and an error message
should be displayed.

Test Case 3: Create a new graphical password
Inputs: User creates a new graphical password.
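The three test cases above can be expressed as runnable checks. The GraphicalAuth class below is a hypothetical stand-in for the real authentication module; its method names are illustrative, not the project's actual API.

```python
# The three test cases above as executable assertions against a minimal
# stand-in for the graphical-password module.
class GraphicalAuth:
    def __init__(self):
        self.passwords = {}  # username -> stored image sequence

    def create_password(self, user, sequence):
        self.passwords[user] = list(sequence)

    def login(self, user, sequence):
        # Login succeeds only if the image sequence matches exactly.
        return self.passwords.get(user) == list(sequence)

auth = GraphicalAuth()

# Test Case 3: create a new graphical password.
auth.create_password("alice", ["img3", "img1", "img7"])
assert "alice" in auth.passwords

# Test Case 1: login with the correct image sequence succeeds.
assert auth.login("alice", ["img3", "img1", "img7"])

# Test Case 2: login with an incorrect sequence fails.
assert not auth.login("alice", ["img1", "img3", "img7"])

print("all test cases passed")
```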
TEST REPORTS:
Overall Result:
2 of the 3 tests passed. An issue was identified in Test Case 2, where the error
message was not displayed.
IMPLEMENTATION:
System implementation is the stage of the project in which the theoretical design is
turned into a working system. If the implementation stage is not properly planned
and controlled, it can cause errors. It can therefore be considered the most crucial
stage in achieving a successful new system and in giving the user confidence that
the new system will work and be effective.
IMPLEMENTATION STEPS:
1. Data Collection: The first step is to collect data on patients with and without
heart disease. The data should include a range of clinical and demographic
variables, such as age, gender, blood pressure, cholesterol levels, and family
history of heart disease. This data can be collected from medical records or through
surveys.
2. Deployment: Once the model has been developed and validated, it can be
deployed as a web application or integrated into an electronic health record
system. The system should be easy to use and provide accurate predictions of heart
disease risk or presence.
CONCLUSION:
Overall, heart disease prediction is an important area of research and has the
potential to significantly improve the detection and management of heart disease,
which is one of the leading causes of death worldwide.
Reference websites: