Book Recommendation System
BACHELOR OF TECHNOLOGY
in
Computer Science and Engineering
by
Place: Meerpet
Date: DD/MM/YYYY
This is to certify that the main project report entitled PROJECT TITLE, being submitted by Mr./Ms. Student Name, bearing ROLL.NO: XXK9XA05XX in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Computer Science and Engineering, to the TKR College of Engineering and Technology, is a record of bonafide work carried out by him/her under my guidance and supervision.
Place: Meerpet
Date: DD/MM/YYYY
ABSTRACT
ACKNOWLEDGEMENT
LIST OF FIGURES
LIST OF TABLES
LIST OF SYMBOLS AND ABBREVIATIONS
1 INTRODUCTION
Motivation
Problem definition
Limitations of existing system
Proposed system
2 LITERATURE REVIEW
Review of Literature
3 REQUIREMENTS ANALYSIS
Functional Requirements
Non-Functional Requirements
4 DESIGN
DFDs and UML diagrams
Algorithm
Sample Data
5 CODING
Pseudo Code
6 IMPLEMENTATION and RESULTS
Explanation of Key functions
Method of Implementation
Forms
Output Screens
Result Analysis
7 TESTING and VALIDATION
Design of Test Cases and Scenarios
Validation
Use of Abbreviations
8 CONCLUSION
REFERENCES
ABSTRACT
Fraudsters are now more active in their attacks on credit card transactions than ever before.
With the advancement of data science and machine learning, various algorithms have been
developed to determine whether a transaction is fraudulent. This is where a machine learning
model comes in handy, allowing banks and major financial institutions to predict whether a
given transaction is legitimate or not.
We study the performance of two different machine learning models, Random Forest and
KNN (k-nearest neighbors), to classify, predict, and detect fraudulent credit card transactions.
We compare these models' performance and show that KNN (k-nearest neighbors) produces the
maximum accuracy in predicting and detecting fraudulent credit card transactions. Thus, we
recommend KNN as the most appropriate machine learning algorithm for predicting
and detecting fraud in credit card transactions. Credit card holders above 60 years were found
to be the most frequent victims of these fraudulent transactions.
ACKNOWLEDGEMENT
The satisfaction and euphoria that accompanies the successful completion of any task
would be incomplete without the mention of the people who made it possible and whose
encouragement and guidance have crowned my efforts with success.
I am also indebted to the Head of the Department, Dr. A. Suresh Rao, Professor,
Computer Science and Engineering, TKR College of Engineering and Technology, for his
support and guidance throughout my Thesis/Dissertation.
I extend my deep sense of gratitude to the Principal, Dr. D. V. Ravi Shankar, TKR
College of Engineering and Technology, for permitting me to undertake this Thesis/Dissertation.
Finally, I express my thanks to one and all that have helped me in successfully
completing this Thesis/Dissertation. Furthermore, I would like to thank my family and
friends for their moral support and encouragement.
Place: Meerpet
Date: DD/MM/YYYY
LIST OF FIGURES
LIST OF TABLES
Chapter 1
INTRODUCTION
Banks used to provide only in-person services to customers until 1996, when the first internet
banking application in the United States of America was introduced by Citibank and Wells Fargo
Bank. After the introduction of internet banking, the use of credit cards over the internet was
adopted. This has increased rapidly during the past decade, and services like e-commerce, online
payment systems, working from home, online banking, and social networking have also been
introduced and widely used. Due to this, fraudsters have intensified their efforts to target online
transactions utilizing various payment systems.
In recent times, improvements in digital technologies, particularly for cash transactions, have
changed the way people manage money in their daily activities. Many payment systems have
transitioned tremendously from physical pay points to digital platforms. To sustain productivity
and competitive advantage, the use of technology in digital transactions has been a game-changer,
and many economies have resorted to it. Hence, internet banking and other online
transactions have been a convenient avenue for customers to carry out their financial and other
banking transactions from the comfort of their homes or offices, particularly through the use of
credit cards.
A credit card is designed as a piece of plastic with personal information incorporated,
issued by financial service providers to enable customers to purchase goods and
services at their convenience worldwide. The unlawful use of another person's credit card to get
money or property, either physically or digitally, is known as credit card fraud. Events involving
credit card fraud occur often and end in enormous financial losses. It is simpler to commit fraud now
than it was in the past because an online transaction environment does not require the actual card;
the card's information suffices to complete a payment. It has been postulated that monetary policy,
as well as the business plans and methods used by big and small businesses alike, has been impacted
by the introduction of credit cards. A central task, therefore, is classifying
transactions as either fraudulent or not fraudulent.
The Bank of Ghana (BoG) reported an estimated loss of GH¢ 1.26 million ($250,000) in
2019 due to credit card fraud, which increased to GH¢ 8.20 million ($1.46 million) in 2020
(BoG, 2021). This represented an estimated 548.0% increase in losses in year-to-year terms. All
payment channels have experienced persistent increases in fraud in recent years, with digital
transactions seeing the largest rise. One such instance is payment fraud, which includes checks,
deposits, person-to-person (P2P) payments, wire transfers, automated clearing house (ACH) transfers,
internet payments, automated bill payments, debit and credit card transactions, and Automated Teller
Machine (ATM) transactions.
Following similar patterns, compliance and risk management services employed to identify
online fraud have shown a lot of interest in AI and machine learning models. Some of these
models include Random Forest and k-nearest neighbors. This has become necessary because credit
card fraud detection is a classification and prediction problem. Supervised machine learning
models have proved to be the best models for detecting fraud using the above-mentioned
algorithms. This study therefore seeks to compare two classification and prediction techniques,
namely Random Forest and k-nearest neighbors, in classifying and predicting financial
transactions. Credit acceptance remains a challenge for moneylenders, as it is difficult to forecast
whether consumers pose an acceptable credit risk and should be granted credit. This is
especially true in emerging nations, where established rules and models from industrialized
nations may not apply. Therefore, productive methods for automatic credit approval that can aid
bankers in analysing consumer credit must be investigated.
Each bank receives tens of thousands of credit card applications each month. Banks have to
manually skim through each of these applications, paying close attention to the applicant's details
to determine whether the applicant is to be granted a credit card or not. Due to the time-intensive
nature of this activity and the growing likelihood of error as the number of applications increases,
banks are seeking prediction-based algorithms that can do this task effectively and accurately.
PROPOSED SYSTEM:
1) Dataset: The dataset has been taken from Kaggle's Credit Card Approval Prediction
page. We have merged two datasets containing the application and credit records of the
applicants on the primary key 'ID'. After the merge, our columns contain a variety of information
about each applicant, through which the lending corporation can easily decide whether
to lend to a particular candidate.
2) Pre-processing: The dataset had column names in camel case, which we
converted into a more readable format.
3) In addition, integrating a K-Nearest Neighbors (KNN) algorithm into the
proposed system can offer several advantages; it is essential to consider factors such as
feature selection, distance metric choice, handling of missing values, and model
evaluation techniques to ensure optimal performance.
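The merge and column-name cleanup described above can be sketched as follows. The column names and the conversion helper are illustrative, since the report does not list the exact schema of the two Kaggle files:

```python
import re
import pandas as pd

def camel_to_snake(name):
    # Insert an underscore between a lowercase/digit and the following capital,
    # then lowercase: "AmtIncome" -> "amt_income", "ID" -> "id"
    return re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', name).lower()

# Illustrative stand-ins for the two Kaggle files (application and credit records)
application = pd.DataFrame({"ID": [1, 2, 3], "AmtIncome": [45000, 60000, 30000]})
credit = pd.DataFrame({"ID": [1, 2, 3], "Status": ["C", "0", "1"]})

# Merge on the primary key 'ID', as described in step 1
merged = application.merge(credit, on="ID", how="inner")

# Step 2: convert camel-case column names into a more readable format
merged.columns = [camel_to_snake(c) for c in merged.columns]
print(list(merged.columns))  # ['id', 'amt_income', 'status']
```

An inner merge keeps only applicants present in both files; a left merge could be used instead if application records without credit history should be retained.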
EXISTING SYSTEM:
The existing work compares the prediction accuracy of logistic regression, random forest, and decision tree
classifiers in the credit card approval process, with balanced accuracy as the
performance criterion. The dataset contains two types of features, numerical and categorical.
Some of them include debt, age, income, and education. Based on the model
implementation, random forest showcased the best prediction performance among
the models, with a balanced accuracy of around 98.9%. However, the performance of
each model would fluctuate slightly depending on the data processing, the parameter tuning
process and the data features. One limitation of this paper is that further
comprehensive factors for assessing prediction performance, such as computational
efficiency, reject inference and outlier handling, are not included.
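The comparison described above can be sketched with scikit-learn's balanced accuracy metric. The synthetic data and default model settings below are illustrative, not the compared paper's actual setup:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import balanced_accuracy_score

# Synthetic, imbalanced stand-in for a credit approval dataset
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "decision_tree": DecisionTreeClassifier(random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    # Balanced accuracy averages recall over both classes, so it is not
    # inflated by the majority (approved) class
    scores[name] = balanced_accuracy_score(y_test, model.predict(X_test))
print(scores)
```

Because balanced accuracy weights each class equally, it is the appropriate criterion when approvals heavily outnumber rejections, as ordinary accuracy would look high even for a model that approves everyone.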
Chapter 2
LITERATURE REVIEW
Chapter 3
REQUIREMENTS ANALYSIS
The project involved analyzing the design of a few applications so as to make
the application more user friendly. To do so, it was important to keep the
navigation from one screen to another well ordered while at the same time
reducing the amount of typing the user needs to do. In order to make the
application more accessible, the browser version had to be chosen so that it is
compatible with most browsers.
REQUIREMENT SPECIFICATION
Functional Requirements
Security:
Data Encryption: All sensitive data transmitted between the user, merchant, and financial
institutions should be encrypted using secure protocols to prevent interception and tampering.
Access Control: Implement stringent access controls to ensure that only authorized personnel
can access sensitive systems and data related to credit card transactions.
Authentication: Employ multi-factor authentication mechanisms to verify the identity of users
and mitigate unauthorized access.
Reliability:
System Availability: Ensure high availability of the credit card transaction system to minimize
downtime and ensure that legitimate transactions can be processed without interruption.
Fault Tolerance: Implement redundancy and failover mechanisms to ensure that the system can
continue to operate in the event of hardware or software failures.
Data Integrity: Guarantee the integrity of transaction data throughout the entire process to
prevent unauthorized modifications or tampering.
Performance:
Response Time: Ensure that transaction processing times meet acceptable thresholds to provide a
seamless user experience and minimize delays for both merchants and customers.
Throughput: Optimize system performance to support a high volume of concurrent transactions
without degradation in speed or reliability.
Scalability:
Growth Planning: Ensure that the system architecture is designed to accommodate future growth
in transaction volume and user base without compromising performance or security.
Software Requirements
For developing the application the following are the Software Requirements:
1. Python
Operating Systems supported
1. Windows 7
2. Windows XP
3. Windows 8
Hardware Requirements
For developing the application the following are the Hardware Requirements:
Processor: Pentium IV or higher
RAM: 256 MB
Space on Hard Disk: minimum 512MB
Chapter 4
DESIGN
Accuracy, as a measurement metric, measures the ratio of the total number of correct
predictions of fraud to the total number of predictions (both fraud and not fraud)
made by the model [43]. It is calculated as

Accuracy = (TN + TP) / (TN + TP + FN + FP)
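As a minimal check of the formula, accuracy can be computed directly from the four confusion-matrix counts; the counts below are invented for illustration:

```python
def accuracy(tn, tp, fn, fp):
    # Accuracy = (TN + TP) / (TN + TP + FN + FP)
    return (tn + tp) / (tn + tp + fn + fp)

# Hypothetical counts for a fraud classifier on 1000 transactions
print(accuracy(tn=900, tp=50, fn=30, fp=20))  # 950 correct out of 1000 -> 0.95
```

Note that with heavily imbalanced fraud data, a high accuracy can coexist with many missed frauds (FN), which is why the report also discusses other considerations alongside this metric.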
In k-NN classification, the output is a class membership. An object is classified by a
plurality vote of its neighbors, with the object being assigned to the class most
common among its k nearest neighbors (k is a positive integer, typically
small). If k = 1, then the object is simply assigned to the class of that single nearest
neighbor.
In k-NN regression, the output is the property value for the object. This value is
the average of the values of its k nearest neighbors.
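Both behaviours can be demonstrated with a toy scikit-learn sketch; the points and values are invented for illustration:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

# Four 1-D points: two near 0 (class 0), two near 10 (class 1)
X = np.array([[0.0], [1.0], [10.0], [11.0]])
y_class = np.array([0, 0, 1, 1])
y_value = np.array([5.0, 7.0, 50.0, 70.0])

# k = 1: the object takes the class of its single nearest neighbor
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y_class)
print(clf.predict([[0.4]]))   # nearest neighbor is 0.0 -> class 0

# k-NN regression: output is the average of the k nearest neighbors' values
reg = KNeighborsRegressor(n_neighbors=2).fit(X, y_value)
print(reg.predict([[0.5]]))   # average of 5.0 and 7.0 -> 6.0
```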
ARCHITECTURE
UML DIAGRAMS
The goal is for UML to become a common language for creating models of
object-oriented computer software. In its current form UML comprises two
major components: a meta-model and a notation. In the future, some form of
method or process may also be added to, or associated with, UML.
GOALS:
CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language
(UML) is a type of static structure diagram that describes the structure of a system
by showing the system's classes, their attributes, operations (or methods), and the
relationships among the classes. It shows which class contains which information.
SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction
diagram that shows how processes operate with one another and in what order. It is
a construct of a Message Sequence Chart. Sequence diagrams are sometimes called
event diagrams, event scenarios, and timing diagrams.
ACTIVITY DIAGRAM:
Activity diagrams are graphical representations of workflows of stepwise activities
and actions with support for choice, iteration and concurrency. In the Unified
Modeling Language, activity diagrams can be used to describe the business and
operational step-by-step workflows of components in a system. An activity
diagram shows the overall flow of control.
Table 4.1
Basic Statistics for the character variables.

Name                      | Count  | Unique | Top                 | Frequency
Transaction date and time | 555719 | 544760 | 2020-12-19 16:02:22 | 4
Merchant                  | 555719 | 693    | fraud_Kilback LLC   | 1859
Category                  | 555719 | 14     | gas_transport       | 56370
First                     | 555719 | 341    | Christopher         | 11443
Last                      | 555719 | 471    | Smith               | 12146
Gender                    | 555719 | 2      | F                   | 304886
Street                    | 555719 | 924    | 444 Robert Mews     | 1474
City                      | 555719 | 849    | Birmingham          | 2423
State                     | 555719 | 50     | TX                  | 40393
Job                       | 555719 | 478    | Film/video editor   | 4119
Date of birth             | 555719 | 910    | 1977-03-23          | 2408
Transaction number        | 555719 | 555719 | 2da90c7d74bd46a     | 1
Table 4.2
Basic Statistics for the numeric variables.
Name Count Mean Std Min 25% 50% 75% Max
Unique identifier 555719 277859 160422.4 0 138929.5 277859 416788.5 555718
Credit card number of customers 555719 4178387 1309837 6041621 1800429 3521417 4635331 4992346
Amount 555719 69.39 156.75 1 9.63 47.29 83.01 22768.11
Zip 555719 48842.63 26855.28 1257 26292 48174 72011 99921
Latitude 555719 38.54 5.061 20.03 34.67 39.37 41.89 65.69
Longitude 555719 −90.23 13.72 −165.67 −96.8 −87.48 −80.18 −67.95
City population 555719 88221.89 300390.9 23 741 2408 19685 2906700
Time (s) 555719 1380679 5201104 1371817 1376029 1380762 1385867 1388534
Merchant latitude 555719 38.54 5.1 19.03 34.76 39.38 41.95 66.68
Merchant longitude 555719 −90.23 13.73 −166.67 −96.91 −87.45 −80.27 −66.95
Fraud status 555719 0.0039 0.062 0 0 0 0 1
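Summary tables like 4.1 and 4.2 can be produced directly with pandas' describe() method, which reports count/unique/top/freq for character columns and count/mean/std/quartiles for numeric ones. The tiny frame below is an illustrative stand-in for the actual transaction data:

```python
import pandas as pd

# Illustrative stand-in for the transaction data behind Tables 4.1 and 4.2
df = pd.DataFrame({
    "amount": [9.63, 47.29, 83.01, 22768.11],
    "category": ["gas_transport", "grocery", "gas_transport", "travel"],
})

# Numeric column: count, mean, std, min, quartiles, max (as in Table 4.2)
print(df["amount"].describe())

# Character column: count, unique, top, freq (as in Table 4.1)
print(df["category"].describe())
```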
Chapter 5
CODING
import tkinter
from tkinter import filedialog
from tkinter.constants import END
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

main = tkinter.Tk()
main.title("Machine Learning Algorithm for Detecting and Predicting Fraud in Credit Card Transactions")  # designing main screen
main.geometry("1300x1200")
global filename
global cls
global X, Y, X_train, X_test, y_train, y_test
global random_acc  # all global variable names are defined in the lines above
global clean
global attack
global total
def generateModel():  # method to read dataset values which contain all feature data
    global X, Y, X_train, X_test, y_train, y_test
    train = pd.read_csv(filename)
    X, Y, X_train, X_test, y_train, y_test = traintest(train)
    text.insert(END, "Train & Test Model Generated\n\n")
    text.insert(END, "Total Dataset Size : " + str(len(train)) + "\n")
    text.insert(END, "Split Training Size : " + str(len(X_train)) + "\n")
    text.insert(END, "Split Test Size : " + str(len(X_test)) + "\n")
def runRandomForest():
    headers = ["Time", "V1", "V2", "V3", "V4", "V5", "V6", "V7", "V8", "V9",
               "V10", "V11", "V12", "V13", "V14", "V15", "V16", "V17", "V18",
               "V19", "V20", "V21", "V22", "V23", "V24", "V25", "V26", "V27",
               "V28", "Amount", "Class"]
    global random_acc
    global cls
    global X, Y, X_train, X_test, y_train, y_test
    cls = RandomForestClassifier(n_estimators=50, max_depth=2, random_state=0,
                                 class_weight='balanced')
    cls.fit(X_train, y_train)
    text.insert(END, "Prediction Results\n\n")
    prediction_data = prediction(X_test, cls)
    random_acc = cal_accuracy(y_test, prediction_data, 'Random Forest Accuracy')
    # str_tree = export_graphviz(cls, out_file=None, feature_names=headers,
    #                            filled=True, special_characters=True,
    #                            rotate=True, precision=0.6)
    # display.display(str_tree)

def runKNeighborsClassifier():
    headers = ["Time", "V1", "V2", "V3", "V4", "V5", "V6", "V7", "V8", "V9",
               "V10", "V11", "V12", "V13", "V14", "V15", "V16", "V17", "V18",
               "V19", "V20", "V21", "V22", "V23", "V24", "V25", "V26", "V27",
               "V28", "Amount", "Class"]
    global knn_acc
    global cls
    global X, Y, X_train, X_test, y_train, y_test
    cls = KNeighborsClassifier(n_neighbors=50, weights='distance',
                               algorithm='auto', p=2, metric='minkowski')
    cls.fit(X_train, y_train)
    text.insert(END, "Prediction Results\n\n")
    prediction_data = prediction(X_test, cls)
    knn_acc = cal_accuracy(y_test, prediction_data, 'KNN Accuracy')
def predicts():
    global clean
    global attack
    global total
    clean = 0
    attack = 0
    text.delete('1.0', END)
    filename = filedialog.askopenfilename(initialdir="dataset")
    test = pd.read_csv(filename)
    test = test.values[:, 0:29]
    total = len(test)
    text.insert(END, filename + " test file loaded\n")
    y_pred = cls.predict(test)
    for i in range(len(test)):
        if str(y_pred[i]) == '1.0':
            attack = attack + 1
            text.insert(END, "X=%s, Predicted = %s" % (test[i], 'Contains Fraud Transaction Signature') + "\n\n")
        else:
            clean = clean + 1
            text.insert(END, "X=%s, Predicted = %s" % (test[i], 'Transaction Contains Cleaned Signatures') + "\n\n")

def graph():
    height = [total, clean, attack]
    bars = ('Total Transactions', 'Normal Transaction', 'Fraud Transaction')
    y_pos = np.arange(len(bars))
    plt.bar(y_pos, height)
    plt.xticks(y_pos, bars)
    plt.show()
modelButton.config(font=font1)
main.config(bg='LightSkyBlue')
main.mainloop()
from flask import Flask, render_template, request, redirect, url_for

app = Flask(__name__)

@app.route("/about")
def about():
    return render_template('about.html')

@app.route("/contact")
def contact():
    return render_template('contact.html')
@app.route("/register", methods=["POST", "GET"])
def register():
    if request.method == "POST":
        name = request.form["firstname"]
        lname = request.form["lastname"]
        uemail = request.form["email"]
        password = request.form["password"]
        import sqlite3
        con = sqlite3.connect("test.db")
        cur = con.cursor()
        table = ("CREATE TABLE IF NOT EXISTS user (name varchar(255), lastname varchar(255), "
                 "email varchar(255), password varchar(255))")
        cur.execute(table)
        # Parameterized query avoids SQL injection
        cur.execute("SELECT email FROM user WHERE email=?", (uemail,))
        result = cur.fetchone()
        if result is not None:
            return "email already registered"
        else:
            cur.execute("INSERT INTO user (name, lastname, email, password) VALUES (?,?,?,?)",
                        (name, lname, uemail, password))
            con.commit()
            con.close()
            return "successfully registered"
    return render_template("register.html")
@app.route("/login", methods=["POST", "GET"])
def login():
    if request.method == "POST":
        uemail = request.form["email"]
        upassword = request.form["password"]
        import sqlite3
        con = sqlite3.connect("test.db")
        cur = con.cursor()
        # Parameterized query avoids SQL injection
        cur.execute("SELECT * FROM user WHERE email=? AND password=?",
                    (uemail, upassword))
        result = cur.fetchone()
        if result is None:
            return "enter valid details"
        else:
            return redirect(url_for("passhome"))
    return render_template("login.html")
@app.route("/logout")
def logout():
return redirect(url_for("main"))
@app.route("/home",methods=["POST","GET"])
def home():
return render_template('home.html')
##########
if __name__ == '__main__':
    # DEBUG is SET to TRUE. CHANGE FOR PROD
    app.run(debug=True)
Pseudo Code
1. Load dataset
2. Preprocess data:
   - Remove irrelevant features (e.g., customer name, transaction ID)
   - Handle missing values (e.g., imputation, removal)
   - Normalize/standardize features (e.g., scaling)
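The preprocessing steps above can be sketched as follows; the column names and values are hypothetical, since the pseudo code does not fix a schema:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical raw data with an irrelevant feature and a missing value
df = pd.DataFrame({
    "transaction_id": ["t1", "t2", "t3"],
    "amount": [12.5, None, 99.0],
    "city_pop": [2408, 19685, 741],
})

# Remove irrelevant features (e.g. the transaction ID)
df = df.drop(columns=["transaction_id"])

# Handle missing values (here: imputation with the column mean)
df["amount"] = df["amount"].fillna(df["amount"].mean())

# Normalize/standardize features (zero mean, unit variance)
scaled = StandardScaler().fit_transform(df)
print(scaled.shape)  # (3, 2)
```

Scaling matters particularly for KNN, whose distance computations would otherwise be dominated by large-magnitude features such as city population.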
Graph:
MODULES
1.Load Dataset:
Load the data set using pandas' read_csv() method. Here we read the CSV
data and store it in a variable.
Split the data set into two parts: one is the train data set and the other is the
test data set. Here we also remove missing values from the dataset.
The fit method trains the model on the train data set; 80% of the data from the
dataset is used for training the algorithm.
The test data set is used to test the algorithm; 20% of the data from the dataset
is used for testing the algorithm.
The predict() method predicts the results. In this step we predict whether a
transaction is fraudulent or not.
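The load/split/fit/predict flow described above can be sketched as follows; the synthetic data and model settings are illustrative stand-ins for the project's CSV and classifier:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the dataset loaded with pandas.read_csv()
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# 80% of the data trains the algorithm, 20% tests it
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)          # train on the training set
predictions = model.predict(X_test)  # predict on the held-out test set
print(len(X_train), len(X_test))     # 400 100
```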
Chapter 6
IMPLEMENTATION AND RESULTS
Index page:
Register Page:
Login Page:
Screen:
After uploading the dataset we get the below screen.
Now click on 'Generate Train & Test Model' to generate the training model for
the Random Forest classifier.
In the above screen, after generating the model, we can see the total records
available in the dataset and how many records the application uses for training
and how many for testing. Now click on the 'Run Random Forest Algorithm' button to
generate the Random Forest model on the train and test data, and on the 'KNN
Algorithm' button to generate the KNN model on the train and test data.
In the above screen we can see Random Forest generates 99.78% accuracy
and KNN 99.83% accuracy while building the model on the train and test data.
Now click on the 'Detect Fraud From Test Data' button to upload test data and to
predict whether the test data contains normal or fraud transactions.
In the above screen we are uploading the test dataset, and after uploading the
test data we get the below prediction details.
In the above screen, beside each test record the application displays output
indicating whether the transaction contains cleaned or fraud signatures. Now click on the
'Clean & Fraud Transaction Detection Graph' button to see the total test
transactions with clean and fraud signatures in graphical format. See the below
screen.
Result Analysis:
In the above graph we can see the total test data and the number of normal and fraud
transactions detected. The x-axis represents the transaction type and the y-axis
represents the count of clean and fraud transactions.
Chapter 7
TESTING AND VALIDATION
Unit testing is done on individual modules as they are finished and become
executable. It is restricted to the designer's requirements. It centers testing
on the function or software module and concentrates on the internal
processing logic and data structures. It is simplified when a module is
designed with high cohesion:
• Reduces the number of test cases
• Allows errors to be more easily predicted and revealed
Black Box Testing
It is otherwise known as functional testing: a software testing technique whereby
the internal workings of the item being tested are not known to the tester. For
instance, in a black box test on a software design the tester only knows the
inputs and what the expected outcomes should be, not how the
program arrives at those outputs. The tester never examines the
programming code and does not need any further knowledge of the program other
than its specifications. In this technique test cases are generated as
input conditions that fully exercise every functional requirement
of the program. This testing is used to find errors in the
following categories:
• Incorrect or missing functions
• Interface errors
• Errors in data structures or external database access
• Performance errors
• Initialization and termination errors
In this testing only the output is checked for correctness.
White Box Testing
It is otherwise known as glass box, structural, clear box and open box testing:
a software testing technique whereby explicit knowledge of the internal workings of
the item being tested is used to select the test data. Unlike
black box testing, white box testing uses specific knowledge of programming
code to examine outputs. The test is accurate only if the tester knows what the
program is supposed to do. He or she can then check whether the program
diverges from its intended goal. White box testing does not account for errors
caused by omission, and all visible code should also be readable. For a
complete software examination, both white box and black box tests are required.
• Execute every logical decision on its true and false sides.
Integration Testing
Integration testing ensures that software and subsystems work together as a
whole. It tests the interfaces of all the modules to ensure that
the modules behave properly when integrated together. It is defined as
a systematic procedure for constructing the software architecture. While
integration is taking place, tests are conducted to uncover errors associated with interfaces. Its
objective is to take unit tested modules and build a program structure based on
the prescribed design.
Two approaches of integration testing:
• Non-incremental integration testing
• Incremental integration testing
System Testing
System testing tests the entire system from the client's perspective, with the
assistance of the specification document. It does not require any internal
knowledge of the system such as the design or the structure of the code.
Acceptance Testing
Acceptance testing is a testing technique performed to determine whether the
software system has met the requirement specifications. The main purpose
of this test is to evaluate the system's compliance with the business
requirements and verify whether it has met the required criteria for
delivery to end users. It is a pre-delivery testing in which the entire
system is tested at the client's site on real data to find errors.
The acceptance test cases are executed against the test
data or using an acceptance test script, and then the
results are compared with the expected ones.
The acceptance test activities are carried out in phases. First,
the basic tests are executed, and if the test results are satisfactory then the
execution of more complex scenarios is carried out.
7.3 TEST APPROACH
A test approach is the implementation of the test strategy for a project; it defines how
testing will be done. The choice of test approach or test technique is one of
the most powerful factors in the success of the test effort
and the accuracy of the test plans and estimates.
Testing can be done in two ways:
• Bottom-up approach
• Top-down approach
Bottom-up Approach
Testing is performed starting from the smallest and lowest level
modules and proceeding one at a time. In this approach testing is conducted from
sub module to main module; if the main module is not yet developed, a
temporary program called a DRIVER is used to simulate the main module. When
bottom level modules are tested, attention turns to those on the
next level that use the lower level ones; they are tested individually and
then linked with the previously examined lower level modules.
Top-down Approach
In this approach testing is conducted from main module to sub module.
If the sub module is not yet developed, a temporary program called a
STUB is used to simulate the sub module. This type of testing starts from upper
level modules. Since the detailed activities usually performed in the
lower level routines are not provided, stubs are written. A stub is a module shell
called by an upper level module that, when reached properly, will return a
message to the calling module indicating that proper interaction
occurred.
7.4 VALIDATION
Validation is the process of evaluating software during the development process or
at the end of the development process to determine whether it satisfies specified
business requirements. Validation testing ensures that the product actually meets
the client's needs. It can also be defined as demonstrating that the product
fulfills its intended use when deployed in an appropriate environment.
The system has been tested and implemented successfully, thereby
ensuring that all the requirements as listed in the software
requirements specification are completely fulfilled.
7.5 Test Cases
Test cases involve a set of steps, conditions and inputs
that can be used while performing testing tasks. The main
purpose of this activity is to ensure whether the software passes or fails in terms
of functionality and other aspects. The process of developing test cases can
also help find problems in the requirements or design of an application.
A test case acts as the starting point for test execution, and after
applying a set of input values, the application has a
definitive outcome and leaves the system at some end point, also known as
the execution post condition.
Chapter 8
CONCLUSION
In this project, we have applied various machine learning methods to predict whether a credit
card will be approved for an individual or not. Several parameters were taken into consideration,
as these parameters make the model more effective and help institutions make better decisions to
avoid fraud and losses. We applied many data pre-processing techniques, since a good amount of
data pre-processing contributes effectively to the performance of traditional
machine learning models. During exploratory data analysis, we plotted many graphs and
charts to study the dataset deeply and gain a better understanding of it. This
was done so that we could decide which models to apply that would perform well on this dataset
and correctly predict whether to approve a credit card or not. This prediction system can be
helpful to various banks, as it makes their task easier and increases efficiency compared to the
manual system currently used by many banks, and it is cost effective.
REFERENCES