
UNIVERSITY DEPARTMENT
RAJASTHAN TECHNICAL UNIVERSITY, KOTA

A Presentation on Phishing Website Detection using Machine Learning Algorithms

Submitted by:
Aadish Jain (21/176)
Aditi Shankhwal (21/181)

Submitted to:
Dr. Harish Sharma Sir
About Paper

Author: Rishikesh Mahajan, Irfan Siddavatam
Publication: International Journal
Published on: October 2018
Content
1. Abstract
2. Introduction
3. Methods
4. Results
5. Discussion
6. Conclusion
01 Abstract

1. Phishing is one of the simplest ways to obtain sensitive information from unsuspecting users. The aim of phishers is to acquire critical information such as usernames, passwords, and bank account details.
2. Cybersecurity professionals are now looking for trustworthy and reliable techniques for detecting phishing websites.
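The slides do not show the paper's implementation, so below is a minimal illustrative sketch of feature-based phishing-website detection with scikit-learn. The feature set, the toy data, and the choice of a random forest classifier are assumptions for illustration only, not the paper's actual pipeline.

```python
# Minimal sketch of ML-based phishing-website detection, assuming a
# feature-based approach. Feature names and data here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical features extracted from each URL/page:
# [url_length, has_ip_address, num_dots, uses_https, domain_age_days]
X = np.array([
    [54, 0, 3, 1, 2100],   # legitimate
    [112, 1, 7, 0, 3],     # phishing
    [33, 0, 2, 1, 4500],   # legitimate
    [98, 1, 6, 0, 1],      # phishing
] * 25)                    # repeated to form a workable toy sample
y = np.array([0, 1, 0, 1] * 25)  # 0 = legitimate, 1 = phishing

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("toy accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```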
02 Introduction

1. The term phishing refers to an attack that uses an email program or website to trick users into revealing sensitive information that can be used for criminal purposes.
2. In general, hackers conduct phishing attacks using email messages or websites that appear to come from a genuine source such as a post office, bank, or online service.
3. Experts can identify fake websites, but not all users can, and such users become victims of phishing attacks. The attacker's main aim is to steal bank account credentials.
4. Hinging on trust and comprehension: for optimal performance, doctors' trust in and understanding of these models are pivotal.
03 Types

3.1 Spear Phishing:-
Spear phishing is a phishing method that targets specific individuals or groups within an organization. It is a potent variant of phishing, a malicious tactic which uses emails, social media, instant messaging, and other platforms to get users to divulge personal information or perform actions that cause harm.
03 Types

3.2 Whaling Phishing:-
Whaling is a common cyber attack that occurs when an attacker uses spear-phishing methods to go after a large, high-profile target, such as a CEO or manager.
03 Types

3.3 Vishing:-
Vishing (voice or VoIP phishing) is a type of cyber attack that uses voice and telephony technologies to trick targeted individuals into revealing sensitive data to attackers posing as trusted parties, such as a bank employee or insurance agent.
03 Types

3.4 Smishing:-
Smishing is a social engineering attack that uses fake mobile text messages to trick people into downloading malware, sharing sensitive information, or sending money to cybercriminals. The term “smishing” is a combination of “SMS” (“short message service,” the technology behind text messages) and “phishing.”
03 Types

3.5 E-mail Phishing:-
Email phishing is the most common type of phishing, and it has been in use since the 1990s. Hackers send these emails to any email addresses they can obtain. The email usually informs you that your account has been compromised and that you need to respond immediately by clicking a provided link. These attacks are usually easy to spot, as the language in the email often contains spelling and/or grammatical errors.
Some emails are difficult to recognize as phishing attacks, especially when the language and grammar are more carefully crafted. Checking the email source and the link you're being directed to for suspicious language can give you clues as to whether the source is legitimate.
Fig. 2. User interface of CORONET, available at https://coronet.manchester.ac.uk. The user inputs the values manually and is warned if values are out of the expected range (Supp. Fig. S.7). The 'Calculate NEWS2' button leads to a pop-up window with a calculator (Supp. Fig. S.8); the 'Convert' button refers to http://unitslab.com/. After pressing 'Calculate', the output is generated and presented to the user in the same window, below the 'Patient Details' field. Of note, during the experiment the participants did not directly interact with the interface; static images of the tool's output were provided.
03 Methods

3.2 Clinical setting & supporting model:-

Recommendation (R):-

CORONET Model and Score:
The CORONET model advises clinicians using the CORONET score, helping them decide actions for cancer patients with COVID-19.
Score Interpretation:
The score is shown on a colour bar from 0 to 3.
< 1.0: discharge recommended.
≥ 1.0: hospital admission recommended.
≥ 2.3: high risk of severe COVID-19.
Model Type and Training:
A regression random forest model, trained on patient data categorized by outcome: discharged, admitted without oxygen, admitted with oxygen, and death.
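As a rough illustration of the model type described above, the sketch below trains a regression random forest on a hypothetical 0-3 ordinal outcome and maps the predicted score to the recommendation bands from the slide. The features, data, and training setup are assumptions; this is not the actual CORONET implementation.

```python
# Sketch of a CORONET-style regression random forest: outcomes are encoded
# on an ordinal 0-3 scale (0 = discharged, 1 = admitted without oxygen,
# 2 = admitted with oxygen, 3 = death, as per the slide's categories) and
# the predicted score is mapped to the slide's recommendation bands.
# Features and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))   # e.g. NEWS2, CRP, albumin, age, platelets (assumed)
y = np.clip(X[:, 0] + rng.normal(scale=0.5, size=200) + 1.5, 0, 3)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def recommend(score: float) -> str:
    """Map a 0-3 score to the bands shown on the colour bar."""
    if score < 1.0:
        return "consider discharge"
    if score < 2.3:
        return "consider admission"
    return "high risk of severe COVID-19"

score = float(model.predict(X[:1])[0])
print(round(score, 2), "->", recommend(score))
```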
03 Methods

3.2 Clinical setting & supporting model:-

Model explanation (Exp):-

The model's explanation comprises two visuals:
Scatter plot: maps the model's predictions (x-axis) against patients' actual outcomes (colour), revealing the model's accuracy and errors.
Bar plot: displays the features influencing the specific patient's prediction; bar length signifies feature importance, and bar colour indicates whether the feature pushes towards a discharge or admission recommendation.
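The slides do not name the attribution method behind the bar plot; SHAP values over a tree ensemble are one common way to obtain signed per-patient feature contributions, so the sketch below assumes the shap package, with hypothetical features and data.

```python
# Sketch of the bar-plot style explanation: per-patient feature
# contributions with sign (towards discharge vs. admission). SHAP is
# assumed here as the attribution method; the slides do not name one.
import matplotlib.pyplot as plt
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
features = ["NEWS2", "CRP", "albumin", "age", "platelets"]  # hypothetical
X = rng.normal(size=(200, len(features)))
y = np.clip(X[:, 0] - 0.5 * X[:, 2] + 1.5, 0, 3)
model = RandomForestRegressor(n_estimators=100, random_state=1).fit(X, y)

patient = X[:1]
contrib = shap.TreeExplainer(model).shap_values(patient)[0]

# Bar length = importance; colour = direction (red pushes towards a higher
# score, i.e. admission; blue towards discharge).
order = np.argsort(np.abs(contrib))
plt.barh(np.array(features)[order], contrib[order],
         color=["red" if c > 0 else "blue" for c in contrib[order]])
plt.xlabel("contribution to predicted score")
plt.tight_layout()
plt.show()
```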
03 Methods

3.3 Study Design:-

In this study, each participant used the tool under two conditions. In the first, they were given information about a patient's condition along with a recommendation and a score from 0 to 3 (CORONET score) indicating the severity of the predicted outcome. In the second, they were provided with the same recommendation and score plus an explanation of how the tool arrived at that score (CORONET score plus explanation).
This within-subject approach minimizes the effect of differences in participants' clinical and technological backgrounds or previous experience. All participants saw the same patient cases in the same order.
The study consisted of six stages.
03 Methods

3.3 Study Design:-

Artificial Patient Cases:
Ten cases were created and reviewed by oncology experts.
Challenging Cases:
Two cases were designed to test the model's limitations.
Scenario the Model Does Not Consider:
These cases were designed so that the model might incorrectly recommend discharge because it does not account for oncological emergencies.
Purpose of Challenging Cases:
To check whether healthcare professionals recognize the need for admission despite the model's suggestion.
Testing Automation Bias:
To see whether professionals over-rely on the model's recommendation.
03 Methods

3.4 Selection of Participants:-

Ethical Approval
Participant Criteria
Included Healthcare Roles
Exclusion Criteria
04 Results

4.1 Participants:-
In the experiment, 23 healthcare professionals took part, and they had different
levels of experience and expertise, as shown in Table 1. When asked about their
knowledge in managing patients with COVID-19, the median score was five out of
seven, indicating a moderate to high level of knowledge. Most of the healthcare
professionals felt very comfortable using new technologies.
04 Results

4.2 HCPs want to know both contributing features and uncertainty:-

01 Expectations Studied:
Researchers assessed HCPs' expectations of ML-based decision support.
02 HCPs' Key Aspects of Interest:
The majority (87%) were interested in understanding why the ML model makes its suggestions for a patient, and 91% valued explanations for individual recommendations.
03 Importance of Uncertainty:
HCPs crucially wanted to grasp the uncertainty of recommendations. Understanding the features guiding recommendations and the uncertainty level was valued more than the model's technical workings.
04 Results

4.3 Visual explanations were easy to interpret:-

Researchers delved into how healthcare professionals (HCPs) grasped the model's visual explanations and whether these explanations impacted their decision-making.
Ease of Understanding the Colour Bar:
83% of HCPs easily comprehended the colour bar indicating a score on a 0-3 scale.
Influence on Decision-Making:
Surprisingly, 48% of HCPs based decisions solely on the provided score, even without extra explanation.
High Positive Response:
Among these HCPs, overall optimism about the ML-based decision support system (DSS) was significantly high.
04 Results

4.4 Explanations did not significantly impact decision-making:-

The researchers compared healthcare professionals' (HCPs') responses in two distinct scenarios: one involving only the model's score without an explanation (CS), and one including both the model's score and an explanation (CS+Exp).
Explanations and Attitude:
The addition of explanations did not yield a statistically significant shift in HCPs' attitude towards the model.
Perceived Tool Helpfulness:
Notably, a minor positive change emerged in the perceived usefulness of the tool in situations where HCPs were less confident in their decision-making. Although not statistically significant (p = 0.056), this suggested a trend towards enhanced perceived utility.
04 Results

4.4 Explanations did not significantly impact decision-making:-

Expertise and Tool Helpfulness:
HCPs with less expertise perceived the tool as more useful, even without explanations (r = -0.482, p = 0.02). Less experienced HCPs found the model's score alone valuable.
Need to Understand Features:
A stronger desire to understand the contributing features correlated with reduced usefulness of the CS output (model's score without explanation) for making safe decisions (r = -0.653, p = 0.001) and in less confident cases (r = -0.553, p = 0.006). HCPs wanting the model's reasoning found the score alone less helpful in these situations.
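A hedged sketch of the kind of test behind correlations such as r = -0.482, p = 0.02: a rank correlation between two survey ratings. The raw responses are not in the slides, so the data below are hypothetical, and Spearman's rho is an assumed choice of statistic.

```python
# Sketch of a rank correlation between an expertise rating and a
# perceived-usefulness rating. Data are hypothetical; Spearman's rho is
# assumed as the statistic behind the reported r and p values.
from scipy.stats import spearmanr

expertise = [7, 6, 6, 5, 5, 4, 4, 3, 3, 2, 2, 1]    # e.g. 1-7 scale
usefulness = [2, 3, 2, 4, 3, 4, 5, 5, 4, 6, 5, 7]   # e.g. 1-7 scale

rho, p = spearmanr(expertise, usefulness)
# A negative rho would mean: less expertise, more perceived usefulness.
print(f"r = {rho:.3f}, p = {p:.3f}")
```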
04 Results

4.5 Quicker decisions due to over-reliance on explanations:-

Quicker Decisions with Model Agreement:
HCPs decided faster when their choices matched the model's suggestion.
CORONET score (CS) only: aligned decisions took about 48 seconds; differing decisions around 61 seconds.
Explanation's Impact on Decision Time:
When the CORONET score plus explanation (CS+Exp) was provided, decision times differed significantly:
Aligned with model: about 38 seconds.
Against model: approximately 84 seconds.
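For illustration, here is a sketch of comparing decision times between aligned and differing decisions. The slides report only approximate means and do not name the statistical test, so the Mann-Whitney U test and the timing data below are assumptions.

```python
# Sketch of comparing decision times for decisions that agree vs. disagree
# with the model's suggestion. Timing data are hypothetical and the
# Mann-Whitney U test is an assumed choice; the slides do not name one.
from scipy.stats import mannwhitneyu

aligned_s = [35, 40, 31, 44, 38, 36, 42, 39]   # seconds, agree with model
against_s = [80, 91, 77, 88, 79, 95, 83, 86]   # seconds, disagree

stat, p = mannwhitneyu(aligned_s, against_s, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```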
04 Results

4.6 Positive feedback:-

Positive Impression (Fig. 9):
Positive feedback was received: the model and user interface were seen as user-friendly, and most respondents were willing to recommend the tool to colleagues.
Explanatory Component (Fig. 10):
The explanation's effect on clinical utility was assessed; the explanatory part did not significantly alter the model's perceived practical usefulness.
Clinical Application:
The explanation did not change attitudes towards the CORONET model's real-world clinical use. In summary, the model was found useful and user-friendly, but the explanation did not shift perceptions of its practical application.
05 Discussion

CORONET explanation among the existing taxonomies:-

01 Study's Purpose
02 Involvement of Real HCPs
03 User-Friendly Visuals
04 Study's Purpose
05 Model's Process Limitation
05 Discussion

Characterizing HCPs' attitude towards the explanations:-

Participant Feedback:
87% valued understanding how features contribute to the model's output. Knowing the model's uncertainty was key: 'strongly agree' responses were highest for the uncertainty aspect.
Comfort in Assessing Risk:
HCPs favoured comprehension of risk and uncertainty over interpretation of technical feature details.
Trust and Prediction:
Disclosing the model's uncertainty boosts trust and adherence to its predictions, especially with a simple presentation.
06 Conclusion

Crucial Alignment with Healthcare Pros:
Ensuring machine learning tools align well with healthcare professionals' needs is vital.
Balanced Explanations for Trust:
The study reveals the importance of balanced, well-designed explanations for trust and safety.
Practical Evaluation Approach:
The study introduces a practical evaluation framework for clinical decision support.
Comparing Conditions with HCPs:
In a within-subject design, 23 HCPs evaluated the explainable ML model with and without explanations.
Mixed Impact of Explanations:
Explanations were important but their effect varied; some doctors found them challenging. They enhanced understanding, but their complexity might hinder some users.
Benefits of Explanations:
Improved decision-making when differing from the model; effective in uncertain situations, aiding new learners.
Progress in Trustworthy Healthcare ML:
The study advances understanding of how to build trustworthy ML tools in healthcare.
07 References

1. Kevin Bauer, Moritz von Zahn, Oliver Hinz, Expl(AI)ned: the impact of explainable Artificial Intelligence on cognitive processes, SSRN Electron. J. (ISSN 1556-5068) (2021), https://www.ssrn.com/abstract=3872711.
2. Zachary C. Lipton, The mythos of model interpretability, arXiv:1606.03490, Mar. 6, 2017.
3. Leilani H. Gilpin, et al., Explaining explanations: an overview of interpretability of machine learning, in: 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), Oct. 2018, pp. 80–89.
4. Wojciech Samek, Klaus-Robert Müller, Towards explainable Artificial Intelligence, in: Wojciech Samek, et al. (Eds.), Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Lecture Notes in Computer Science, vol. 11700, Springer International Publishing, Cham, 2019, pp. 5–22, ISBNs 978-3-030-28953-9, 978-3-030-28954-6.
5. W. James Murdoch, et al., Definitions, methods, and applications in interpretable machine learning, Proc. Natl. Acad. Sci. 116 (44) (Oct. 2019) 22071–22080, https://doi.org/10.1073/pnas.1900654116, ISSNs 0027-8424, 1091-6490.
6. Mokanarangan Thayaparan, Marco Valentino, André Freitas, A survey on explainability in machine reading comprehension, arXiv preprint, arXiv:2010.00389, 2020.
7. Andreas Holzinger, et al., Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical Artificial Intelligence, Inf. Fusion (ISSN 1566-2535) 79 (Mar. 2022) 263–278, https://doi.org/10.1016/j.inffus.2021.10.007.
8. Scott M. Lundberg, Su-In Lee, A unified approach to interpreting model predictions, in: Advances in Neural Information Processing Systems, vol. 30, Curran Associates, Inc., 2017, https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html.
9. Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin, Anchors: high-precision model-agnostic explanations, in: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32 (1), 2018.
10. Satoshi Hara, Kohei Hayashi, Making tree ensembles interpretable: a Bayesian model selection approach, in: Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR, Mar. 2018, pp. 77–85.
11. Arnaud Van Looveren, Janis Klaise, Interpretable counterfactual explanations guided by prototypes, arXiv:1907.02584, Feb. 2020.
12. Amit Dhurandhar, et al., Explanations based on the missing: towards contrastive explanations with pertinent negatives, arXiv:1802.07623, Oct. 2018.
13. Kazuaki Hanawa, et al., Evaluation of similarity-based explanations, arXiv:2006.04528, Mar. 2021.
14. Daniel W. Apley, Jingyu Zhu, Visualizing the effects of predictor variables in black box supervised learning models, arXiv:1612.08468, Aug. 2019.
15. Heinrich Jiang, et al., To trust or not to trust a classifier, in: Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018.
Thank You
