
Computers & Security 135 (2023) 103532


A quantitative security evaluation and analysis model for web applications based on OWASP Application Security Verification Standard

Shao-Fang Wen *, Basel Katt
Department of Information Security and Communication Technology, Norwegian University of Science and Technology, Gjøvik, Norway

ARTICLE INFO

Keywords: Web application security; Security evaluation; Quantitative approach; Security analysis

ABSTRACT

In today's digital world, web applications are popular tools used by businesses. As more and more applications are deployed on the web, they are seen as increasingly attractive targets by malicious actors eager to exploit any security gaps present. Organizations are always at risk for potential vulnerabilities in their web-based software systems, which can lead to data loss, service interruption, and lack of trust. Therefore, organizations need to have an effective and efficient method for assessing and analyzing the security of acquired web-based software to ensure adequate confidence in its use. Quantitative security evaluation employs mathematical and computational techniques to express the security level that a system reaches. This research focuses on improving the quantitative analysis of web application security evaluation. We strive to unite the Open Web Application Security Project's (OWASP) Application Security Verification Standard (ASVS) into a structural and analyzable model, which aims to efficiently evaluate web application security levels while providing meaningful insights into their strengths and weaknesses.

1. Introduction

Web applications (or web apps) have been the mainstream technology for providing information and services over the Internet. Numerous businesses from various sectors continue to move their operations online. Most institutions and organizations employ web applications such as blogs, social networks, webmail, and banking sites that provide essential operational activities while also storing sensitive data. The widespread adoption of web applications in modern society has also captured the attention of hackers who are intent on taking advantage of any vulnerabilities in these applications to perpetrate malicious acts, resulting in the inefficiency and ineffectiveness of business activities (Erşahin and Erşahin, 2022). With the prevalence and significance of web applications nowadays, organizations want confidence that the software is developed securely and reliably, takes care of the required security mechanisms, and minimizes the risks to the assets.

To gain the necessary confidence in acquiring and maintaining software systems, organizations need a thorough method for evaluating and analyzing the security of the software (Erşahin and Erşahin, 2022). Security evaluation seeks to provide trustworthy results for decision-makers to utilize (Gritzalis et al., 2002). The process involves evaluating the adequacy of security controls and procedures to address them, as well as identifying and analyzing security threats, vulnerabilities, and risks (Herrmann, 2002). To ensure stakeholders can make the most of this data, it must be presented in a format suitable for their needs. Quantitative security evaluation is a specialized field in which computational and mathematical techniques are used to evaluate the level of security in a system (Gritzalis et al., 2002; LeMay et al., 2011). It seeks to assess more precisely how much effort is required to defend the system, or how high the danger is of the system being compromised (Vache, 2009). Such a security assessment model leads to clearly measurable security scores, giving a clear indication of the strength of a system's protection measures (Gritzalis et al., 2002).

The advantages of quantitative security evaluation approaches are obvious; however, the research work in this area seems limited. According to a recent survey on web application security (Ruan and Yan, 2018), there is not yet a practical and effective model for evaluating the security of web applications, while the majority of the research conducted has only looked at the typical security vulnerabilities associated with the application. Consequently, much must be done to develop an efficient model for assessing web security (Ruan and Yan, 2018). Moreover, in a recent systematic literature review, Shukla et al. (Shukla et al., 2021) concluded that the majority of security assurance and evaluation research has been focused on qualitative perspectives, with minimal effort devoted to developing quantitative methodologies.

* Corresponding author.
E-mail address: [email protected] (S.-F. Wen).

https://doi.org/10.1016/j.cose.2023.103532
Received 12 April 2023; Received in revised form 1 September 2023; Accepted 5 October 2023
Available online 8 October 2023
0167-4048/© 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

To complement this research gap, this paper focuses on modeling web application security evaluation in a form that is more amenable to quantitative analysis. Specifically, we strive to synthesize the Open Web Application Security Project's (OWASP) Application Security Verification Standard (ASVS) (OWASP 2022) and integrate it into a structural and analyzable model. OWASP ASVS is widely used for web application security assessment and security requirements elicitation, as it provides a comprehensive overview of all security-related topics. Despite its advantages for conducting security assessments, the operational nature of ASVS makes it challenging to generate meaningful data for analysis. Our quantitative approach enables the conversion of ASVS data into informative, understandable information. By combining meaningful data sets with sound analytics, security stakeholders can make informed choices that drive organizational decision-making (Wen et al., 2022). In this paper, we also demonstrate and illustrate how such models are used for the analysis of security strengths and weaknesses as well as quantitative aspects through aggregating ASVS verification results.

The rest of this paper is organized as follows. Section 2 presents the scientific background. In Section 3, we provide an overview of related work. In Section 4, we discuss the concept of system boundaries, which forms the foundation of our proposed model. In Section 5, the proposed security evaluation model is discussed in detail. Subsequently, Section 6 provides an example of data analytics based on this model to better illustrate it. Lastly, the conclusion and future works are presented in Section 7.

2. Scientific background

In this section, we provide the scientific background for our research by introducing OWASP ASVS and offering compelling examples that demonstrate some security assurance analysis scenarios utilizing ASVS. By intertwining theory with practical applications, this section highlights the gap between conceptual understanding and real-world implementation, reinforcing the significance of ASVS as a valuable resource for ensuring analyzability.

2.1. OWASP application security verification standard

OWASP is a non-profit, community-driven organization that promotes software security through educational materials, open-source software, and other initiatives. The OWASP ASVS is an open standard for performing web application security verification, which is designed to methodically test application and environment-level technical security controls. With this, it is possible to identify various potential vulnerabilities, for example, Cross-Site Scripting (XSS) and SQL injection. The ASVS project has designed its standard to be practical and "commercially workable". With extensive coverage and flexibility, the ASVS can be applied in various situations, from internal security measurement to instructing developers on how to suitably implement security functions or evaluating third-party software and contractual development agreements. The latest stable version of ASVS is 4.0.3, released in October 2021.

The ASVS contains 286 verification requirements that are grouped into 14 higher-level categories (named "Chapter") and sub-categories (named "Section") of similar functionality. Additionally, from version 4.0, ASVS provides a comprehensive mapping to the Common Weakness Enumeration (CWE) (MITRE 2023). The CWE is a list of weaknesses in software that can lead to security issues. While the CWE list is long, it is also prioritized by severity of risk, providing organizations and developers with a good idea about how best to secure applications. Where applicable, ASVS requirements are also mapped to (or aligned with) other security standards, including the OWASP Proactive Controls (OWASP 2023) and the U.S. National Institute of Standards and Technology (NIST) Digital Identity Guidelines (NIST 800-63) (Grassi et al., 2017) (see Table 1). The former describes the most important control categories that every architect and developer should follow, while the latter introduces modern, evidence-based, and advanced authentication controls. Fig. 1 depicts the whole data structure of ASVS.

Table 1. Examples of ASVS requirements (Section 2.1 Password Security).
| Requirement | Description | L1 | L2 | L3 | CWE | NIST 800-63 |
| 2.1.1 | Verify that user-set passwords are at least 12 characters in length (after multiple spaces are combined). | V | V | V | 521 | 5.1.1.2 |
| 2.1.3 | Verify that password truncation is not performed. | V | V | V | 521 | 5.1.1.2 |
| 2.1.4 | Verify that any printable Unicode character, including language-neutral characters such as spaces and Emojis, is permitted in passwords. | V | V | V | 521 | 5.1.1.2 |
| 2.1.5 | Verify users can change their password. | V | V | V | 620 | 5.1.1.2 |
| 2.1.11 | Verify that "paste" functionality, browser password helpers, and external password managers are permitted. | V | V | V | 521 | 5.1.1.2 |

2.2. Motivating examples of security assurance analysis

There is no doubt that the OWASP ASVS is of great utility when it comes to conducting web application security assessments based on a predefined list of specifications (Harrison et al., 2016; Sönmez, 2019). This makes it possible for organizations to carry out a comprehensive security review and quickly obtain the risk posture. While such a security control 'checklist' can provide an output corresponding to the requirements of the standard, it is insufficient for organizations that need to make security decisions. In this section, we provide a motivating example, demonstrating some analysis scenarios using ASVS.

Let us assume that a security assessment is conducted on an open-source web application, which is installed and hosted on a virtual machine in the organization's intranet.

Fig. 1. ASVS data structure.
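To make the ASVS data structure sketched in Fig. 1 concrete, the minimal Python sketch below models a single verification requirement as a record with its chapter/section membership and its CWE and NIST 800-63 mappings. The class and field names are our own illustrative shorthand, not an official ASVS schema; the example values are taken from Table 1.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AsvsRequirement:
    """One ASVS verification requirement with its category and standard mappings."""
    req_id: str                      # e.g. "2.1.1"
    chapter: str                     # higher-level category ("Chapter")
    section: str                     # sub-category ("Section")
    description: str
    levels: List[int] = field(default_factory=lambda: [1, 2, 3])  # L1/L2/L3 applicability
    cwe: Optional[int] = None        # mapped CWE identifier
    nist: Optional[str] = None       # mapped NIST 800-63 section

# Example instance taken from Table 1.
req = AsvsRequirement(
    req_id="2.1.1",
    chapter="V2 Authentication",
    section="V2.1 Password Security",
    description="Verify that user-set passwords are at least 12 characters in length "
                "(after multiple spaces are combined).",
    cwe=521,
    nist="5.1.1.2",
)
print(req.req_id, req.cwe, req.nist)
```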


The assessment result for each ASVS requirement is given as either 0 or 1, where the value 0 means the requirement is "Fail" and 1 means "Pass". The analysis task would be, first, to derive scores for chapters and sections using an aggregation algorithm, for example, computing the sum of the associated requirements. Table 2 shows an example of such a dataset, which permits scores to be analyzed at the level of chapters.

Table 2. An example of an ASVS assessment result.
| Chapter | Pass | Fail | Score |
| Architecture, Design, and Threat Modeling | 38 | 8 | 38 |
| Authentication | 27 | 8 | 27 |
| Session Management | 16 | 3 | 16 |
| Access Control | 5 | 2 | 5 |
| Validation, Sanitization, and Encoding | 24 | 2 | 24 |
| Stored Cryptography | 13 | 1 | 13 |
| Configuration | 48 | 2 | 48 |

Although the dataset provides a categorical view of security scores, it does not offer a broader and more strategic perspective. Instead of immediately delving into the complex relationship between process and technology, a common analytical approach is to begin with a top-down analysis (B. Garrette et al., 2018; Sabatier, 1986). This involves examining the broader macro aspects first, from which a governing thought is derived. The phrase "governing thought" refers to the identification of a central or overarching idea that can guide the subsequent analysis (B. Garrette et al., 2018). This governing thought serves as a foundational framework for further investigation and interpretation. Once such macro aspects have been examined, a key principle or concept emerges that plays a crucial role in shaping the understanding of the subject matter. Thus, in this assessment context, to recognize discrepancies between the "runtime/operational environment" and the "software" itself, it is important to first separate the assessment scores of the two entities. Likewise, the scores of "process"-related requirements (involving development and operation phases) should be distinguished from the above. Such "aspects" provide a tidy collection of the stakeholders' views of the areas of interest in security evaluation (which will be elaborated on later).

Another deficiency we identify in ASVS is the lack of capability for diagnosing the subject matter at a granular level. A granular-level diagnosis goes beyond surface-level assessments and delves into a more comprehensive and meticulous analysis of various facets of the system (Delen and Ram, 2018; Pröllochs and Feuerriegel, 2020). It involves scrutinizing system components, configurations, and processes in detail, leaving no stone unturned in the pursuit of identifying potential issues and vulnerabilities. For example, say an analyst wants to conduct a further analysis of password security to identify "concrete" security mechanisms that need improvement or development in this category. In this regard, rather than analyzing scattered descriptive statements, it is more effective and efficient to use concise items to represent the associated requirements. Table 3 provides examples of mechanisms for password security, as well as the applicable requirements found in the ASVS. To enable a deeper level of security evaluation, we suggest that security requirements should be organized into a synthesizable and analyzable format.

Table 3. Exemplary security mechanisms with associated ASVS requirements.

Password strength policy:
- V2.1.1: Verify that user-set passwords are at least 8 characters in length (after multiple spaces are combined).
- V2.1.2: Verify that passwords of at least 64 characters are permitted and that passwords of more than 128 characters are denied.
- V2.1.4: Verify that any printable Unicode character, including language-neutral characters such as spaces and Emojis, is permitted in passwords.
- V2.1.7: Verify that passwords submitted during account registration or password change are checked against an available set of, at least, the top 3000 passwords.
- V2.1.9: Verify that there are no password composition rules limiting the type of characters permitted. There should be no requirement for upper or lower case or numbers or special characters.
- V2.1.10: Verify that the application does not require periodic credential rotation.
- V2.3.1: Verify system-generated initial passwords or activation codes SHOULD be securely randomly generated, SHOULD be at least 6 characters long, MAY contain letters and numbers, and expire after a short period. These initial secrets must not be permitted to become the long-term password.

Password input functionality:
- V2.1.11: Verify that "paste" functionality, browser password helpers, and external password managers are permitted.

Password changing functionality:
- V2.1.5: Verify users can change their password.
- V2.1.6: Verify that password change functionality requires the user's current and new password.

Password processing logic:
- V2.1.3: Verify that passwords are not truncated.

In addition to analyzing the positive side of system security, organizations should strive for a comprehensive view of their system security posture by assessing possible structural flaws and weaknesses. ASVS provides a consolidated mapping to CWE; however, additional information is required to answer crucial analysis questions, for example: what are the threats (or risks) that a weakness could result in? To what extent does a found weakness impact the security properties of the system? The ASVS framework offers a comprehensive set of measures for assessing the security of web applications. However, raw data obtained from ASVS alone is relatively challenging to process and analyze. Our modeling approach bridges this gap by facilitating the transformation of ASVS operational data into insightful, easy-to-analyze information, from which security stakeholders can then generate actionable knowledge, encouraging a better understanding of the existing context.

3. Related work

There is a wealth of research on security assurance and evaluation methods. Over the years, numerous frameworks and standards have been developed to analyze security. The Common Criteria (CC) (Herrmann, 2002) is one of the most well-known efforts in this area. CC is an international ISO/IEC 15408 standard for the security evaluation of IT products. The standard outlines a clear set of guidelines and specifications that provide organizations with the necessary information to accurately specify their security functional requirements and security assurance requirements. In addition, the CC offers a strict, standardized, and repeatable methodology to ensure the safe implementation, evaluation, and operation of a product at a suitable security level as prescribed by the operational environment. However, while comprehensive in scope, using this standard results in detailed documentation that often requires substantial effort when assessing products or services against a specific CC assurance grade (Ekelhart et al., 2007; Zhou and Ramacciotti, 2011). In addition, there are several security maturity models available for the software security domain, such as the Building Security In Maturity Model (BSIMM) (McGraw et al., 2009) and the OWASP Software Assurance Maturity Model (OpenSAMM) (OWASP 2022). BSIMM is a research initiative that investigated the various approaches to software security employed by businesses, leading to the development of a framework featuring 116 activities and 12 practices. Like BSIMM, OpenSAMM is an open software security framework developed by OWASP, which provides guidelines on which software security practices should be used and how to assess them. Such maturity models provide frameworks, especially in a qualitative fashion, to evaluate the security posture of the process and culture practiced in an organization.
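As a simple illustration of the aggregation step described in the motivating example of Section 2.2, the sketch below sums binary pass/fail results per chapter, reproducing the kind of dataset shown in Table 2. The verification results used here are invented placeholders; only the aggregation rule (chapter score = sum of passed requirements) comes from the text.

```python
from collections import defaultdict

# Hypothetical verification results: (chapter, requirement_id, result),
# where 1 = "Pass" and 0 = "Fail". Real data would come from an ASVS assessment.
results = [
    ("Access Control", "4.1.1", 1),
    ("Access Control", "4.1.2", 1),
    ("Access Control", "4.1.3", 0),
    ("Session Management", "3.2.1", 1),
    ("Session Management", "3.2.3", 0),
]

def chapter_scores(verification_results):
    """Aggregate per chapter: count passes and fails; the score is the sum of passes."""
    summary = defaultdict(lambda: {"pass": 0, "fail": 0, "score": 0})
    for chapter, _req_id, result in verification_results:
        if result == 1:
            summary[chapter]["pass"] += 1
            summary[chapter]["score"] += 1
        else:
            summary[chapter]["fail"] += 1
    return dict(summary)

for chapter, row in chapter_scores(results).items():
    print(f"{chapter}: pass={row['pass']} fail={row['fail']} score={row['score']}")
```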


Although various research studies have been conducted on web application security evaluation, few attempts have been made to establish a generic approach that quantifies the results systematically. Below are several papers that discuss this research area. The authors in (Hai and Nga, 2018) presented a security evaluation framework for web-portal security assessment, which integrates ISO/IEC 15408 (ISO/IEC 2023) and the OWASP evaluation model Common Criteria Web Application Security Scoring (CCWAPSS) (Charpentier, 2023). This framework facilitates numerical rankings via the use of a scoring system to assess the significance of each factor within the criteria. By doing so, it provides practical security evaluations that web portal developers can quickly understand and implement. Okamura et al. (Okamura et al., 2013) discussed a quantitative security evaluation approach for software systems from the vendor's viewpoint, centering on the analysis of collectible vulnerability data. They apply a stochastic model using a non-homogeneous Poisson process to explain this data, and then use numerical examples to evaluate the security measures relative to the content management system of an open-source project. Yautsiukhin et al. (Yautsiukhin et al., 2008) introduced a method of computing the security qualities of software architectures with the adoption of security patterns. The core metric used in this evaluation was threat coverage, and an algorithm was proposed to aggregate low-level measures associated with these patterns into a single high-level indicator. Lastly, Banaei and Khorsandi (Banaei and Khorsandi, 2012) presented a hierarchical structure for web service security, complete with a model that evaluates various aspects of security from an analytical perspective. They use the Analytical Hierarchy Process (AHP) theory to prioritize weighted averaging of critical security properties, such as authorization, confidentiality, and availability, all to provide greater levels of customization in terms of provider/consumer needs.

Furthermore, alternative methods for quantitative security assurance of IT systems have been proposed by some researchers. These concepts could be applied to software systems and web applications. For instance, Katt and Prasher (Weldehawaryat and Katt, 2018) outlined a quantification method to evaluate the security assurance of systems. This framework measures two parts: (1) the confidence that existing mechanisms are sufficient to meet security requirements; and (2) which potential security threats might leave a system vulnerable. The framework has been validated through case studies on public REST APIs. Ouedraogo et al. (Ouedraogo et al., 2009) utilized quantitative risk measurement techniques to create indicators that can be used to assess IT infrastructure security, alongside aggregation procedures. The primary algorithms used to perform operational aggregation are the recursive minimum, maximum, and weighted sum algorithms. Each of these tools has been designed to take into consideration a wide range of datasets when consolidating information. Pham and Riguidel (Pham and Riguidel, 2007) introduced an aggregation method that can be applied to calculate the security assurance value of a whole system when combining several entities that have been evaluated independently. The effects of the emergent relations are taken into account in the calculation of the security assurance value of an attribute in the context of a system.

4. Model from the boundary of the software system

As a web-based application can incorporate various resources and multiple environmental elements, it is crucial to take a wide-ranging view of the system, whose scope is often not clear. In this section, we first provide an overview of the idea of system boundaries for software systems. This supplies a groundwork that can be utilized for our proposed model, and also shows its pertinence to the field of security evaluation.

In general, a software system is comprised of elements that have been purposefully incorporated into the environment. These include the software structure and the environmental factors around which we can reasonably draw a boundary (i.e., the system boundary), as depicted in Fig. 2. The software structure is the core subset of the software system, meaning any source code or object code made to perform a specific task(s). An environment is a set of factors (e.g., facilities, operating conditions, or influences) that are available to a software component when it is being installed, executed, or operated. It is useful to think of an environment as being made up of things that are not part of the software component but can affect the software system's behavior.

Fig. 2. The system boundary of software systems.

From the perspective of a security evaluation, the system boundary defines what should be analyzed within the System-of-Interest (SOI), and thus distinguishes it from the external environment. Our security evaluation model divides this external environment into two categories: software environment and operational environment. This classification is made by drawing a line that serves as a conceptual boundary between them. The scope of the SOI comprises the software environment and the system structure, whereas the operational environment is external to the system (depicted in Fig. 3). In our definition, a software environment refers to the complete set of hardware and software (tools, resources, systems, and services) that are necessary to securely build, maintain, and scale the software components. Simple examples of software environments are a hardware environment, a software-based execution environment, or some combination of these. On the other hand, the operational environment contains elements and further systems that interact in some way with the software system, for example, a user, a system administrator, an organization, a LAN, or a general office environment. The operational environment might also be incorrectly implemented and managed and consequently contain errors that would result in flaws; however, following the definition of the system boundary, no assessment is made regarding the correctness of the operational environment. In terms of a security evaluation, it is assumed that the operational environment is correct and will aid the software system in delivering its features accurately and securely.

Fig. 3. The system boundary of the system of interest.

Fig. 4. The conceptual framework of the modeling approach for security evaluation.

Fig. 5. Illustration of the layers that make up the hierarchical approach for the security-strength evaluation.

5. Creating a structured and analyzable model

For a more complete security evaluation and analysis, it is essential to take into account both the advantages and disadvantages of the system's security. Our strategy is to make ASVS quantifiable by breaking it down into two core components: security strength evaluation and security vulnerability evaluation. The aim is to gain measurable insights that will aid in deepening our comprehension of the ASVS verification result. As shown in Fig. 4, our modeling approach is built upon the ASVS framework. The security strength evaluation measures system security by analyzing and gauging relevant security requirements. Additionally, the assessment of security weaknesses brings understanding to the overall system's risk posture, including how any uncovered weaknesses (i.e., CWEs) impact the elements' security characteristics and their potential interactions with threats. At the simplest level, the security-strength model can provide a measure of assurance that the system will be able to withstand attack, while the weakness model can identify the potential consequences when the security mechanisms are not properly implemented. In the following sections, we describe our approach to modeling the evaluation components of the system of interest, how the security mechanisms can be represented structurally, and how the weak side of system security can be derived from ASVS.

5.1. Security strength evaluation model

The security strength of a system is defined as its security state, which reflects its readiness for security measures to defend against potential threats (Schechter, 2004). As shown in Fig. 5, we evaluate system security strength through a structural hierarchy of five levels to gain insight into its security capability. We begin by categorizing the strength assessment into three aspects: structure, environment, and process. Each evaluation aspect includes a two-level categorization method to classify the security mechanisms connected with the ASVS requirements.

The process of "evaluation" utilizes mathematical algorithms to assign numeric values to each component (i.e., evaluation component). In our approach, the scores of evaluation components are computed using a bottom-up approach, which starts the estimation at the lowest possible level in the model. Each ASVS requirement is assigned a numerical score, and the scores are aggregated to create an overall score. Score aggregation is beneficial as it reduces the subjective bias in evaluating claims and provides a more objective method for determining the accuracy of claims (Andrews et al., 2006).


The evaluation task begins by determining the scores of the ASVS requirements and aggregating their values using an average scheme to rate a set of evaluation components along the hierarchy. Since the security-strength evaluation hierarchy is designed in such a way that lower-level nodes (successors) each cover a part of one node at the higher level, the components we add together are similar ones. Under this condition, using the average takes all the relevant items into account, so that we can derive a representative score for the whole data set. Finally, the overall score of the SOI is calculated using a weighted average scheme over the aspect-level evaluation scores along with their assigned weighting factors. This single value provides an objective measure of the system's security level. The notation used and the detailed evaluation process are discussed in the following.

5.1.1. Evaluation of ASVS requirements
The first step of the assessment is the evaluation of the ASVS verification requirements. Initially, each ASVS verification requirement is mapped to one verification case to determine its fulfillment. As such, a score for each verification requirement can be determined based on the observation of the verification case. Results for verification cases are quantified as 1, 0, or 0.5, depending on the level of fulfillment. The score 1 is given to cases that pass the verification, indicating the corresponding requirements are fully fulfilled, while 0 means the requirements are not fulfilled (i.e., the verification case failed). A score of 0.5 implies the requirement is considered a partial fulfillment. Partial fulfillment means that the actual result matches the expected result; however, unnecessary (or superfluous) exceptions/messages might be caught during the test-case execution. Such a test execution state is usually applied in the context of manual testing, heavily reliant on the tester's judgment (Reddy).

At this step, we use S(ASVS_i) to denote the score of the i-th ASVS requirement, which can be expressed as Eq. (1):

S(ASVS_i) ∈ {0, 1, 0.5}    (1)

5.1.2. Evaluation of security mechanisms
It is possible to consider requirements and mechanisms to be synonyms because they are frequently used in an abstract context. In our security evaluation and analysis approach, we use the more fine-grained "security mechanism" rather than descriptive ASVS requirements. Security mechanisms can be treated as the fundamental means and methods that are designed to achieve security-relevant purposes. The capabilities and behaviors provided by security mechanisms are specified in security requirements. That is, a security mechanism is an output of the implementation of (a set of) security requirements. While security requirements (in ASVS) are designed for verification, security mechanisms, on the other hand, are for analysis purposes. To provide analyzability, the mechanism must be small and simple enough to be evaluated. Exemplary security mechanisms can be found in Table 3.

To calculate the scores of security mechanisms, let C(SecurityMechanism_i) denote the set of ASVS scores associated with the i-th security mechanism, defined by Eq. (2):

C(SecurityMechanism_i) = { S(ASVS_j) → SecurityMechanism_i }    (2)

We use S(SecurityMechanism_i) to represent a measurement reflecting the actual (calculated) score of the security mechanism. The following formula (Eq. (3)) represents the calculation for the i-th security mechanism, which uses the average function to derive the score:

S(SecurityMechanism_i) = Σ C(SecurityMechanism_i) / |C(SecurityMechanism_i)|    (3)

5.1.3. Evaluation of criterion and element levels
The term "criteria" as used in this model refers to a higher, more abstract level of meaning that can be thought of as a standard in the SOI's application domain. These criteria are part of the "target" that the work is planned to achieve. These criteria are selected, tested, and measured to confirm the sufficiency of the system security offered to users. Table 4 lists the corresponding evaluation criteria for each evaluation aspect. Evaluation criteria are then narrated in detail by a set of evaluation elements. Some examples of evaluation elements are presented in Table 5.

Table 4. Corresponding evaluation criteria for each evaluation aspect.
| Evaluation Aspect | Evaluation Criteria |
| Software Structure | Authentication; Access Control; Input Validation and Output Encoding; Session Management; Cryptography; Error Handling and Logging; Privacy and Data Protection; Communication; Intrusion Detection and Prevention; Business Logic Security; Files and Resources Security; Memory Management; Web Service and API Security |
| Software Environment | Environment Management; Communication Hardening; Configuration Hardening |
| Software Process | General Practice; Security Requirement; Secure Design; Secure Coding; Secure Code Review; Secure Build and Deployment |

Table 5. Evaluation elements in evaluation criteria.
| Evaluation Criteria | Evaluation Element |
| Authentication | Authentication Architecture; Password Security; Authenticator Security; Credential Storage; Credential Update; Credential Recovery; Multi-Factor Authentication; Authentication Logging; Service Authentication |
| Access Control | Access Control Architecture; Operation Level Access Control; HTTP Request Access Control; Access Control Logging |

Similar to the algorithm at the previous level, the score of the i-th element-level component, denoted by S(Element_i), is calculated using Eq. (4):

S(Element_i) = Σ C(Element_i) / |C(Element_i)|    (4)

where:

C(Element_i) = { S(ASVS_j) → Element_i }

Consequently, the formula for calculating the score of the i-th criterion-level component is given by Eq. (5):

S(Criterion_i) = Σ C(Criterion_i) / |C(Criterion_i)|    (5)

where:

C(Criterion_i) = { S(Element_j) → Criterion_i }
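The bottom-up averaging of Eqs. (1)-(5) can be sketched as follows. The mappings of requirements to mechanisms and elements below are illustrative placeholders (hypothetical identifiers); only the scoring values and the averaging rule come from the model.

```python
from statistics import mean

# Hypothetical ASVS requirement scores: 1 = fulfilled, 0 = not fulfilled, 0.5 = partial (Eq. (1)).
asvs_scores = {"2.1.5": 1, "2.1.6": 0.5, "2.1.11": 0, "2.1.1": 1, "2.1.2": 1}

# Illustrative mapping of security mechanisms to their associated ASVS requirements (Eq. (2)).
mechanism_map = {
    "Password changing functionality": ["2.1.5", "2.1.6"],
    "Password input functionality": ["2.1.11"],
    "Password strength policy": ["2.1.1", "2.1.2"],
}

# Illustrative element-level grouping; an element averages its requirements' scores (Eq. (4)).
element_map = {"Password Security": ["2.1.1", "2.1.2", "2.1.5", "2.1.6", "2.1.11"]}

def average_score(requirement_ids, scores):
    """Average of the associated lower-level scores (Eqs. (3)-(5))."""
    return mean(scores[r] for r in requirement_ids)

mechanism_scores = {m: average_score(reqs, asvs_scores) for m, reqs in mechanism_map.items()}
element_scores = {e: average_score(reqs, asvs_scores) for e, reqs in element_map.items()}

print(mechanism_scores)   # e.g. {'Password changing functionality': 0.75, ...}
print(element_scores)     # e.g. {'Password Security': 0.7}
```

Criterion-level scores would be derived the same way, by averaging the element scores mapped to each criterion.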


5.1.4. Evaluation of aspect level
Aspects are the viewpoints through which stakeholders can describe the security strength at the highest level. While the ASVS categorical analysis provides stakeholders with a detailed state of the security criteria, the aspect level allows them to see in which particular viewpoint the system performs in a positive, neutral, or negative way. Therefore, they can dive deeper into the details to get a complete picture of the dedicated aspect. While defining the aspects, we first include the predominant attributes of the SOI, that is, the software structure and the software environment, mentioned in Section 3. The evaluation of the software structure aims to assess the sufficiency of the 'technical' security mechanisms of the software system itself, including security architectures and security functionalities. The evaluation criteria under the software structure are, for example, authentication, access control, and cryptography. The evaluation of the software environment entails an examination of the environmental factors that contribute to the production and maintenance of the software system. Organizational and physical facilities (for example, development, production, delivery, and operation) are among these factors.

In addition to the security aspects described above, developing and maintaining secure systems relies on the processes linking people and technologies (Kim and Solomon, 2010). Therefore, a secure software system should also provide evidence that it is developed and operated using adequate software processes, in conformance with implementation standards, including secure coding standards, adequate testing, verification and validation, and suitable specification and documentation for all system aspects. The ASVS also sets out security requirements related to the software process, for example, "V1.1.7-Verify availability of a secure coding checklist, security requirements, guideline, or policy to all developers and testers". In certain circumstances, however, evaluating the software process is unfeasible, such as in open-source software development contexts. To work around this limitation, stakeholders must consider whether to factor in the software process aspect when conducting their security evaluation.

Following the previous score calculation pattern, the formula for calculating the score of the i-th aspect-level component is defined as Eq. (6):

S(Aspect_i) = Σ C(Aspect_i) / |C(Aspect_i)|    (6)

where:

C(Aspect_i) = { S(Criterion_j) → Aspect_i }

5.1.5. Evaluation of SOI
So far, we have defined the measurement for the individual evaluation aspects; we can now derive a measure of the overall score for the SOI, with a simple reduction of a set of scores to a single 'figure of merit'. The higher the value, the better the trustworthiness of the security strength. The aggregation function used in calculating the score of the SOI is the weighted average, an approach that is highly intuitive and comprehensible. Rather than each assurance aspect contributing equally to the result, the use of a weighting factor emphasizes the contribution of particular assurance aspects over others to the security evaluation result, thereby highlighting those aspects in comparison to others in the analysis. The stakeholder can decide which of the three aspects is more or less important to him or her, given what the software does for the organization. For example, the stakeholder could assign the weighting factors 0.6, 0.3, and 0.1 to the aspects of software structure, software environment, and software process, respectively, to account for their relative importance. The exact weighting values are calculated or assigned based on the opinions of stakeholders by applying decision-making techniques that must be carried out based on the verification context. Finally, we standardize the score by scaling the value into the range [0, 10].

The overall assurance score of the SOI, represented by S(SOI), is calculated by Eq. (7):

S(SOI) = Σ_{i=1..3} S(Aspect_i) × w_i × 10    (7)

where:

w_i: the weight that corresponds to the i-th evaluation aspect (0 < w_i < 1 and Σ w_i = 1)

To offer better comprehensibility, we apply a discrete rating scheme to the final score of the SOI. Table 6 is adapted from the NVD Vulnerability Severity Ratings (W3C 2022). In NVD, a higher score represents greater severity. However, our table shows the opposite definition, i.e., levels of security. With this table, the score can be converted to a textual form.

Table 6. Security level.
| Score | Security level |
| [0.0 - 1.0) | No Security |
| [1.0 - 4.0) | Low Security |
| [4.0 - 7.0) | Moderate Security |
| [7.0 - 9.0) | Good Security |
| [9.0 - 10.0] | Excellent Security |

5.2. Security weakness evaluation model

From the analysis point of view, the subject of most concern in security weakness evaluation is the consequence of the security weakness. The security weakness evaluation aims to describe the consequence of a found weakness in the SOI. This involves defining the taxonomy of effects, including security risks, potential threats, and the impact scopes (i.e., the violated security properties), covering the relationships among them. The comprehensive mapping to CWE in ASVS allows us to derive a set of (negative) components resulting from the weakness (depicted in Fig. 6), including impact scopes (i.e., the violated properties), technical impacts, threats, and security risks. By analyzing CWE, we can design the foundations of the mentioned components. Therefore, the core idea of the modeling approach is that we can use existing CWE databases in combination with well-known threat and vulnerability analysis methodologies to generate threat and security risk catalogs for each ASVS element. In the following, we explain each component and the derivation rules.

5.2.1. Evaluation of CWE
In the security weakness evaluation, we use the CWE identities to calculate scores for the security weakness components. This calculation is conducted through a summation function, which accounts for every element in the ASVS verification. We focus on assessing the severity of weaknesses in the SOI and measuring their significance based on these values.

Let CWE_i denote a CWE ID that exists in the CWE repository. Eq. (8) defines the set of ASVS scores mapped to a given CWE:

C(CWE_i) = { S(ASVS_j) → CWE_i }    (8)

For each ASVS requirement with a score of 0 (i.e., a requirement that is not fulfilled), the corresponding CWE is assigned the value 1. The total score for CWE_i is calculated by accumulating these values with Eq. (9):

S(CWE_i) = Σ_{j ∈ C(CWE_i)} e_j    (9)

where:

e_j = 1 if S(ASVS_j) = 0, and e_j = 0 otherwise.
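Looking back at the strength model, a minimal sketch of Eqs. (6) and (7) together with the rating scheme of Table 6 is given below. The criterion scores and weights are placeholders; the fixed parts are the average per aspect, the weighted sum scaled to [0, 10], and the discrete level boundaries.

```python
from statistics import mean

# Hypothetical criterion-level scores grouped by evaluation aspect (Eq. (6) averages them).
criterion_scores = {
    "Software Structure":   [0.71, 0.80, 0.65, 0.90],
    "Software Environment": [0.85, 0.88],
    "Software Process":     [0.60],
}
# Stakeholder-assigned weights; must satisfy 0 < w_i < 1 and sum to 1 (Eq. (7)).
weights = {"Software Structure": 0.6, "Software Environment": 0.3, "Software Process": 0.1}

def soi_score(criteria_by_aspect, aspect_weights):
    """Weighted average of aspect scores, scaled to the range [0, 10] (Eq. (7))."""
    aspect_scores = {a: mean(scores) for a, scores in criteria_by_aspect.items()}
    return 10 * sum(aspect_scores[a] * aspect_weights[a] for a in aspect_scores)

def security_level(score):
    """Convert the numeric SOI score to the textual levels of Table 6."""
    if score < 1.0:  return "No Security"
    if score < 4.0:  return "Low Security"
    if score < 7.0:  return "Moderate Security"
    if score < 9.0:  return "Good Security"
    return "Excellent Security"

score = soi_score(criterion_scores, weights)
print(round(score, 3), security_level(score))   # e.g. 7.785 Good Security
```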


Fig. 6. Security Weakness Evaluation Model.

5.2.2. Evaluation of impact scope
Each SOI comprises main security objectives that need to be achieved, like confidentiality and availability, which are commonly captured by security properties. For every security property, the impact must be evaluated. The impact scope element identifies the security property that is violated due to the existence of the weakness. As an evaluation component, the impact scope is used to evaluate the severity of a weakness against the generic/abstract security requirements of the SOI. In the CWE model, the impact scope can be found in the attributes of Common Consequences. For example, the weakness "CWE-116: Improper Encoding or Escaping of Output" impacts the security properties of Integrity, Confidentiality, Availability, and Access Control. Other impact scopes defined in CWE are Authentication, Authorization, and Non-repudiation.

To determine the score of an impact scope, we add up the corresponding CWE scores with Eq. (10):

S(ImpactScope_i) = Σ C(ImpactScope_i)    (10)

where:

C(ImpactScope_i) = { S(CWE_j) → ImpactScope_i }

5.2.3. Evaluation of technical impact
Technical impact is the potential result that can be produced by the weakness, assuming that the weakness can be successfully reached and exploited. This is expressed in terms that are more fine-grained than confidentiality, integrity, and availability. The technical impact is an important criterion that can be useful to any organization that needs reasonable security assurance for its software-based solutions. The CWE 'Common Consequences' attribute also describes the technical impact that arises if an adversary succeeds in exploiting the weakness. Security weaknesses can cause a lot of damage if they are successfully exploited. This information is used to evaluate the different types of damage that can be caused, and how severe the damage can be. Examples of technical impacts are: Modify Data, Read Data, Unreliable Execution, Resource Consumption, and Execute Unauthorized Commands.

Similar to the impact scope, the technical impact score is yielded by summing the results of the relevant CWEs using Eq. (11):

S(TechnicalImpact_i) = Σ C(TechnicalImpact_i)    (11)

where:

C(TechnicalImpact_i) = { S(CWE_j) → TechnicalImpact_i }

5.2.4. Evaluation of threat
To have a clear picture of the dangers, it is important to formulate an assessment of the threats to the SOI. Threat assessment is often performed at a higher level, especially addressing legal or business-related issues. In our test-based approach, threats are identified and evaluated based on the catalogs of known CWEs, derived from the relevant verification results of ASVS. CWE, with its Common Consequences, provides a point where we can start. In terms of threat categories, we use the STRIDE framework (Shostack, 2014), a mature and well-established approach, to classify threats in areas where mistakes are often made. The acronym "STRIDE" stands for the threat categories of Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.
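The sketch below follows Eqs. (8)-(11): every failed ASVS requirement contributes 1 to the score of its mapped CWE, and CWE scores are then summed into the impact scopes (and, analogously, the technical impacts) associated with each CWE. The requirement-to-CWE and CWE-to-scope mappings here are illustrative placeholders, not the full CWE data.

```python
from collections import defaultdict

# Hypothetical verification results and requirement-to-CWE mapping.
asvs_scores = {"2.1.1": 0, "2.1.3": 0, "2.1.5": 1, "5.3.3": 0}
asvs_to_cwe = {"2.1.1": 521, "2.1.3": 521, "2.1.5": 620, "5.3.3": 79}

# Illustrative CWE-to-impact-scope mapping (in the spirit of CWE "Common Consequences").
cwe_to_scopes = {521: ["Access Control"], 620: ["Access Control"],
                 79: ["Integrity", "Confidentiality"]}

def cwe_scores(scores, req_to_cwe):
    """Eq. (9): each unfulfilled requirement (score 0) adds 1 to its mapped CWE."""
    totals = defaultdict(int)
    for req, score in scores.items():
        if score == 0 and req in req_to_cwe:
            totals[req_to_cwe[req]] += 1
    return dict(totals)

def scope_scores(cwe_totals, cwe_scope_map):
    """Eq. (10): sum the scores of the CWEs that violate each impact scope."""
    totals = defaultdict(int)
    for cwe, value in cwe_totals.items():
        for scope in cwe_scope_map.get(cwe, []):
            totals[scope] += value
    return dict(totals)

cwes = cwe_scores(asvs_scores, asvs_to_cwe)   # e.g. {521: 2, 79: 1}
print(cwes)
print(scope_scores(cwes, cwe_to_scopes))      # e.g. {'Access Control': 2, 'Integrity': 1, ...}
```

Technical-impact scores (Eq. (11)) would be computed the same way as scope_scores, using a CWE-to-technical-impact mapping instead.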


Fig. 7. An example of navigating to OWASP Top 10 in CWE (CWE-521).

The CWE schema offers an alternative method for mapping between CWE and STRIDE, mediated by the "Technical Impact" attribute. We map the CWEs in the dataset against STRIDE using the "Technical Impact" attribute elicited from the previously mapped CWEs. Each STRIDE category has a relationship with one or more enumerations of the technical impact. The mapping of STRIDE to CWE technical impacts is presented in Table 7.

Table 7. Mapping of STRIDE categories based on CWE "Technical Impact".
| STRIDE Category | CWE Technical Impact |
| Spoofing | Gain Privileges or Assume Identity |
| Tampering | Modify Application Data; Modify Memory; Modify Files or Directories; Unexpected State; Alter Execution Logic |
| Repudiation | Hide Activities |
| Information Disclosure | Read Application Data; Read Memory; Read Files or Directories |
| Denial of Service | DoS: Instability; DoS: Resource Consumption (CPU); DoS: Resource Consumption (Memory); DoS: Crash, Exit, or Restart; DoS: Resource Consumption (Other) |
| Elevation of Privilege | Execute Unauthorized Code or Commands; Bypass Protection Mechanism |

Based on the mapping table, threat scores are calculated using Eq. (12):

S(Threat_i) = Σ C(Threat_i)    (12)

where:

C(Threat_i) = { S(TechnicalImpact_j) → Threat_i }

5.2.5. Evaluation of security risk
While the mapped CWE list in ASVS is extensive, it can be grouped and ranked by risk severity. The OWASP Top 10 categories provide an easy, clear, at-a-glance summary of the ten most critical application vulnerabilities, which are arranged according to their impact and the security risk involved. Condensing the numerous kinds of CWEs into a small number of categories gives an easier way to analyze the security weaknesses in the software system. Instead of making an effort to eradicate all vulnerabilities, one can decide which of the ten risks is more or less hazardous to the organization. This gives analysts a good idea of how to draw stakeholders' attention to the issues that are the most common problems at the time.

In our model, the security risk is derived from the "Memberships" attribute in CWE. Fig. 7 represents an example where the security-risk category (A07-Identification and Authentication Failures) is mapped for CWE-521: Weak Password Requirements. The evaluation of security risks involves the quantification of risks and the associated criticality factor. When security risks are identified, it is difficult to remove all of them simultaneously due to the limited resources available for vulnerability mitigation. Criticality is a numerical value given to a security risk that communicates how serious it is and determines which mitigation should be applied first. The higher the criticality, the more urgent the need to act. A common criticality assessment method is based on the probability of failure and its consequences. Criticality can be calculated using the following equation:

Criticality = Probability × Severity

The equation to derive the score of a security risk is defined as Eq. (13):

S(SecurityRisk_i) = Σ C(CWE_j) × c_i = Σ C(CWE_j) × p_i × s_i    (13)

where:

c_i: the criticality that corresponds to the i-th security risk
p_i: the probability that corresponds to the i-th security risk
s_i: the severity that corresponds to the i-th security risk

To evaluate the criticality, we refer to the data factors listed for each of the OWASP Top 10 categories (OWASP 2022), which are systematically derived using CVSS v3. Two data factors are considered: "Average Incidence Rate" and "Average Weighted Impact". The former represents the probability, while the latter represents the severity. Table 8 shows the data factors of the security risk "Broken Access Control" in the OWASP Top 10. According to the table, the criticality factor of this security risk is calculated as 3.81 % × 5.93 ≈ 0.226.

Table 8. Data factors of "Broken Access Control" in OWASP Top 10.
| Max Incidence Rate | Avg Incidence Rate | Avg Weighted Exploit | Avg Weighted Impact | Max Coverage | Avg Coverage | Total Occurrences |
| 55.97 % | 3.81 % | 6.92 | 5.93 | 94.55 % | 47.72 % | 318,487 |
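Following Eqs. (12) and (13), the sketch below rolls technical-impact scores up into STRIDE threat scores and computes a security-risk score as the accumulated CWE score multiplied by the criticality factor (probability × severity). It assumes that the summation in Eq. (13) runs over the scores of the CWEs mapped to the risk; the technical-impact values are placeholders, while the data factors reuse the "Broken Access Control" entries of Table 8.

```python
# Partial STRIDE mapping from Table 7 (technical impact -> threat category); illustrative subset.
impact_to_threat = {
    "Read Application Data": "Information Disclosure",
    "Modify Application Data": "Tampering",
    "Bypass Protection Mechanism": "Elevation of Privilege",
}

# Hypothetical technical-impact scores derived from the mapped CWEs (Eq. (11)).
technical_impacts = {"Read Application Data": 12, "Modify Application Data": 7,
                     "Bypass Protection Mechanism": 10}

def threat_scores(impacts, mapping):
    """Eq. (12): sum the technical-impact scores that map to each STRIDE threat."""
    totals = {}
    for impact, score in impacts.items():
        threat = mapping[impact]
        totals[threat] = totals.get(threat, 0) + score
    return totals

def risk_score(cwe_score_sum, avg_incidence_rate, avg_weighted_impact):
    """Eq. (13): accumulated CWE score x criticality, with criticality = probability x severity."""
    criticality = avg_incidence_rate * avg_weighted_impact
    return cwe_score_sum * criticality, criticality

print(threat_scores(technical_impacts, impact_to_threat))

# "Broken Access Control" data factors from Table 8: 3.81 % incidence, 5.93 weighted impact.
score, crit = risk_score(cwe_score_sum=8, avg_incidence_rate=0.0381, avg_weighted_impact=5.93)
print(round(crit, 3), round(score, 2))   # ~0.226 and ~1.81, in line with Table 11
```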


6. Case study

To show the effectiveness of the suggested methodology, a manual security evaluation and analysis of a single web application has been performed. It is important to note that, due to the technologies used in the software, some mechanisms that would meet ASVS requirements were not put in place. For example, requirements V2.6.1 to V2.6.3 under the Authentication criteria define the security mechanism "Look-up secret security"; however, the application does not feature the specified functionality. Therefore, the relevant requirements may be excluded from the verification scope. As a result, these requirements were marked as "Not Applicable". Examples of non-applicable ASVS requirements in this case study are listed in Table 9.

Table 9. Examples of non-applicable verification cases.
| Criteria | Element | Security Mechanism | ASVS Requirement |
| Authentication | Authenticator Security | Look-up secret security | V2.6.1-V2.6.3 |
|  |  | Out-of-band verifier security | V2.7.1-V2.7.6 |
|  |  | OTP verifier security | V2.8.1-V2.8.7 |
| Input Validation and Sanitization | Input Validation | Input validation for LDAP query | V5.3.7 |
|  |  | XPath query parameterization | V5.3.10 |
| Privacy and Data Protection | Server-side Data Protection | Health data encryption | V6.1.2 |
|  |  | Financial data encryption | V6.1.3 |
| Web Service and API Security | SOAP Web Service Security | Add integrity check to SOAP payload | V13.3.2 |
|  | GraphQL | GraphQL logic | V13.4.1-V13.4.2 |

Fig. 8. Analysis of evaluation criteria scores.

Fig. 9. Drill-down analysis based on the security strength evaluation model.


Fig. 10. Analysis of Impact Scopes.

In summary, 261 out of the 286 ASVS requirements have been determined to be "applicable" to the security verification.

6.1. Analysis results using the model

In this section, we describe the analysis, which is used mainly to support security analysts in formulating analytics for discovering, interpreting, and communicating significant patterns in data. We begin by calculating the security strength using the evaluation model and aggregating the verification findings of ASVS. Table 10 presents the summary of the SOI and evaluation-aspect scores. The SOI score is 7.721, which indicates that the SOI has a "Good Security" rating. The weight factors for the three evaluation aspects are given using a subjective weighting approach. In this case, the stakeholders placed the highest weight on "Software Structure" among the three aspects.

Table 10. Summary of evaluation-aspect scores.
Score of SOI: 7.721 (Good Security)
| Evaluation Aspect | Weight | Score |
| Software Structure | 0.6 | 0.45 |
| Software Environment | 0.3 | 0.26 |
| Software Process | 0.1 | 0.06 |

Fig. 8 depicts an example of the "next-level" security strength analysis, concentrating on the aspect of the software structure, in which the evaluation criterion scores are shown alongside the distribution of verification-case fulfillment. Among the 11 evaluation criteria, "Files and Resources Security" has the highest score (1.00), while "Intrusion Detection and Prevention" has the lowest (0.438). "Authentication" has the greatest number of verification cases and gains a moderate score of 0.708.

The built-in hierarchical structure of the security strength model allows for a very thorough breakdown and makes it simple to determine whether the necessary security mechanisms are implemented and functioning properly. Fig. 9 illustrates the drill-down scenarios in the security strength analysis. For instance, after looking into the low-scoring "Credential Update" (score = 0.25), it is discovered that the security mechanism "Notification Functionality of Credential Update" is deficient. Additionally, the evaluation's findings indicate that the "Password Input Functionality" (score = 0) does not seem to be prepared for "Password Security". Furthermore, in the criteria of "Privacy and Data Protection", "Cache Data Protection" is the only one of the five crucial security mechanisms in "Server-Side Data Protection" that does not meet the requirements.
whether the necessary security mechanisms are implemented and

Fig. 11. Analysis of security risk.


An analysis of the effect of the identified CWEs on the security properties (i.e., the impact scope) is carried out by plotting them in a bar chart. Fig. 10 shows the analysis of impact scopes, where the horizontal axis denotes the number of CWEs. In contrast to the positive connotation of security strength scores, in the security weakness evaluation the higher the score, the more serious a weakness, threat, or security risk is. That means the scores of all weakness components always have a negative effect on the overall result. Thus, it is clear from the figure that the system's flaws have the greatest impact on the security properties of "Access Control" and "Confidentiality".
We also assessed how the CWEs map to OWASP's Top 10 security risks, which are summarized in Table 11 and depicted in Fig. 11. The results of our evaluation show that six of the ten critical risks are associated with the SOI. The six risks were then ranked in order of their scores. As shown in the table, even though the number of CWEs in "Identification and Authentication Failures" is the highest (10), its criticality rating is relatively low (0.17); consequently, this risk is placed second according to the calculated score (1.66). Among the six risks, "Broken Access Control" is the most critical with a score of 1.81, while "Insecure Design" is identified as the least critical.

Table 11. Summary of security risks.

Security Risk                                     Number of CWEs   Criticality   Score   Rank
A01-Broken Access Control                         8                0.23          1.81    1
A02-Cryptographic Failures                        3                0.31          0.92    4
A04-Insecure Design                               2                0.20          0.41    6
A07-Identification and Authentication Failures    10               0.17          1.66    2
A08-Software and Data Integrity Failures          3                0.16          0.49    5
A09-Security Logging and Monitoring Failures      5                0.32          1.62    3
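The scores in Table 11 are close to the product of the number of mapped CWEs and the criticality rating (for example, 8 × 0.23 ≈ 1.8 for "Broken Access Control"). The sketch below reproduces that simplified reading and the resulting ranking; it is an approximation of the table's arithmetic, not the model's actual scoring equation.

```python
# Simplified reading of the risk scoring in Table 11: each OWASP Top 10 risk
# gets a score close to (number of mapped CWEs) x (criticality).
# The exact equation used in the model may differ; this is only an approximation.

risks = {
    # risk: (number of CWEs, criticality)
    "A01-Broken Access Control": (8, 0.23),
    "A02-Cryptographic Failures": (3, 0.31),
    "A04-Insecure Design": (2, 0.20),
    "A07-Identification and Authentication Failures": (10, 0.17),
    "A08-Software and Data Integrity Failures": (3, 0.16),
    "A09-Security Logging and Monitoring Failures": (5, 0.32),
}

scored = {name: n_cwes * criticality for name, (n_cwes, criticality) in risks.items()}

# Rank risks from most to least critical by their approximate score.
for rank, (name, score) in enumerate(
        sorted(scored.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank}. {name}: {score:.2f}")
```

Under this approximation the ranking matches Table 11, with "Broken Access Control" first and "Insecure Design" last.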
Finally, threat scores are produced using Eq. (12). The analysis of threats is shown in Table 12, which highlights the severity of the threats applicable to the system. Out of the six threats, "Information Disclosure" is evaluated as the most serious, and its most significant technical impact is "Read Application Data".

Table 12. Analysis of threats.

Threat                    Score   Technical Impact                        Score
Spoofing                  12      Gain Privileges or Assume Identity      12
Tampering                 15      Modify Application Data                 7
                                  Modify Memory                           0
                                  Modify Files or Directories             4
                                  Unexpected State                        3
                                  Alter Execution Logic                   1
Repudiation               4       Hide Activities                         4
Information Disclosure    17      Read Application Data                   12
                                  Read Memory                             0
                                  Read Files or Directories               5
Denial of Service         7       DoS: Instability                        0
                                  DoS: Resource Consumption (CPU)         3
                                  DoS: Resource Consumption (Memory)      1
                                  DoS: Crash, Exit, or Restart            1
                                  DoS: Resource Consumption (Other)       2
Elevation of Privilege    11      Execute Unauthorised Code or Command    1
                                  Bypass Protection Mechanism             10
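The threat scores in Table 12 are consistent with summing the scores of the technical impacts mapped to each STRIDE threat (for instance, 7 + 0 + 4 + 3 + 1 = 15 for Tampering). The sketch below assumes that simple sum, which reproduces the totals in the table, although Eq. (12) as defined in the model may weight the impacts differently.

```python
# Threat scores consistent with Table 12: each STRIDE threat aggregates the
# scores of its mapped technical impacts. A plain sum is assumed here, which
# reproduces the values in the table; the model's Eq. (12) may differ.

technical_impacts = {
    "Spoofing": {"Gain Privileges or Assume Identity": 12},
    "Tampering": {"Modify Application Data": 7, "Modify Memory": 0,
                  "Modify Files or Directories": 4, "Unexpected State": 3,
                  "Alter Execution Logic": 1},
    "Repudiation": {"Hide Activities": 4},
    "Information Disclosure": {"Read Application Data": 12, "Read Memory": 0,
                               "Read Files or Directories": 5},
    "Denial of Service": {"DoS: Instability": 0,
                          "DoS: Resource Consumption (CPU)": 3,
                          "DoS: Resource Consumption (Memory)": 1,
                          "DoS: Crash, Exit, or Restart": 1,
                          "DoS: Resource Consumption (Other)": 2},
    "Elevation of Privilege": {"Execute Unauthorised Code or Command": 1,
                               "Bypass Protection Mechanism": 10},
}

threat_scores = {threat: sum(impacts.values()) for threat, impacts in technical_impacts.items()}

# The most serious threat is the one with the highest aggregated score.
most_serious = max(threat_scores, key=threat_scores.get)
print(threat_scores)
print(f"Most serious threat: {most_serious}")  # -> Information Disclosure (17)
```

Under this reading, "Information Disclosure" tops the list because "Read Application Data" alone contributes 12 of its 17 points.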

7. Conclusion

This paper presents a model to quantify the security evaluation of web applications. By using methods such as the aggregation of security scores and analytics, organizations can better understand the security posture of the system of interest and make informed decisions. Our method allows for the incorporation of ASVS operational data into knowledge-based information, which can provide insight into the security strength of a given system and any potential vulnerabilities or threats. This approach to security evaluation ensures that the ASVS requirements are adequately considered, while also providing a comprehensive view of system security from the macro aspects of "structure", "environment", and "process". From there, the problem is broken down by identifying the components at the next level down, with a special focus on the critical granular level of security mechanisms. Security mechanisms are the concrete objects that provide a bridge between the descriptive ASVS requirements and their eventual analysis. In addition, in our test-based approach, we integrate the ASVS into the evaluation of discovered vulnerabilities (i.e., mapped CWEs). This modeling approach applies existing CWE databases with effective mapping techniques to yield threat and risk catalogs that correspond with each ASVS element. This is achieved by taking the verification result as explicit input to the evaluation, which allows for a more precise and focused evaluation of any potential negative effect and thus enhances the overall analysis results.

It is important to recognize the limitations of this work to inform further investigations. First, the model primarily focuses on technical security mechanisms and does not take into account human factors, such as social engineering attacks or insider threats. Additionally, the model does not provide guidance on how to mitigate risks, as it is meant to be used as an analysis tool rather than a prescriptive guide. Overall, while the model is a useful tool for analyzing security posture, it should be used in conjunction with other frameworks and considerations to build a comprehensive security strategy. A future endeavor is to develop more practical security metrics based on the model and incorporate them into a security analytics application, as recommended in (Wen et al., 2022). Such an application serves to bolster data analysis, granting organizations the ability to thoroughly measure key elements of security and gain a clearer picture of what needs to be done to ensure security and compliance. As an additional step, automation of the security evaluation process may be considered to enhance its efficacy and allow for real-time monitoring, marking an important step toward continuously refining the system's security measures through well-structured metrics and analytics.

Declaration of Competing Interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Shao-Fang Wen reports that financial support was provided by the SFI-Norwegian center for Cybersecurity in Critical Sectors.

Data availability

No data was used for the research described in the article.

Funding

The Research Council of Norway financially supports this research work through the SFI-Norwegian center for Cybersecurity in Critical Sectors (NORCICS, NFR project number: 310105).

Acknowledgement

This research work is financially supported by the SFI-Norwegian center for Cybersecurity in Critical Sectors (NORCICS, NFR project number: 310105).

Declaration of Generative AI and AI-assisted technologies in the writing process

Not applicable.

References

Andrews, R., Boyne, G.A., Walker, R.M., 2006. Subjective and objective measures of organizational performance: an empirical exploration. Public Service Performance: Perspect. Measure. Manage., 14–34.
Banaei, O., Khorsandi, S., 2012. A new quantitative model for web service security. In: 2012 IEEE 14th International Conference on Communication Technology. IEEE.
Charpentier, F., "Common Criteria Web Application Security Scoring CCWAPSS"; Available from: https://dl.packetstormsecurity.net/papers/web/ccwapss_1.1.pdf. (Accessed on Feb. 3, 2023).
Delen, D., Ram, S., 2018. Research challenges and opportunities in business analytics. J. Bus. Anal. 1 (1), 2–12.
Ekelhart, A., et al., 2007. Ontological mapping of common criteria's security assurance requirements. In: IFIP International Information Security Conference. Springer.
Erşahin, B., Erşahin, M., 2022. Web application security. South Florida J. Dev. 3 (4), 4194–4203.
Garrette, B., et al., 2018. Sell the solution: core message and storyline. In: Cracked It! How to Solve Big Problems and Sell Solutions like Top Strategy Consultants, 197–221.
Garrette, B., et al., 2018. Sell the solution: recommendation report and delivery. In: Cracked It! How to Solve Big Problems and Sell Solutions like Top Strategy Consultants, 223–249.
Grassi, P.A., Garcia, M.E., Fenton, J.L., 2017. Digital identity guidelines. NIST Special Publication 800-63-3.
Gritzalis, D., Karyda, M., Gymnopoulos, L., 2002. Elaborating quantitative approaches for IT security evaluation. Secur. Inform. Soc. Vis. Perspect., 67–77.


Hai, H.D., Nga, P.T., 2018. Evaluating the security levels of the Web-Portals based on the standard ISO/IEC 15408. In: Proceedings of the 9th International Symposium on Information and Communication Technology.
Harrison, S., et al., 2016. A security evaluation framework for UK e-government services agile software development. arXiv preprint.
Herrmann, D.S., 2002. Using the Common Criteria for IT Security Evaluation. CRC Press.
ISO/IEC, "Information security, cybersecurity and privacy protection — Evaluation criteria for IT security — Part 1: Introduction and general model"; Available from: https://www.iso.org/standard/72891.html. (Accessed on Jan. 21, 2023).
Kim, D., Solomon, M.G., 2010. Fundamentals of Information Systems Security. Jones & Bartlett Publishers.
LeMay, E., et al., 2011. Model-based security metrics using adversary view security evaluation (ADVISE). In: 2011 Eighth International Conference on Quantitative Evaluation of SysTems. IEEE.
McGraw, G., Chess, B., Migues, S., 2009. Building Security In Maturity Model. Fortify & Cigital.
MITRE, "Common Weakness Enumeration (CWE)"; Available from: https://cwe.mitre.org/index.html. (Accessed on Feb. 3, 2023).
Okamura, H., Tokuzane, M., Dohi, T., 2013. Quantitative security evaluation for software system from vulnerability database.
Ouedraogo, M., et al., 2009. Security assurance metrics and aggregation techniques for IT systems. In: 2009 Fourth International Conference on Internet Monitoring and Protection. IEEE.
OWASP, "OWASP Proactive Controls"; Available from: https://owasp.org/www-project-proactive-controls/. (Accessed on Feb. 3, 2023).
OWASP, "OWASP Top 10 Introduction"; Available from: https://owasp.org/Top10/A00_2021_Introduction/. (Accessed on Apr. 27, 2022).
OWASP, "Software Assurance Maturity Model v2.0"; Available from: https://www.opensamm.org/. (Accessed on Apr. 30, 2022).
OWASP, "Application Security Verification Standard (ASVS)"; Available from: https://owasp.org/www-project-application-security-verification-standard/. (Accessed on Jun. 3, 2022).
Pham, N., Riguidel, M., 2007. Security assurance aggregation for IT infrastructures. In: 2007 Second International Conference on Systems and Networks Communications (ICSNC 2007). IEEE.
Pröllochs, N., Feuerriegel, S., 2020. Business analytics for strategic management: identifying and assessing corporate challenges via topic modeling. Inform. Manage. 57 (1), 103070.
Reddy, N. "An Excellent Compilation of Software Testing Concepts (Manual Testing)".
Ruan, Y.L., Yan, X.Q., 2018. Research on key technology of web application security test platform. In: Proceedings of the International Conference on Education, Management and Social Science (EMSS 2018).
Sabatier, P.A., 1986. Top-down and bottom-up approaches to implementation research: a critical analysis and suggested synthesis. J. Public Policy 6 (1), 21–48.
Schechter, S.E., 2004. Computer Security Strength and Risk: A Quantitative Approach. Harvard University.
Shostack, A., 2014. Threat Modeling: Designing for Security. John Wiley & Sons.
Shukla, A., et al., 2021. System Security Assurance: A Systematic Literature Review. arXiv preprint.
Sönmez, F.Ö., 2019. Security qualitative metrics for open web application security project compliance. Proced. Comp. Sci. 151, 998–1003.
Vache, G., 2009. Vulnerability analysis for a quantitative security evaluation. In: 2009 3rd International Symposium on Empirical Software Engineering and Measurement. IEEE.
W3C, "RDF 1.1 XML Syntax"; Available from: https://www.w3.org/TR/rdf-syntax-grammar/. (Accessed on Jan. 26, 2022).
Weldehawaryat, G.K., Katt, B., 2018. Towards a quantitative approach for security assurance metrics. In: The 12th International Conference on Emerging Security Information.
Wen, S.F., Shukla, A., Katt, B., 2022. Developing security assurance metrics to support quantitative security assurance evaluation. J. Cybersecur. Priv. 2 (3), 587–605.
Yautsiukhin, A., et al., 2008. Towards a quantitative assessment of security in software architectures. In: Nordic Workshop on Secure IT Systems (NordSec), Copenhagen, Denmark.
Zhou, C., Ramacciotti, S., 2011. Common Criteria: its limitations and advice on improvement. Inform. Syst. Secur. Assoc. ISSA J., 24–28.

Shao-Fang Wen (Ph.D.) is currently working as a post-doctoral research fellow at the Norwegian University of Science and Technology. The areas of his research include:
● Socio-technical security analysis
● Security education and learning
● Software security and secure programming
● Security assurance and security testing
● Knowledge management and ontology

Basel Katt is currently working as a Professor at the Norwegian University of Science and Technology. The areas of his research are:
● Software security and vulnerability analysis
● Security assurance and security testing
● Model driven software development and model driven security
● Access control, usage control and privacy protection
● Security monitoring, policies, languages, models and enforcement

