2009 Fruhwirth
Abstract

The growing number of software security vulnerabilities is an ever-increasing challenge for organizations. As security managers in the industry have to operate within limited budgets, they also have to prioritize their vulnerability responses. The Common Vulnerability Scoring System (CVSS) aids in such prioritization by providing a metric for the severity of vulnerabilities. In its most prominent application, as the severity metric in the U.S. National Vulnerability Database (NVD), CVSS scores omit information pertaining to the potential exploit victims' context. Researchers and managers in the industry have long understood that the severity of vulnerabilities varies greatly among different organizational contexts. Therefore the CVSS scores provided by the NVD alone are of limited use for vulnerability prioritization in practice. Security managers could address this limitation by adding the missing context information themselves to improve the quality of their CVSS-based vulnerability prioritization. It is unclear for them, however, whether the potential improvements are worth the additional effort. We present a method that enables practitioners to estimate these improvements. Our method is of particular use to practitioners who do not have the resources to gather large amounts of empirical data, because it allows them to simulate the improvement potential using only publicly available data in the NVD and distribution models from the literature. We applied the method on a sample set of 720 vulnerability announcements from the NVD and found that adding context information significantly improved the prioritization and selection of vulnerability response processes. Our findings contribute to the discourse on returns on security investment, measurement of security processes and quantitative security management.

1. Introduction

Analyzing and resolving software security vulnerabilities are core tasks of security incident management, one of the most complex activities in organizational security management [7]. Resolving vulnerabilities is also labor intensive and thus a significant cost factor for IT departments worldwide.

Security vulnerabilities in software can originate from a variety of problems, such as errors in design, misconfigured systems, or defects, commonly known as bugs. Research found that between 1 and 5% of software bugs are also security risks [2]. The problem of software vulnerabilities is growing: according to records of the U.S. National Vulnerability Database (NVD) [24], as of October 2008, almost 20 new software vulnerabilities were being published every day [9]. By July 2009, the number of software vulnerability records in the NVD had grown to over 37,000 [24].

The actions that are necessary to resolve certain vulnerabilities are called vulnerability response processes and are described in an organization's security management policy. Today, all larger companies that comply with the Sarbanes-Oxley (SOX) legislation apply IT service management frameworks, such as the IT Infrastructure Library (ITIL) [17][20] or COBIT [5][11][26], and thus have to have vulnerability response processes in place.

Security managers who live with limited budgets are challenged to balance their companies' need for security with the available resources to ensure that vulnerabilities are resolved in an efficient manner. Aside from other issues, this means that each vulnerability should be addressed by a response process that is appropriate to its severity and that more severe vulnerabilities should be prioritized over less severe ones.

The inner workings of the actual vulnerability response activities lie outside the scope of this paper. We encourage the reader to refer to standards like the ITIL [17][20] or ISO 17799 for more details on that matter [5][19]. For the purpose of this work it shall be sufficient to note that vulnerability response processes can differ in characteristics like response time, involved roles, impact on or disruption of production operations and, ultimately, total costs. We define the total costs of a vulnerability response process as the sum of its direct costs (employed human resources, license fees, etc.) and indirect costs (productivity losses, interruption of production processes due to unscheduled reboots after patching, etc.). Choosing an efficient process means choosing one which resolves the targeted vulnerability in a timely manner while generating the lowest total costs compared to alternative solutions.

1.1. CVSS

The Common Vulnerability Scoring System (CVSS) was introduced by the National Infrastructure Advisory Council (NIAC) and is now managed by the Forum of Incident Response and Security Teams (FIRST). CVSS aids security managers in the prioritization of vulnerabilities by providing a metric for their relative severity [22]. CVSS assigns each vulnerability a value, or "score", on a scale of 0 to 10, where higher values indicate greater severity. CVSS was designed as an open framework, consisting of three different metric groups: 1.) the "Base Metric", which describes the general characteristics of vulnerabilities; 2.) the optional "Temporal Metric", which represents changes in the severity over time; and 3.) the optional "Environmental Metric", which introduces context information that is unique to a particular user, organization or business environment. The base metric can either be used alone or in combination with the other two optional metrics. A full description of all the information used to calculate CVSS scores is available in [22]. Details that are relevant for the scope of this work will be laid out in Section 4.4 and pictured in Figure 1.

2. Problem description

Where CVSS is employed in its most basic form, using only the input defined in the "Base Metric", the scoring omits all information about a vulnerability's context and will thus output the same severity scores regardless of the characteristics of the affected organization. Authors like Rieke [27] acknowledge that problem and advise that "prioritizing of vulnerabilities based on such measures should be used with caution" [27].

Empirical research has also shown that the actual impact of security incidents varies significantly among different types of organizations, businesses and users [18][30]. Since different organizations perceive the severity of a particular vulnerability differently, they also prioritize its mitigation differently [21]. CVSS can account for these differences to a certain extent if the temporal and environmental metric groups are applied in the scoring process together with the base metric. Therefore we argue that by using the two optional metrics, the quality of the scores can be improved because they better reflect the actual impact of a vulnerability in a particular organization's environment. Higher quality scores can in turn be used to improve the prioritization of vulnerabilities from a security management perspective.

2.1. Need

CVSS's most prominent application is its use in the National Vulnerability Database (NVD) of the National Institute of Standards and Technology (NIST). The NVD is a public directory of software vulnerabilities and serves as one of the standard data sources for security management applications. The NVD employs the CVSS Base-Metric as a severity indicator for all recorded vulnerabilities. The two context-aware CVSS metrics (Environmental and Temporal), however, are omitted. Since the Base-Metric is unaware of an organization's context, the NVD scoring alone is of limited use for vulnerability prioritization in practice [27].

A practical example for this discrepancy between scored and actual severity could be a denial-of-service (DoS) vulnerability: The NVD entry "CVE-2009-0609" describes such a vulnerability in the Sun Java System Directory Server that received a base score of 7.8 points. This score may render it "important", but not "critical" in the minds of most security managers. If the affected company's business, however, has a high availability requirement (which is endangered by a DoS attack) for the directory server and vulnerability exploits are already available to the hacker community, the vulnerability suddenly becomes of "critical" importance to the business. In this case, the information about the existence of an exploit and the high availability requirement of the affected server is context information that improves the vulnerability prioritization. If such information is applied at the time of the score's calculation in the CVSS environmental and temporal metric groups, the output score would increase from 7.8 to 10, and thus be closer to the real-world severity of the vulnerability.
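The temporal part of such a re-calculation is a simple multiplicative adjustment of the base score. Below is a minimal sketch, with the multiplier values taken from the CVSS v2 guide [22]; the environmental metric, which is what lifts the example's score towards 10, involves the longer adjusted-impact equation and is omitted here for brevity.

```python
# Sketch of the CVSS v2 temporal score [22]: the base score is multiplied by
# factors for Exploitability (E), Remediation Level (RL) and Report
# Confidence (RC). Multiplier values follow the CVSS v2 specification.
E = {"UNPROVEN": 0.85, "PROOF-OF-CONCEPT": 0.90, "FUNCTIONAL": 0.95,
     "HIGH": 1.00, "NOT DEFINED": 1.00}
RL = {"OFFICIAL-FIX": 0.87, "TEMPORARY-FIX": 0.90, "WORKAROUND": 0.95,
      "UNAVAILABLE": 1.00, "NOT DEFINED": 1.00}
RC = {"UNCONFIRMED": 0.90, "UNCORROBORATED": 0.95, "CONFIRMED": 1.00,
      "NOT DEFINED": 1.00}

def temporal_score(base, exploitability, remediation, confidence):
    """Return the CVSS v2 temporal score, rounded to one decimal place."""
    return round(base * E[exploitability] * RL[remediation] * RC[confidence], 1)

# The CVE-2009-0609 base score with all temporal inputs at their defaults:
print(temporal_score(7.8, "NOT DEFINED", "UNAVAILABLE", "NOT DEFINED"))  # 7.8
# A functional exploit combined with an official fix lowers the score:
print(temporal_score(7.8, "FUNCTIONAL", "OFFICIAL-FIX", "CONFIRMED"))    # 6.4
```

Note that the temporal multipliers are all at most 1: the temporal metric can only confirm or lower a base score, while increases such as the jump from 7.8 towards 10 in the example above come from the environmental metric.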
In practice, severity scores are further used to categorize vulnerabilities in classes [21]. Vulnerabilities with scores of, for example, more than 9 can be classified as "critical" while scores of less than 4 are considered "low". The class of a vulnerability then determines how and when a certain response process is triggered within the organization to resolve it.

Even though vulnerabilities of different severity classes can often be resolved using the same methods, e.g., automated patch-distribution systems, the total execution costs of the individual response processes still vary. A response to critical vulnerabilities, for example, needs faster response times and may require unscheduled reboots of critical systems that affect the organization's productivity. Additional indirect costs can occur when critical patches with potential side effects on other systems have to be rolled out without prior testing. Lower priority response processes can avoid these problems and the associated costs by resolving vulnerabilities during regularly scheduled system maintenance windows and using system test environments.

Triggering a "critical" response process to resolve a "low" vulnerability thus creates unnecessary costs. Hence, improving the quality of vulnerability classification has a direct impact on the cost-effectiveness of its response process.

Even though adding contextual information has the upside of improving the quality of vulnerability prioritization and classification, the downside lies with the additional effort that is necessary for acquiring the context information. For instance, security managers would have to determine the availability of exploits or patches for every vulnerability, or purchase that information from third parties.

Because it is unclear for them whether the potential improvements are worth the additional effort, managers hesitate to invest in improving the scoring mechanisms of vulnerabilities. The ability to estimate these improvements prior to the investment could thus encourage more managers to engage in scoring improvement activities.

In this work we present such a method that aims at enabling managers to estimate the effects of additional context information on vulnerability scores using only input data that is available at no additional costs to the organization and at the time of the investment decision. We use real-world vulnerability announcements gathered from the NVD in combination with empirical models from the literature as input for our method. The questions we seek to answer with the method are:

1. How does the use of context information change the value of CVSS scores on a larger scale?

2. What is the impact of these score changes on the prioritization of vulnerabilities and the selection of vulnerability response processes?

In the presented method we compare the NVD's CVSS Base-scores with context-enriched scores that apply CVSS's additional Temporal and Environmental Metrics. Our results exemplify the potential improvements in an organization. The presented methodology enables security managers to make informed decisions on whether investing in improved vulnerability prioritization in their organization is worth the costs.

The remainder of this work is structured as follows: Section 3 analyzes related work on the issue, Section 4 introduces the methodology, Section 5 describes the results and Section 6 presents the conclusions.

3. Related work

The issue of vulnerability prioritization has been actively discussed in the literature and the need for vulnerability prioritization in organizations is widely recognized [9][12][21][27]. This work uses the notion that organizations should prioritize their remediation efforts based on the value of their assets and the severity of the vulnerability [12].

The literature further supports the concept that every organization evaluates vulnerabilities differently, based on their individual context [9][12][21][27], and that vulnerability metrics should account for these context differences. Chen [9] describes this challenge as moving from value-neutral to value-based metrics. More research has found that companies also experience the occurrence of vulnerabilities differently. Ishiguro et al. [18] and Telang et al. [30] empirically showed that the impact of security vulnerabilities or incidents varies with a company's context.

The estimation of this impact influences the ability to determine the return on security investment (ROSI). Researchers have addressed that estimation problem from technological, organizational and economical perspectives [6][15]. In the scope of our work, we will be mostly concerned with the latter. Al-Humaigani et al. [1], Cavusoglu et al. [8], Sonnenreich [28], and Neubauer et al. [23] investigated the issue of security investments from a quantitative perspective and found that one of the major issues is the determination of the indirect costs induced by security problems [3][4].

Given these problems in measuring the costs and severity of vulnerabilities in absolute terms, the usage of relative metrics is a practical alternative. The Common Vulnerability Scoring System (CVSS) [22] provides such relative metrics. CVSS is further
recommended by the National Institute of Standards and Technology (NIST) and used in NIST's National Vulnerability Database [24] as well as by numerous authors [9][12][21].

Our work stands in line with other research in this area of security management that tries to investigate the possibilities of context- and value-based security measurement [9][23][29].

4. Methodology

The goal of this work was to create a method as an artifact that can be used by practitioners to estimate the vulnerability response improvements they can achieve within their organization's information systems by investing in better vulnerability scoring. Hevner et al. [16] have described a comprehensive framework for creating such artifacts in information systems research, the "Design Science Approach". We follow the steps laid out by Hevner in designing, evaluating and communicating our research. For the evaluation of the method in particular, we chose Hevner's suggestion [16] of using a simulation and executed the developed method with artificial data.

4.1. Assumptions

The method is designed to analyze possible efficiency improvements in security vulnerability prioritization and response process selection in an organization. We assume that the organization in question has an established IT security policy and documented vulnerability response processes in place. We further assume that the organization knows the costs of its security operations (for example, through internal accounting practices, payroll or other means). By knowing the costs of its IT security and having documented processes, the organization is also able to determine the costs of an execution of a vulnerability response process instance.

4.2. Scenarios

The method compares two scenario settings: in the first, the organization relies solely on the CVSS Base-scores published in the NVD; the second scenario improves upon the first by introducing additional context-aware metrics to the scoring.

Scenario Setting A: A company uses CVSS-based software vulnerability scores to prioritize the vulnerability patching activities of affected systems. The NVD is the main source of software vulnerability announcements. Staff members review the vulnerability announcements on a daily basis. Security managers use the NVD's CVSS Base-scores to prioritize the vulnerabilities and select the corresponding response processes based on the vulnerabilities' classification. Vulnerabilities are classified in four different severity categories: scores below 5 are classified as 'Low', scores greater than or equal to 5 as 'Medium', ≥7 as 'High' and ≥9 as 'Critical'. The vulnerability response processes vary in their response time, the involved organizational roles and their total costs. Vulnerabilities which are classified as 'Critical' are addressed immediately by members of the security staff. They are resolved using quick-response processes that can require the interruption of production processes or render the affected systems temporarily unavailable outside scheduled maintenance windows (e.g., due to unscheduled reboots of database systems during weekday working hours). Lower vulnerability classes are delegated to the helpdesk staff and are resolved in bulk during scheduled maintenance and without interrupting production processes. In total, all vulnerabilities are addressed by at least one instance of a response process, and no vulnerabilities are left unattended.

Scenario Setting B: is the same as Scenario A, except that CVSS base scores are not accepted 'as is' from the NVD. Instead, security managers re-calculate the scores, using the additional context information specified in the CVSS Environmental and Temporal Metric groups [22]:
- Exploitability of the vulnerability: [Unproven, Proof-of-Concept, Functional or High]

- Remediation Level: [Official Fix, Temporary Fix, Workaround or Unavailable]

- Confidentiality, integrity and availability requirement of the system which is affected by the vulnerability: [Low, Medium or High]

The re-calculation is performed using CVSS's standard "Temporal" and "Environmental" Metrics, thus no alterations are made in the scoring method itself.

4.3. Data

The input data for the presented method is defined by the data points needed to calculate the Base-, Temporal- and Environmental-score according to the CVSS specification [22]. The data is used to calculate and subsequently compare the vulnerabilities' Base-scores in scenario A with the Temporal and Environmental scores in scenario B.

To perform the comparison, the data of all analyzed vulnerabilities is first laid out in a spreadsheet. Figure 1 gives an overview of this spreadsheet and shows which data points are necessary to calculate each of the three different CVSS scores. The figure further indicates whether these data points are available directly from a vulnerability's NVD entry.

As the figure shows, the context information required for the Temporal- and Environmental-scores is not available in the NVD. By default, CVSS specifies the state of these missing data points as "NOT DEFINED". As long as they remain undefined they do not affect the outcome of the scoring. To fill these missing data points, one would normally have to collect them from other sources. In practice, however, collecting this kind of data from secondary sources represents a significant organizational or financial effort and is thus of little attraction to managers. To solve this problem, this method aims at estimating the effects of added context information on vulnerability prioritization using only data that is freely available at the time of the investment decision.

The available data at that time consists of the information contained in the NVD entry, the organization's security policy and knowledge of its security process cost structure (as laid out in Section 4.1). In the course of this method we will try to artificially create the missing data points by estimating them based on the existing information. This is further in line with Hevner [16], who acknowledges the execution of a simulation with artificial data.

We will define the missing context information (e.g., availability of patches or exploits) as dependent variables that can be explained by independent variables, for example the age of a vulnerability. Authors like Frei et al. [13] have done this before to investigate trends in the relationship between the availability of vulnerability patches, exploits and their age. Other authors, like Shari Lawrence Pfleeger [25], have also called for the use of such trend data in their work. Frei et al. in particular used empirical findings to develop a distribution model that can be used to determine the likelihood that a patch or exploit is available a certain number of days after a vulnerability has been published. Frei et al.'s work enables us to estimate the missing data points in the "temporal metric" group by applying their distribution model with the information available from the NVD.

It should be noted that Figure 1 also shows that three data points in the CVSS metric remained in their default state because no suitable estimation models were available for them at the time of this writing. The unchanged data points were Report Confidence, Collateral Damage Potential, and Target Distribution.
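The estimation approach just described, treating each missing data point as a random draw whose probability is given by an age-dependent distribution function, can be sketched as follows. The standard Pareto and Weibull CDF forms are assumed here, together with the parameter values quoted in Section 4.3.1; the exact functional forms fitted by Frei et al. should be taken from [13].

```python
import math
import random

# Sketch of the estimation step: each missing temporal data point is drawn
# at random, with a probability given by a distribution function of the
# vulnerability's age in days. Standard Pareto and Weibull CDFs are assumed
# (the forms fitted by Frei et al. should be checked against [13]).
def pareto_cdf(age_days, a=0.26, k=0.00161):
    """Estimated likelihood that an exploit exists at this age."""
    return 1.0 - (k / age_days) ** a

def weibull_cdf(age_days, lam=0.209, k=4.04):
    """Estimated likelihood that an official patch exists at this age."""
    return 1.0 - math.exp(-((age_days / lam) ** k))

def draw_temporal_metrics(age_days, rng=random):
    """Assign Exploitability and Remediation Level for one NVD entry."""
    exploitability = ("HIGH" if rng.random() <= pareto_cdf(age_days)
                      else "UNPROVEN")
    remediation = ("OFFICIAL-FIX" if rng.random() <= weibull_cdf(age_days)
                   else "UNAVAILABLE")
    return exploitability, remediation
```

Drawing the data points per entry, rather than assigning expected values, keeps the simulated spreadsheet categorical, which is what the CVSS score calculation expects.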
Because CVSS specifies the default state of these data points as "Not defined", their omission does not affect the outcome of the scoring.

4.3.1. Temporal Metrics. Frei et al. [13] used the age of a vulnerability announcement as an independent variable and found that the likelihood of the availability of a patch and of an exploit for the vulnerability follows the form of a Weibull and a Pareto distribution function, respectively. Frei et al. further determined the distribution functions' parameters based on an empirical analysis of 14,000 vulnerabilities [13]. The age of a vulnerability is calculated by counting the days between the date of its first disclosure and the date the CVSS scoring is conducted (e.g. 'today').

Exploitability: Figure 1 shows how we can use the result of Frei's model [13] to fill in the missing data in the temporal score section of each analyzed vulnerability in the spreadsheet. The likelihood that an exploit is available for the given vulnerability is calculated using Frei's Pareto distribution [13] of the form:

F(x) = 1 - (k / x)^a,  with a = 0.26, k = 0.00161

where x denotes the age of a vulnerability. A random variable rand, with a value between 0 and 1, is generated and compared to the output of the Pareto function F(x). If the value of rand is less than or equal to F(x), the exploitability data point for this particular vulnerability is set to "HIGH". In all other cases it is set to "UNPROVEN".

Remediation Level: Similar to the case above, the patch availability is calculated with Frei's Weibull distribution [13]:

F(x) = 1 - e^(-(x/λ)^k),  with λ = 0.209, k = 4.04

If the value of rand is less than or equal to F(x), the remediation level is set to "OFFICIAL-FIX". In all other cases it is set to "UNAVAILABLE".

4.3.2. Environmental Metrics. Security Requirements: The calculation of the CVSS environmental scores requires information about the confidentiality, integrity and availability requirements for each system that is affected by a vulnerability. According to CVSS, these requirements need to specify the system's need for high, medium (the default setting) or low confidentiality, integrity and availability. Ideally, this requirement data can be obtained from an organization's security policy. In many cases, however, the security policy documentation only states high-level requirements, without specifying details for every single system in an organization.

If these detailed requirements are not easily available, the necessary data points can be estimated, based either on the available guidelines in the security policy or a survey among the organization's management team. To exemplify the latter we chose to base the requirements used in the example application of this method in Section 5 on a series of interviews with 13 security managers of 9 different companies. The interviews had been conducted as part of related research in 2007 [14].

The interviewees were asked to prioritize three different security factors according to their importance in their company. CVSS already specifies "MEDIUM" as the default requirement for each of the factors [22], thus we only counted the times an interviewee ranked a factor first or last in their prioritization to determine whether they had "HIGH" or "LOW" requirements for it. (This means that security factors that were neither prioritized first nor last by the interviewees would not change the default state of the data points in the spreadsheet, thus they could be omitted.) The results of the interviews are presented in Table 1.

Table 1 - Security Requirements

                   Nr. of interviewees who ranked the factor:
Security factor    First (= high security req.)   Last (= low security req.)
Confidentiality    2 (p = .15)                    0
Integrity          0                              8 (p = .62)
Availability       11 (p = .85)                   0

The interviewees' responses showed that in 11 out of 13 cases Availability was the top priority, thus there was a high availability requirement. This finding was in line with earlier research results in the literature [14]. High confidentiality was required in only 2 of 13 cases; however, none of the respondents ranked it last either. Integrity requirements were ranked last by 8 out of 13 interviewees.

The answers presented in Table 1 are used to determine the likelihood p that each of the data points confidentiality-, integrity- or availability-requirement of a vulnerability entry in the spreadsheet (Figure 1) is assigned a different requirement than the CVSS default ("Medium"). Using the example results shown in Table 1, this means that the chance of an availability-
requirement to be set to high was 85%, the chance of a low integrity requirement was 62% and the chance of a high confidentiality requirement was 15%. It has to be noted that this estimation does not consider the possibility that systems can have, for example, high confidentiality requirements but are still of relatively low overall importance to the organization. One way to address this issue would be to add an additional layer on top of the CVSS environmental metric to account for the relative importance of a system compared to others. This is, however, not part of the current CVSS 2.0 specification and shall thus remain outside the scope of this particular work.

4.4. Conducting the evaluation

Section 4.2 describes the two scenarios that are evaluated and later compared in the presented method. To conduct the evaluation we collected a sample set of vulnerability announcements (vulnerability entries) in a specific time frame from the NVD. Even though both scenarios state that the organization reviews new vulnerability announcements on a daily basis, research by Eschelbeck [12] found that the half-life of critical vulnerabilities in organizations is up to 21 days. Thus we decided to collect a sample covering at least 3 times that half-life. In the example application presented in Section 5 we used 74 days of NVD data (720 vulnerability entries).

Once vulnerability entries are collected from the NVD, they are ordered by their publication date and fed into the spreadsheet depicted in Figure 1. For each vulnerability, the CVSS base score is calculated using the information provided by the NVD entry. The missing data points for the calculation of the temporal and environmental CVSS scores are then artificially generated as described in Sections 4.3.1 and 4.3.2. After the scores for every vulnerability in the spreadsheet have been calculated, the scenario results are analyzed and compared to quantify the improvements.

Score increases of +2 or more points mean that a vulnerability was previously under-valued and would have attracted less attention than justified; we refer to such situations as the prevention of a false-negative. Score changes of -2 or more mean that a vulnerability was previously over-valued and would have attracted more attention than justified. Hence, we consider these score drops the prevention of a false-positive.

Vulnerability classifications: Vulnerabilities are categorized in severity classes according to their CVSS scores. In the simulation we used the following boundaries for the classification: scores of <5 are classified as 'Low', <7: 'Medium', <9: 'High' and ≥9: 'Critical'. We measure changes in the number of vulnerabilities that are allocated to each of the classes.

Total costs of resolving vulnerabilities: A cost factor is assigned to each vulnerability class. As already described in Section 2.1, the factor represents the total costs created by an instance of a vulnerability response process that resolves a vulnerability of that particular class. In practice, managers can assign absolute, monetary values to the factors; for our example, however, it is sufficient to use relative values. We assume that the execution of a vulnerability response which resolves a "medium" class vulnerability creates costs of 1 unit. Respectively, lower class vulnerabilities create less and higher class ones more costs. We assigned the following example cost factors to the vulnerability classes: Low: 0.25, Medium: 1, High: 1.50, Critical: 3.00. To determine the total costs of resolving all vulnerabilities, the cost factors are multiplied with the number of vulnerabilities in their corresponding class.

Savings potential: We compare the total costs of resolving all vulnerabilities in both simulation scenarios to determine the more efficient solution. The cost difference between the scenarios represents the potential savings that can be achieved through changes in the scoring mechanisms. The calculated savings are subsequently compared with the anticipated costs of improving the scoring mechanism.
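The comparison metrics above can be sketched in a few lines of Python. The class boundaries and cost factors are the ones given in the text; the per-class vulnerability counts are, purely for illustration, those reported in Table 2 of Section 5.

```python
# Sketch of the cost comparison: severity classification, per-class cost
# factors and total response costs, using the example values from the text.
COST_FACTOR = {"Low": 0.25, "Medium": 1.0, "High": 1.5, "Critical": 3.0}

def classify(score):
    """Map a CVSS score to a severity class (simulation boundaries)."""
    if score < 5:
        return "Low"
    if score < 7:
        return "Medium"
    if score < 9:
        return "High"
    return "Critical"

def total_cost(class_counts):
    """Total response cost: cost factor times number of vulnerabilities."""
    return sum(COST_FACTOR[c] * n for c, n in class_counts.items())

# Per-class counts as reported in Table 2 (illustrative input):
scenario_a = {"Low": 38, "Medium": 248, "High": 303, "Critical": 131}
scenario_b = {"Low": 121, "Medium": 171, "High": 397, "Critical": 31}

cost_a, cost_b = total_cost(scenario_a), total_cost(scenario_b)
print(classify(7.8))                       # High
print(cost_a, cost_b)                      # 1105.0 889.75
print(round(100 * (cost_a - cost_b) / cost_a))  # 19
```

With these counts, scenario B's total of roughly 890 cost units against scenario A's 1105 corresponds to a savings potential of about 19%, matching the relative saving reported in the results.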
5. Results

To verify the distribution of the sample set, we compared the results of three queries to the total number of samples in the set. The queries had the form:

Select * FROM Samples WHERE {confidentiality, integrity, availability}-impact = "COMPLETE" OR {confidentiality, integrity, availability}-impact = "PARTIAL";

The results showed that 71% of the used samples had an impact on confidentiality, 73% on integrity and 73% on availability. Thus we can consider the samples in the selected set to be equally distributed, and none of the samples had to be discarded.

Changes in vulnerability scores and prioritization: Figure 2 shows how the CVSS score values changed from scenario A to scenario B after they were recalculated with additional context information. The majority of the scores decreased by -0.5 to -1 points. The scores of 28 out of 720 of the analyzed vulnerabilities (about 4%), however, increased by 2 points or more. In several cases this caused a change in the vulnerabilities' severity classification from "HIGH" to "CRITICAL". A score decrease of more than -2 points was only observed in 12 out of 720 cases. In total, 5.6% of the scores changed by more than plus or minus 2 points.

Table 2 - Severity classifications of vulnerabilities in the analyzed sample set, with and without the application of context information

Severity class   Scenario A (CVSS Base   Scenario B (CVSS score   Difference
(cost factor)    score only)             with context)
                 # of vuln.   costs      # of vuln.   costs       # of vuln.     costs
Low (0.25)       38           10         121          30          +83 (+218%)    +21
Medium (1)       248          248        171          171         -77 (-31%)     -77
High (1.5)       303          455        397          596         +94 (+31%)     +141
Critical (3)     131          393        31           93          -100 (-76%)    -300
Total            720          1105       720          890                        -215 (-19%)

Total costs of resolving vulnerabilities: The number of vulnerabilities was multiplied with the assigned cost factor of the corresponding class to determine the total costs of the triggered response processes. Table 2 shows that the change in classification led to a significant reduction in the execution of the more costly vulnerability response processes. In our example application, more than half of the critical-class vulnerabilities were found to be only high-class vulnerabilities, and thus did not need to trigger the swifter response processes that could interrupt production operations and cause more indirect costs.
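The prevented false-negatives and false-positives defined in Section 4.4 can be tallied with a short sketch; whether the ±2-point boundaries are inclusive is an assumption here.

```python
# Sketch of the score-change tally: given each vulnerability's score before
# (scenario A) and after (scenario B) the re-calculation, count prevented
# false-negatives (increase of 2 or more points) and prevented
# false-positives (decrease of 2 or more points).
def tally_changes(base_scores, context_scores):
    deltas = [b - a for a, b in zip(base_scores, context_scores)]
    false_negatives = sum(1 for d in deltas if d >= 2)
    false_positives = sum(1 for d in deltas if d <= -2)
    return false_negatives, false_positives

# Illustrative scores only (not the actual sample data):
print(tally_changes([7.8, 5.0, 6.0], [10.0, 5.5, 3.5]))  # (1, 1)
```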
[14] C. Frühwirth, "On Business-Driven IT Security Management and Mismatches between Security Requirements in Firms, Industry Standards and Research Work," Product-Focused Software Process Improvement, 2009, pp. 375-385.

[15] L.A. Gordon and M.P. Loeb, "The economics of information security investment," ACM Transactions on Information and System Security, vol. 5, 2002, pp. 438-457.

[16] A.R. Hevner, S.T. March, J. Park, and S. Ram, "Design science in information systems research," Management Information Systems Quarterly, vol. 28, 2004, pp. 75-106.

[17] A. Hochstein, R. Zarnekow, and W. Brenner, "ITIL as common practice reference model for IT service management: formal assessment and implications for practice," e-Technology, e-Commerce and e-Service, 2005. EEE '05. Proceedings. The 2005 IEEE International Conference on, 2005, pp. 704-710.

[18] M. Ishiguro, H. Tanaka, K. Matsuura, and I. Murase, "The Effect of Information Security Incidents on Corporate Values in the Japanese Stock Market," International Workshop on the Economics of Securing the Information Infrastructure (WESII), 2006.

[19] ISO/IEC, "Std. ISO 17799:2005, Information Technology - Security Techniques - Code of Practice for Information Security Management," ISO, 2005.

[20] ITIL, "The Open Guide. ITIL Incident Management," 2007. Available online at: www.itlibrary.org/index.php?page=Incident_Management.

[21] Y. Lai and P. Hsia, "Using the vulnerability information of computer systems to improve the network security," Computer Communications, vol. 30, Jun. 2007, pp. 2032-2047.

[22] P. Mell, K. Scarfone, and S. Romanosky, "A Complete Guide to the Common Vulnerability Scoring System Version 2.0," Published by FIRST - Forum of Incident Response and Security Teams, 2007.

[23] T. Neubauer, M. Klemen, and S. Biffl, "Business process-based valuation of IT-security," Proceedings of the Seventh International Workshop on Economics-Driven Software Engineering Research, St. Louis, Missouri: ACM, 2005, pp. 1-5.

[24] National Institute of Standards and Technology (NIST), "National Vulnerability Database (NVD)." Available online at http://nvd.nist.gov/.

[25] S.L. Pfleeger and R. Rue, "Cybersecurity Economic Issues: Clearing the Path to Good Practice," IEEE Software, vol. 25, 2008, pp. 35-42.

[26] G. Ridley, J. Young, and P. Carroll, "COBIT and its utilization: a framework from the literature," System Sciences, 2004. Proceedings of the 37th Annual Hawaii International Conference on, 2004, 8 pp.

[27] R. Rieke, "Modelling and Analysing Network Security Policies in a Given Vulnerability Setting," Critical Information Infrastructures Security, 2006, pp. 67-78.

[28] W. Sonnenreich, J. Albanese, and B. Stout, "Return On Security Investment (ROSI) - A Practical Quantitative Model," Journal of Research and Practice in Information Technology, vol. 38, 2006, pp. 45-56.

[29] I. Tashi and S. Ghernaouti-Hélie, "Efficient Security Measurements and Metrics for Risk Assessment," Internet Monitoring and Protection, 2008. ICIMP '08. The Third International Conference on, 2008, pp. 131-138.

[30] R. Telang and S. Wattal, "An empirical analysis of the impact of software vulnerability announcements on firm stock price," IEEE Transactions on Software Engineering, vol. 33, 2007, pp. 544-557.