Software Security
Knowledge Area
Issue 1.0
Frank Piessens KU Leuven
EDITOR
Awais Rashid University of Bristol
REVIEWERS
Eric Bodden Paderborn University
Rod Chapman Altran UK
Michael Hicks University of Maryland
Jacques Klein University of Luxembourg
Andrei Sabelfeld Chalmers University of Technology
The Cyber Security Body Of Knowledge
www.cybok.org
COPYRIGHT
© Crown Copyright, The National Cyber Security Centre 2019. This information is licensed
under the Open Government Licence v3.0. To view this licence, visit:
http://www.nationalarchives.gov.uk/doc/open-government-licence/
When you use this information under the Open Government Licence, you should include the
following attribution: CyBOK © Crown Copyright, The National Cyber Security Centre 2018, li-
censed under the Open Government Licence: http://www.nationalarchives.gov.uk/doc/open-
government-licence/.
The CyBOK project would like to understand how the CyBOK is being used and its uptake.
The project would like organisations using, or intending to use, CyBOK for the purposes of
education, training, course development, professional development etc. to contact it at con-
[email protected] to let the project know how they are using CyBOK.
Issue 1.0 is a stable public release of the Software Security Knowledge Area. However, it
should be noted that a fully-collated CyBOK document which includes all of the Knowledge
Areas is anticipated to be released by the end of July 2019. This will likely include updated
page layout and formatting of the individual Knowledge Areas.
INTRODUCTION
The purpose of this Software Security chapter is to provide a structured overview of known
categories of software implementation vulnerabilities, and of techniques that can be used
to prevent or detect such vulnerabilities, or to mitigate their exploitation. This overview is
intended to be useful to academic staff for course and curricula design in the area of software
security, as well as to industry professionals for the verification of skills and the design of
job descriptions in this area.
Let us start by defining some terms and concepts, and by defining the scope of this chap-
ter. A first key question is what it means for software to be secure. One possible definition is
that a software system is secure if it satisfies a specified or implied security objective. This
security objective specifies confidentiality, integrity and availability requirements¹ for the sys-
tem’s data and functionality. Consider, for instance, a social networking service. The security
objective of such a system could include the following requirements:
• Pictures posted by a user can only be seen by that user’s friends (confidentiality)
• A user can like any given post at most once (integrity)
• The service is operational more than 99.9% of the time on average (availability)
Different security requirements can be at odds with each other, for instance, locking down a
system on the appearance of an attack is good for confidentiality and integrity of the system,
but bad for availability.
A security failure is a scenario where the software system does not achieve its security ob-
jective, and a vulnerability is the underlying cause of such a failure. The determination of an
underlying cause is usually not absolute: there are no objective criteria to determine what
vulnerability is responsible for a given security failure or where it is located in the code. One
might say that the vulnerability is in the part of the code that has to be fixed to avoid this
specific security failure, but fixes can be required in multiple places, and often multiple mit-
igation strategies are possible where each mitigation strategy requires a different fix or set
of fixes.
The definitions of “security” and “vulnerability” above assume the existence of a security ob-
jective. In practice however, most software systems do not have precise explicit security
objectives, and even if they do, these objectives are not absolute and have to be traded off
against other objectives such as performance or usability of the software system. Hence,
software security is often about avoiding known classes of bugs that enable specific attack
techniques. There are well-understood classes of software implementation bugs that, when
triggered by an attacker, can lead to a substantial disruption in the behaviour of the software,
and are thus likely to break whatever security objective the software might have. These
bugs are called implementation vulnerabilities even if they are relatively independent from
application- or domain-specific security objectives like the example objectives above.
This document, the Software Security KA, covers such implementation vulnerabilities, as well as countermeasures for them. Many other aspects are relevant for the security of software-based systems, including human factors, physical security, secure deployment and procedural aspects, but they are not covered in this chapter. The impact of security on the various phases of the software lifecycle is discussed in the Secure Software Lifecycle CyBOK Knowledge Area [1]. Security issues specific to software running on the web or mobile platforms are discussed in the Web & Mobile Security CyBOK Knowledge Area [2].

¹ Other common information security requirements like non-repudiation or data authentication can be seen as instances or refinements of integrity from a software perspective. But from other perspectives, for instance from a legal perspective, the semantics of these requirements can be more involved.
The remainder of this chapter is structured as follows. Topic 1 (Categories) discusses widely
relevant categories of implementation vulnerabilities, but without the ambition of describing
a complete taxonomy. Instead, the topic discusses how categories of vulnerabilities can of-
ten be defined as violations of a partial specification of the software system, and it is unlikely
that a useful complete taxonomy of such partial specifications would exist. The discussion
of countermeasures for implementation vulnerabilities is structured in terms of where in the
lifecycle of the software system they are applicable. Topic 2 (Prevention) discusses how pro-
gramming language and Application Programming Interface (API) design can prevent vulner-
abilities from being introduced during development in software programmed in that language
and using that API. In addition, defensive coding practices can contribute to the prevention
of vulnerabilities. Topic 3 (Detection) covers techniques to detect vulnerabilities in existing
source code, for instance, during development and testing. Topic 4 (Mitigation) discusses
how the impact of remaining vulnerabilities can be mitigated at runtime. It is important to
note, however, that some countermeasure techniques could in principle be applied in all three
phases, so this is not an orthogonal classification. For instance, a specific dynamic check
(say, an array bounds check) could be mandated by the language specification (Prevention,
the countermeasure is built in by the language designer), could be used as a testing oracle
(Detection, the countermeasure is used by the software tester) or could be inlined in the pro-
gram to block attacks at run-time (Mitigation, the countermeasure is applied on deployment).
CONTENT
1 CATEGORIES OF VULNERABILITIES
[3][4, c4,c5,c6,c7,c10,c11][5, c6,c9] [6, c17][7, c5,c9,c11,c13,c17]
As discussed in the Introduction, we use the term implementation vulnerability (sometimes
also called a security bug) both for bugs that make it possible for an attacker to violate a
security objective, as well as for classes of bugs that enable specific attack techniques.
Implementation vulnerabilities play an important role in cybersecurity and come in many
forms. The Common Vulnerabilities and Exposures (CVE) is a publicly available list of en-
tries in a standardised form describing vulnerabilities in widely-used software components,
and it lists close to a hundred thousand such vulnerabilities at the time of writing. Implemen-
tation vulnerabilities are often caused by insecure programming practices and influenced by
the programming language or APIs used by the developer. This first topic covers important
categories of implementation vulnerabilities that can be attributed to such insecure program-
ming practices.
Existing classifications of vulnerabilities, such as the Common Weakness Enumeration (CWE),
a community-developed list of vulnerability categories, are useful as a baseline for vulnera-
bility identification, mitigation and prevention, but none of the existing classifications have
succeeded in coming up with a complete taxonomy. Hence, the categories discussed in this
first topic should be seen as examples of important classes of vulnerabilities, and not as an
exhaustive list. They were selected with the intention to cover the most common implemen-
tation vulnerabilities, but this selection is at least to some extent subjective.
Specific categories of implementation vulnerabilities can often be described as violations of
a (formal or informal) specification of some sub-component of the software system. Such
a specification takes the form of a contract that makes explicit what the sub-component ex-
pects of, and provides to its clients. On violation of such a contract, the software system
enters an error-state, and the further behaviour of the software system is typically behaviour
that has not been considered by the system developers and is dependent on system imple-
mentation details. Attackers of the system can study the implementation details and exploit
them to make the system behave in a way that is desirable for the attacker.
1.1 Memory Management Vulnerabilities

Languages like C and C++ impose a memory management contract on programs: for instance, memory may only be accessed within the bounds of allocated objects, and only during their lifetime. An attacker can provide input that makes the program violate the memory management contract. The attacker chooses the input such that the program accesses a memory cell of interest to the attacker (a minimal C sketch of such a violation follows the list below):
• In a code corruption attack, the invalid memory access modifies compiled program code
to attacker specified code.
• In a control-flow hijack attack, the invalid memory access modifies a code pointer (for
instance, a return address on the stack, or a function pointer) to make the processor
execute attacker-provided code (a direct code injection attack), or to make the proces-
sor reuse existing code of the program in unexpected ways (a code-reuse attack, also
known as an indirect code injection attack, such as a return-to-libc attack, or a return-
oriented-programming attack).
• In a data-only attack, the invalid memory access modifies other data variables of the
program, possibly resulting in increased privileges for the attacker.
• In an information leak attack, the invalid memory access is a read access, possibly
resulting in the exfiltration of information, either application secrets such as crypto-
graphic keys, or runtime metadata such as addresses which assist prediction of the
exact layout of memory and hence may enable other attacks.
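To make the memory management contract concrete, the following minimal C sketch (a hypothetical function; names are illustrative and not from the source) contains a classic bounds violation that can support several of the attack classes listed above:

    #include <stdio.h>
    #include <string.h>

    /* Deliberately vulnerable sketch: strcpy() performs no bounds check,
     * so any input longer than 15 characters writes past the end of `buf`,
     * violating the memory management contract. Depending on the stack
     * layout, the overflow can overwrite `is_admin` (a data-only attack)
     * or the saved return address (a control-flow hijack). */
    void check_name(const char *input) {
        int is_admin = 0;
        char buf[16];
        strcpy(buf, input);    /* out-of-bounds write for long inputs */
        if (is_admin)
            printf("privileged path taken\n");
    }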
Because of the practical importance of these classes of attacks, mitigation techniques have
been developed that counter specific attack techniques, and we discuss these in Topic 4.
1.2 Structured Output Generation Vulnerabilities

Many programs construct structured output, such as SQL queries, shell commands or HTML pages, partially from untrusted input. A structured output generation vulnerability is a bug that allows such input to change the intended structure of the generated output. These vulnerabilities are also known as injection vulnerabilities (e.g., SQL injection, or script injection). The name 'injection' refers to the fact that exploitation of these vulnerabilities will often provide data inputs that cause the structured output to contain additional code statements, i.e. exploitation injects unintended new statements in the output (a C sketch contrasting unsafe and safe query construction follows the list below). Structured output generation vulnerabilities are relevant for many different kinds of structured outputs:
• A SQL injection vulnerability is a structured output generation vulnerability where the
structured output consists of SQL code. These vulnerabilities are particularly relevant
for server-side web application software, where it is common for the application to inter-
act with a back-end database by constructing queries partially based on input provided
through web forms.
• A command injection vulnerability is a structured output generation vulnerability where
the structured output is a shell command sent by the application to the operating sys-
tem shell.
• A script injection vulnerability, sometimes also called a Cross-Site Scripting (XSS) vul-
nerability is a structured output generation vulnerability where the structured output is
JavaScript code sent to a web browser for client-side execution.
This list is by no means exhaustive. Other examples include: XPath injection, HTML injec-
tions, CSS injection, PostScript injection and many more.
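To make both the vulnerability and the standard remedy concrete, here is a C sketch using the SQLite C API (the open handle, table and column names are hypothetical): the first query is built by string concatenation and is injectable, while the second fixes the statement structure before any untrusted data is supplied, in the spirit of the structure guarantees discussed in Topic 2.

    #include <sqlite3.h>
    #include <stdio.h>

    void lookup_user(sqlite3 *db, const char *name) {
        /* Vulnerable: for name = "x' OR '1'='1", the input changes the
         * structure of the SQL statement instead of acting as a value. */
        char unsafe[256];
        snprintf(unsafe, sizeof unsafe,
                 "SELECT * FROM users WHERE name = '%s'", name);

        /* Safer: the statement structure is fixed at prepare time, and
         * the bound value can never be parsed as SQL code. */
        sqlite3_stmt *stmt;
        if (sqlite3_prepare_v2(db, "SELECT * FROM users WHERE name = ?",
                               -1, &stmt, NULL) == SQLITE_OK) {
            sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
            while (sqlite3_step(stmt) == SQLITE_ROW) { /* process row */ }
            sqlite3_finalize(stmt);
        }
    }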
Several factors can contribute to the difficulty of avoiding structured output generation vul-
nerabilities:
• The structured output can be in a language that supports sublanguages with a signifi-
cantly different syntactic structure. An important example of such a problematic case
is HTML, which supports sublanguages such as JavaScript, CSS and SVG.
• The computation of the structured output can happen in different phases with outputs
of one phase being stored and later retrieved as input for a later phase. Structured out-
put generation vulnerabilities that go through multiple phases are sometimes referred
to as stored injection vulnerabilities, or more generally as higher-order injection vulnera-
bilities. Examples include stored XSS and higher-order SQL injection.
Attack techniques for exploiting structured output generation vulnerabilities generally de-
pend on the nature of the structured output language, but a wide range of attack techniques
for exploiting SQL injection or script injection are known and documented.
The Web & Mobile Security CyBOK Knowledge Area [2] provides a more detailed discussion
of such attack techniques.
1.3 Race Condition Vulnerabilities

A specification of a software component can also include assumptions about how the environment will interact with the program's resources. For instance, the specification can
say that the program relies on exclusive access to a set of resources for a specific interval of
its execution: only the actor executing the program will have access to the set of resources
for the specified interval.
Violations of such a specification are concurrency bugs, also commonly referred to as race
conditions, because a consequence of these bugs is that the behaviour of the program may
depend on which concurrent actor accesses a resource first (‘wins a race’). Concurrency, and
the corresponding issues of getting programs correct in the presence of concurrency, is an
important sub-area of computer science with importance well beyond the area of cybersecu-
rity [8].
But concurrency bugs can be security bugs, too. Concurrency bugs often introduce non-
determinism: the behaviour of a program will depend on the exact timing or interleaving
of the actions of all concurrent actors. In adversarial settings, where an attacker controls
some of the concurrent actors, the attacker may have sufficient control on the timing of
actions to influence the behaviour of the program such that a security objective is violated.
A race condition vulnerability is a concurrency bug with such security consequences. A very
common instance is the case where the program checks a condition on a resource, and then
relies on that condition when using the resource. If an attacker can interleave his/her own
actions to invalidate the condition between the check and the time of use, this is called a
Time Of Check Time Of Use (TOCTOU) vulnerability.
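The check/use pattern can be sketched in C as follows (a hypothetical privileged routine, not from the source): access() checks the permissions of the real, less privileged user, but nothing prevents an attacker from replacing the file, for instance via a symbolic link, between the check and the open().

    #include <fcntl.h>
    #include <unistd.h>

    void serve_file(const char *path) {
        if (access(path, R_OK) == 0) {       /* time of check */
            int fd = open(path, O_RDONLY);   /* time of use: check is stale */
            if (fd >= 0) {
                /* ... read and return the file contents ... */
                close(fd);
            }
        }
    }

Common remediations make check and use atomic, for example by opening the file first and checking properties of the opened descriptor with fstat(), or by temporarily dropping privileges so that open() itself performs the access check.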
Race condition vulnerabilities are relevant for many different types of software. Two impor-
tant areas where they occur are:
• Race conditions on the file system: privileged programs (i.e., programs that run with
more privileges than their callers, for instance, operating system services) often need
to check some condition on a file, before performing an action on that file on behalf
of a less privileged user. Failing to perform check and action atomically (such that
no concurrent actor can intervene) is a race condition vulnerability: an attacker can
invalidate the condition between the check and the action.
• Races on the session state in web applications: web servers are often multi-threaded
for performance purposes, and consecutive HTTP requests may be handled by different
threads. Hence, two HTTP requests belonging to the same HTTP session may access
the session state concurrently. Failing to account for this is a race condition vulnerabil-
ity that may lead to corruption of the session state.
1.4 API Vulnerabilities

Contract violations can also occur at the boundary between a software component and the APIs it uses; such violations are API vulnerabilities. Of course, some APIs are more security sensitive than others. A broad class of APIs that are
security sensitive are APIs to libraries that implement security functionality like cryptography
or access control logic. Generally speaking, a software system must use all the ‘security
components’ that it relies on in a functionally correct way, or it is likely to violate a security
objective. This is particularly challenging for cryptographic libraries: if a cryptographic library
offers a flexible API, then correct use of that API (in the sense that a given security objective
is achieved) is known to be hard. There is substantial empirical evidence [9] that developers
frequently make mistakes in the use of cryptographic APIs, thus introducing vulnerabilities.
An orthogonal concern to secure use is the secure implementation of the cryptographic API.
Secure implementations of cryptography are covered in the Cryptography CyBOK Knowledge
Area [10].
1.6 Discussion
1.6.1 Better connection with overall security objectives needs more complex specifications
We have categorised implementation vulnerabilities as violations of specific partial specifi-
cations of software components. However, the connection to the security objective of the
overall software system is weak. It is perfectly possible that a software system has an imple-
mentation vulnerability, but that it is not exploitable to break a security objective of the system,
for instance, because there are redundant countermeasures elsewhere in the system. Even
more so, if a software system does not have any of the implementation vulnerabilities we
discussed, it may still fail its security objective.
To have stronger assurance that the software system satisfies a security objective, one can
formalise the security objective as a specification. During the design phase, on decom-
position of the system in sub-components, one should specify the behaviour of the sub-
components such that they jointly imply the specification of the overall system. With such a
design, the connection between an implementation vulnerability as a violation of a specifica-
tion on the one hand, and the overall security objective of the system on the other, is much
stronger.
It is important to note, however, that specifications would become more complex and more
domain-specific in such a scenario. We discuss one illustration of additional complexity. For
the vulnerability categories we discussed (memory management, structured output genera-
tion, race conditions and API vulnerabilities), the corresponding specifications express prop-
erties of single executions of the software: a given execution either satisfies or violates the
specification, and the software has a vulnerability as soon as there exists an execution that
violates the specification.
There are, however, software security objectives that cannot be expressed as properties of
individual execution traces. A widely studied example of such a security objective is informa-
tion flow security. A baseline specification of this security objective for deterministic sequen-
tial programs goes as follows: label the inputs and outputs of a program as either public or
confidential, and then require that no two executions of the software with the same public in-
puts (but different confidential inputs) have different public outputs. The intuition for looking
at pairs of executions is the following: it might be that the program does not leak confidential
data directly but instead leaks some partial information about this data. If collected along
multiple runs, the attacker can gather so much information that eventually relevant parts of
the confidential original data are, in fact, leaked. The above specification effectively requires
that confidential inputs can never influence public outputs in any way, and hence cannot leak
even partial information. In a dual way, one can express integrity objectives by requiring that
low-integrity inputs cannot influence high-integrity outputs.
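A minimal C sketch (illustrative functions, not from the source) shows the two flavours of flow that such a specification rules out; label `secret` as a confidential input and each return value as a public output:

    /* Both functions are insecure under the two-run definition above: two
     * executions that agree on public inputs but differ in `secret`
     * produce different public outputs. */
    int explicit_leak(int secret) {
        return secret;                  /* explicit flow: direct copy */
    }

    int implicit_leak(int secret) {
        return (secret > 0) ? 1 : 0;    /* implicit flow: leak via control flow */
    }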
But an information flow specification is more complex than the specifications we considered
in previous sections because one needs two executions to show a violation of the specifica-
tion. Information leak vulnerabilities are violations of a (confidentiality-oriented) information
flow policy. They can also be understood as violations of a specification, but this is now a
specification that talks about multiple executions of the software system. This has profound
consequences for the development of countermeasures to address these vulnerabilities [12].
2 PREVENTION OF VULNERABILITIES
[15, 16, 17] [18, c3]
Once a category of vulnerabilities is well understood, an important question is how the intro-
duction of such vulnerabilities in software can be prevented or at least be made less likely.
The most effective approaches eradicate categories of vulnerabilities by design of the pro-
gramming language or API.
The general idea is the following. We have seen in Topic 1 that many categories of im-
plementation vulnerabilities can be described as violations of a specification of some sub-
component. Let us call an execution of the software system that violates this specification,
an erroneous execution, or an execution with an error. From a security point of view, it is useful
to distinguish between errors that cause the immediate termination of the execution (trapped
errors), and errors that may go unnoticed for a while (untrapped errors) [16]. Untrapped errors
are particularly dangerous, because the further behaviour of the software system after an
untrapped error can be arbitrary, and an attacker might be able to steer the software system
to behaviour that violates a security objective. Hence, designing a language or API to avoid
errors, and in particular untrapped errors, is a powerful approach to prevent the presence of
vulnerabilities. For instance, languages like Java effectively make it impossible to introduce
memory management vulnerabilities: a combination of static and dynamic checks ensures
that no untrapped memory management errors can occur. This effectively protects against
the attack techniques discussed in 1.1. It is, however, important to note that this does not pre-
vent the presence of memory-management bugs: a program can still access an array out of
bounds. But the bug is no longer a vulnerability, as execution is terminated immediately when
such an access occurs. One could argue that the bug is still a vulnerability if one of the se-
curity objectives of the software system is availability, including the absence of unexpected
program termination.
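The distinction between trapped and untrapped errors can be illustrated with a small C sketch (an assumed helper, not from the source): the checked accessor turns what would be an untrapped out-of-bounds access into a trapped error that terminates execution immediately, much like the checks a Java runtime performs on every array access.

    #include <stdio.h>
    #include <stdlib.h>

    int checked_get(const int *arr, size_t len, size_t idx) {
        if (idx >= len) {                /* dynamic bounds check */
            fprintf(stderr, "bounds violation: index %zu, length %zu\n",
                    idx, len);
            abort();                     /* trapped error: immediate termination */
        }
        return arr[idx];
    }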
In cases where choice or redesign of the programming language or API itself is not an op-
tion, specific categories of vulnerabilities can be made less likely by imposing safe coding
practices.
This topic provides an overview of these techniques that can prevent the introduction of vul-
nerabilities.
In Language Integrated Query (LINQ), for example, the structure of the query is explicit, and the LINQ provider that compiles the query to SQL can provide strong
guarantees that the generated query has the intended structure.
Consider, as another example, the API of a Hardware Security Module (HSM), which has the security objective of keeping the encryption keys it uses confidential – it should not be possible to extract the
key from the HSM. Research has shown [6, c18] that maintaining such a security objective is
extremely challenging. The HSM API has an API-level vulnerability if there is a sequence of
API calls that extracts confidential keys from the HSM. Note that this is an API design defect
as opposed to the implementation defects considered in Topic 1.
Following extensive secure coding guidelines consistently takes discipline and time, so programmers are likely to deviate from the secure practices codified in the
guidelines. Hence, it is important to provide tool support to check compliance of software
with the coding rules. Topic 3.1 discusses how static analysis tools can automatically detect
violations against secure coding rules.
3 DETECTION OF VULNERABILITIES
[5, 25] [18, c4]
For existing source code where full prevention of the introduction of a class of vulnerabilities
was not possible, for instance, because the choice of programming language and/or APIs
was determined by other factors, it is useful to apply techniques to detect the presence of
vulnerabilities in the code during the development, testing and/or maintenance phase of the
software.
Techniques to detect vulnerabilities must make trade-offs between the following two good
properties that a detection technique can have:
• A detection technique is sound for a given category of vulnerabilities if it can correctly
conclude that a given program has no vulnerabilities of that category. An unsound de-
tection technique on the other hand may have false negatives, i.e., actual vulnerabilities
that the detection technique fails to find.
• A detection technique is complete for a given category of vulnerabilities, if any vulnera-
bility it finds is an actual vulnerability. An incomplete detection technique on the other
hand may have false positives, i.e. it may detect issues that do not turn out to be actual
vulnerabilities.
Trade-offs are necessary, because it follows from Rice’s theorem that (for non-trivial cate-
gories of vulnerabilities) no detection technique can be both sound and complete.
Achieving soundness requires reasoning about all executions of a program (usually an infinite
number). This is typically done by static checking of the program code while making suitable
abstractions of the executions to make the analysis terminate.
Achieving completeness can be done by performing actual, concrete executions of a program
that are witnesses to any vulnerability reported. This is typically done by dynamic detection
where the analysis technique has to come up with concrete inputs for the program that trigger
a vulnerability. A very common dynamic approach is software testing where the tester writes
test cases with concrete inputs, and specific checks for the corresponding outputs.
In practice, detection tools can use a hybrid combination of static and dynamic analysis tech-
niques to achieve a good trade-off between soundness and completeness.
It is important to note, however, that some detection techniques are heuristic in nature, and
hence the notions of soundness and completeness are not precisely defined for them. For
instance, heuristic techniques that detect violations of secure coding practices as described
in 2.3 are checking compliance with informally defined rules and recommendations, and it is
not always possible to unambiguously define the false positives or false negatives. Moreover,
these approaches might highlight 'vulnerabilities' that are not exploitable at this point in time, but that should be fixed nonetheless because they are 'near misses', i.e., they might easily become exploitable through future maintenance mistakes.
Static and dynamic program analysis techniques are widely studied in other areas of com-
puter science. This Topic highlights the analysis techniques most relevant to software secu-
rity.
Another important approach to detection of vulnerabilities is to perform manual code review
and auditing. These techniques are covered in the Secure Software Lifecycle CyBOK Knowl-
edge Area [1]. When using tool-supported static detection, it makes sense to adjust such
subsequent code review and other verification activities. For instance, if static detection is
sound for a given category of vulnerabilities, then one might consider not to review or test
for that category of vulnerabilities in later phases.
3.2.1 Monitoring
For categories of vulnerabilities that can be understood as violations of a specified property
of a single execution (See Topic 1.6), complete detection can be performed by monitoring for
violations of that specification. For other categories of vulnerabilities, or when monitoring
for violations of a specification is too expensive, approximative monitors can be defined.
Monitoring for memory-management vulnerabilities has been studied intensively. It is, in
principle, possible to build complete monitors, but typically at a substantial cost in time and
memory. Hence, existing tools explore various trade-offs in execution speed, memory use,
and completeness. Modern C compilers include options to generate code to monitor for
memory management vulnerabilities. In cases where a dynamic analysis is approximative,
like a static analysis, it can also generate false positives or false negatives, despite the fact
that it operates on a concrete execution trace.
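As an illustration of such compiler-supported monitoring, the following sketch contains a heap overflow that a memory-error monitor such as AddressSanitizer (enabled in GCC and Clang with -fsanitize=address) reports at the faulting access:

    #include <stdlib.h>

    /* Build with, e.g., `cc -fsanitize=address overflow.c`: the monitor
     * terminates the process at the out-of-bounds write below, instead of
     * letting execution continue with silently corrupted heap state. */
    int main(void) {
        int *p = malloc(4 * sizeof *p);
        p[4] = 42;      /* one element past the end: flagged by the monitor */
        free(p);
        return 0;
    }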
For structured output generation vulnerabilities, a challenge is that the intended structure of
the generated output is often implicit, and hence there is no explicit specification that can be
monitored. Hence, monitoring relies on sensible heuristics. For instance, a monitor can use a
fine-grained dynamic taint analysis [25] to track the flow of untrusted input strings, and then
flag a violation when untrusted input has an impact on the parse tree of generated output.
Assertions, pre-conditions and post-conditions as supported by the design-by-contract ap-
proach to software construction [18, c3] can be compiled into the code to provide a monitor
for API vulnerabilities at testing time, even if the cost of these compiled-in run-time checks
can be too high to use them in production code.
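A minimal sketch of such compiled-in contract checks in C (an illustrative function, not from the source): assert() verifies the pre- and post-condition on every call during testing, and defining NDEBUG compiles the checks away for production builds where their runtime cost would be too high.

    #include <assert.h>
    #include <math.h>

    double checked_sqrt(double x) {
        assert(x >= 0.0);                             /* pre-condition  */
        double r = sqrt(x);
        assert(fabs(r * r - x) <= 1e-9 * (x + 1.0));  /* post-condition */
        return r;
    }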
Monitoring for race conditions is hard, but some approaches for monitoring data races on
shared memory cells exist, for instance, by monitoring whether all shared memory accesses
follow a consistent locking discipline.
• Mutation-based fuzzing, where the fuzzer is provided with one or more typical input
values and it generates new input values by performing small mutations on the provided
input values.
• White-box fuzzing, where the internal structure of the program is analysed to assist
in the generation of appropriate input values. The main systematic white-box fuzzing
technique is dynamic symbolic execution. Dynamic symbolic execution executes a pro-
gram with concrete input values and builds at the same time a path condition, a logical
expression that specifies the constraints on those input values that have to be fulfilled
for the program to take this specific execution path. By solving for input values that do
not satisfy the path condition of the current execution, the fuzzer can make sure that
these input values will drive the program to a different execution path, thus improving
coverage (a small C sketch of such a path condition follows this list).
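The following sketch (an illustrative function, not from the source) shows such a path condition: executing target(5, 0) yields the condition (x > 3) && (y != x - 3); negating the last conjunct and solving produces an input such as (5, 2) that drives execution into the inner branch, which random mutation would be unlikely to reach.

    void target(int x, int y) {
        if (x > 3) {              /* first conjunct of the path condition */
            if (y == x - 3) {     /* negated conjunct for target(5, 0)    */
                /* deep path: reached only when y == x - 3 */
            }
        }
    }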
For instance, Control-Flow Integrity (CFI) checks at runtime whether the control flow of the program complies with some specification of the expected control flow, and hence detects many code-reuse attacks.
On detection of an attack, the runtime monitor must react appropriately, usually by terminat-
ing the program under attack. Termination is a good reaction to ensure that an attack can do
no further damage, but it has of course a negative impact on availability properties.
Sandboxing mechanisms exist at several levels of the software stack: operating systems, for instance, provide mechanisms that can sandbox network- and filesystem-access in virtual hosting environments. The Java
Runtime Environment implements a sandboxing mechanism intended to contain untrusted
Java code, or to isolate code from different stakeholders within the same Java Virtual Ma-
chine, but several significant vulnerabilities have been found in that sandboxing mechanism
over the years [29].
Compartmentalisation is a related but finer-grained security mechanism, where the software
itself is divided into a number of compartments and where some bounds are enforced on the
privileges of each of these compartments. This again requires some underlying mechanism
to enforce these bounds. For instance, a compartmentalised browser could rely on operat-
ing system process access control to bound the privileges of its rendering engine by denying
it file system access. Exploitation of a software vulnerability in the rendering engine is now
mitigated to the extent that even after a successful exploit, the attacker is still blocked from
accessing the file system. Very fine-grained forms of compartmentalisation can be achieved
by object-capability systems [22], where each application-level object can be a separate pro-
tection domain.
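A minimal sketch of the browser example using POSIX process isolation (the uid and structure are hypothetical; real browsers combine this with richer mechanisms such as seccomp filters and namespaces):

    #include <sys/types.h>
    #include <unistd.h>

    #define RENDERER_UID 40000   /* hypothetical dedicated low-privilege account */

    /* The risky component runs in a child process that permanently drops
     * to an unprivileged account before executing, so a successful exploit
     * of that component is bounded by the account's (minimal) privileges. */
    void spawn_renderer(const char *renderer_path) {
        pid_t pid = fork();
        if (pid == 0) {                       /* child: the compartment */
            if (setgid(RENDERER_UID) != 0 ||  /* drop group, then user id */
                setuid(RENDERER_UID) != 0)
                _exit(1);                     /* refuse to run privileged */
            execl(renderer_path, renderer_path, (char *)NULL);
            _exit(1);                         /* exec failed */
        }
        /* parent keeps its privileges and talks to the child over IPC */
    }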
To mitigate side-channel vulnerabilities, one can isolate the vulnerable code, for instance, on
a separate core or on separate hardware, such that the information leaking through the side
channel is no longer observable for attackers.
CONCLUSIONS
Software implementation vulnerabilities come in many forms, and can be mitigated by a wide
range of countermeasures. Table 1 summarises the relationship between the categories of
vulnerabilities discussed in this chapter, and the relevant prevention, detection and mitigation
techniques commonly used to counter them.
Acknowledgments
The insightful and constructive comments and feedback from the reviewers and editor on
earlier drafts have been extremely valuable, and have significantly improved the structure
and contents of this chapter, as have the comments received during public review.
Table 1 cross-references each topic with chapters of the reference material: [15] (Pierce, Types and Programming Languages), [6] (Anderson, Security Engineering), [4] (Du, Computer Security), [5] (Chess and West, Secure Programming with Static Analysis), [17] (SEI CERT C Coding Standard), [18] (SWEBOK), and [7] (Dowd et al., The Art of Software Security Assessment). The chapter numbers in each row refer to chapters of these references.
1 Categories of Vulnerabilities
1.1 Memory Management Vulnerabilities c4,c5 c5 c6
1.2 Structured Output Generation Vulnerabilities c10,c11 c17 c9
1.3 Race Condition Vulnerabilities c7 c9
1.4 API Vulnerabilities c6 c9,c11
1.5 Side-channel Vulnerabilities c17
2 Prevention of Vulnerabilities
2.1 Language Design and Type Systems c1
2.2 API Design c18 c3
2.3 Coding Practices *
3 Detection of Vulnerabilities
3.1 Static Detection *
3.2 Dynamic Detection c4
4 Mitigating Exploitation of Vulnerabilities
4.1 Runtime Detection of Attacks c4
4.2 Automated Software Diversity c4
4.3 Limiting Privileges c7
FURTHER READING
Building Secure Software [31] and 24 Deadly Sins of Software Security [32]
Building Secure Software was the first book focusing specifically on software security, and
even if some of the technical content is somewhat dated by now, the book is still a solid
introduction to the field and the guiding principles in the book have withstood the test of
time.
24 Deadly Sins of Software Security is a more recent and updated book by mostly the same
authors.
OWASP Resources
The Open Web Application Security Project (OWASP) is a not-for-profit, volunteer-driven or-
ganisation that organises events and offers a rich set of resources related to application
security and software security. They offer practice-oriented guides on secure development
and on security testing, as well as a collection of tools and awareness raising instruments.
All these resources are publicly available at https://www.owasp.org.
REFERENCES
[1] L. Williams, The Cyber Security Body of Knowledge. University of Bristol, 2019, ch.
Secure Software Lifecycle, version 1.0. [Online]. Available: https://www.cybok.org/
[2] S. Fahl, The Cyber Security Body of Knowledge. University of Bristol, 2019, ch. Web &
Mobile Security, version 1.0. [Online]. Available: https://www.cybok.org/
[3] L. Szekeres, M. Payer, T. Wei, and D. Song, “SoK: Eternal war in memory,” in
Proceedings of the 2013 IEEE Symposium on Security and Privacy, ser. SP ’13.
Washington, DC, USA: IEEE Computer Society, 2013, pp. 48–62. [Online]. Available:
http://dx.doi.org/10.1109/SP.2013.13
[4] W. Du, Computer Security: A Hands-on Approach, 2017.
[5] B. Chess and J. West, Secure Programming with Static Analysis, 1st ed. Addison-Wesley
Professional, 2007.
[6] R. J. Anderson, Security Engineering: a guide to building dependable distributed systems.
Wiley, 2008.
[7] M. Dowd, J. McDonald, and J. Schuh, The Art of Software Security Assessment: Identify-
ing and Preventing Software Vulnerabilities. Addison-Wesley Professional, 2006.
[8] B. Goetz, J. Bloch, J. Bowbeer, D. Lea, D. Holmes, and T. Peierls, Java Concurrency in
Practice. Addison-Wesley Longman, Amsterdam, 2006.
[9] M. Egele, D. Brumley, Y. Fratantonio, and C. Kruegel, “An empirical study of cryptographic
misuse in android applications,” in Proceedings of the 2013 ACM SIGSAC Conference on
Computer & Communications Security, ser. CCS ’13. ACM, 2013, pp. 73–84.
[10] N. Smart, The Cyber Security Body of Knowledge. University of Bristol, 2019, ch.
Cryptography, version 1.0. [Online]. Available: https://www.cybok.org/
[11] P. Kocher, “Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other
systems,” in Proceedings of the 16th Annual International Cryptology Conference on Ad-
vances in Cryptology, ser. CRYPTO ’96. Springer-Verlag, 1996, pp. 104–113.
[12] F. B. Schneider, “Enforceable security policies,” ACM Transactions on Information Sys-
tems Security, vol. 3, no. 1, pp. 30–50, Feb. 2000.
[13] M. Abadi, “Protection in programming-language translations,” in Proceedings of the 25th
International Colloquium on Automata, Languages and Programming, ser. ICALP ’98.
London, UK, UK: Springer-Verlag, 1998, pp. 868–883.
[14] A. Avizienis, J.-C. Laprie, B. Randell, and C. Landwehr, “Basic concepts and taxonomy of
dependable and secure computing,” IEEE Transactions on Dependable Secure Computing,
vol. 1, no. 1, pp. 11–33, Jan. 2004.
[15] B. C. Pierce, Types and Programming Languages, 1st ed. The MIT Press, 2002.
[16] L. Cardelli, “Type systems,” in The Computer Science and Engineering Handbook, 1997,
pp. 2208–2236.
[17] Software Engineering Institute – Carnegie Mellon University, “SEI CERT C coding stan-
dard: Rules for developing safe, reliable, and secure systems,” 2016.
[18] IEEE Computer Society, P. Bourque, and R. E. Fairley, Guide to the Software Engineering
Body of Knowledge (SWEBOK®) , 3rd ed. Los Alamitos, CA, USA: IEEE Computer Society
Press, 2014.
[19] “SPARK 2014,” http://www.spark-2014.org/about, accessed: 2018-04-17.
[20] H. Hosoya, J. Vouillon, and B. C. Pierce, “Regular expression types for XML,” ACM Trans.
Program. Lang. Syst., vol. 27, no. 1, pp. 46–90, Jan. 2005.
[21] A. Sabelfeld and A. C. Myers, “Language-based information-flow security,” IEEE J. Sel.
Areas Commun., vol. 21, no. 1, pp. 5–19, Jan. 2003.
[22] M. S. Miller, “Robust composition: Towards a unified approach to access control and
concurrency control,” Ph.D. dissertation, Johns Hopkins University, Baltimore, Maryland,
USA, May 2006.
[23] F. Long, D. Mohindra, R. C. Seacord, D. F. Sutherland, and D. Svoboda, The CERT Oracle
Secure Coding Standard for Java, 1st ed. Addison-Wesley Professional, 2011.
[24] MISRA Ltd, MISRA-C:2012 Guidelines for the use of the C language in Critical Systems,
Motor Industry Software Reliability Association Std., Oct. 2013. [Online]. Available:
www.misra.org.uk
[25] E. J. Schwartz, T. Avgerinos, and D. Brumley, “All you ever wanted to know about dynamic
taint analysis and forward symbolic execution (but might have been afraid to ask),” in Pro-
ceedings of the 2010 IEEE Symposium on Security and Privacy, ser. SP ’10. Washington,
DC, USA: IEEE Computer Society, 2010, pp. 317–331.
[26] J. C. Reynolds, “Separation logic: A logic for shared mutable data structures,” in Pro-
ceedings of the 17th Annual IEEE Symposium on Logic in Computer Science, ser. LICS ’02.
Washington, DC, USA: IEEE Computer Society, 2002, pp. 55–74.
[27] B. Livshits, M. Sridharan, Y. Smaragdakis, O. Lhoták, J. N. Amaral, B.-Y. E. Chang, S. Z.
Guyer, U. P. Khedker, A. Møller, and D. Vardoulakis, “In defense of soundiness: A mani-
festo,” Commun. ACM, vol. 58, no. 2, pp. 44–46, Jan. 2015.
[28] P. Larsen, A. Homescu, S. Brunthaler, and M. Franz, “SoK: Automated software diversity,”
in Proceedings of the 2014 IEEE Symposium on Security and Privacy, ser. SP ’14. Wash-
ington, DC, USA: IEEE Computer Society, 2014, pp. 276–291.
[29] P. Holzinger, S. Triller, A. Bartel, and E. Bodden, “An in-depth study of more than ten years
of java exploitation,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer
and Communications Security, ser. CCS ’16, 2016, pp. 779–790.
[30] B. Parno, J. M. McCune, and A. Perrig, “Bootstrapping trust in commodity computers,”
in Proceedings of the 2010 IEEE Symposium on Security and Privacy, ser. SP ’10. Wash-
ington, DC, USA: IEEE Computer Society, 2010, pp. 414–429.
[31] J. Viega and G. McGraw, Building Secure Software: How to Avoid Security Problems the
Right Way. Addison-Wesley Professional, 2002.
[32] M. Howard, D. LeBlanc, and J. Viega, 24 Deadly Sins of Software Security: Programming
Flaws and How to Fix Them, 1st ed. New York, NY, USA: McGraw-Hill, Inc., 2010.
[33] C. Collberg and J. Nagra, Surreptitious Software: Obfuscation, Watermarking, and Tam-
perproofing for Software Protection, 1st ed. Addison-Wesley Professional, 2009.
ACRONYMS
API Application Programming Interface.
ASLR Address Space Layout Randomization.
NX No Execute.