
Attack surface analysis of the Linux kernel based on complexity metrics

Thesis · April 2021 · DOI: 10.13140/RG.2.2.29943.70561

Author: Stefan Bavendiek, Hamburg University


Attack surface analysis of the Linux
kernel based on complexity metrics

Master’s Thesis in the study course


Applied Informatics / Software Engineering

at the NORDAKADEMIE gAG,
University of Applied Sciences,
25337 Elmshorn, Germany

Submitted by
Stefan Bavendiek
[email protected]

First reviewer: Prof. Dr.-Ing. Versick, Daniel


Second reviewer: Prof. Dr. Zimmermann, Frank

Processing period: 9 November 2020 to 8 April 2021


Abstract

The Linux kernel is one of the dominant operating systems in use
today. Like any complex system, it has a large attack surface that
can include vulnerabilities. When adversaries exploit vulnerabilities
in widely deployed software such as the Linux kernel, the consequences
can be severe. To improve the security of digital infrastructure, it
is therefore crucial to identify and mitigate software areas that are
prone to attack. While different approaches already exist to assess
and mitigate the attack surface of Linux, this research project aims
to identify the risks associated with different Linux kernel components
by using software complexity metrics. The resulting measures can help
identify highly complex kernel features and thereby support the
creation of secure kernel configurations.

List of Figures
1 Research Objectives . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 Security Boundaries . . . . . . . . . . . . . . . . . . . . . . . . . 23
3 User space to Kernel Communication . . . . . . . . . . . . . . . 37
4 Kernel Subsystems . . . . . . . . . . . . . . . . . . . . . . . . . 38
5 Research Approach . . . . . . . . . . . . . . . . . . . . . . . . . 55
6 Test execution steps . . . . . . . . . . . . . . . . . . . . . . . . 62
7 Module Complexity . . . . . . . . . . . . . . . . . . . . . . . . . 66
8 Hardware dependent Complexity . . . . . . . . . . . . . . . . . 71

Contents
1 Introduction 6
1.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Kernel Security . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.1 Linux Kernel Architecture . . . . . . . . . . . . . . . . . 7
1.2.2 Interface and Exposure . . . . . . . . . . . . . . . . . . . 7
1.2.3 System Calls . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Research Object . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4 Approach and Organization . . . . . . . . . . . . . . . . . . . . 11

2 Attack surface Analysis 13


2.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 Attack Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3 Kernel Attack Vectors . . . . . . . . . . . . . . . . . . . . . . . 15
2.4 Exposure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.1 Remote . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.2 Local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.3 Security Boundaries . . . . . . . . . . . . . . . . . . . . 20
2.5 Risk Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.6 Measuring Attack surface . . . . . . . . . . . . . . . . . . . . . . 25

3 Security Metrics 27
3.1 Measuring Security . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Vulnerabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.3 Software Quality Metrics . . . . . . . . . . . . . . . . . . . . . . 30
3.3.1 SLOC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3.2 Cyclomatic Complexity . . . . . . . . . . . . . . . . . . . 31
3.3.3 Cognitive Complexity . . . . . . . . . . . . . . . . . . . . 32
3.4 Complexity and Security . . . . . . . . . . . . . . . . . . . . . . 33

4 Kernel Architecture 35
4.1 Kernel Interfaces - System Calls . . . . . . . . . . . . . . . . . . 36
4.2 Overview of Linux subsystems . . . . . . . . . . . . . . . . . . . 37
4.2.1 System call interface . . . . . . . . . . . . . . . . . . . . 39

4.2.2 Process management . . . . . . . . . . . . . . . . . . . . 39
4.2.3 Memory management . . . . . . . . . . . . . . . . . . . . 39
4.2.4 Virtual file system . . . . . . . . . . . . . . . . . . . . . 40
4.2.5 Network stack . . . . . . . . . . . . . . . . . . . . . . . . 40
4.2.6 Device drivers and device management . . . . . . . . . . 41
4.2.7 Architecture-dependent code . . . . . . . . . . . . . . . . 41
4.3 Kernel Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.4 Hardware dependent code . . . . . . . . . . . . . . . . . . . . . 43
4.5 Source Code Structure . . . . . . . . . . . . . . . . . . . . . . . 44
4.6 Kernel Attack Surface . . . . . . . . . . . . . . . . . . . . . . . 46

5 Related Research 47
5.1 Attack Surface Analysis . . . . . . . . . . . . . . . . . . . . . . 47
5.2 Measuring Security . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.3 System Call Exposure . . . . . . . . . . . . . . . . . . . . . . . 48
5.4 Measuring Vulnerabilities . . . . . . . . . . . . . . . . . . . . . . 49
5.5 Quantified Attack Surface Reduction . . . . . . . . . . . . . . . 50
5.6 Shortcomings of previous Research . . . . . . . . . . . . . . . . 52
5.6.1 Manual Attack Surface Analysis . . . . . . . . . . . . . . 52
5.6.2 Measuring known Vulnerabilities . . . . . . . . . . . . . 53
5.6.3 System Call Sandboxing . . . . . . . . . . . . . . . . . . 53
5.6.4 Compile-time Kernel Reduction . . . . . . . . . . . . . . 54

6 Approach Reasoning 55
6.1 Attack surface and complexity . . . . . . . . . . . . . . . . . . . 56
6.2 Linux Kernel Modules . . . . . . . . . . . . . . . . . . . . . . . 57
6.3 Hardware Dependent Complexity . . . . . . . . . . . . . . . . . 58

7 Testing Process 59
7.1 Sonarqube . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
7.2 Test Parameter and Environment . . . . . . . . . . . . . . . . . 60
7.3 Test Execution Process . . . . . . . . . . . . . . . . . . . . . . . 62

8 Results and Interpretation 63


8.1 Complexity Overview . . . . . . . . . . . . . . . . . . . . . . . . 63

8.2 Complexity Difference . . . . . . . . . . . . . . . . . . . . . . . 64
8.3 Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
8.4 Impact of optional Kernel Modules . . . . . . . . . . . . . . . . 66
8.4.1 SELINUX . . . . . . . . . . . . . . . . . . . . . . . . . . 67
8.4.2 AMDGPU . . . . . . . . . . . . . . . . . . . . . . . . . . 67
8.4.3 KVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
8.5 Impact of File Systems . . . . . . . . . . . . . . . . . . . . . . . 69
8.5.1 ext4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
8.5.2 xfs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
8.5.3 btrfs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.5.4 namespaces . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.6 Impact of Hardware Dependent Subsystems . . . . . . . . . . . 71
8.6.1 Instruction Architecture . . . . . . . . . . . . . . . . . . 71
8.6.2 Hardware Drivers . . . . . . . . . . . . . . . . . . . . . . 72

9 Conclusion 73
9.1 Hardware Dependent Kernel Builds . . . . . . . . . . . . . . . . 73
9.2 Environmental Dependent Kernel Builds . . . . . . . . . . . . . 73
9.3 Install Time Kernel Build Reduction . . . . . . . . . . . . . . . 74
9.4 Comparison to Previous Research . . . . . . . . . . . . . . . . . 74

10 Further research 75

1 Introduction

1.1 Problem Statement

The security of information systems is one of the most relevant topics of the
digital age, and substantial resources are invested in securing digital
infrastructures [1]. Research on the current state of IT security indicates a
growing threat landscape [2], and cyber crime continues to increase as a
result [3]. As the complexity of our software grows, the number of security
vulnerabilities increases as well [4]. While cyber security consists of many
areas, such as application and software security, the basis of this
infrastructure is provided by the operating system. As one of the most common
general purpose operating systems, Linux is not immune to security risks [5].
The Linux operating system is used extensively both on web servers [6] and on
mobile devices [7]. With more than 2.5 billion active Android devices in the
world [8], the security of the Linux operating system is of great
significance. The Linux kernel itself, however, is a complex piece of
software, comprising more than 27.8 million lines of code in 2020 [9]. This
results in a large attack surface that affects the security of Linux-based
systems. The potential consequences of security vulnerabilities in common
software systems have been demonstrated by real-world cyber attacks such as
Stuxnet [10], WannaCry [11], Mirai [12] and NotPetya [13]. The damage caused
by NotPetya alone is estimated at more than 10 billion dollars, according to
senior cybersecurity officials in the US government [14]. Since our economy is
increasingly dependent on reliable digital infrastructure [15], the potential
damage caused by future cyber attacks becomes even more relevant. Given this
outlook on the consequences of exploitable vulnerabilities in common software
products, it becomes clear that critical software components like the Linux
operating system need to be prepared for future security challenges.
Mitigating the risk of widely exploitable vulnerabilities in the Linux kernel
requires a thorough analysis of the kernel architecture. By researching the
different components of the Linux kernel and their included complexity, it is
possible to illuminate which areas are most at risk.

1.2 Kernel Security

The Linux kernel contains a large code base, including components of
differing significance to the security of the system. The most relevant
kernel areas are briefly introduced here to allow a better understanding of
the research objective.

1.2.1 Linux Kernel Architecture

The Linux kernel consists of a number of subsystems that form the operating
system. Together, these components build a complex software system with a
significant codebase [9] that presents a large attack surface for security
vulnerabilities to be exploited [16]. To analyze and measure the attack
surface presented by the Linux kernel, it is essential to understand and
dissect the architecture and codebase of Linux. The architecture of the Linux
kernel has already been analyzed in detail [17] [18], and the current kernel
code is well documented [19]. Based on these documented architectural
descriptions, it is possible to conduct an analysis of the attack surface. A
significant part of this analysis includes research into the role of the
different Linux kernel components with regard to the complexity of the kernel
as a whole. Based on an understanding of the different code areas of the
Linux kernel, an analysis of the relevant code sections can reveal critical
parts of the kernel code that significantly impact the complexity of the
resulting software.

1.2.2 Interface and Exposure

The Linux kernel exposes different functionality to users and processes based
on their privileges [18]. This permission model is enforced throughout the
kernel and exposed through a number of interfaces the kernel provides. These
interfaces in turn expose different areas of the kernel to their users. To
analyze the attack surface of Linux, it is crucial to dissect the different
methods of interacting with the kernel and to break down the exposed codebase
and functionality these interfaces provide. The measurement of the exposed
codebase and functionality is used as the basis of the analysis. The specific
features provided by the kernel, as well as their software implementation,
carry different risks based on the data they handle, the privileges they may
provide, and the code quality of the implementation. These attributes require
a thorough examination to measure the detailed risk areas within the Linux
kernel. Besides the detailed examination of code execution paths, an attack
surface analysis also requires a specific threat model. The exposed and
potentially exploitable code depends on this threat model, which is commonly
defined by the location and privileges an attacker requires to reach the
exposed interfaces of the software system. The threat model differs
significantly for a remote unauthenticated attacker on the network in
comparison with a local user, or even a privileged user.

1.2.3 System Calls

The majority of software applications running on the Linux platform are
executed in user space [18]. This address space is kept strictly separate
from the kernel space that the Linux kernel itself uses. The kernel provides
user space processes with a number of services, such as file handling, device
access and memory management. To communicate with the Linux kernel and
request its services, a user space process uses system calls. When a user
space process needs a file to write to, it asks the kernel for a file handle
through the open() system call. Since the persistent storage hardware is
handled by the kernel and only an abstracted view of the virtual file system
is visible to processes in user space, the user space process uses this
system call to tell the kernel which file in the virtual file system it
requires and that it needs to write to this file. The kernel then performs
various checks to confirm that the permissions of the calling process are
sufficient to request the specified file. If all checks pass, the kernel
passes the file handle of the requested file to the user space process in the
form of the return value of the open() system call. If the user space process
wants to write to the opened file, it again has to request this by using the
write() system call. All user space actions involving resources that are
handled by the Linux kernel need to go through this process. The system calls
used by user space applications are often wrapped by libraries, which means
that the developer of a Linux application will not necessarily use a system
call directly. Often, libraries like glibc provide an abstracted interface
that uses a system call indirectly. In the end, however, the data provided by
the user space process is passed to the kernel, where it is processed. This
also means that exploitable vulnerabilities in system calls can impact the
integrity of the kernel itself. The consequences depend on the vulnerable
code and the functionality the affected system call provides. Recent examples
like the DirtyCow vulnerability [20] show the significant impact that kernel
vulnerabilities can have. The number of system calls exposed to a user space
application process therefore has a significant impact on the attack surface
of the kernel.

1.3 Research Object

Analyzing the attack surface of a software component to assess the involved
risks is crucial for making informed decisions regarding the security of a
system. Methods to conduct such an analysis for common software products have
already been established [21] [22] [23]. There is also existing research
regarding the analysis of the Linux kernel attack surface and the reduction
of the exposed functionality [24]. While these existing methods will also be
taken into consideration, the methodology for this research focuses on
complexity metrics. It has already been established that the complexity of a
software's codebase correlates with the resulting probability of
vulnerabilities [25]. Applying metrics like cyclomatic complexity [26] to the
code base of the Linux kernel creates a reproducible measure of the attack
surface related to different kernel features. To indicate the complexity
within different kernel components, this paper uses the kernel compile-time
configuration to measure the cyclomatic complexity during the build process,
using established static code analysis tools. This process is repeated with
different kernel build configurations to include specific Linux features,
such as selected kernel modules and file systems. Additionally, the impact of
specific kernel areas like hardware-dependent code is analyzed to assess
their relevance to the overall attack surface. The measured complexity can
subsequently be compared with the default kernel to make an assessment
regarding the impact of different kernel components on the overall kernel
security.

Figure 1: Research Objectives

Using this method to highlight more complex code areas has the potential to
identify parts of the Linux kernel that are more prone to software
vulnerabilities. The results of this analysis therefore provide system
engineers with a method to measure the attack surface of kernel features and
make qualified decisions regarding their use and their associated risks.

1.4 Approach and Organization

The attack surface of a complex software system like the Linux kernel requires
extensive research on various components. While there are some research
approaches for analyzing the detailed usage of kernel code by various features
and functions [23], the exact extent of used kernel code is difficult to
assess and can change rapidly, as the Linux kernel code is under constant
development. In order to find new and improved ways to assess the kernel
attack surface, previous research on Linux kernel security assessments and
general software attack surface measurement was analyzed. Using common
research sources like IEEE Xplore [27] and Elsevier [28], as well as search
engines like Google Scholar [29], existing research papers on related topics
were explored to assess the current state of research. Aside from attack
surface analysis methodologies and previous works on Linux kernel security,
additional research regarding the measurement of security attributes and
their metrics was targeted. Based on the identified previous research work, a
new approach to improve the current methodology for attack surface analysis
was created. To provide a reasonable assessment of the risks related to
specific kernel components and features, the approach used in this paper
focuses on software quality assessment, an established software engineering
methodology. Instead of approaching the security assessment of the Linux
kernel with previously used methods, such as attack vector identification and
measuring their exposure towards specific adversaries, this research project
uses static measurement of the cyclomatic complexity of different kernel
subsystems and features. With the previously established correlation between
complexity and security [25], this method provides an approximate indicator
based on a well-established complexity metric. While this process is not
expected to produce an exact measure of the attack surface, the results can
provide a reliable indication of security-sensitive code areas due to their
complexity. To produce comparable results, a number of common kernel features
are analyzed by compiling and measuring the Linux kernel with different
kernel configurations that include the targeted subsystems. Based on the
results, measures are presented that support a kernel configuration which
reduces the available complexity and thereby increases security. The first
part of this paper defines core principles such as the attack surface
definition and common security metrics. Specific attention is directed to the
advantages and issues of previously established methods for assessing
software security attributes. Subsequent chapters present a structural
overview of the Linux kernel code to highlight the relevance of the different
kernel components for this research. Related research approaches in this
field are analyzed to examine existing methods and their attributes. This is
followed by a detailed description of the analysis approach and test
execution. The last chapters present the measurement results and provide an
interpretation and possible use cases for the results of this research.

2 Attack surface Analysis
Attack surface analyses are used to identify areas of a software product that
are vulnerable to attacks. This includes the identification of potential
attack vectors as well as their exposure. This chapter describes the
fundamental steps in conducting an attack surface analysis and the challenges
related to its measurement.

2.1 Objective

Improving the security of a software project requires a detailed understanding
of the possible ways an adversary could interact with and attack the end
product. An attack surface analysis is used to provide an understanding of
the security risk present in the analyzed software and to highlight areas
that need improvement to reduce that risk [23] [21] [22]. It can also be used
to monitor how the attack surface of a software changes with newly
implemented features and how these changes impact the risks involved. Code
areas that have been identified as having a significant impact on security
can then be reviewed more closely to find potential vulnerabilities in the
affected functions. An attack surface analysis also provides a means to make
a threat assessment of specific features of a software and thereby allows a
closer evaluation of the risks associated with specific modes of operation or
optional features [30]. Based on this, security architects can make informed
decisions about the use of these features. Essential software components that
present a large attack surface can then receive additional defense-in-depth
security features to prevent successful exploitation in the presence of
vulnerabilities and to limit the consequences of a successful compromise [31]
[32]. To design a secure software product, it is essential to be aware of all
the potential ways an attacker could attempt to compromise it. Secure
software development frameworks like Microsoft's Security Development
Lifecycle (SDL) [33] require developers to be aware of the current and future
attack surface in order to take appropriate measures to minimize the
probability of exploitable security issues in the end product. An attack
surface analysis is therefore an essential step in improving the security of
a software product, and all subsequent work is based on the risks identified
during this step.

2.2 Attack Vectors

The attack surface is composed of the sum of all possible ways to interact
with the software during its operation, also known as attack vectors [21]. In
general, this includes all kinds of input and output of data that the
analyzed software is involved in [23]. Every single communication path
between the software and outside systems can therefore be considered an
attack vector as well. Since communication requires the processing of data
from external sources, it presents a possible way to exploit vulnerable code
in the targeted software and adds to the attack surface of the application.
This affects not only functions that are directly involved in the processing
of external data, but also all code that is indirectly involved in supporting
those functions. While a parsing vulnerability in a data processing function
is the most direct way to exploit the targeted software, modern cyber attacks
use more complex exploit chains that demonstrate how even seemingly unrelated
code areas can influence the attack surface [34] [35]. To measure the overall
attack surface, it is therefore essential to identify the attack vectors a
software exposes and to assess the risk involved in each specific attack
vector [23]. Since the objective of an attack surface analysis is to assess
the possible risks involved in running a software application, it is
necessary to consider all possible attack vectors of the overall system. That
includes not only the application itself, but all other related services and
system components required to run it. A web application by itself may have a
seemingly simple attack surface that is mainly composed of the exposed API
functions. However, it depends not only on a web server for its use of the
HTTP protocol, but also on a large number of underlying system functions the
operating system provides. Other services used to manage and maintain the
application platform also affect the attack surface of the system as a whole.
While in some cases it may be intended to analyze only the additional attack
surface of a specific system component, the objective of this paper is to
analyze the attack surface of the Linux kernel itself, which is a common part
of the overall system of many modern software applications.

2.3 Kernel Attack Vectors

The Linux kernel presents a large number of possible ways for user space
applications to interact with the services provided by the kernel. In
general, all kernel functions available to user space processes are provided
through system calls. Through these system calls, applications can make use
of different kernel subsystems such as process management, memory management,
the virtual file system or network interfaces. These subsystems in turn use
architecture-dependent code as well as device drivers that are specific to
the system hardware. The subsystems are also influenced by kernel extensions
such as those provided by kernel modules. While only a small part of the
kernel codebase is directly accessible to user space applications through the
provided system calls [36], all kernel code is indirectly involved in
providing the requested services of the operating system. This means that
even if a kernel function does not provide a direct interface to user space
applications, its code still provides functionality that user space
applications rely on. Kernel subsystems that are directly involved in
processing data from user space applications present the primary attack
vectors of the kernel, while other kernel code provides these directly
exposed subsystems with indirect functionality. Since all kernel code lives
in the same memory space, a single error in any part of the kernel can affect
the whole system [37] [38]. While a complete list of available attack vectors
depends significantly on the kernel features in use, the following list
presents an overview of some common attack vectors on Linux systems.

• Network Interfaces

– Raw sockets, IP Stack (OSI Layer 3)

– Network Sockets (TCP, UDP) (OSI Layer 4)

– Network Services (OSI Layer 5-7)

• Inter Process Communication

– Shared Memory

– System V IPC

– POSIX message queues

– Unix Domain Sockets

– Middleware IPC Services (DBus)

• Kernel Interfaces

– Kernel API - System Calls

– Kernel ABI - Internal kernel interfaces used by kernel components and drivers

– Kernel variables - Kernel memory space

While some of these listed attack vectors, like network and kernel
interfaces, directly expose kernel space functions that may lead to a direct
compromise of the kernel itself, other attack vectors like inter-process
communication channels can indirectly expose additional attack vectors by
exploiting higher privileged user space processes [39]. For example, the
exposure of kernel system calls differs between user space processes
depending on their inherited privileges [40]. Even kernel code that does not
process user space data in any way may have an impact on the security of the
system [41]. Measuring these indirect risks is difficult, and new attack
vectors that expose previously security-unrelated kernel code are regularly
found by security researchers [42]. The case of Linux user namespaces [41]
demonstrates how seemingly unrelated code that does not process untrusted
data can indirectly lead to the exposure of additional attack vectors with
significant impact [43] [44] [45]. These common kernel attack vectors
therefore represent only a subset of the current attack vectors presented by
the Linux kernel.

2.4 Exposure

Attack vectors depend on their exposure towards a specific attacker [23].
Some attack vectors may not be reachable for specific adversaries, depending
on the capabilities, physical location and privileges of the attacker. The
specific attack surface from the perspective of an attacker therefore depends
on the exposure of the reachable attack vectors. An unauthenticated remote
attacker has different attack vector options for targeting the software than
an authenticated user, or even a user with local shell access [40]. The
different kinds of access are defined by the security boundaries a system
implements. The exposure also depends on the physical location and the
privileges of a user. Essential questions to be answered during the attack
surface analysis focus on the different possible ways an attacker can
interact with the system [46] [22]. Depending on the kind of software that is
being analyzed and the deployment environment, the exposure of the software
can change significantly.

2.4.1 Remote

The exposure of remote network services is one of the most relevant cases for
attack surfaces analysis [22]. Since network based services are often provided
to a large number of possibly untrustworthy systems and hostile adversaries,
assessing and restricting the exposure of these systems is an established se-
curity measure [47]. Most predominantly internet facing services are common
application that face this kind of exposure. By providing an interface to the
public internet that allows arbitrary users to interact with the software system,
a large exposure needs to be faced by this application. Keeping the number of
attack vectors to a minimum, is therefore a crucial task for system engineers
that build internet facing services, since any exploitable vulnerability in the
provided software interfaces will be available to attack for all internet con-
nected systems [48] [49]. Due to this, internet facing systems that are directly
reachable to the public, often provide limited functionality, while the more
complex functionality of these internet services are more restricted, for exam-
ple by authentication requirements [50]. Other remote services can be provided
to the systems of a specific network only, creating an intranet, thereby restric-
tion the number of possible attackers. Networks such as common intranets
usually provide a significantly smaller exposure to its internal services [51].
Common measures to reduce the exposure of a service, such as firewalls and
VPN services, have been around for several decades [47]. However, common
intranet services such as applications based on the RDP, SMB or NFS pro-
tocol may be designed to be reachable by trusted entities only [52] [53]. To
reduce the attack surface of these services, network based restrictions are an
important part of the architecture [54]. The effectiveness of network restric-
tion measures are however not only difficult to measure and enforce [55], the
increasingly dynamic applications environment of modern IT infrastructures
also make it difficult to implement the required security policies reliably [56].

17
Since attackers only need to compromise a single device on the intranet to
circumvent network based restrictions, the exposure of these services, while
not directly available, is still reachable by combining an attack with additional
steps, such as compromising an end user device by phishing [57].
Other measures to reduce the exposure of network-based software applications
include authentication requirements for access to specific features. By
requiring authentication for parts of the software functions, the attack vectors
of these parts are shielded from unauthenticated entities [58]. However, the
authentication mechanism itself can be significantly complex, thereby adding
new attack vectors [59]. Beyond the additional attack surface that network
restriction and authentication mechanisms add to a system, they also introduce
additional systems [60]. While a simple, internet-facing application with
minimal attack vectors may face a significant exposure, more complex systems
that are shielded by additional measures to reduce the exposed attack vectors
also gain new attack vectors presented by those additional security measures
[61]. The exact exposure presented by more complex remote network-based
applications is hard to measure [62] and to enforce reliably [55], while
additional restrictions to limit this exposure can also add attack vectors
and complexity to the overall system, thus increasing the overall attack
surface [61].
Linux kernel attack vectors that are most relevant to this kind of exposure
include network interfaces on the different OSI layers [63]. However, similar
to the network-based restrictions discussed before, some parts of the kernel
may be indirectly accessible to a remote entity [35]. Since the remotely
exposed network application itself runs on the local system, it can directly or
indirectly provide access to the locally exposed attack vectors. While a
well-designed network service may try to shield access to local kernel
interfaces [32], exploitable vulnerabilities in the remote application itself
can expose the local attack vectors of the operating system.

2.4.2 Local

While many services are made available to a large number of entities via net-
worked systems, some kinds of services and interfaces may only be accessed

locally. The exposure of a system's attack vectors from the perspective of a
local entity changes significantly depending on the available services, their
privileges and configuration [40]. While the network interfaces of a system as
well as the network-based applications are expected to face significant expo-
sure, local interfaces may be built with different threat models due to their
limited exposure [64]. Measuring the exposure of the local attack vectors de-
pends on more detailed assumptions regarding the possibilities of an attacker
[65]. Local software applications or services may be built with the expectation
that only trusted users and input data will interact with the running process.
The initial attack surface of a local application can be reduced to its local in-
terfaces and parsing functions [66]. One possible way for an attacker to interact
with local applications is to indirectly send input data that exploits a
parsing error within the local process. This can be achieved by tricking the
user into opening an attacker-controlled file, or by exploiting design errors in
remotely reachable applications that send data to local processes for further
processing [67]. This kind of indirect interaction with local applications is
often implemented through phishing attacks or other kinds of psychological
tricks that lead a legitimate user into opening untrustworthy data with a local
software application [57]. Complex file types, such as PDF or office file
formats, which are an established way of exchanging data on the internet, are
especially prone to be exploited using this method [68]. Vulnerabilities in
these kinds of applications are therefore a primary attack vector for local
software.
Once an attacker has gained code execution on a local system, either by
exploiting a remote service or a local application, the available attack vectors
towards the local system change significantly. While the attack vectors of local
applications and remotely accessible services are defined by the specific
software that the overall system is often designed for, a local attacker can
access all services and interfaces that are provided to the exploited process.
Arbitrary code execution in the context of a process means that the attacker
has gained the privileges of the user running the exploited process [69].
A local user or process on a Linux system is therefore permitted to access
significantly more resources and services provided by the kernel. Some attacks,
such as denial of service, are trivial for local users to execute in the absence of
additional hardening features. For example, any process on a Linux system can

fork itself indefinitely until the available system resources are depleted [70]. If
the targeted process is running with limited user permissions, the attacker may
have gained access only to those resources that the exploited process requires
to provide the intended services. However, the next permission boundary the
attacker can interact with after achieving local shell access is that of other,
more privileged applications as well as the operating system kernel itself
[39] [40] [36]. While the reachable attack vectors of a remote network service
are limited to specific functions intentionally provided by the application,
the interface surface presented to local processes by local services and kernel
interfaces is significantly larger. The number of available system calls in
version 3.7 of the Linux kernel already amounted to 393 [71]. These system
calls present the attack vectors that are directly exposed by the Linux kernel
from the perspective of a local user space process.
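To make the scale of this local interface concrete, the following sketch shows that even an unprivileged process can invoke kernel system calls directly, bypassing any library wrapper. It assumes an x86-64 Linux system; the syscall number used is architecture-specific and purely illustrative.

```python
import ctypes
import os

# Load the C runtime, which exposes the raw syscall(2) entry point.
libc = ctypes.CDLL(None, use_errno=True)

# getpid is syscall number 39 on x86-64 Linux (architecture-specific
# assumption); it is harmless, but the same mechanism reaches every
# system call the kernel exposes to this process.
SYS_getpid = 39
pid = libc.syscall(SYS_getpid)

# The raw syscall returns the same value as the libc wrapper.
print(pid == os.getpid())
```

Mechanisms such as seccomp exist precisely to narrow this interface by filtering which system calls a process may invoke.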

2.4.3 Security Boundaries

As discussed in the previous chapter, a significant difference regarding the
number of exposed attack vectors can be located at the network level. Once
an attacker has gained local code execution, the number of possible attack
vectors is greatly increased compared to the usual network application
interfaces that are reachable by remote attackers [39]. There are, however,
more security boundaries than the network periphery [40]. Privileges are used
to enforce a more detailed permission set to control access for specific
entities and resources [69]. These boundaries are enforced by authentication
[72] [73] and authorization mechanisms [74] that verify the user's identity
and grant permissions based on the defined role of that identity. Common
authentication mechanisms are often based on password verification, using a
shared secret between a specific user and the authenticating application [75].
The authorization can be provided either by the application itself, a specific
service, or the operating system kernel. Using well-established security
boundaries is therefore an important aspect of secure system design, as they
affect the attack surface of a software system [76]. Common security boundaries
in software systems include the previously discussed application-based
authentication [72] [48], network-based authentication [55] and kernel-based
authentication [69] [18].
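As an illustration of such a shared-secret mechanism, the following sketch shows salted password verification using PBKDF2 from the Python standard library. All names and parameters here are illustrative assumptions; production systems typically rely on dedicated, memory-hard password hashing schemes.

```python
import hashlib
import hmac
import secrets

ITERATIONS = 100_000  # illustrative work factor


def hash_password(password: str, salt: bytes) -> bytes:
    # Derive a digest from the shared secret; the salt prevents
    # precomputed-table attacks against the stored value.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)


def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison avoids leaking information
    # through timing side channels.
    return hmac.compare_digest(hash_password(password, salt), stored)


salt = secrets.token_bytes(16)
stored = hash_password("correct horse", salt)
print(verify_password("correct horse", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))    # False
```

Note that, as argued above, this verification code itself becomes part of the attack surface it is meant to protect.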

An unauthenticated remote entity is usually presented with the smallest
attack surface of a software system. After being authenticated, a remote
entity is exposed to an increased number of application interfaces, depending
on the specific permissions that entity was granted. However, the exposed
attack vectors are still limited to the application, unless the remote service
contains vulnerabilities significant enough to allow the user to execute code
on the local system [77]. An application-based authentication boundary is
therefore often exposed to a large number of systems, as in the case of common
web-based applications that are provided to the public internet with minimal
identification of the remote users. Even with only a small number of attack
vectors, the significant exposure of these applications leads to a large attack
surface, due to the easy reachability of publicly accessible systems [78].
For this reason, these kinds of applications are usually confined to a DMZ
network segment, thereby shielding other services from further, indirect
attacks [79]. In case a remote application interface is provided to a more
limited network, such as an intranet, the security boundary may be implemented
by a VPN service that controls access to the local network itself. Restricting
access to a service by physical proximity is an alternative boundary that
involves fewer technical and more social attack vectors [80]. Overall, the
exposure of internal network services is significantly smaller than that of
software systems exposed to the public internet. This can, however, lead to
additional risks, since the exposure from the internal network service towards
other internal infrastructure systems becomes a new factor that is relevant
once an attacker compromises a single system with intranet access [55].
While the initial exposure of a software system that is only accessible from an
internal network is significantly smaller compared to public internet services,
the indirect exposure of other critical systems can be greatly increased, making
an assessment of the overall attack surface of the infrastructure more difficult.
This phenomenon is even more significant if the security boundary towards a
local operating system is targeted. Local applications that are not provided to
other networked systems have a very limited exposure towards attackers.
Since an attacker cannot interact with local software without first gaining
access to a local user, the direct exposure of these local software interfaces is
nonexistent. However, indirect access is still possible, for example by exploiting

another network-based service on the same host, or by using social engineering
methods to trick the user into parsing untrusted data that exploits vulnera-
bilities in the local applications used to process this data. Accessing the
local attack vectors of a system therefore requires additional attacks on
other parts of the system. However, once local code execution is achieved, the
available attack vectors are significant. After gaining local shell access with
the permissions of either a restricted or normal user, the next security
boundary is presented by the authentication system that enforces the available
permissions on the local system, with the objective of gaining full
administrative control. This can be achieved by exploiting vulnerabilities in
privileged services that are exposed to unprivileged users. Depending on the
privileges an application is running with, the attacker may already have gained
privileged system access at this point. Besides exposed privileged user space
applications, an attacker with local shell access already has access to a large
number of system calls provided by the operating system kernel. Exploiting
kernel vulnerabilities in these system calls can allow a local user to gain
administrative privileges.
The last possible security boundary of a software system is that between user
space and kernel space. This boundary is enforced by the different protection
rings implemented by the CPU itself. Gaining full kernel space access can be
achieved by exploiting critical vulnerabilities in the system calls that the
operating system provides. The availability of system calls is also affected
by the permissions of a user [36]. While a large number of system calls on
Linux systems is already available to unprivileged users, some system calls
that provide security-relevant functions are only exposed to privileged users.
Kernel features that extend the exposure of system calls to unprivileged users
have consequently led to security issues in the past [81]. As discussed, the
security boundaries affect the exposure of attack vectors. The visualization
in Figure 2 illustrates the effects of security boundaries with regard to the
exposure of different system components. While the directly reachable inter-
faces (red) indicate the immediate exposure, a number of additional attack
vectors are reachable indirectly (blue), either by user interaction [80] [57] or
through breaching previous security boundaries [55] [40].
Taking all these different security boundaries and their individual exposure
into account makes an attack surface analysis significantly challenging. It can
be summarized that the exposure of a software system is greatly influenced
by the position and privileges of an attacker, and that the exposure of attack
vectors changes with each security boundary an adversary has overcome.

Figure 2: Security Boundaries

2.5 Risk Assessment

Following an attack surface analysis, a risk assessment based on the results
may be the next step. While the technical analysis of the attack surface,
based on exposure and attack vectors, has shown a number of possible targets
for an attacker, many of the discussed attack steps may not be necessary to
achieve significant damage. The exposure of an attack vector may have
significantly less relevance if the resulting access is not required to
achieve an attacker's

objective. For some applications, the data a system contains is the actual
target. This data may be available even to unprivileged users, thus making
further attacks on the system unnecessary [82]. The relevance of the attack
surface of specific parts of a system is therefore significantly influenced by
the data and services the software system provides [54]. One common example is
a database that is accessed by a web application to manage user data like
credit card information. If the objective of an attacker is to access and sell
this data, it may be enough to find and exploit a single vulnerability, accessed
through a publicly exposed attack vector. Given that attackers who aim for this
kind of data are usually profit-driven, the direct accessibility of this data
would make the effort for such an attack worthwhile [3]. Since applications
like these are often used by a large number of organizations, a single
vulnerability in an interface that is commonly exposed to the public internet
will most likely result in a large-scale attack against public instances of
the software, since it only requires the development and deployment of a single
exploit. The very limited attack surface of the application will not protect
it, due to the large exposure [78]. In other cases, the relevant data may
only be accessible via security-hardened systems, located on an internal,
highly monitored network that can only be accessed using strong authentication.
Taking into account the significant security boundaries required to access this
data, the indirect exposure of the target system is very small. However, if the
data is important enough to warrant these extensive security measures, an
attacker interested in compromising the information may invest similar efforts
to attack the involved systems. A practical example of a cyber attack that
includes this kind of investment can be found in the Stuxnet malware and
its use against strategic targets [10]. The potential gain of cyber attacks
increases with the development of digital infrastructures, and governments
focus less on economics and more on strategic objectives, which makes the usual
costs of developing exploits less relevant [83]. The relevance of a system's
exposure with regard to an attack surface analysis is therefore influenced
not only by the technical environment a system is embedded in, but also by
non-technical factors like the value of the data, potential adversaries and
their capabilities, or even political developments. The overall security risk
presented by a software system can be assessed using a number of factors besides

the attack surface itself. The DREAD risk assessment model developed by
Microsoft uses five categories to determine the associated risks [84]. Damage,
reproducibility, exploitability, affected users and discoverability are
combined into a risk rating for a specific software deployment according to
this model. Other threat models in use today include the STRIDE model [84],
which approaches risk assessment by measuring the consequences of an attack.
Similar to the STRIDE model, the Common Vulnerabilities and Exposures (CVE)
scheme has seen wide adoption for the assessment of risk with regard to
software vulnerabilities [85]. The exact measurements for calculating the risk
associated with a software system depend on a number of system attributes and
priorities; however, risk is generally calculated as the likelihood times the
impact of a threat [86]. The attack surface of a software system represents
the likelihood of a successful attack, while the data value significantly
determines the impact of successful exploitation [87] [88].
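The relationship stated above can be expressed as a trivial formula. The numeric scales in the following sketch are illustrative assumptions, not part of any cited model:

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Risk is commonly quantified as likelihood times impact."""
    if not 0.0 <= likelihood <= 1.0 or impact < 0.0:
        raise ValueError("likelihood must be in [0, 1], impact non-negative")
    return likelihood * impact


# Hypothetical comparison: a public-facing service with a moderate
# chance of compromise versus a well-shielded internal system that
# holds high-value data.
exposed = risk_score(likelihood=0.6, impact=3.0)
internal = risk_score(likelihood=0.1, impact=9.0)
print(exposed > internal)  # the exposed service carries the higher risk here
```

Such a calculation only orders scenarios relative to each other; the inputs themselves remain subjective estimates.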

2.6 Measuring Attack surface

As discussed in previous chapters, the attack surface of a software system
comprises the attack vectors that are exposed towards a specific attacker
[21]. Measuring this attack surface by assessing the available attack vectors
is a difficult task [21]. Taking into account the exposure of those attack
vectors in a dynamic network environment will additionally only provide a
momentary picture of the attack surface [89]. Assessing the attack surface in
this way is therefore difficult and resource-intensive, and it provides limited
value due to the changing environment. Additionally, the number of attack
vectors can change as well, when security researchers identify new ways of
exploitation that introduce new attack vectors in unforeseen ways. Recent
examples can be found with Spectre [42] and Meltdown [90], which have both
revealed new attack vectors in a large number of software systems. Significant
software changes that introduce new features can also expose existing software
interfaces through security boundaries that have previously shielded them from
exposure. The introduction of Linux user namespaces is one example where a new
feature has led to the exposure of additional attack vectors [81]. Significant
parts of the Linux kernel that were previously accessible only by privileged
users were
exposed to unprivileged users by the introduction of this feature. A number
of kernel bugs were made exploitable for unprivileged users as a result [43]
[44] [45]. Related security risks are still present in modern container
technology [91]. An attack surface analysis that assesses the risk of an attack
vector based on its exposure would be rendered incorrect by the introduction of
such new software features as well as by the discovery of new classes of
vulnerabilities like Spectre and Meltdown. Exposure-changing software features
as well as newly discovered classes of vulnerabilities therefore make it
obvious that the exposure of an attack vector is very difficult to assess and
unreliable in light of changing threats and environments. Even worse, the
previously mentioned CPU-based vulnerabilities did not only change the exposure
of vulnerable code, but introduced new attack vectors as well. Since security
researchers are regularly publishing newly discovered attack vectors [92] [93],
any attack surface analysis only presents a fragile assessment of the current
state of a software system that can easily change through external factors
[62]. The question remains how the attack surface of a software system can be
measured reliably, without depending on the environment, use cases, processed
data value, adversarial capabilities and newly discovered vulnerability classes.

3 Security Metrics
The assessment of the security attributes a software system possesses requires
an objective metric to allow a comparison between different systems and com-
ponents. This chapter will describe the different challenges in measuring
security in software, as well as established metrics such as counting identified
vulnerabilities and using software quality metrics, due to their relevance
in previous research.

3.1 Measuring Security

Measuring the security of a software system using objective and reproducible
metrics has been a longstanding challenge [94]. The security of a system is
maintained by ensuring the intended functionality cannot be changed by unau-
thorized entities. Any possible way to interact with a system that allows such
unintended changes is defined as a vulnerability of the system [85]. As has
been discussed in the previous chapter, the measurement of attack vectors and
their exposure is fragile at best and only results in a snapshot of the current
security state of a system based on currently known risks [62]. One significant
area where this fragility has become obvious is with implementations of
cryptographic algorithms. While there have been cases where the cryptography
was broken by advances in cryptanalysis, most attacks in this area target
implementation errors instead [95]. The attack vectors that were traditionally
most closely analyzed turned out to be mostly irrelevant. Considering the size
of some software applications, a manual analysis also requires extensive effort
and, given its limited value, may not be reasonable. Comparing the security of
two different software products also requires the comparability of such a
metric between different systems. An absolute metric to compare the security
attributes of different systems may not be possible [89]. However, a relative
comparison between different versions of a software system can provide an
objective result that indicates whether one version is more secure than the
other [62] [21]. Comparing different software products with a similar function
and core attributes may also provide a meaningful result that helps to
determine the more secure product. Given that security is defined by the
enforcement of an intended state, one thing that can be measured during
security assessments is not security but rather the deviance from the intended
state, or insecurity. Counting the number of previously known vulnerabilities
a software product has had is one way to measure such a deviation and to
compare different products that provide similar functionality with each other.
Software quality metrics are also an established method to measure common
quality attributes of software source code. Given that security is tightly
connected with software quality attributes, it may be possible to use some of
these established methods to make an assessment about software security
attributes as well.

3.2 Vulnerabilities

One established method of measuring the security of software is to count
the number of known vulnerabilities that have been found in the past [94].
This metric is, however, highly dependent on the effort put into discovering
and documenting vulnerabilities. It also needs to be considered that the
discovery of vulnerabilities in the source code can be the result of good
software development. By using secure coding standards and extensive test
efforts, many software vulnerabilities can be found at an early development
stage as well [33]. This metric is also influenced by the resources available
to a project, like the number of developers reviewing the source code. With
increased effort put into finding and fixing security-related software errors,
the number of discovered vulnerabilities increases as well. In comparison, a
software project that does not apply secure coding practices through testing
and code reviews will find fewer vulnerabilities. Measuring the number of known
and documented vulnerabilities would then indicate that the former software
product is less secure than the latter, while this is not the case.
Given that some development teams also do not document security-relevant
coding errors, this metric has very limited value when comparing different
products, and the results vary significantly depending on the effort put into
finding and documenting security issues.
The application of secure software development practices can reveal and
prevent security issues before they are introduced into production release
stages. These software quality measures will therefore prevent vulnerabilities
at an early stage of the development life cycle. The discovery of existing
vulnerabilities in the released software product by independent security
researchers, or the discovery of active exploitation in the field, will also
increase this metric. However, the chance of vulnerabilities being discovered
in this way depends significantly on the circulation of the end product.
Another factor is the average value of the systems and data influenced by the
software itself, as well as the environments the software is used in. A
software product that is not deployed on large numbers of systems will decrease
the chance of a profit-driven adversary developing exploits for this product.
If the data and systems connected to the software are of low value, attackers
will also have no significant incentive to attack the software, thus reducing
the number of vulnerabilities discovered through public exploitation. Comparing
the number of known vulnerabilities between different software products is
therefore additionally dependent on the comparability of their deployment
conditions. Only if the exposure, data value, use case and effort put into
finding vulnerabilities are similar in the compared products are the basic
conditions for a comparison provided. Other factors that need to be comparable
are the secure coding practices used and development resources like the number
of developers involved. These conditions are, however, very rarely met in
real-world software projects, as every one of these factors can easily vary to
a significant degree. The value of counting the number of known and documented
vulnerabilities to compare the security of software products is therefore very
limited, and it does not appear to be a very useful metric to assess their
security. The dependence on the effort put into discovering vulnerabilities
as well as the discussed external factors make this metric unusable for
assessing software security in general. The number of known security
vulnerabilities can, however, provide value to a security assessment in that
it documents past issues that may or may not indicate an overall software
security state. Using this metric as an indicator for vulnerable areas can
still be useful to guide the focus of future security reviews towards these
specific areas. What this metric is not able to provide is an indicator for
the absence of vulnerabilities.

3.3 Software Quality Metrics

Software quality metrics allow a quantitative evaluation of different code
quality attributes that indicate compliance with functional and non-functional
requirements [96]. While some software quality attributes like efficiency are
not directly relevant to the security attributes of the code, many quality
metrics like reliability and maintainability are related to the security
attributes [97]. Consequently, the use of software quality metrics provides a
valid and reproducible method to measure the security attributes of source
code [94], and it has been used for attack surface analysis in the past [24].

3.3.1 SLOC

Source lines of code (SLOC) is a simple software metric that is commonly used
to measure the extent of an application's source code. Since a larger codebase
has a higher potential for coding errors, the SLOC count presents an indicator
of potential quality and therefore security issues in program source code.
However, the extent of the source code itself does not make any statement
about the actual quality of the code and only allows a loose correlation
between the size of a software project and its potential for errors. While it
is an intuitive measurement of source code attributes, such as the extent of
the code base and the ratio of logical code to comment lines, it is also an
imprecise measure. Difficulties regarding the comparability of this metric
originate from the different possible rules for counting valid lines of code
[98]. While there are a number of available applications to measure SLOC, the
results can vary significantly, making a comparison between software projects
measured with different tools even more difficult. Among the most relevant
disadvantages of this method are its significant dependency on the language
and programming style, and its loose correlation with functionality. While the
comparison of projects written in the same programming language can still
provide comparative value, different programming styles can distort the
results significantly, making the metric itself only a very simple and shallow
indicator of the actual extent of the code base. Yet SLOC remains one of the
most commonly used metrics for assessing some aspects of source code with
regard to expected quality.
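The counting ambiguity discussed above can be illustrated with a deliberately naive SLOC counter for C-like sources; real tools apply far more elaborate rules, which is exactly why their results diverge:

```python
def count_sloc(source: str) -> dict:
    """Classify each physical line as blank, comment, or code.
    Naive sketch: ignores '//' inside strings and mixed lines."""
    counts = {"code": 0, "comment": 0, "blank": 0}
    in_block = False  # inside a /* ... */ block comment
    for line in source.splitlines():
        stripped = line.strip()
        if in_block:
            counts["comment"] += 1
            if "*/" in stripped:
                in_block = False
        elif not stripped:
            counts["blank"] += 1
        elif stripped.startswith("//"):
            counts["comment"] += 1
        elif stripped.startswith("/*"):
            counts["comment"] += 1
            in_block = "*/" not in stripped
        else:
            counts["code"] += 1
    return counts


sample = """\
/* adds two integers */
int add(int a, int b) {
    // trivial body
    return a + b;
}
"""
print(count_sloc(sample))  # {'code': 3, 'comment': 2, 'blank': 0}
```

Changing any single classification rule above already shifts the totals, which mirrors the tool-dependence of SLOC in practice.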

3.3.2 Cyclomatic Complexity

Cyclomatic complexity is a quantitative method to measure the complexity of
source code, defined in 1976 by Thomas J. McCabe [26]. It is one of the most
commonly used software quality metrics for indicating the complexity inherent
in a software product. Code review applications such as SonarQube measure
cyclomatic complexity using static code analysis [99]. The metric calculates
the number of possible independent paths through the source code. This allows
an exact measurement of the number of test cases necessary to achieve full
test coverage, in addition to serving as an indicator of the complexity of the
product itself. The number of possible code execution paths measured by
cyclomatic complexity provides a reproducible and objective metric with regard
to software code quality. However, this mathematical complexity is not
necessarily equivalent to the difficulty of comprehending the source code
functionality itself, where complexity means the difficulty to understand and
verify. While it is possible to write code with a small cyclomatic complexity,
it can be desirable to instead write code that is easier to understand from a
programmer's perspective, even if this results in a higher cyclomatic
complexity. Well-designed code that can be read and maintained easily may
therefore be objectively of higher quality even if it includes a higher
cyclomatic complexity than necessary. This measure is therefore not optimal
for assessing the understandability of source code from the perspective of a
human reviewer. While many programming errors can be found, or at least
indicated, by static code analysis tools, many review processes still require
manual analysis to verify the intended functionality of the code. It is
therefore crucial to keep in mind the limitations of cyclomatic complexity as
a software quality measure and to consider alternative metrics as additional
indicators of readability. In general, cyclomatic complexity provides a
valuable indicator of source code quality. A number of software quality
aspects such as maintainability are directly affected by complexity,
indicating that low complexity improves software quality in general. The most
relevant software aspect measured by complexity metrics in this paper is,
however, software security.
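A minimal sketch of the metric, for a single C function, uses the equivalence M = (number of decision points) + 1; the keyword list below is an illustrative approximation, not a full parser:

```python
import re

# Branching constructs that add an independent path in C-like code
# (illustrative subset; a real tool parses the control-flow graph).
DECISIONS = re.compile(r"\b(?:if|for|while|case)\b|&&|\|\|")


def cyclomatic_complexity(c_source: str) -> int:
    """Approximate McCabe complexity of one function:
    M = E - N + 2 for a connected control-flow graph, which for a
    single function equals the number of decision points plus one."""
    return len(DECISIONS.findall(c_source)) + 1


sample = """
int classify(int x) {
    if (x < 0 && x > -10)
        return -1;
    for (int i = 0; i < x; i++)
        if (i % 2) return i;
    return 0;
}
"""
print(cyclomatic_complexity(sample))  # 4 decision points -> 5
```

The result also bounds the number of test cases needed for full branch coverage of the function, as noted above.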

3.3.3 Cognitive Complexity

While cyclomatic complexity does not measure the readability of source code,
the cognitive complexity metric was defined to also include considerations
regarding the human effort required to understand the source code [100].
Cognitive complexity measures include code attributes that increase the
difficulty of understanding the functionality from a programmer's perspective.
While it can be argued that cyclomatic complexity provides an exact metric of
code execution path complexity, as relevant during execution or static code
analysis, there are also use cases that require a profound understanding and
intelligent analysis of the code itself. Manual code reviews, especially of
security-critical software like cryptographic libraries, are still required to
find certain classes of logical bugs and deviations from intended
functionality that cannot be found by static code analysis. To enable an
efficient review and maintenance of such code, it is crucial that the code is
kept readable to a reasonable degree. Cognitive complexity is not a substitute
for cyclomatic complexity but instead focuses on different complexity
attributes. Using multiple complexity metrics to complement each other and
compensate for methodological shortcomings can therefore be a valid strategy
to increase the informative value of a complexity analysis.

3.4 Complexity and Security

Measuring the security of software in an objective and reproducible way is a
difficult task without a fully established solution. With attack surface analysis
methods being highly dependent on fast-changing environmental factors like
exposure and known attack vectors, and in the absence of a generally
applicable security metric for traditional attack surface analysis results,
the question remains how the security of a software system can be measured in
a generic and reproducible way. One possible approach is to forgo exact,
specific security measures and address the problem at its core, at the cost
of precision. Previous research has shown a correlation between software
complexity metrics and the number of defects in software source code [101]
[102] [25]. Additional correlations have been found between CVE-based
vulnerabilities and code quality metrics [103]. A quantified approach to
measuring software security using metrics that show a strong correlation with
other security metrics and with existing coding errors can therefore be a
valid strategy. Using software complexity as a metric to indicate the
potential for defects within source code can furthermore provide reproducible
and objective results that do not depend on volatile environmental factors.
Even without a direct causality between code complexity and resulting
security, measuring the potential for coding errors with code complexity
metrics can provide a meaningful prediction of the probability of exploitable
vulnerabilities.
While the reproducibility of complexity metrics like cyclomatic and cognitive
complexity is unproblematic due to the public availability of their
definitions and tool chains, the comparability of the results depends on a
number of factors. Comparing different software products with regard to their
complexity metrics can only provide relevant value if the compared products
share the same core attributes. One of the most significant of these
attributes is the programming language used. Since different languages imply
significant differences in the resulting code complexity, complexity metrics
can only be compared between software products written in the same
programming language. Comparing the complexity of a code base written in a
low-level programming language with that of programs written in high-level
languages will not produce useful results, as part of the resulting
complexity is moved from the source code to other parts of the system. Other
programming strategies that hide code complexity behind external libraries or
abstractions providing external functionality distort the results in similar
ways, limiting the value of the comparison. Including imported library
functions in the measurement increases the difficulty of the analysis and
further limits the accuracy of the metric. Additionally, the source code can
include support for different operating systems, hardware, or execution
architectures. Since the end product is executed on one specific platform
only, these code areas cannot be added unconditionally. Measuring the
complexity for a specific execution environment would require the exclusion
of irrelevant code sections that remain unused.
If the base attributes of two different software projects are comparable, the
provided functionality will additionally have a significant impact on the
complexity. Since additional features increase the size of the code base and
therefore the complexity values, comparing software products that serve
different use cases makes a comparison based on complexity as a quality
metric more difficult. However, since the security of the software still
correlates with the overall complexity, additional features also need to be
considered.
Comparing complexity metrics between different versions of the same software
provides an easily comparable result that indicates the impact different
features or changes between the two versions have on the resulting
complexity, and therefore on the approximate overall security. If the entire
software system, including all direct and indirect dependencies, is measured,
the resulting complexity values represent the theoretical attack surface
through the potential for vulnerable code, without the need to consider
currently known attack vectors. While not all of this potentially vulnerable
code may be exposed to typical adversaries in real-world environments, the
measurement still allows an assessment of the code security and quality
attributes present in the system, which may become exposed in the future. The
existing complexity of a software system therefore serves as a measure of
potential security issues, while nonexistent complexity can never result in
vulnerable code. Reducing software complexity will therefore improve software
security and overall quality.

4 Kernel Architecture
The Linux operating system is based on a monolithic kernel that has been
under development since 1991. While Linus Torvalds is the lead developer of
the Linux kernel, the development progress is driven by thousands of
developers without any significant connection to each other [104]. The
development of the kernel is not the result of a strategic plan but rather of
an evolutionary process [105]. While the monolithic nature of the Linux
kernel means that a significant amount of code is executed within the same
memory space, some architectural design choices make it possible to limit the
amount of kernel code that needs to be loaded at the same time. One such
feature is provided by loadable kernel modules (LKM), which allow code to be
added to the kernel process during its execution. This makes it possible to
use a significantly smaller operating system kernel that is still able to
perform the required tasks. Linux kernel modules can be used to add a number
of extensions to the currently running kernel, such as support for new
hardware devices and file systems. Many kernel features can be compiled
either as part of the core kernel or as a loadable kernel module, and the
compile-time options that regulate this behavior have a significant impact on
the resulting kernel. To understand the Linux kernel architecture, it is
necessary to consider the different subsystems the kernel is comprised of.
Additionally, it is crucial to understand the role of features such as
loadable kernel modules and the impact of the build system on the resulting
kernel.

4.1 Kernel Interfaces - System Calls

At the highest abstraction level, the differentiation between user space and
kernel space is most apparent. Enforced by the CPU ring boundary, processes
in these two areas live in different address spaces that cannot interact with
each other directly. The only way for user space applications to communicate
with the operating system kernel is via system calls [38]. These can either
be invoked directly, or indirectly via user space libraries that provide
functions which use the kernel's system calls internally [106] [107]. The
system calls process the requests from user space applications and execute
them using different kernel subsystems. Some lower-level parts of the kernel
use hardware-dependent code that addresses the available hardware interfaces
directly. The graphic below describes the flow of function calls from user
space through the different parts of the kernel down to their execution on
the actual hardware platform.

Figure 3: User space to Kernel Communication

4.2 Overview of Linux subsystems

Before the actual approach to measuring the attack surface of the Linux
kernel is discussed, a general overview of the kernel architecture and source
code structure will help to highlight the relevant code components that need
to be considered during the analysis.
The Linux kernel is comprised of a number of subsystems that provide the
different operating system tasks [17]. At a high level of abstraction, the
most relevant components are the system call interface, process management,
memory management, the virtual file system, the network stack, and the
hardware-dependent code areas, comprised mainly of the device drivers and the
architecture-dependent code [108]. A detailed description of the different
kernel functions is provided by the official Linux documentation, which is
updated for every new kernel release [19]. To assess the relevance of the
different subsystems for the following attack surface analysis, the tasks of
the different architectural components, visualized in the graphic below, are
described in this chapter at a high level of abstraction.

Figure 4: Kernel Subsystems

4.2.1 System call interface

The system call interface is the part of the kernel that provides function
calls to the user space process environment [18, p. 819]. As such, it is the
central part of the Linux kernel that is directly accessible from outside the
kernel memory space. While some architecture-dependent system calls exist,
the majority of Linux system calls are an integral part of the core kernel
code and shared between different CPU architectures [109]. While the system
calls of the kernel define how user space can interact with the kernel, the
actual functionality is provided by the underlying kernel subsystems. The
system call interface therefore provides the application programming
interface of the kernel.

4.2.2 Process management

The process management subsystem provides one of the most crucial core tasks
of an operating system [18, p. 35]. All functionality regarding the
management of processes is implemented in this part of the Linux kernel:
creating, managing, and stopping user space processes as well as kernel space
threads is controlled here. One of the more complicated kernel tasks, the
management of CPU execution time (scheduling), is implemented in this
subsystem and provides the running processes with CPU resources. Process
signal management and the handling of interrupts are also handled by this
part of the kernel.

4.2.3 Memory management

All management operations on system memory are handled by this subsystem,
providing one of the most basic core tasks of an operating system kernel [18,
p. 133]. The implementation details and exact execution of memory operations
can also depend on the specific hardware the kernel is running on. Some
hardware platforms do not provide features like a memory management unit,
requiring the kernel to manage additional tasks that would otherwise be
performed by the hardware. Tasks implemented in this kernel subsystem include
the management of physical memory, such as allocating and freeing memory, as
well as the management of virtual memory, including paging and swapping. In
addition, this subsystem is responsible for the management of the user
address space. System call functionality such as that provided through the
mmap system call is mainly implemented here.

4.2.4 Virtual file system

The virtual file system is a significant abstraction layer for all file
operations [18, p. 519]. Instead of accessing different file systems or even
storage devices directly, this abstraction layer provides an API for file
operations that is independent of the underlying implementations. System
calls related to file operations, such as open and close, are implemented by
this kernel subsystem, thereby providing a standardized interface. Below the
virtual file system interface are management functions that implement the
desired operations for the different file systems. In combination with a
common set of buffer operation functions, this code manages access to the
device drivers that interact with the storage hardware. A clear separation
between this subsystem and other parts of the kernel such as device drivers
is difficult, just as a strict separation between process and memory
management and the CPU architecture is not possible in all functional areas.

4.2.5 Network stack

The network stack is a crucial part of the Linux kernel that implements the
different networking protocols [18, p. 733]. Following the layered
architecture of common network protocols, the network stack of the kernel
provides APIs for the different network layers. If requested, the network
stack can provide access to raw network frames, although more commonly
higher-level protocols like the Internet Protocol or TCP and UDP are used to
communicate between data endpoints. Besides managing access to remote
systems, the socket layer of the network stack is also used for local
inter-process communication. Like the virtual file system, the network stack
also has a close relationship with the device driver subsystem regarding the
communication with network devices.

4.2.6 Device drivers and device management

The device drivers and the device management code provide a significant part
of the kernel functionality by implementing the communication with the actual
hardware devices [18, p. 391]. Separated into different device classes, this
subsystem includes all the code required to manage and use the supported
hardware. The subsystem also provides another abstraction layer in the form
of a unified device model that can be addressed by other parts of the kernel,
such as the virtual file system or the network stack. Every device subsystem
further provides a specific interface that allows easier communication with
hardware devices without the need to adjust the calling code for each
specific device driver.

4.2.7 Architecture-dependent code

While most of the Linux kernel code is independent of the specific
architecture used to execute it, some functions require
architecture-dependent code [18, p. 1117]. One prominent reason is
efficiency: code sections related to specific CPU architectures may include
additional functions that support tasks like process and memory management,
allowing resource-critical operations to be optimized for specific platforms.
Additionally, this kernel subsystem contains code that manages the boot
process as well as architecture-specific initialization. It also handles
various hardware-related tasks for execution on specific architectures, such
as interfacing with interrupt and bus controllers, the setup of exceptions,
and virtual memory handling.

4.3 Kernel Modules

The Linux kernel interacts with user space processes by providing system
calls that are implemented by the various kernel subsystems. These interfaces
are defined by the Linux kernel API and are considered fairly stable [110],
as opposed to the binary kernel interface. However, there is another
mechanism that allows user space entities to influence kernel space,
resulting in a changed implementation of the provided system calls. That
mechanism is provided in the form of loadable kernel modules [18, p. 473].
Kernel modules allow the running kernel to be extended with additional
functionality without requiring a reboot of the system or a custom-compiled
kernel. Kernel modules are an integral part of the Linux kernel, and a
significant amount of the kernel code is implemented as modules. Kernel
modules can either be compiled into the core kernel or as loadable modules.
This makes it possible to keep the running kernel fairly small while still
providing the option to extend the available kernel features at run time.
Common use cases for kernel modules are device drivers, file system and
network protocol implementations, as well as more specialized kernel
functions. The concept of kernel modules makes it possible to compile and run
a very small version of the Linux kernel that is optimized for specific tasks
only. This is often the case for embedded systems that do not require a
generic kernel [111]. Among the most relevant use cases for kernel modules
are device drivers [112]. Since a lot of kernel code is written to support
specific hardware, the possibility of running only the required driver code
significantly reduces the size of a kernel running on a specific hardware
platform. Device drivers in particular are commonly loaded automatically when
a specific hardware device is attached and identified. As part of abstraction
layers like the virtual file system, device drivers also influence the
implementation of the specific system calls that are used to access hardware
devices [112]. Other kernel modules are loaded on demand by user space
applications when they are required to fulfill a specific task, such as the
use of a specific network protocol. The Linux kernel also offers modules
implementing cryptographic algorithms for improved performance. Linux kernel
modules therefore present a significant part of the kernel architecture and
allow the running Linux kernel to be extended and influenced considerably.

4.4 Hardware dependent code

As discussed in the previous chapters, many Linux kernel components depend
on hardware-related code. Given that the main task of an operating system is
to present an abstraction layer between the hardware platform and the user
space application environment, this is to be expected. The specific
instruction set architecture (ISA) of the central processing unit has a major
impact on the way the specific kernel subsystems operate. A lot of kernel
code today has been designed to be architecture-independent, requiring only
the compilation process to consider the target execution environment.
However, especially for performance reasons, low-level implementations of
many kernel functions still contain a significant amount of
architecture-dependent source code [18, p. 1117]. A number of ISA
implementations are available in the Linux kernel to allow execution on
different CPU architectures. Considering the extent of the kernel code
actually used during execution on a specific platform, only the code for that
architecture will be used. The most significant part of hardware-dependent
kernel code is, however, presented by the numerous device drivers that are
available in the Linux kernel sources. With driver implementations for
hundreds of different hardware components present in the Linux kernel source
tree, only a small subset of this code is ever used at the same time. That is
also the reason why most Linux drivers are built as loadable kernel modules
[113]. This allows the driver code to be loaded into the kernel process only
when the actual device has been identified as present on the currently used
hardware platform. While the kernel subsystems responsible for process and
memory management include dependencies on the ISA-related code sections, the
network and device management subsystems depend significantly on the loaded
device driver modules. The hardware-dependent kernel code that is used by the
other components of the kernel remains extensive, either through the use of
ISA implementations or through specific driver modules.

4.5 Source Code Structure

The Linux kernel source code is structured along its various subsystems as
well as major code functions that provide general features used by multiple
parts of the kernel. To give a more detailed overview of the different kernel
features, the folder structure of the kernel source code is briefly explained
in this chapter.

Subfolder      Implementation
arch           Architecture-related code: low-level code for memory and
               process management, hardware initialization and assembly routines
block          Block I/O layer and block device management
certs          Certificate information for module signatures
crypto         Kernel crypto API providing common cryptographic algorithms
Documentation  Kernel source code documentation
drivers        Kernel code of the hardware device drivers
fs             Virtual file system abstraction layer and code for the
               various file system implementations
include        Kernel header files required to build the kernel
init           Kernel initialization during the boot process
ipc            Inter-process communication channels:
               shared memory, pipes and signals
kernel         Essential kernel functions as well as the system call interface
lib            Common functions used by various parts of the kernel
mm             Virtual memory management abstraction and early boot
               memory management functions
net            Network abstraction layer for high-level network management,
               addressed by the low-level network driver functions
samples        Sample code for various kernel functions
scripts        Support scripts for the kernel build process
security       Linux Security Module framework for optional access control
               policy modules
sound          Sound subsystem including the related driver code
tools          Kernel development tools, including test modules for
               various components
usr            Code for the initial root file system image (initramfs)
virt           Kernel-based Virtual Machine (KVM) hypervisor module

4.6 Kernel Attack Surface

The system call interfaces of the kernel seemingly expose only very limited
parts of the kernel to the user space environment. However, as the analysis
of the kernel architecture has shown, the implementation of the provided
system call functions is done by the different kernel subsystems.
Additionally, these subsystems are tightly integrated with each other and
have dependencies on hardware-dependent code, which makes a clear separation
of the kernel code responsible for specific system calls difficult.
The attack surface of the kernel is defined by all possible ways to breach
the security boundaries that are enforced by the kernel itself. That is
mainly the boundary between user space and kernel space, but also the
security boundaries enforced by the implemented access permissions. These can
include the discretionary access control model that is used by default on
Linux systems, or other types like mandatory access controls that can be
enforced by kernel extensions such as SELinux.
The Linux kernel is a monolithic system that executes all kernel code within
the same memory space. There are no significant security boundaries between
different parts of the kernel code, which makes a separation of attack
surfaces even more difficult. Coding errors in kernel functions that are
directly or indirectly accessible to user space processes can provide direct
and complete access to the entire kernel space. The resulting indirect attack
surface of the kernel is composed of the entire kernel itself.

5 Related Research
A number of related research projects have approached the problem of
assessing the complex attack surface of large software systems like the Linux
kernel before. This chapter describes some of the related work in this area,
along with its specific advantages and problems.

5.1 Attack Surface Analysis

The reduction of the Linux kernel attack surface has already been the subject
of previous research [24], which can be used as a baseline for further work
in this area. Measuring the general attack surface of software systems based
on attack opportunities has also been researched and helps to establish the
previously used methods for attack surface analysis [21][23]. To assess the
outcome of these different measurements, their results can be compared with
previously found vulnerabilities [114], contrasting the theoretical
assessments with real-world examples of security issues. Research regarding
the potential of vulnerabilities to be successfully exploited [88] provides
additional means to assess the criticality of exploitable coding errors and
their associated risks. These previous research projects have already
provided a number of results that can be used for attack surface assessment
and reduction in the Linux kernel. Compared to the available methodologies
for attack surface analysis, the complexity metric measurement used in this
thesis attempts to address the shortcomings of previous research approaches.

5.2 Measuring Security

Measurement methods that allow the assessment of different software
attributes in general have been established for some time [98] [96]. These
measures can help to estimate development effort, software quality, and other
attributes, and generally provide useful metrics to base future decisions on.
When it comes to measuring the security attributes of a software system, some
quality attributes can serve as indicators for security; however, there is no
single direct metric that mirrors the security of software [62]. The existing
approaches to measuring the security attributes of software are often based
on assumptions regarding the deployment environment and currently known
attack vectors, which makes it difficult to measure an absolute metric.
Promising research regarding such a general security metric focuses on the
relative attack surface, comparing different versions of the same software
[21]. This allows a more relevant result, because the same code base is
compared with itself, thus avoiding problems like the difficult comparability
of programs written in different programming languages.

5.3 System Call Exposure

The directly exposed areas of the kernel are accessed by using system calls.
The kernel functions provided this way are one of the primary research
objects when it comes to assessing the kernel attack surface. The
availability and exposure of system calls to user space stays relevant even
when a process does not actively use a specific system call. When the user
space application itself includes vulnerable code that may allow an adversary
to execute code in the context of the process, that attacker-provided code
can also access all of the available system calls and thereby large parts of
the kernel. To mitigate this risk, recent software architectures have begun
to include mechanisms like system call sandboxing [115] and container
technologies [116]. Related research projects like [36] have shown
significant differences in the risk associated with the availability of
specific system calls, based on the provided functionality. When it comes to
the attack surface reduction of software systems, the isolation of unused
features and interfaces is a well-established approach [116][23]. In regards
to the Linux kernel, the functionality of the available system calls
represents the most direct exposure of the kernel attack vectors, since these
are the only direct communication channels between user and kernel space. It
is therefore crucial to investigate previous research regarding the evolution
and current state of Linux system calls [71]. An assessment of the risks
associated with specific system calls was already the subject of previous
research [36] and can be used as a basis for a similar analysis of the system
calls available in the current Linux kernel. There has also been research
regarding the systematic reduction of system calls in specific server
applications to reduce the exposure of the kernel attack surface for
specialized use cases [117] [30]. Applying these previous research methods
and comparing their results to the targeted research approach using
complexity metrics will reveal potential correlations with the findings of
this paper.

5.4 Measuring Vulnerabilities

Software security metrics based on discovered vulnerabilities are one common
method to assess the security of a software system and have been used in the
past [94]. The Linux kernel as well as common Linux operating system
environments have previously been analyzed with regard to vulnerabilities
that were found in them [16]. Different research projects have also analyzed
the specific vulnerabilities that originate directly from the Linux kernel
itself [5] [100]. Their results provide indicators of specific code areas
that include a higher number of previously found coding errors leading to
vulnerabilities in the kernel code. This is mainly due to the fact that known
vulnerabilities can be the result of code reviews that target specific parts
of the kernel [118]. Other vulnerabilities are found through active
exploitation of deployed systems, which often leads to their discovery due to
their common usage. Both kinds of vulnerability discoveries are the result of
attention given to specific parts of the source code. Many code areas of the
Linux kernel, however, contain rarely used functions that are unlikely to
receive the same attention. While static code analysis tools can help
discover vulnerable code in the overall system without manual intervention
[119], these methods are not reliable enough to detect all kinds of
vulnerabilities [120]. Additionally, not all vulnerabilities present the same
risk, due to their differences in exploitability and impact [88], which
further limits the value of empirical analysis in this field. While this does
provide an indicator of insecurities in the kernel source code, it does not
provide an assessment of the overall security of the Linux kernel. Detailed
research on vulnerabilities that includes a thorough risk analysis, as shown
in [88], provides information for prioritizing mitigation processes. However,
its use for predicting future security issues and where in the code they may
arise is very limited due to the discussed issues. As previous research has
shown, measuring known vulnerabilities can provide an indicator of
insecurity, but is unable to provide a strong correlation with the security
of the remaining code base.

5.5 Quantified Attack Surface Reduction

One of the most relevant research projects in regards to the assessment and
reduction of Linux kernel attack surface is based on the active measurement of
used kernel functions [117]. This approach uses a modified version of the Linux
kernel to actively trace the code areas that are used during the execution of
specific user space software to automatically create a profile of utilized kernel
areas. Based on these results, a second modification to the Linux kernel denies
the execution of all function that have not been identified as required during
the analysis phase. The result is a version of the Linux kernel that reduces the
number of executable kernel code by a significant margin. Unlike previous at-
tack surface reductions implemented by container technology [116] or reduction
of available system calls [115], the approach implemented by [117] and [121]
has managed to reduce the executable code base. With the deployed changes
to the Linux kernel code including a very limited code additions, this measure
has managed to remove the usable kernel functions from the executable code
base. Based on these results, subsequent research projects have managed to im-
prove the outcome by eliminating the requirement for additional kernel code
in the deployed kernel version altogether, by using compile-time configuration
of the Linux kernel [122]. As the research based on this approach has shown,
a significant reduction of the compiled kernel functions can be achieved [123]
[24]. Hardening measures that reduce the kernel attack surface by removing
features have been used by other projects before [124], however the extent of
the reduction in kernel size is very limited for general purpose kernels. Other
approaches reduce the availability of kernel functions to limit their exposure
[125]. These previous kernel attack surface reduction strategies have focused
on hiding kernel areas from user space without reducing the actually executed
kernel code in a significant way. The measures researched by the listed
”Quantified Attack Surface Reduction” papers, however, have managed to re-
duce the ”Trusted Computing Base” (TCB) of the overall system by excluding
source code from the compilation process of the kernel. As a result, new attack
vectors that may expose previously shielded kernel functions cannot be used
against unavailable code areas that were removed at compile time. This ap-
proach is therefore significantly different from previous attempts to reduce the
accessibility of kernel functions without removing the actual kernel code. The
primary drawback of this approach to attack surface reduction is the signifi-
cant effort required to measure and maintain a profile of used kernel functions
for all systems and their different use cases. Maintaining and deploying a ker-
nel profile for major applications like web servers may be an option for some
organizations, but still requires a valid use case to justify the additional effort.
While the attack surface reduction achieved by this re-
search approach is significant, the measurement of the exact exposure used in
this research still depends on exposure towards unprivileged user space. As has
been discussed before, measurements that depend on known exposure towards
specific parts of a system can be unreliable due to changing environments [81].
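The allow-list principle behind these reduction approaches can be illustrated with a short, self-contained sketch: kernel functions observed during the tracing phase form a profile, and any call outside the profile is denied. All function names below are purely illustrative and not taken from the cited papers.

```shell
# Sketch of the allow-list idea behind run-time attack surface reduction.
# (Function names are illustrative, not taken from the cited papers.)
profile="sys_read sys_write sys_openat sys_close"

check_call() {
    # Print "allow" if $1 is part of the traced profile, "deny" otherwise.
    for fn in $profile; do
        if [ "$fn" = "$1" ]; then
            echo allow
            return
        fi
    done
    echo deny
}

check_call sys_read     # in the profile  -> allow
check_call sys_ptrace   # never observed  -> deny
```

The real implementations enforce this decision inside the kernel, or avoid it entirely by not compiling the unprofiled functions in the first place; the sketch only captures the decision logic itself.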

5.6 Shortcomings of previous Research

The different research approaches to assessing the attack surface of software like
the Linux kernel have shown different advantages and issues with regard to
measuring and improving the overall security of software systems.
This chapter summarizes the discussed results of the previously presented re-
search methodologies and provides an overview of their most relevant attributes.

5.6.1 Manual Attack Surface Analysis

Manual attack surface analysis methods rely on the identification of attack
vectors and measurement of their exposure towards specific entities.

Advantages:

• Easy Identification of attack vectors using data and control flow graphs

• Automatic measurement of exposure possible with established tools

Disadvantages:

• Subjective results based on experience and state of documentation

• New attack vectors may be discovered by future security research

• Exposure is dependent on changing environments like firewalls

• Application changes can significantly impact results

5.6.2 Measuring known Vulnerabilities

Vulnerability measurement methods count the number of previously docu-
mented security issues and assess their impact based on the efforts and
dependencies required for exploitation.

Advantages:

• Easy to measure using public CVE documentation

• Provides an indicator of past security issues

Disadvantages:

• Not all vulnerabilities are documented

• High dependency on product popularity and code security measures

• Provides an indicator of insecurity only

5.6.3 System Call Sandboxing

The reduction of available system calls from the perspective of user space pro-
cesses may provide a significant reduction in the exposure of directly accessible
kernel interfaces.

Advantages:

• Allows significant reduction of kernel exposure towards userspace

• Implementation by the original application developer allows a detailed
profile of the required kernel functions

Disadvantages:

• Changing features require extensive testing of used system calls

• Sandbox escape enables the circumvention of this measure

• Primarily a user space measure for high risk applications

5.6.4 Compile-time Kernel Reduction

Reduction of the available kernel functions at compile time, based on dynamic
analysis of used kernel functions, allows the elimination of unused code areas,
thus removing the included security risks altogether.

Advantages:

• Significant and reliable reduction of attack surface

• Reproducible results

Disadvantages:

• Requires extensive analysis of used kernel features

• Applicable for very specific use cases

• Requires repeated analysis and testing with new software releases

6 Approach Reasoning
In the previous chapters, different subjects regarding the analysis of software
attack surfaces were discussed. The core issues that have been identified
revolve around the volatility of the analyzed system attributes, like attack
vectors and exposure, as well as the difficulty of measuring the identified security
attributes. Since attack vectors change during the software development life
cycle, their identification can only provide a temporary value. The dependence
of exposure measurements on the deployed environment further prevents the
use of this attribute outside of very specific use case scenarios. A traditional
approach to attack surface analysis is therefore not feasible for assessing the Linux
kernel in general. Instead, this research uses a number of measures, listed
below, to improve the current state of attack surface measurement.

Figure 5: Research Approach

In addition to the difficulty of attack surface identification, an objective
metric is required to provide a comparable result. Fortunately, the use of com-
plexity metrics has already been shown to indicate a correlation with security
[25]. The existence of established tools and specifications [99] allows the identi-
fication of complexity within the Linux kernel source code. Previous research
on Linux kernel attack surface reduction has successfully used this metric [123].
However, the question remains how different kernel features affect the result-
ing complexity. The following chapter will therefore present an approach to
identify the effects different kernel components have on the complexity, and
thereby the attack surface, of the kernel.

6.1 Attack surface and complexity

While complexity metrics cannot provide a direct causality to the security of
a system, their strong correlation can provide an approximate measure of the
probability of vulnerable code [25] [102]. This allows a reproducible and objec-
tive measure of the approximate attack surface within the analyzed software
system. Taking into account the challenges regarding the comparability of com-
plexity metrics, a useful result can be achieved by comparing the complexity
of different versions of a specific software to indicate changes in the included
attack surface [25]. The approach used in this paper will therefore rely on the
measurement of cyclomatic complexity within the Linux kernel code, while us-
ing different kernel configurations to determine how specific kernel components
impact the attack surface of the system. The results will provide an indicator
to assess the code security implications of using specific Linux kernel features and
help with the design of more secure kernel configurations. Additionally, an
overview of the distribution of complexity through the different kernel subsys-
tems may provide additional insight. While this research is intended to identify
the attack surface present within the Linux kernel source code, other relevant
attributes, like the performance impact of kernel configuration changes, will
be considered out of scope. However, previous research has indicated that the
reduction of kernel features may not have any significant impact on the kernel
execution performance [121]. This approach will provide an approxima-
tion of the attack surface distribution throughout the Linux kernel, while it will not
provide a specific risk assessment, since exposure is not taken into account.
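As a rough intuition for the metric used here: McCabe's cyclomatic complexity grows with the number of decision points in a function. The following self-contained sketch approximates it for a small C function by counting branch keywords; SonarQube's actual counting rules are more precise, so this is an illustration of the principle, not the tool's algorithm.

```shell
# Crude approximation of McCabe's cyclomatic complexity for a C function:
# count the decision points (if/for/while/case and the short-circuit
# operators) and add one. This is only an illustration of the principle.
cat > demo.c <<'EOF'
int classify(int a, int b) {
    if (a < 0 && b < 0)
        return -1;
    for (int i = 0; i < b; i++)
        if (i == a)
            return i;
    return 0;
}
EOF

decisions=$(grep -oE '\b(if|for|while|case)\b|&&|\|\|' demo.c | wc -l)
echo $((decisions + 1))   # 4 decision points + 1 -> prints 5
```

Real analyzers work on the parsed syntax tree rather than on keywords, which is why the exact numbers reported by different tools can diverge even though the underlying principle is the same.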

6.2 Linux Kernel Modules

The Linux kernel provides a large number of optional features that are imple-
mented as kernel modules. This allows the flexible extension of the monolithic
Linux kernel. However, even a default kernel configuration does include a
large number of kernel modules [126] that are a common part of commodity
operating systems [127]. Generic Linux operating systems therefore include
a number of optional features that provide common services. These
kernel modules are not necessarily loaded into kernel space by default during
the boot process, but may only become relevant when requested. As discussed
previously in subsection 4.3, this may be the case because a service has re-
quested a module to be loaded, or because a specific hardware component was
attached to the system during its operation. The code complexity included
in kernel modules therefore needs to be considered as part of the kernel com-
plexity itself, even if it may not be loaded by default. Loadable Linux kernel
modules come with a number of security implications aside from the increased com-
plexity. A significant amount of kernel code can be loaded on request by user
space applications [128]. This includes the implementation of complex protocols
that may be requested by an application. Allowing user space applications to
trigger the loading of additional kernel modules makes it possible to exploit
vulnerable functions in rarely used kernel code, as has been demonstrated in
the past [129]. While it is possible to restrict the available range of loadable
kernel modules or disable this option completely, it would imply significant
drawbacks. Since user space applications may rely on specific kernel functions
to be available for legitimate purposes, the restriction may break the expected
compatibility [130]. As with attack vectors that are dependent on environ-
mental exposure, different kernel modules may not be directly loadable by
unprivileged user space entities. However, just like other parts of the kernel
code, they are a part of the general attack surface of the Linux kernel. It
may be possible for system engineers to reduce the number of available kernel
modules in controlled environments, like specific organizations or embedded
systems. However, this requires a known set of use cases the targeted system
will be used for, which reduces the usefulness of this measure. Finding kernel
modules that are significant with regard to their attack surface, by measuring
the included code complexity, may however help make design decisions for environ-
ments where such a restriction is a valid option. Consequently, the
following complexity analysis will target a number of common kernel modules
to measure the additional complexity they add to the system. For this research
project, the targeted kernel modules will include the SELinux, AMDGPU
and KVM module. Additionally, the file systems ext4, xfs and btrfs are
scanned to further highlight the influence from file system implementations.
Since previous vulnerabilities were related to Linux namespaces, additional
analysis will target the namespace features. The kernel modules targeted
in the following analysis were selected in regards to their relevance in common
Linux systems, but are primarily intended to provide only an example for a
general procedure to measure attack surface in kernel modules. In the follow-
ing chapter, the approach used to analyze these modules will be described in
detail to allow further research to target arbitrary kernel modules for future
analysis.
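The restriction of loadable modules mentioned above is typically implemented via modprobe configuration and the one-way kernel.modules_disabled sysctl. The sketch below writes the configuration to a demo directory instead of /etc/modprobe.d/ so it can be run without root; dccp serves only as an example of a rarely used protocol module that unprivileged user space could otherwise trigger-load.

```shell
# Restricting module autoloading via modprobe configuration. The file is
# written to a demo directory here; on a real system it belongs in
# /etc/modprobe.d/. "dccp" is only an example module.
mkdir -p modprobe.d-demo
cat > modprobe.d-demo/harden.conf <<'EOF'
blacklist dccp
install dccp /bin/false
EOF
cat modprobe.d-demo/harden.conf

# Disabling module loading completely is a one-way switch until reboot
# (shown only, since it requires root and is irreversible):
echo "sysctl -w kernel.modules_disabled=1"
```

The blacklist directive only blocks alias-based autoloading; the install line also blocks explicit modprobe requests by substituting /bin/false for the module load. This illustrates the compatibility trade-off discussed above: any legitimate user of the blocked module breaks as well.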

6.3 Hardware Dependent Complexity

The complexity analysis of specific kernel modules will provide a basis for risk
assessments that may determine their use in future system designs. However,
large parts of the kernel complexity are not expected to be included in spe-
cific kernel features, but may rather be in more general kernel subsystems like
hardware dependent code sections. Due to this, the following analysis will also
take a close look at the general distribution of kernel complexity within the
source code tree. Aside from kernel modules that serve a specific purpose, the
hardware related kernel code is another reasonable target for analysis. Since
only a very specific hardware platform is used to run a deployed Linux kernel,
it may be possible to reduce the available complexity significantly by remov-
ing large parts of hardware related kernel code without impacting the
functionality. Based on the resulting complexity graph related to architecture
and driver related kernel code, new approaches to reduce the kernel attack
surface may be found that allow a reasonable and practicable improvement of
the deployed kernel security.

7 Testing Process
The Linux sources include extensive support for the creation of custom kernel
configurations that can be used to compile a specific kernel. A number of avail-
able make scripts allow the automatic creation of kernel build configuration
files [131]. In Linux 5.10 there are 12244 different kernel configuration options
available [132]. As a result, the kernel binaries generated by these options can
differ significantly in regards to implemented features and complexity. To mea-
sure the complexity impact of the targeted kernel modules and subsystems, a
number of build configuration files will be used to compile different versions of
the Linux kernel while measuring the included complexity using a static code
analysis tool. The following chapter will provide a detailed description of the
used tools, configurations and testing process.

7.1 SonarQube

While there are a number of different static code analysis tools available to
measure code complexity, the exact results can differ depending on the imple-
mented specification. To provide a comparable and transparent result based
on documented metric specification [99], the public instance of SonarQube
(sonarcloud.io) will be used for the analysis process. The SonarQube toolkit
consists of a build wrapper and a scanner binary [133]. The build wrapper
is used to scan the C code during the build process and document the use
of source files, like include instructions and macros [134]. This step does not
change the resulting binaries and has therefore no effect on the accuracy of the
measurement. Based on the resulting source code information, the SonarQube
scanner will scan the code and create a report about the source code attributes,
which is uploaded to the public Sonarcloud analysis tool. The code measure-
ment itself will take place within a local test environment, while the resulting
report will be analyzed on the Sonarcloud instance, where the results will be
displayed afterwards. While the SonarQube tool provides a static code anal-
ysis that covers a number of code attributes, only the cyclomatic and cognitive
complexity will be considered here. The results presented by Sonarcloud show the
complexity attributes included in the different source code directories, which
allows additional insight into the complexity distribution.

7.2 Test Parameter and Environment

The test environment will be provided by a virtual machine running Debian
Linux version 10.7, using a Linux kernel version 5.10.5 compiled on the
test system itself. After the initial deployment of the virtual machine, no
further changes to installed software packages have been made, to eliminate a
possible influence on the test results. All tested kernel configurations use the
source code of Linux version 5.10.5, and the compile-time configuration
files used are documented [132]. The results of the analysis are documented in a
public Sonarcloud instance [135], in addition to the result overview presented
in subsection 8.1.
To provide an overview of the complexity these optional features introduce,
the Linux kernel will be measured using the configurations resulting
from the make options allnoconfig, allyesconfig and defconfig. These
make options create kernel build configurations that disable or enable all
available configuration options respectively, or create a default kernel config-
uration. A comparison with the configuration available on a default Debian
system shows no significant differences from the configuration generated by de-
fconfig, which therefore provides a baseline comparable to practical deployments.
The following tests include changes to these configuration files to include or
exclude the specific kernel features that are to be analyzed. While the three
defined base configurations provide an overview of the most significant differences
possible, the selected kernel features will mostly be analyzed using the defcon-
fig baseline to provide results that are comparable to practical deployments.
Some additional comparisons were made using the allnoconfig and allyesconfig
configurations. The following list shows an overview of the targeted kernel
features.
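Inside the kernel tree, such feature toggles would typically be made with make menuconfig or the scripts/config helper. The following self-contained sketch simulates the effect on a miniature .config file, using the canonical "is not set" notation the build system expects; the option names are merely examples, not the full configurations used in the tests.

```shell
# Self-contained simulation of a kernel feature toggle. In the real source
# tree this would be  make defconfig  followed by e.g.
#   scripts/config --disable KVM
# Here, a miniature .config stands in for the generated build configuration.
cat > demo.config <<'EOF'
CONFIG_KVM=y
CONFIG_SECURITY_SELINUX=y
CONFIG_EXT4_FS=y
EOF

# Disable a feature the way scripts/config does: replace the "=y" line
# with the canonical "is not set" comment understood by the build system.
sed -i 's/^CONFIG_KVM=y$/# CONFIG_KVM is not set/' demo.config
cat demo.config
```

Two resulting configurations can then be compared with the tree's scripts/diffconfig helper to verify that only the intended options changed.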

Targeted kernel modules:

• SELinux - Security Enhanced Linux, providing mandatory access control

• AMDGPU - GPU driver for modern AMD graphic cards

• KVM - Kernel Virtual Machine, providing hypervisor functionality

• ext4 - journaling file system, default on Android and Debian

• xfs - high-performance journaling file system

• btrfs - feature rich file system, providing advanced features

• namespaces - Linux namespaces, providing process based abstractions

The analysis of the listed features requires a modified kernel build con-
figuration that may require multiple dependent options to be enabled. The
detailed list of tested kernel build configuration options has therefore been
documented [132]. In addition to the test execution for the listed modules, ad-
ditional tests were executed with a reduced source tree to provide additional
indicators for the influence of hardware dependent code. For this purpose,
specified folders were removed after the successful compilation, but before the
static code analysis.
The ”nohw” analysis results were measured with the following kernel source
tree folders removed:

Removed folders for hardware independent ”nohw” scan:

• arch

• drivers

• fs
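The folder removal step between compilation and analysis can be illustrated with a miniature source tree (a demo directory only; in the actual test the folders were removed from the compiled Linux 5.10.5 tree):

```shell
# Miniature version of the "nohw" preparation step: compile first, then
# remove the hardware related source folders so the static analysis only
# sees the remaining tree (demo directory only).
mkdir -p demo-tree/arch demo-tree/drivers demo-tree/fs demo-tree/kernel demo-tree/mm
rm -rf demo-tree/arch demo-tree/drivers demo-tree/fs
ls demo-tree    # kernel and mm remain
```

Because the removal happens after the build, the compiled binary is unaffected; only the input to the static analysis shrinks.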

As documented in subsection 4.5, part of the kernel sources includes code
that will not be relevant to the resulting kernel. This includes the folders Doc-
umentation, samples, scripts and tools. The main purpose of the reduced code
analysis is to measure the influence of hardware related code in arch, drivers
and fs. Additional analyses were conducted with the reduction and removal of
the fs folder (”no fs”) to verify the exact measurement of the influence from
file system implementations.

7.3 Test Execution Process

The test execution process described here provides step-by-step instructions
to reproduce the results of the research. The following instructions were used
with the different documented kernel build configurations [132] to produce the
results presented in subsection 8.1.

Figure 6: Test execution steps

Command line steps:

tar xf linux-5.10.5.tar
cd linux-5.10.5/
make defconfig # alternatively allnoconfig or allyesconfig
make menuconfig # for modification of included kernel features
build-wrapper-linux-x86-64 --out-dir bw-output make -j 12
export SONAR_TOKEN=*** ## *** = sonarcloud authorization key
sonar-scanner -Dsonar.organization=linuxtest2020 \
    -Dsonar.projectKey=linux-5.10.5-defconfig -Dsonar.sources=. \
    -Dsonar.cfamily.build-wrapper-output=bw-output \
    -Dsonar.host.url=https://sonarcloud.io -Dsonar.cfamily.threads=12

8 Results and Interpretation
The results from the conducted static code analysis have provided a number of
complexity metrics associated with different kernel features and subsystems. A
detailed overview of all analyzed kernel configurations can be found here [135],
while the next chapters will provide an overview of the complexity metrics as
well as an interpretation of their relevance.

8.1 Complexity Overview

The following table provides an overview of the measured cyclomatic complex-
ity resulting from the previously described test process.

cyclomatic complexity   allnoconfig   defconfig   allyesconfig

unmodified                    60138      351500        2126797
nohw                          35987      138621              -
noarch                        29104      278424              -
nodriver                      57312      225155              -
with SELinux                  99348           -              -
with AMDGPU                  134660      408062              -
with KVM                      93922      364319              -
without SELinux                   -      347466        2122646
without AMDGPU                    -           -        2076695
without KVM                       -           -        2113707
no fs                         53017      312158              -
fs helper                     60109      324995              -
with ext4                     75509      333993              -
with xfs                      82213      340962              -
with btrfs                    87465      344452              -
all namespaces                74992      351689              -
no namespaces                 60138      351246              -

8.2 Complexity Difference

The following table provides an overview of the relative complexity differences
resulting from the previously described test process. The complexity differ-
ences introduced by the modification of optional kernel modules are compared
to the unmodified default complexity values, while the file system implemen-
tations are compared with a kernel build that only includes general file system
”helper” code, unrelated to specific implementations.

complexity difference   allnoconfig         defconfig           allyesconfig

unmodified              -                   -                   -
nohw                    -24151/-40.16%      -212879/-60.56%     -
noarch                  0                   -73076/-20.79%      -
nodriver                -2826               -126345/-35.94%     -
with SELinux            +39210/+65.2%       -                   -
with AMDGPU             +74522/+123.92%     +56562/+16.09%      -
with KVM                +33784/+56.18%      +12819/+3.65%       -
without SELinux         -                   -4034/-1.15%        -4151/-0.2%
without AMDGPU          -                   -                   -50102/-2.36%
without KVM             -                   -                   -13090/-0.62%
no fs                   -7121/-11.84%       -39342/-11.19%      -
fs helper               -29/-0.05%          -26505/-7.54%       -
with ext4               +15400/+25.62%      +8998/+2.77%        -
with xfs                +22104/+36.77%      +15967/+4.91%       -
with btrfs              +27356/+45.51%      +19457/+5.99%       -
all namespaces          +14854/+24.7%       +189/+0.05%         -
no namespaces           0                   -254/-0.07%         -
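The relative values in this table can be reproduced from the absolute measurements in subsection 8.1; for example, the ”nohw” defconfig entry:

```shell
# Reproduce a relative difference from the absolute cyclomatic complexity
# values measured above, here for the "nohw" defconfig entry:
awk 'BEGIN {
    base = 351500    # unmodified defconfig
    nohw = 138621    # defconfig with arch, drivers and fs removed
    diff = nohw - base
    printf "%d/%.2f%%\n", diff, diff * 100 / base   # -> -212879/-60.56%
}'
```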

8.3 Interpretation

While the complexity comparison with the minimal ”allnoconfig” kernel shows
the most significant differences, it needs to be noted that this minimal kernel
build does not include the code required for an actual deployment. It can how-
ever serve as an additional measurement to compare with the deployable kernel
provided by the ”defconfig” build. Similarly, the ”allyesconfig” build provides
a comparison at the other extreme, by including all possi-
ble kernel features. Since the ”defconfig” builds reflect a practical kernel that
is included in many Linux based operating systems, the primary assessment
regarding the changes in attack surface will target the results from this base
configuration. The full kernel build configurations can be found here [132].

8.4 Impact of optional Kernel Modules

The inclusion or exclusion of the targeted optional kernel modules has re-
sulted in complexity changes of different magnitudes, as shown in the graph
below. As the inclusion of optional kernel modules like these may not neces-
sarily depend on a required use case, the additional complexity may serve as
an additional factor when deciding on the inclusion of these features.

Figure 7: Module Complexity

8.4.1 SELINUX

Based on the default kernel configuration, the SELinux kernel module impacts
the system’s complexity by only 1.15%, compared to the unmodified kernel.
Given the cyclomatic complexity reduction of only -4034 when the module is
removed from the default build, the impact on the available attack surface is
very limited. The significant complexity difference of +39210 or 65.2% from the
inclusion of the SELinux module in the minimal kernel build mainly results from
the requirement of additional kernel features such as the network stack. The
complexity difference of -4151 resulting from the SELinux exclusion from the
full feature kernel build (allyesconfig) confirms the very limited impact of the
SELinux module on the overall system complexity. Based on the complexity
impact of SELinux, the additional attack surface is therefore insignificant for
most systems that already require common kernel subsystems like the network
stack. Additionally, the SELinux module provides a number of optional
use cases that may become a requirement during the system’s life time, which
further reduces the value of its removal.

8.4.2 AMDGPU

The AMDGPU module increases the cyclomatic complexity of the default ker-
nel by +56562 or 16.09%, a significant increase compared
to the unmodified kernel. The complexity difference of +74522 measured for
the minimal kernel build includes an addition of only 17960 points resulting
from additional requirements. The exclusion of the AMDGPU module from
the full feature build confirms the result from the default kernel, while mi-
nor differences are still present due to additional feature requirements for the
inclusion of the AMDGPU module in the default kernel. For use cases where the
AMDGPU module is not strictly required, its exclusion should therefore be
considered, given the significant complexity resulting from its inclusion. Since
the relevance of AMDGPU is bound to the availability of the related hardware
devices, the presented results are mainly significant for decisions regarding the
system’s hardware design.

8.4.3 KVM

The kernel virtual machine (KVM) module adds a cyclomatic complexity of
+12819 or 3.65% to the default kernel build. The significantly larger increase
of +33784 for the minimal kernel indicates a number of relevant feature depen-
dencies. For the full feature build, the exclusion of the KVM module impacts
the complexity result by -13090, which confirms the result for the default ker-
nel with minimal differences. While the additional complexity of the KVM
module is not nearly as significant as in the case of the AMDGPU module, it
also provides a very specific use case. The additional attack surface due
to the availability of the KVM module may therefore be justified, depending
on the requirement for virtualization technology. Specific use cases, like systems
that are deployed as virtual machines, would not be able to make use of the
features provided by this module and therefore should not include this feature,
due to its additional attack surface.

8.5 Impact of File Systems

To calculate the differences between the analyzed file systems, the resulting
complexity values were compared to a kernel build that only includes general
file system code that is unrelated to specific implementations. The complexity
listed as ”fs helper” is the result of an unmodified kernel configuration, measured
without specific file system source code. The complexity added by the im-
plementation of specific file systems shows significant differences between the
analyzed implementations, which are described in detail in the following
chapters.

8.5.1 ext4

The complexity difference added by the implementation of ext4 amounts to
+8998 or 2.77% for the default kernel. For the minimal kernel build, the ad-
ditional complexity added by the ext4 implementation amounts to +15400 or
+25.62%, as a result of additional feature requirements. Given the extensive
features the ext4 file system provides [136], the included cyclomatic complexity
is not significant. Compared to other file system implementations, like those
described below, the ext4 file system represents a valid choice with regard to the
attack surface approximated by its complexity. Given the general require-
ment to use at least one file system implementation, ext4 may
therefore be an option for minimal kernel builds as well.

8.5.2 xfs

The xfs file system increases the complexity of the default kernel build by +15967
or +4.91% and is thereby almost twice as complex as the ext4 implementation. The additional
complexity added to the minimal kernel build amounts to +22104 or +36.77%,
as a result of additional dependencies that are also required by other analyzed
file system implementations. The significant increase in complexity compared
to the ext4 implementation may be justified if the intended system use case
requires file system features not provided by ext4. Otherwise, the
complexity difference may provide a valid argument to exclude the
xfs file system from the kernel configuration to reduce the potential attack surface.

8.5.3 btrfs

The btrfs file system provides extensive features [137], resulting in an increased
complexity of +19457 or +5.99% for the default kernel. The minimal kernel
build complexity increases by +27356 or +45.51%, reflecting the additional
dependencies. While the btrfs file system may provide some functionality that
is not available with the previous file system implementations, the significant
increase in complexity requires an even more thorough use case analysis to assess
whether the additional attack surface is justified.

8.5.4 namespaces

As was discussed in previous chapters, Linux namespaces, and user namespaces
in particular, did have an impact on the exposure of kernel code [41]. How-
ever, the code complexity added by these kernel features is very
limited. The cyclomatic complexity difference between the default kernel with
all namespaces and without any namespaces amounts to only 443 or 0.12%,
and is therefore insignificant. This example shows that while the inclusion of
code with significant complexity may increase the attack surface, even simple
features can negatively impact the security of a system without adding com-
plexity themselves. The specific issue with user namespaces has shown that
the newly added exposure of kernel code has made attacks on already existing
attack vectors possible. The resulting security vulnerabilities were however
already present in the kernel before the introduction of user namespaces [43]
[44] [45]. It can therefore be concluded that the addition of user namespaces
did not increase the probability of security vulnerabilities within the Linux
kernel and only changed the specific exposure of other kernel features.

8.6 Impact of Hardware Dependent Subsystems

In order to identify hardware related complexity within the Linux kernel, the
analyzed kernel builds were additionally measured while excluding the affected
code sections. To indicate the overall effect of hardware related kernel code,
the analysis was executed on the compiled source code after removing the
source folders arch, drivers and fs. The ”reduced” kernel complexity results
indicate that a significant amount of kernel code is related to the use of specific
hardware.

Figure 8: Hardware dependent Complexity

8.6.1 Instruction Architecture

The complexity impact related to the exclusion of the ”arch” kernel source
code indicates a significant relevance to the overall system. With a cyclomatic
complexity reduction of -73076 or 20.79%, the ISA implementation provides a
major part of the kernel complexity. In the case of the default kernel build, the
affected code reflects the x86 implementation in particular. While the specific
complexity may vary between different ISA implementations, a comparison
between these architectures is out of the scope of this research. Given that the
x86 architecture is one of the most commonly used instruction sets, a reduction
of architecture dependent code does not seem feasible. Future research may
however find that choosing a hardware platform with a different instruction
set provides further options to reduce the resulting complexity.

8.6.2 Hardware Drivers

Significant amounts of the kernel complexity are located within source code sections such as "arch", "driver" and "fs". These areas mostly contain hardware-related code that is only relevant on a specific hardware platform. Therefore, large parts of the included complexity will never be relevant on a deployed system. The measured complexity differences in the "arch" folder are a direct result of code required by the x86 instruction set included in the default kernel build configuration. As previously concluded, the specific complexity may vary between ISA implementations. One specific implementation will, however, be required by any operating system kernel and can therefore not be excluded from a deployable kernel build. With regard to the complexity added by the file system implementations, a reduction may be feasible depending on the required use cases. At least one specific file system will be required as well, which will still include a significant amount of complexity. The majority of hardware-platform-dependent complexity is, however, related to the source code of the available drivers. For the default kernel build, the exclusion of the driver code reduces the cyclomatic complexity by 126,345, or 35.94%, and therefore has a significant impact on the overall kernel complexity. Since the Linux kernel includes code for a large number of device drivers, while only a small subset of these drivers is required for any given deployment, a significant reduction of driver-related complexity may be feasible. The default kernel build includes many device drivers because common hardware, such as USB devices, may be required. To reduce the extent of supported hardware devices, a thorough analysis of the required use cases may therefore be necessary.
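The two reported reductions can be cross-checked for consistency: each pair of absolute and relative values should imply the same total baseline cyclomatic complexity for the default kernel build. A small sanity check over the figures above:

```python
# Cross-check the reported complexity reductions: each (absolute, percent)
# pair should imply the same total cyclomatic complexity for the default build.
reductions = {
    "arch": (73076, 20.79),     # excluding the "arch" sources
    "driver": (126345, 35.94),  # excluding the driver sources
}

baselines = {name: absolute / (pct / 100) for name, (absolute, pct) in reductions.items()}
for name, baseline in baselines.items():
    print(f"{name}: implied baseline complexity ~ {baseline:,.0f}")

# Both figures agree on the same total within the rounding of the percentages.
values = list(baselines.values())
assert abs(values[0] - values[1]) / values[0] < 0.01
```

Both pairs point to a default-build baseline of roughly 351,500, so the two measurements are mutually consistent.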

9 Conclusion
The conducted complexity analysis of the Linux kernel has provided a number of results that allow a more detailed assessment of the included attack surface. The described test process allows the complexity measurement of arbitrary kernel features and can be used as a template for future assessments, enabling more qualified decisions regarding the inclusion of kernel functions and their associated risks. The complexity analysis of the targeted kernel features provides practical examples of attack surface analysis based on complexity metrics. The presented overview of the cyclomatic complexity distribution within different parts of the Linux kernel source code additionally indicates the influence different kernel subsystems have on the overall system attack surface. Besides the presented assessment of kernel-related code complexity, this research has provided the basis for new approaches to reduce the attack surface of Linux kernel deployments.

9.1 Hardware Dependent Kernel Builds

The measured complexity impact of the different kernel subsystems and components has indicated a significant dependency on hardware-related code. Consequently, a kernel build with a reduced amount of hardware support can reduce the attack surface of the deployed kernel significantly. Possible attack surface reduction strategies are based on a predictable deployment environment or on install-time compilation of the kernel, as described below.

9.2 Environmental Dependent Kernel Builds

Modern IT systems use industrial strategies to deploy large numbers of systems based on a common set of environmental dependencies. This includes, for example, the use of virtualization technologies that provide a common abstraction of the hardware platforms used to run the deployed operating systems [138]. Virtual deployment environments provide an opportunity to use a modified kernel build with reduced complexity. Due to the predefined set of hardware interfaces presented to the virtual machines, a Linux kernel deployed in these environments can exclude large amounts of hardware-related code, such as drivers for unavailable hardware platforms. Due to the central management in these environments, additional kernel complexity reductions, such as restricting the build to specific file system implementations, are feasible. Organizations that deploy large numbers of Linux-based systems may therefore improve the security of their systems by reducing the attack surface using the described approach.
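One way to gauge how much optional hardware support a given build enables is to inspect the kernel's .config file directly: each option appears as CONFIG_...=y (built in), CONFIG_...=m (loadable module), or a "# CONFIG_... is not set" comment. The following sketch counts these states for a small constructed fragment; the option names shown are real kernel options, but the fragment itself is assembled for illustration and is not a complete configuration:

```python
from collections import Counter

# A small, constructed fragment in the Linux .config format. A real analysis
# would read the full .config of the build under assessment instead.
EXAMPLE_CONFIG = """\
CONFIG_EXT4_FS=y
CONFIG_USB_SUPPORT=y
CONFIG_E1000=m
CONFIG_VIRTIO_NET=y
# CONFIG_WLAN is not set
# CONFIG_SOUND is not set
"""

def count_option_states(config_text: str) -> Counter:
    """Count how many CONFIG_ options are built in (y), modular (m), or unset."""
    states = Counter()
    for line in config_text.splitlines():
        line = line.strip()
        if line.startswith("CONFIG_") and "=" in line:
            _, value = line.split("=", 1)
            states["built-in" if value == "y" else "module" if value == "m" else "other"] += 1
        elif line.startswith("#") and line.endswith("is not set"):
            states["unset"] += 1
    return states

print(count_option_states(EXAMPLE_CONFIG))
```

For a hardware-reduced virtual-machine build, the goal is to drive the counts of y and m options down to those the virtual platform actually exposes, such as virtio interfaces.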

9.3 Install Time Kernel Build Reduction

Traditionally, Linux-based distributions provide compiled binaries of the available software packages. Given the processing power required to compile a large software system like the Linux kernel, this used to be a reasonable design decision. Recent progress in available hardware resources [139] as well as performance improvements in compilers [140] can, however, provide new opportunities for system design. Given the significant reduction in complexity within Linux kernels that are built for a specific hardware platform, it may become a valid option to include the automatic compilation of the kernel within general update tools. While some hardware driver code may still need to be available on demand for general desktop operating systems, including code to support USB devices and other common accessories, a significant amount of hardware-related code will never serve a valid use case. Scanning the available hardware platform enables the exclusion of some kernel driver code, thereby reducing the possible attack surface of the kernel. Currently, the operating system kernels commonly used by many systems include hardware support for a large number of devices that could be removed if the kernel were compiled for the specific system instead of being provided by the software distributor.
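The scan-then-exclude step can be sketched as a simple set computation: given the device identifiers detected on the target machine and a mapping from driver options to the devices they support, every driver whose supported devices are all absent becomes a candidate for exclusion. The driver names and device IDs below are hypothetical placeholders, not taken from the kernel's actual device tables:

```python
# Hypothetical driver-to-device mapping; a real implementation would derive
# this from the kernel's module alias / device ID tables.
DRIVER_DEVICES = {
    "CONFIG_DRIVER_A": {"pci:1000"},
    "CONFIG_DRIVER_B": {"pci:2000", "pci:2001"},
    "CONFIG_DRIVER_C": {"usb:3000"},
}

def excludable_drivers(detected_devices: set[str]) -> set[str]:
    """Return driver options none of whose supported devices were detected."""
    return {
        driver
        for driver, devices in DRIVER_DEVICES.items()
        if devices.isdisjoint(detected_devices)
    }

# Example: the scanned machine only exposes one PCI device,
# so drivers A and C are unused and can be excluded from the build.
print(sorted(excludable_drivers({"pci:2001"})))
```

The mainline kernel build system already offers a related mechanism, `make localmodconfig`, which disables module options for modules not currently loaded on the build machine; the sketch above generalizes the same idea to arbitrary hardware inventories.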

9.4 Comparison to Previous Research

While a number of research approaches have targeted the analysis of the Linux kernel attack surface, the approach used by [121] is most closely related to the analysis presented in this work. The major differences lie in the use of common static code analysis tools for complexity measurement as well as the wider applicability of the results. While [121] provided a more detailed and exact analysis and reduction of required kernel functions, their approach required a thorough analysis of a specific use case and needs to be conducted for every deployed application. This may be feasible for high-risk environments that can provide the necessary resources. The results presented in this research, while less effective, are able to provide a significant attack surface reduction with minimal effort due to their more general applicability.

10 Further Research
The presented research results provide a template for kernel attack surface analysis based on complexity metrics. This allows practical security improvements for the specific use cases described in the previous chapter. To provide a more detailed analysis of the required efforts and security improvements, additional research could analyze the described reduced-complexity deployments in practice to verify the presented findings. Besides the reduction of kernel complexity, additional research may target effective kernel exposure reduction to further increase the effort required to attack Linux-based applications. Promising, established technologies for this purpose are userspace sandboxing [141] and containerization [116].

References
[1] S. A. Christoph Krösmann, “Markt für IT-Sicherheit auf Allzei-
thoch.” https://www.bitkom.org/Presse/Presseinformation/
Markt-fuer-IT-Sicherheit-auf-Allzeithoch, Online; accessed
31-March-2021.

[2] BSI, “The state of it security in germany in 2019,” tech. rep., Federal
Office for Information Security, Bonn, Germany, 10 2019.

[3] K. AG, “e-crime in der deutschen wirtschaft 2019,” p. 72, 07 / 2019.

[4] V. Katos, S. Rostami, P. Bellonias, N. Davies, A. Kleszcz, S. Faily,


A. Spyros, A. Papanikolaou, C. Ilioudis, and K. Rantos, “State of vul-
nerabilities 2018/2019,” tech. rep., ENISA - European Union Agency for
Cybersecurity, 12 2019.

[5] S. Raheja, G. Munjal, and Shagun, “Analysis of linux kernel vulnerabil-


ities,” Indian Journal of Science and Technology, vol. 9, 12 2016.

[6] “Web server survey.” https://secure1.securityspace.com/s_


survey/data/201402/index.html, Online; accessed 31-March-2021.

[7] M. Naldi, “Concentration in the mobile operating systems market,”


CoRR, vol. abs/1605.04761, 2016.

[8] “Active android devices.” https://twitter.com/android/status/


1125822326183014401, Online; accessed 31-March-2021.

[9] “Gitstats - linux.” https://phoronix.com/misc/linux-eoy2019/


index.html, Online; accessed 31-March-2021.

[10] P. R. Marie Baezner, “Stuxnet,” tech. rep., Center for Security Studies
(CSS), ETH Zürich, 10 2017.

[11] S.-C. Hsiao and D.-Y. Kao, “The static analysis of wannacry ran-
somware,” pp. 153–158, 02 2018.

[12] J. Margolis, T. T. Oh, S. Jadhav, Y. H. Kim, and J. N. Kim, “An in-


depth analysis of the mirai botnet,” in 2017 International Conference on
Software Security and Assurance (ICSSA), pp. 6–12, IEEE, 07 2017.

[13] J. Aidan, H. Verma, and L. Awasthi, “Comprehensive survey on petya
ransomware attack,” pp. 122–125, 12 2017.

[14] “The untold story of notpetya, the most devastating cy-


berattack in history.” https://www.wired.com/story/
notpetya-cyberattack-ukraine-russia-code-crashed-the-world/,
Online; accessed 31-March-2021.

[15] I. und Handelskammer NRW, “Digitale transformation und industrie


4.0,” tech. rep., IHK NRW, NRW, Germany, 10 2015.

[16] S. Niu, J. Mo, Z. Zhang, and Z. Lv, “Overview of linux vulnerabilities,”


in Proceedings of the 2nd International Conference on Soft Computing
in Information Communication Technology, pp. 225–228, Atlantis Press,
05 2014.

[17] I. Bowman, “Conceptual architecture of the linux kernel,” URL:


http://plg. uwaterloo. ca/itbowman/CS746G/a1, 1998.

[18] W. Mauerer, Professional Linux Kernel Architecture. 10 2008.

[19] “The linux kernel documentation.” https://www.kernel.org/doc/


html/latest/, Online; accessed 31-March-2021.

[20] T. Farah, R. Rahman, M. Hossain, D. Alam, and M. Zaman, “Study of


the dirty copy on write, a linux kernel memory allocation vulnerability,”
05 2017.

[21] M. Howard, J. Pincus, and J. Wing, Measuring Relative Attack Surfaces,


pp. 109–137. Springer, 01 2005.

[22] “Attack surface analysis.” https://cheatsheetseries.owasp.org/


cheatsheets/Attack_Surface_Analysis_Cheat_Sheet.html, Online;
accessed 31-March-2021.

[23] P. Manadhata and J. Wing, “An attack surface metric,” Software Engi-
neering, IEEE Transactions on, vol. 37, pp. 371–386, 05 2011.

[24] A. Kurmus, Kernel Self-Protection through Quantified Attack Surface


Reduction. PhD thesis, Braunschweig University of Technology, Ger-
many, 05 2014.

[25] M. Z. Mamdouh Alenezi, “On the relationship between software com-
plexity and security,” 2020.

[26] T. McCabe, “A complexity measure,” Software Engineering, IEEE


Transactions on, vol. SE-2, pp. 308– 320, 01 1977.

[27] “Ieee xplore.” https://ieeexplore.ieee.org/Xplore/home.jsp, Online; accessed 31-March-2021.

[28] “Elsevier.” https://www.elsevier.com/de-de/search-results, Online; accessed 31-March-2021.

[29] “Google scholar.” https://scholar.google.com/, Online; accessed 31-March-2021.

[30] S. Ghavamnia, T. Palit, S. Mishra, and M. Polychronakis, “Temporal


system call specialization for attack surface reduction,” pp. 1749–1766,
USENIX Association, 08 2020.

[31] V. Prevelakis and D. Spinellis, “Sandboxing applications.,” in USENIX


Annual Technical Conference, FREENIX Track, pp. 119–126, Citeseer,
2001.

[32] C. Greamo and A. Ghosh, “Sandboxing and virtualization: Modern tools


for combating malware,” IEEE Security & Privacy, vol. 9, no. 2, pp. 79–
82, 2011.

[33] S. Lipner, “The trustworthy computing security development lifecycle,”


in 20th Annual Computer Security Applications Conference, pp. 2–13,
IEEE, 2004.

[34] “A very deep dive into ios exploit chains found in the wild.” https://googleprojectzero.blogspot.com/2019/08/a-very-deep-dive-into-ios-exploit.html, Online; accessed 31-March-2021.

[35] “Introducing the in-the-wild series.” https://googleprojectzero.blogspot.com/2021/01/introducing-in-wild-series.html, Online; accessed 31-March-2021.

[36] M. Bernaschi, E. Gabrielli, and L. Mancini, “Remus: A security-
enhanced operating system,” ACM Trans. Inf. Syst. Secur., vol. 5,
pp. 36–61, 02 2002.

[37] A. S. Tanenbaum and R. Van Renesse, “Distributed operating systems,”


ACM Computing Surveys (CSUR), vol. 17, no. 4, pp. 419–470, 1985.

[38] A. S. Tanenbaum and H. Bos, Modern operating systems. Pearson, 2015.

[39] D. Morozov and P. Elena, “Linux privilege increase threat analysis,”


pp. 0579–0581, 05 2020.

[40] M. Haber, Privilege Escalation, pp. 99–116. 06 2020.

[41] J. Hertz, “Abusing privileged and unprivileged linux containers,”


Whitepaper, NCC Group, vol. 48, 2016.

[42] P. Kocher, J. Horn, A. Fogh, D. Genkin, D. Gruss, W. Haas, M. Ham-


burg, M. Lipp, S. Mangard, T. Prescher, et al., “Spectre attacks: Ex-
ploiting speculative execution,” in 2019 IEEE Symposium on Security
and Privacy (SP), pp. 1–19, IEEE, 2019.

[43] “Cve-2013-1858.” https://www.cvedetails.com/cve/CVE-2013-1858/, Online; accessed 31-March-2021.

[44] “Cve-2015-8660.” https://www.cvedetails.com/cve/CVE-2015-8660/, Online; accessed 31-March-2021.

[45] “Cve-2018-18955.” https://www.cvedetails.com/cve/CVE-2018-18955/, Online; accessed 31-March-2021.

[46] E. Conrad, S. Misenar, J. Feldman, and K. Riggins, CISSP Study Guide.


01 2010.

[47] K. Ingham, S. Forrest, et al., “A history and survey of network firewalls,”


University of New Mexico, Tech. Rep, 2002.

[48] W. Conklin and G. Dietrich, “Secure software engineering: A new


paradigm,” 01 2007.

[49] G. McGraw, “Software security,” IEEE Security & Privacy, vol. 2, no. 2,
pp. 80–83, 2004.

[50] J. D. Meier, “Web application security engineering,” IEEE Security Pri-


vacy, vol. 4, no. 4, pp. 16–24, 2006.

[51] R. Oppliger, Internet and intranet security. Artech House, 2001.

[52] “Remote desktop - allow access to your pc.” https://docs.microsoft.


com/en-us/windows-server/remote/remote-desktop-services/
clients/remote-desktop-allow-access, Online; accessed 31-March-
2021.

[53] D. K. Hess, D. R. Safford, and U. W. Pooch, “A unix network protocol


security study: Network information service,” ACM SIGCOMM Com-
puter Communication Review, vol. 22, no. 5, pp. 24–28, 1992.

[54] J. Eloff and M. Eloff, “Information security architecture,” Computer


Fraud & Security, vol. 2005, no. 11, pp. 10–16, 2005.

[55] S. Jingyao, S. Chandel, Y. Yunnan, Z. Jingji, and Z. Zhipeng, “Securing


a network: How effective using firewalls and vpns are?,” in Advances
in Information and Communication, pp. 1050–1068, Springer, Springer
International Publishing, 01 2020.

[56] K. Mindo, C. Sogomo, and N. Karie, “Analysis of network and firewall


security policies in dynamic and heterogeneous networks,” International
Journal of Advanced Research in Computer Science and Software Engi-
neering, vol. 6, pp. 141–146, 04 2016.

[57] J. Hong, “The state of phishing attacks,” Commun. ACM, vol. 55,
pp. 74–81, 01 2012.

[58] S. Wiefling, M. Dürmuth, and L. Lo Iacono, “More than just good pass-
words? a study on usability and security perceptions of risk-based au-
thentication,” vol. abs/2010.00339, 10 2020.

[59] J. Clark and J. Jacob, “Attacking authentication protocols,” High In-


tegrity Systems, vol. 1, pp. 465–474, 1996.

[60] S. Kamara, S. Fahmy, E. Schultz, F. Kerschbaum, and M. Frantzen,
“Analysis of vulnerabilities in internet firewalls,” Computers & Security,
vol. 22, no. 3, pp. 214–232, 2003.

[61] F. Xue, “Attacking antivirus,” in Black Hat Europe Conference, 2008.

[62] S. Stolfo, S. Bellovin, and D. Evans, “Measuring security,” Security and


Privacy, IEEE, vol. 9, pp. 60 – 65, 07 2011.

[63] “Linux networking documentation.” https://www.kernel.org/doc/html/v5.10/networking/index.html?highlight=network, Online; accessed 31-March-2021.

[64] A. Shostack, Threat modeling: Designing for security. John Wiley &
Sons, 2014.

[65] B. Potter and G. McGraw, “Software security testing,” IEEE Security


Privacy, vol. 2, no. 5, pp. 81–85, 2004.

[66] D. Song, J. Lettner, P. Rajasekaran, Y. Na, S. Volckaert, P. Larsen, and


M. Franz, “Sok: Sanitizing for security,” in 2019 IEEE Symposium on
Security and Privacy (SP), pp. 1275–1295, 2019.

[67] S. Shah, “Browser exploits-attacks and defense,” London: EUSecWest,


2008.

[68] C. Carmony, X. Hu, H. Yin, A. V. Bhaskar, and M. Zhang, “Extract me


if you can: Abusing pdf parsers in malware detectors.,” in NDSS, 2016.

[69] “Credentials in linux.” https://www.kernel.org/doc/html/v5.10/security/credentials.html, Online; accessed 31-March-2021.

[70] K. Shah and K. Patel, “Security against fork bomb attack in linux based
systems,” International Journal of Research in Advent Technology, vol. 7,
pp. 125–128, 04 2019.

[71] M. Bagherzadeh, N. Kahani, C.-P. Bezemer, A. E. Hassan, J. Dingel,


and J. R. Cordy, “Analyzing a decade of linux system calls,” Empirical
Software Engineering, vol. 23, no. 3, pp. 1519–1551, 2018.

[72] S. Z. Syed Idrus, E. Cherrier, C. Rosenberger, and J.-J. Schwartzmann, “A review on authentication methods,” Australian Journal of Basic and Applied Sciences, vol. 7, pp. 95–107, 06 2013.

[73] S. M. Al Pascual, Kyle Marchini, “State of authentication report,” tech.


rep., JAVELIN, 2017.

[74] D. Syropoulos-Harissis and A. Syropoulos, Web Authorization Protocols,


pp. 493–499. 08 2020.

[75] K. Chanda, “Password security: An analysis of password strengths and


vulnerabilities,” International Journal of Computer Network and Infor-
mation Security, vol. 8, pp. 23–30, 07 2016.

[76] A. Naiakshina, A. Danilova, C. Tiefenau, M. Herzog, S. Dechand, and


M. Smith, “Why do developers get password storage wrong? a quali-
tative usability study,” in Proceedings of the 2017 ACM SIGSAC Con-
ference on Computer and Communications Security, (New York, NY,
USA), Association for Computing Machinery, 08 2017.

[77] “Owasp — a1:2017-injection.” https://owasp.org/


www-project-top-ten/2017/A1_2017-Injection, year = Online;
accessed 31-March-2021.

[78] M. Hopkins and A. Dehghantanha, “Exploit kits: The production line of


the cybercrime economy?,” in 2015 second international conference on
Information Security and Cyber Forensics (InfoSec), pp. 23–27, IEEE,
2015.

[79] H. Chen, J.-H. Cho, and S. Xu, “Quantifying the security effectiveness
of firewalls and dmzs,” pp. 1–11, 04 2018.

[80] K. D. Mitnick and W. L. Simon, The art of deception: Controlling the


human element of security. John Wiley & Sons, 2003.

[81] A. Semjonov, “Security analysis of user namespaces and rootless con-


tainers,” 2020.

[82] J. Majumder and G. Saha, “Analysis of sql injection attack,” Interna-


tional Journal of Computer Science and Informatics, 04 2013.

[83] Z. Baig and S. Zeadally, “Cyber-security risk assessment framework
for critical infrastructures,” Intelligent Automation and Soft Computing,
pp. –1, 01 2018.

[84] A. Shostack, “Experiences threat modeling at microsoft,” 01 2008.

[85] P. Mell and T. Grance, “Use of the common vulnerabilities and exposures
(cve) vulnerability naming scheme,” p. 6, 09 2002.

[86] R. Wolthuis and F. Phillipson, Quantifying Cyber security Risks, pp. 20–
26. 08 2019.

[87] G. Stoneburner, A. Goguen, and A. Feringa, “Risk management guide


for information technology systems,” Nist special publication, vol. 800,
pp. 800–30, 01 2002.

[88] A. A. Y. Mussa and Y. Malaiya, “Using software structure to predict


vulnerability exploitation potential,” 06 2014.

[89] S. M. Bellovin, “On the brittleness of software and the infeasibility of


security metrics,” IEEE Annals of the History of Computing, vol. 4,
no. 04, pp. 96–96, 2006.

[90] M. Lipp, M. Schwarz, D. Gruss, T. Prescher, W. Haas, A. Fogh, J. Horn, S. Mangard, P. Kocher, D. Genkin, et al., “Meltdown: Reading kernel memory from user space,” in 27th USENIX Security Symposium (USENIX Security 18), pp. 973–990, 2018.

[91] D. Huang, H. Cui, S. Wen, and C. Huang, “Security analysis and threats
detection techniques on docker container,” pp. 1214–1220, 12 2019.

[92] S. Dhawan, B. M. Gupta, and E. B, “Global cyber security research


output (1998–2019): A scientometric analysis,” Science and Technology
Libraries, pp. 1–18, 11 2020.

[93] A. Loginov, “Evolution of cyber-security research in an industrial set-


ting,” pp. 15–15, 11 2020.

[94] J. A. Wang, H. Wang, M. Guo, and M. Xia, “Security metrics for soft-
ware systems,” in Proceedings of the 47th Annual Southeast Regional
Conference, pp. 1–6, 2009.

[95] R. Anderson, “Why cryptosystems fail,” in Proceedings of the 1st ACM
Conference on Computer and Communications Security, pp. 215–227,
1993.

[96] B. W. Boehm, J. R. Brown, and M. Lipow, “Quantitative evaluation of


software quality,” in Proceedings of the 2nd international conference on
Software engineering, pp. 592–605, 1976.

[97] C. Chen, M. Shoga, and B. Boehm, “Exploring the dependency relation-


ships between software qualities,” in 2019 IEEE 19th International Con-
ference on Software Quality, Reliability and Security Companion (QRS-
C), pp. 105–108, 2019.

[98] L. Laird, Software Measurement and Estimation : A Practical Approach.


07 2006.

[99] “Metric definition — sonarqube docs.” https://docs.sonarqube.org/latest/user-guide/metric-definitions/, Online; accessed 31-March-2021.

[100] G. A. Campbell, “Cognitive complexity: An overview and evaluation,”


in Proceedings of the 2018 International Conference on Technical Debt,
TechDebt ’18, (New York, NY, USA), p. 57–58, Association for Com-
puting Machinery, 2018.

[101] E. T. Chen, “Program complexity and programmer productivity,” IEEE


Transactions on Software Engineering, no. 3, pp. 187–194, 1978.

[102] V. Y. Shen, T.-j. Yu, S. M. Thebaut, and L. R. Paulsen, “Identifying


error-prone software—an empirical study,” IEEE Transactions on Soft-
ware Engineering, no. 4, pp. 317–324, 1985.

[103] Y. Shin and L. Williams, “Is complexity really the enemy of software
security?,” in Proceedings of the 4th ACM workshop on Quality of pro-
tection, pp. 47–50, 2008.

[104] G. Kroah-Hartman et al., “Linux kernel development,” in Linux Symposium, pp. 239–244, Citeseer, 2007.

[105] D. G. Feitelson, “Perpetual development: a model of the linux kernel
life cycle,” Journal of Systems and Software, vol. 85, no. 4, pp. 859–875,
2012.

[106] “Glibc and system call layer.” https://sys.readthedocs.io/en/latest/, Online; accessed 31-March-2021.

[107] “System-call wrappers for glibc.” https://lwn.net/Articles/799331/, Online; accessed 31-March-2021.

[108] “Anatomy of the linux kernel.” https://developer.ibm.com/technologies/linux/articles/l-linux-kernel/, Online; accessed 31-March-2021.

[109] “Linux system calls in version 5.10.5.” https://github.com/torvalds/linux/blob/v5.10/arch/x86/entry/syscalls/syscall_64.tbl, Online; accessed 31-March-2021.

[110] “The linux kernel driver interface.” https://www.kernel.org/doc/html/v5.10/process/stable-api-nonsense.html#stable-api-nonsense, Online; accessed 31-March-2021.

[111] M. Kraeling and A. McKay, Linux for Embedded Systems, pp. 921–959.
12 2013.

[112] J. Corbet, A. Rubini, and G. Kroah-Hartman, Linux device drivers. O’Reilly Media, Inc., 2005.

[113] J.-M. Goyeneche and E. Sousa, “Loadable kernel modules,” Software,


IEEE, vol. 16, pp. 65 – 71, 02 1999.

[114] M. Jimenez, M. Papadakis, and Y. Le Traon, “An empirical analysis of


vulnerabilities in openssl and the linux kernel,” pp. 105–112, 01 2016.

[115] M. Bo, M. Dejun, F. Wei, and H. Wei, “Improvements the seccomp


sandbox based on pbe theory,” in 2013 27th International Conference on
Advanced Information Networking and Applications Workshops, pp. 323–
328, 2013.

[116] A. Grattafiori, “Understanding and hardening linux containers,” tech.
rep., NCC Group, Manchester, United Kingdom, 06 2016.

[117] A. Kurmus, A. Sorniotti, and R. Kapitza, “Attack surface reduction for


commodity os kernels: trimmed garden plants may attract less bugs,”
in Proceedings of the Fourth European Workshop on System Security,
pp. 1–6, 2011.

[118] M. A. Howard, “A process for performing security code reviews,” IEEE


Security Privacy, vol. 4, no. 4, pp. 74–79, 2006.

[119] P. Louridas, “Static code analysis,” IEEE Software, vol. 23, no. 4, pp. 58–
61, 2006.

[120] K. Goseva-Popstojanova and A. Perhinschi, “On the capability of static


code analysis to detect security vulnerabilities,” Information and Soft-
ware Technology, vol. 68, pp. 18–33, 2015.

[121] A. Kurmus, S. Dechand, and R. Kapitza, “Quantifiable run-time ker-


nel attack surface reduction,” in International Conference on Detection
of Intrusions and Malware, and Vulnerability Assessment, pp. 212–234,
Springer, 2014.

[122] R. Tartler, A. Kurmus, B. Heinloth, V. Rothberg, A. Ruprecht, D. Dorneanu, R. Kapitza, W. Schröder-Preikschat, and D. Lohmann, “Automatic OS kernel TCB reduction by leveraging compile-time configurability,” in Eighth Workshop on Hot Topics in System Dependability (HotDep 12), 2012.

[123] A. Kurmus, R. Tartler, D. Dorneanu, B. Heinloth, V. Rothberg,


A. Ruprecht, W. Schröder-Preikschat, D. Lohmann, and R. Kapitza,
“Attack surface metrics and automated compile-time os kernel tailor-
ing.,” in NDSS, 2013.

[124] “hardened-kernel.” https://www.whonix.org/wiki/Hardened-kernel, Online; accessed 31-March-2021.

[125] J. Turnbull, Hardening Linux. 01 2005.

[126] “Linux default kernel config v5.10.5.” https://github.com/linuxTest2020/kernelConfigurations/blob/main/defconfig, Online; accessed 31-March-2021.

[127] “Arch linux kernel config v5.10.5.” https://github.com/archlinux/svntogit-packages/blob/85a93920af2d3fffff676e90c5560089496cba81/trunk/config, Online; accessed 31-March-2021.

[128] “Restricting automatic kernel-module loading.” https://lwn.net/Articles/740455/, Online; accessed 31-March-2021.

[129] “Cve-2017-6074.” https://www.cvedetails.com/cve/CVE-2017-6074/, Online; accessed 31-March-2021.

[130] “Linux mailing list.” https://lwn.net/Articles/740458/, Online; accessed 31-March-2021.

[131] “Kconfig make config.” https://www.kernel.org/doc/html/v5.10/kbuild/kconfig.html, Online; accessed 31-March-2021.

[132] “Tested kernel configuration files.” https://github.com/linuxTest2020/kernelConfigurations, Online; accessed 31-March-2021.

[133] “Sonarqube analysis.” https://docs.sonarqube.org/latest/analysis/overview/, Online; accessed 31-March-2021.

[134] “Sonarqube build wrapper.” https://docs.sonarqube.org/latest/analysis/languages/cfamily/, Online; accessed 31-March-2021.

[135] “Sonarqube projects.”

[136] A. Mathur, M. Cao, S. Bhattacharya, A. Dilger, A. Tomas, and L. Vivier,


“The new ext4 filesystem: current status and future plans,” in Proceed-
ings of the Linux symposium, vol. 2, pp. 21–33, Citeseer, 2007.

[137] D. Gurjar and S. Kumbhar, “A review on performance analysis of zfs


and btrfs,” pp. 0073–0076, 04 2019.

[138] R. Scroggins, “Emerging virtualization technology,” Global Journal of
Computer Science and Technology, pp. 11–16, 08 2017.

[139] C. A. Mack, “Fifty years of moore’s law,” IEEE Transactions on Semi-


conductor Manufacturing, vol. 24, no. 2, pp. 202–207, 2011.

[140] P. T. Bukie, C. L. Udeze, I. O. Obono, and E. B. Edim, “Comparative


analysis of compiler performances and program efficiency,” 2019.

[141] N. Provos, M. Friedl, and P. Honeyman, “Preventing privilege escala-


tion,” in USENIX Security Symposium, 09 2003.
