MockFog 2.0: Automated Execution of Fog Application Experiments in the Cloud
• We describe the design of MockFog, a system that can emulate fog computing infrastructure in arbitrary cloud environments, manage applications, and orchestrate experiments integrated into a typical application engineering process.

• We present our proof-of-concept implementation MockFog 2.0, the successor to the original proof-of-concept implementation4.

• We demonstrate how MockFog 2.0 allows developers to automate experiments that involve changing infrastructure and workload characteristics with an example application.

4 https://github.com/OpenFogStack/MockFog-Meta

The remainder of this paper is structured as follows: We first describe the design of MockFog and discuss how it is used within a typical application engineering process (Section 2). Next, we evaluate our approach through a proof-of-concept implementation (Section 3) and a set of experiments with a smart-factory application using the prototype (Section 4). Finally, we compare MockFog to related work (Section 5) before a discussion (Section 6) and conclusion (Section 7).

2 MockFog Design

In this section, we present the MockFog design, starting with a high-level overview of its three modules (Section 2.1). Then, we discuss how to use MockFog in a typical application engineering process (Section 2.2) before describing each of the modules (Sections 2.3 to 2.5).

2.1 MockFog Overview

MockFog comprises three modules: the infrastructure emulation module, the application management module, and the experiment orchestration module (see Figure 1). For the first module, developers model the properties of their desired (emulated) fog infrastructure, namely the number and kind of machines but also the properties of their interconnections. The infrastructure emulation module uses this configuration for the infrastructure bootstrapping and infrastructure teardown. For the second module, developers define application containers and where to deploy them. The application management module uses this configuration for the application container deployment, the collection of results, and for the application shutdown. For the third module, developers define an experiment orchestration schedule that includes application instructions and infrastructure changes. The experiment orchestration module uses this configuration to initiate infrastructure changes or to signal load generators and the system under test at runtime.

The implementation of all three modules is spread over two main components: the node manager and the node agents. There is only a single node manager instance in each MockFog setup. It serves as the point of entry for application developers and is, in general, their only way of interacting with MockFog. In contrast, one node agent instance runs on each of the cloud virtual machines (VMs) that are used to emulate a fog infrastructure. Based on input from the node manager, node agents manipulate their respective VM to show the desired machine and network characteristics to the application.

Figure 2 shows an example with three VMs: two are emulated edge machines, and one is an "emulated" cloud machine. In the example, the node manager has instructed the node agents to manipulate network properties of their VMs in such a way that an application appears to have all its network traffic routed through the cloud VM. Moreover, the node agents ensure that the communication to the node manager is not affected by network manipulation by using a dedicated management network. Note that developers can freely choose where to run the node manager, e.g., it could run on a developer's laptop or on another cloud VM.
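To make the three kinds of developer input more concrete, the following TypeScript sketch shows how an infrastructure model and a list of application containers could be expressed. All interface and field names are illustrative assumptions, not MockFog 2.0's actual configuration format.

```typescript
// Illustrative sketch only: names and structure are assumptions,
// not MockFog 2.0's actual configuration format.

interface MachineSpec {
  name: string;
  vcpus: number;     // rough compute-power indicator
  memoryMb: number;
}

interface ConnectionSpec {
  from: string;
  to: string;
  delayMs: number;       // latency of outgoing packages
  dispersionMs: number;  // delay dispersion (+/-)
}

interface ContainerSpec {
  image: string;
  deployTo: string;      // name of a machine from the infrastructure model
}

// Input for the infrastructure emulation module.
const infrastructure: { machines: MachineSpec[]; connections: ConnectionSpec[] } = {
  machines: [
    { name: "edge-1", vcpus: 1, memoryMb: 512 },
    { name: "cloud", vcpus: 2, memoryMb: 4096 },
  ],
  connections: [{ from: "edge-1", to: "cloud", delayMs: 20, dispersionMs: 2 }],
};

// Input for the application management module.
const containers: ContainerSpec[] = [
  { image: "example/sensor:latest", deployTo: "edge-1" },
];
```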
2.2 Using MockFog in Application Engineering

Figure 3: The three MockFog modules set up and manage experiments during the application engineering process. [The figure shows a commit/build/test pipeline next to the MockFog tasks: MockFog configuration, infrastructure bootstrapping, application container deployment, experiment orchestration, collection of results & application shutdown, and infrastructure teardown.]

A typical application engineering process starts with requirements elicitation, followed by design, implementation, testing, and finally maintenance. In agile, continuous integration, and DevOps processes, these steps are executed in short development cycles, often even in parallel – with MockFog, we primarily target the testing phase. Within the testing phase, a variety of tests could be run, e.g., unit tests, integration tests, system tests, or acceptance tests [53], but also benchmarks to better understand system quality levels of an application, e.g., performance, fault-tolerance, data consistency [5]. Of these tests, unit tests tend to evaluate small isolated features only and acceptance tests are usually run on the production infrastructure, often involving a gradual roll-out process with canary testing, A/B testing, and similar approaches, e.g., [45]. For integration and system tests as well as benchmarking, however, a dedicated test infrastructure is required. With MockFog, we provide such an infrastructure for experiments.

We imagine that developers integrate MockFog into their deployment pipeline (see Figure 3). Once a new version of the application has passed all unit tests, MockFog can be used to set up and manage experiments.

2.3 Infrastructure Emulation Module

A typical fog infrastructure comprises several fog machines, i.e., edge machines, cloud machines, and possibly also machines within the network between edge and cloud [7]. If no physical infrastructure exists yet, developers can follow guidelines, best practices, or reference architectures such as those proposed in [22, 34, 40, 42, 43]. On an abstract level, the infrastructure can be described as a graph comprising machines as vertices and the network between machines as edges [21]. In this graph, machines and network connections can also have properties such as the compute power of a machine or the available bandwidth of a connection. For the infrastructure emulation module, the developer defines these machine and network properties, which we describe in Section 2.3.1 and Section 2.3.2.

During the infrastructure bootstrapping step (cf. Figure 3), the node manager connects to the respective cloud service provider to set up a single VM in the cloud for each fog machine in the infrastructure model. VM type selection is straightforward when the cloud service provider accepts the machine properties as input directly, e.g., on Google Compute Engine. If not, e.g., on Amazon EC2, the mapping selects the smallest VM type that still fulfills the respective machine requirements. MockFog then hides surplus resources by limiting resources for the containers directly. When all machines have been set up, the node manager installs the node agent on each VM, which will later manipulate the machine and network characteristics of its VM.
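As a sketch of the "smallest fitting VM type" mapping described above, the following TypeScript function picks an instance type from a small example catalog. The catalog entries and requirement fields are assumptions for illustration, not the mapping MockFog 2.0 actually ships.

```typescript
// Illustrative sketch of a "smallest fitting VM type" mapping.
interface InstanceType { name: string; vcpus: number; memoryMb: number; }
interface MachineRequirements { vcpus: number; memoryMb: number; }

const catalog: InstanceType[] = [
  { name: "t3.micro",  vcpus: 2, memoryMb: 1024 },
  { name: "t3.small",  vcpus: 2, memoryMb: 2048 },
  { name: "t3.medium", vcpus: 2, memoryMb: 4096 },
  { name: "m5.large",  vcpus: 2, memoryMb: 8192 },
];

// Pick the smallest catalog entry that satisfies the requirements;
// surplus resources would later be hidden via container limits.
function smallestFittingVm(req: MachineRequirements): InstanceType {
  const candidates = catalog
    .filter((t) => t.vcpus >= req.vcpus && t.memoryMb >= req.memoryMb)
    .sort((a, b) => a.memoryMb - b.memoryMb || a.vcpus - b.vcpus);
  if (candidates.length === 0) {
    throw new Error("no VM type fulfills the requirements");
  }
  return candidates[0];
}

console.log(smallestFittingVm({ vcpus: 1, memoryMb: 2048 }).name); // "t3.small"
```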
Once the infrastructure bootstrapping has been completed, the developer continues with the application management module. Furthermore, MockFog provides IP addresses and access credentials for the emulated fog machines. With these, the developer can establish direct SSH connections, use customized deployment tooling, or manage machines with the APIs of the cloud service provider if needed.

Once all experiments have been completed, the developer can also use the infrastructure emulation module to destroy the provisioned experiment infrastructure. Here, the node manager unprovisions all emulated resources and deletes the access credentials created for the experiment.

2.3.1 Machine Properties

Machines are the parts of the infrastructure on which application code is executed. Fog machines come in various flavors, ranging from small edge devices such as Raspberry Pis, over machines within a server rack, e.g., as part of a Cloudlet [27, 44], to virtual machines provisioned through a public cloud service such as Amazon EC2.

To emulate this variety of machines in the cloud, their properties need to be described precisely. Typical properties of machines are compute power, memory, and storage. Network I/O would be another standard property; however, we chose to model this only as part of the network between machines.

While the memory and storage properties are self-explanatory, we would like to emphasize that there are different approaches for measuring compute power. Amazon EC2, for instance, uses the number of vCPUs to indicate the compute power of a given machine. This, or the number of cores, is a very rough approximation that, however, suffices for many use cases as typical fog application deployments rarely achieve 100% CPU load. As an alternative, it is also possible to use more generic performance indicators such as instructions per second (IPS) or floating-point operations per second (FLOPS). Our current prototype (Section 3) uses Docker's resource limits.
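To illustrate how surplus resources can be hidden via container limits, the following sketch starts a container with CPU and memory caps using the dockerode client library. The image name and the use of dockerode are assumptions for illustration; the MockFog 2.0 prototype may apply its limits differently.

```typescript
// Sketch: hide surplus VM resources by limiting the application container directly.
// Assumes the dockerode client library; not necessarily how the MockFog node agent does it.
import Docker from "dockerode";

const docker = new Docker(); // defaults to /var/run/docker.sock

async function runWithLimits(image: string, vcpus: number, memoryMb: number) {
  const container = await docker.createContainer({
    Image: image,
    HostConfig: {
      NanoCpus: Math.round(vcpus * 1e9),  // 1 vCPU = 1e9 nano-CPUs
      Memory: memoryMb * 1024 * 1024,     // memory limit in bytes
    },
  });
  await container.start();
  return container;
}

// Emulate a small edge machine (1 vCPU, 512 MB) on a larger cloud VM.
runWithLimits("example/check-for-defects:latest", 1, 512).catch(console.error);
```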
Table 1: Properties of Emulated Network Connections

  Delay        Latency of Outgoing Packages
  Dispersion   Delay Dispersion (+/-)
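The delay and dispersion properties of Table 1 map naturally onto Linux traffic control. The following sketch applies them with tc/netem from a NodeJS process; the interface name and the direct tc invocation are assumptions for illustration, not necessarily what the MockFog node agent does internally.

```typescript
// Sketch: apply delay and dispersion (cf. Table 1) on a VM using Linux tc/netem.
// Requires root privileges; interface name is an assumption.
import { execFile } from "node:child_process";

function applyConnectionProperties(iface: string, delayMs: number, dispersionMs: number) {
  // e.g., "tc qdisc add dev eth0 root netem delay 5ms 3ms"
  const args = [
    "qdisc", "add", "dev", iface, "root", "netem",
    "delay", `${delayMs}ms`, `${dispersionMs}ms`,
  ];
  execFile("tc", args, (err) => {
    if (err) console.error("failed to apply netem settings:", err);
  });
}

applyConnectionProperties("eth0", 5, 3);
```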
… instructions to the node agents, which then update machine and network properties accordingly. For example, it is possible to reduce the amount of available memory (e.g., due to noisy neighbors), render a set of network links temporarily unavailable, increase network latency or package loss, or render a machine completely unreachable, in which case all (application) communication to and from the respective VM is blocked. MockFog can also reset all infrastructure manipulations back to what was originally defined by the developer. This action is optional.

Figure 6: In each state, MockFog can execute four actions. [The four actions shown are: update infrastructure, issue application commands, broadcast state change, and monitor transitioning conditions.]
Figure 7: The experiment orchestration schedule can be visualized as a state diagram.

This allows developers to define arbitrarily complex state diagrams, see for example Figure 7. In the example, the orchestration schedule comprises five states: When started, the node manager transitions to INIT, i.e., it distributes the infrastructure configuration update and application commands. Afterwards, it broadcasts state change messages (e.g., this might initiate the workload generation needed for benchmarking) and begins monitoring the transitioning conditions of INIT. As the only transitioning condition is a time-based condition set to 20 minutes, the node manager transitions to MEMORY -20% after 20 minutes. During MEMORY -20%, the node manager instructs all node agents to reduce the amount of memory available to application components by 20%. Then it again broadcasts state change messages (e.g., this might restart workload generation) and starts to monitor the transitioning conditions of MEMORY -20%. For this state, there are two transitioning conditions. If any application component emits a memory error event, the node manager immediately transitions to MEMORY RESET and instructs the node agents to reset memory limits. Otherwise, the node manager transitions to HIGH LATENCY after 20 minutes. It also transitions to HIGH LATENCY from MEMORY RESET when it receives the event application started and at least one minute has elapsed. At the start of HIGH LATENCY, the node manager instructs all node agents to increase the latency between emulated machines. Then, it again broadcasts state change messages and waits for 20 minutes before finally transitioning to FINAL.
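As a sketch of how such a schedule could be written down declaratively, the example from Figure 7 is expressed below as a list of states with time- and event-based transitions. Field names and structure are assumptions; MockFog 2.0 defines its own schedule format.

```typescript
// Illustrative rendering of the Figure 7 schedule; not MockFog 2.0's actual format.
interface Transition { afterMinutes?: number; onEvent?: string; to: string; }

interface State {
  name: string;
  infrastructureChanges: string[]; // free-text here; real changes would be structured
  transitions: Transition[];       // the first condition that is satisfied wins
}

const schedule: State[] = [
  { name: "INIT", infrastructureChanges: [], transitions: [{ afterMinutes: 20, to: "MEMORY -20%" }] },
  {
    name: "MEMORY -20%",
    infrastructureChanges: ["reduce container memory by 20%"],
    transitions: [
      { onEvent: "memory error", to: "MEMORY RESET" },
      { afterMinutes: 20, to: "HIGH LATENCY" },
    ],
  },
  {
    name: "MEMORY RESET",
    infrastructureChanges: ["reset memory limits"],
    // both the event and the minimum duration must hold, as in the example above
    transitions: [{ onEvent: "application started", afterMinutes: 1, to: "HIGH LATENCY" }],
  },
  {
    name: "HIGH LATENCY",
    infrastructureChanges: ["increase latency between emulated machines"],
    transitions: [{ afterMinutes: 20, to: "FINAL" }],
  },
  { name: "FINAL", infrastructureChanges: [], transitions: [] },
];
```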
3.1 Node Manager

The node manager NodeJS package can either be integrated into custom tooling or be controlled via the command line. We provide a command line tool as part of the package that allows users to control the functionality of the three modules. For the infrastructure emulation module, the node manager relies on the Infrastructure as Code (IaC) paradigm. Following this paradigm, an infrastructure definition tool serves to "define, implement, and update IT infrastructure architecture" [30]. The main advantage of this is that users can define infrastructure in a declarative way, with the IaC tooling handling resource provisioning and deployment idempotently. In our implementation, the node manager relies on Ansible playbooks.

The node manager command line tool offers a number of commands for each module. As part of the infrastructure emulation module, the developer can:

• Bootstrap machines: set up virtual machines on AWS EC2 and configure a virtual private cloud and the necessary subnets.

• Install node agents: (re)install the node agent on each VM.

• Modify network characteristics: instruct node agents to modify network characteristics.

• Destroy and clean up: unprovision all resources and remove everything created through the bootstrap machines command.

When modifying the network characteristics for a MockFog-deployed application, the node manager accounts for the latency between provisioned VMs. For example, when communication should on average incur a 10 ms latency, and the existing average latency between two VMs is already 0.7 ms, the node manager instructs the respective node agents to delay messages by 9.3 ms.

As part of the application management module, the developer can: …
… agents to override their current configuration in accordance with the updated model. This is done via a dedicated "management network", which always has vanilla network characteristics and which is hidden from application components.

Figure 8: [application components camera, temperature sensor, check for defects, production control, adapt packaging, packaging control, predict pickup, logistics prognosis, aggregate, generate dashboard, and central office dashboard, connected via the communication paths C01 to C10]
Figure 9: The smart factory infrastructure comprises multiple machines with different CPU and memory resources. Communication between directly connected machines incurs a round-trip latency between 2 ms and 24 ms.

  Application Component      Machine
  Camera                     Camera
  Temperature Sensor         Temperature Sensor
  Check for Defects          Gateway
  Adapt Packaging            Gateway
  Production Control         Production Machine
  Packaging Control          Packaging Machine
  Predict Pickup             Factory Server
  Logistics Prognosis        Factory Server
  Aggregate                  Factory Server
  Generate Dashboard         Cloud
  Central Office Dashboard   Central Office Server

When testing real-time systems, two important concepts are reproducibility and controllability [1, p. 263]. During experiments, camera and temperature sensor hence generate an input sequence that can be con-
[Figure 11 shows, for each communication path C01 to C10, the difference to the median in % across runs 1 to 5 in each of the states A to F.]
Figure 11: Latency deviation across experiment runs is small for most communication paths even though
experiments were run in the cloud. On paths C04 to C07, resource utilization is high leading to the expected
variance across experiment runs.
We first analyze the reproducibility of our experiment results (Section 4.3.1). Then, we analyze how the changes made in each state of the orchestration schedule affect the application (Section 4.3.2) before summarizing our results (Section 4.3.3).

4.3.1 Experiment Reproducibility

To analyze reproducibility, we repeat the experiment five times. For each experiment run, we bootstrap a new infrastructure, install the application containers, and start the experiment orchestration – this is done automatically by MockFog 2.0. After the experiment run, we calculate the average latency for each communication path (C01 to C10 in Figure 8). Ideally, the latency results from all five runs should be identical for each communication path; in the following, we refer to the five measurement values for a given communication path as a latency set. In practice, however, it is not possible to achieve such a level of reproducibility because the application is influenced by outside factors [1, p. 263]. For example, running an application on cloud VMs and in Docker containers already leads to significant performance variation [11, 12]. To measure this variation, we use the median of each latency set as a baseline and calculate how much individual runs deviate from this baseline (see Figure 11). From the figure, we can see that the deviation is small for almost all communication paths. The exception is communication paths C04 to C07 in states B, C, and E, which show significant variance across runs. In these states, the node manager applies various resource limits on the factory machine. Reducing the available compute and network resources seems to negatively impact the stability of affected communication paths. This, however, is not a limit of reproducibility; rather, identifying such cases is exactly what MockFog was designed for. Thus, we can conclude that experiment orchestration leads to reproducible results under normal operating conditions. This holds true even if a new set of virtual machines is allocated for each run.
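The deviation analysis described above can be reproduced with a few lines of code. The following TypeScript sketch computes, for each communication path, the per-run deviation from the median of its latency set; the input format is an assumption for illustration.

```typescript
// Sketch of the reproducibility analysis: per path, take the median of the
// per-run latency averages as baseline and report each run's deviation in percent.
type LatencySets = Record<string, number[]>; // path (C01..C10) -> avg latency per run, in ms

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0 ? (sorted[mid - 1] + sorted[mid]) / 2 : sorted[mid];
}

function deviationFromMedian(latencies: LatencySets): Record<string, number[]> {
  const result: Record<string, number[]> = {};
  for (const [path, runs] of Object.entries(latencies)) {
    const baseline = median(runs);
    result[path] = runs.map((v) => ((v - baseline) / baseline) * 100);
  }
  return result;
}

// Example with made-up numbers for one path and five runs:
console.log(deviationFromMedian({ C01: [2.1, 2.0, 2.2, 2.05, 2.15] }));
```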
4.3.2 Application Impact of State Changes

Of the five experiment runs, the second run is the most representative for the orchestration schedule: the latency of its communication paths is usually close to the median latency of the set (Figure 11). Thus, we select this run as the basis for analyzing how the changes made in each state affect application metrics.

Figure 12: Latency between packaging control and logistics prognosis is affected by both CPU and network restrictions.

Figure 12 shows the latency between packaging control and logistics prognosis. This latency includes the communication path latency of C06 and C07, as well as the time predict pickup needs to create the prognosis. In states A, D, E, and F, there are either no infrastructure changes or the ones made are on alternative communication paths; thus, latency is almost identical. In state B, the factory server loses a CPU core; as a result, predict pickup needs more time to create a prognosis, which increases the latency. In state C, the communication path C06 additionally suffers from a 20% probability of package loss and a 20% probability of package corruption. As these packages have to be resent14, this significantly increases overall latency.

14 Resent packages can also be impacted by loss or corruption.

Figure 13: Latency on C09 between aggregate and generate dashboard is affected by the delay between factory server and cloud.

Figure 13 shows the latency on C09, i.e., the time between aggregate sending and generate dashboard receiving a message. In states A, D, and F, there are either no infrastructure changes or the ones made are on alternative communication paths; thus, latency is almost identical. Note that the minimum latency is 12 ms; this makes sense as the round-trip latency between factory server and cloud is 24 ms, i.e., 12 ms one way. In states B and C, the factory server loses a CPU core; MockFog 2.0 implements this limitation by setting Docker resource limits. As a result, there is now one CPU core that is not used by the application containers and hence available to the operating system. As the resource limitation seems not to impact aggregate, the additional operating system resources slightly decrease latency. While the effect here is only marginal, one has to keep such side effects in mind when doing experiments with Docker containers. In state E, the round-trip latency between factory server and cloud is increased to 100 ms. Still, the minimum latency only increases to 18 ms as packages are routed via the central office server (round-trip latency of 16 ms + 20 ms = 36 ms, i.e., about 18 ms one way).

The packaging control reports its current packaging rate once a second. Figure 14 shows the distribution of reported values, i.e., how often each packaging rate was reported per state. In states A, B, C, D, and E, the workload generated by camera and temperature sensor is constant, so the rates are similar. In state F, however, the temperature sensor distributes measurements that are 30% higher on average. As a result, the packaging machine must halt production more frequently, i.e., the packaging rate equals zero. This also increases the backlog; hence, the packaging machine will more frequently run at full speed to catch up on the backlog, i.e., the packaging rate equals 15.

Figure 14: Distribution of packaging rate per state: When the temperature increases in state F, packaging control needs to pause more often, resulting in more frequent packaging rates of 0 (machine is paused, 1st quartile) and 15 (machine is running at full speed to catch up on the backlog, 3rd quartile).

4.3.3 Summary

In conclusion, our experiments show that MockFog 2.0 can be used to automatically set up an emulated fog infrastructure, install application components, and orchestrate reproducible experiments. As desired, changes to infrastructure and workload generation are clearly visible in the analysis results. The main benefit of the MockFog approach is that this autonomous process can be integrated into a typical application engineering process. This allows developers to automatically evaluate how a fog application copes with a variety of infrastructure changes, failures, and workload variations after each commit without access to a physical fog infrastructure, with little manual effort, and in a repeatable way [5].

5 Related Work

Testing and benchmarking distributed applications in fog computing environments can be very expensive as the provisioning and management of the needed hardware is costly. Thus, in recent years, a number of approaches have been proposed which aim to enable experiments on distributed applications or services without the need for access to fog devices, especially edge devices.

There are a number of approaches that, similarly to MockFog, aim to provide an easy-to-use solution for experiment orchestration on emulated testbeds. WoTbench [18, 19] can emulate a large number of Web of Things devices on a single multicore server. As such, it is designed for experiments involving many power constrained devices and cannot be used for experiments with many resource intensive application components such as distributed application backends. D-Cloud [3, 14] is a software testing framework that uses virtual machines in the cloud for failure testing of distributed systems. However, D-Cloud is not suited for the evaluation of fog applications as users cannot control
network properties such as the latency between two machines. Héctor [4] is a framework for automated testing of IoT applications on a testbed that comprises physical machines and a single virtual machine host. Having only a single host for virtual machines significantly limits scalability. Furthermore, the authors only mention the possibility of experiment orchestration based on an "experiment definition" but do not provide more details. Balasubramanian et al. [2] and Eisele et al. [10] also present testing approaches that build upon physical hardware for each node rather than more flexible virtual machines. EMU-IoT [39] is a platform for the creation of large scale IoT networks. The platform can also orchestrate customizable experiments and has been used to monitor IoT traffic for the prediction of machine resource utilization [38]. EMU-IoT focuses on modeling and analyzing IoT networks; it cannot manipulate application components or the underlying runtime infrastructure.

Gupta et al. presented iFogSim [13], a toolkit to evaluate placement strategies for independent application services on machines distributed across the fog. In contrast to our solution, iFogSim uses simulation to predict system behavior and, thus, to identify good placement decisions. While this is useful in early development stages, simulation-based approaches cannot be used for testing of real application components, which we support with MockFog. [8, 24, 41] also describe systems which can simulate complex IoT scenarios with thousands of IoT devices. Additionally, network delays and failure rates can be defined to model a realistic, geo-distributed system. More simulation approaches include FogExplorer [15, 16], which aims to find good fog application designs, or Cisco's PacketTracer15, which simulates complex networks – all these simulation approaches cannot be used for experimenting with real application components.

[9, 28, 32, 33] build on the network emulators MiniNet [31] and MaxiNet [52]. While they target a similar use case as MockFog, their focus is not on application testing and benchmarking but rather on network design (e.g., network function virtualization). Based on the papers, the prototypes also appear to be designed for single machine deployment – which limits scalability – while MockFog was specifically designed for distributed deployment. Finally, neither of these approaches appears to support experiment orchestration or the injection of failures. Missing support for experiment orchestration is also a key difference between MockFog and MAMMOTH [26], a large scale IoT emulator.

OMF [37], MAGI [20], and NEPI [36] can orchestrate experiments using existing physical testbeds. On a high level, these solutions aim to provide a functionality which is similar to the third MockFog module, i.e., the experiment orchestration module.

For failure testing, Netflix has released Chaos Monkey [50] as open source16. Chaos Monkey randomly terminates virtual machines and containers running in the cloud. The intuition behind this approach is that failures will occur much more frequently so that engineers are encouraged to aim for resilience. Chaos Monkey does not provide the runtime infrastructure as we do, but it would very well complement our approach. For instance, Chaos Monkey could be integrated into MockFog's experiment orchestration module. Another solution that complements MockFog is DeFog [29]. DeFog comprises six Dockerized benchmarks that can be deployed on edge or cloud resources. From the MockFog point of view, these benchmark containers are workload generating application components. Thus, they could be managed and deployed by MockFog's application management and experiment orchestration modules. Gandalf [25] is a monitoring solution for long-term cloud deployments. It is used in production as part of Azure, Microsoft's cloud service offering. It is therefore not part of the application engineering process (cf. Figure 3) and could be used after running experiments with MockFog. Finally, MockFog can be used to evaluate and experiment with fog computing frameworks such as FogFrame [48] or URMILA [46].

6 Discussion

While MockFog allows application developers to overcome the challenge that a fog computing testing infrastructure either does not exist yet or is already used in production, it has some limitations. For example, it does not work when specific local hardware is required, e.g., when the use of a particular crypto chip is deeply embedded in the application source code. MockFog also tends to work better for the emulation of larger edge machines such as a Raspberry Pi but has problems when smaller devices are involved as they cannot be emulated accurately.

Similarly, if the communication of a fog application is not based on Ethernet or WiFi, e.g., because sensors communicate via a LoRaWAN [47] such as TheThingsNetwork17, MockFog's approach of emulating connections between devices does not work out of the box as these sensors expect to have access to a LoRa sender. With additional effort, however, application developers could adapt their sensor software to use Ethernet or WiFi when no LoRa sender is available.

Also, emulating real physical connections is difficult as their characteristics are often influenced by external factors such as other users, electrical interference, or natural disasters. While it would be possible to add a machine learning component to MockFog that updates connection properties based on past data collected on a reference physical infrastructure, it is hard to justify this effort for most use cases.

Finally, MockFog starts one VM for every single fog machine. This approach does not work well when the infrastructure model comprises thousands of IoT devices. In this case, one should run groups of devices with similar network characteristics on only a few larger VMs.

15 https://www.netacad.com/courses/packet-tracer
16 https://github.com/Netflix/chaosmonkey
17 https://www.thethingsnetwork.org
7 Conclusion

In this paper, we proposed MockFog, a system for the emulation of fog computing infrastructure in arbitrary cloud environments. MockFog aims to simplify experimenting with fog applications by providing developers with the means to design emulated fog infrastructure, configure performance characteristics, manage application components, and orchestrate their experiments. We evaluated our approach through a proof-of-concept implementation and experiments with a fog-based smart factory application. We demonstrated how MockFog's features can be used to study the impact of infrastructure changes and workload variations.

Acknowledgment

We would like to thank Elias Grünewald and Sascha Huk, who contributed to the proof-of-concept prototype of the preliminary MockFog paper [17].

Bibliography

[1] Paul Ammann and Jeff Offutt. Introduction to software testing. Cambridge University Press, 2008. 322 pp.

[2] Daniel Balasubramanian et al. "A Rapid Testing Framework for a Mobile Cloud". In: 2014 25th IEEE International Symposium on Rapid System Prototyping. IEEE, 2014, pp. 128–134. doi: 10.1109/RSP.2014.6966903.

[3] Takayuki Banzai et al. "D-Cloud: Design of a Software Testing Environment for Reliable Distributed Systems Using Cloud Computing Technology". In: 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing. IEEE, 2010, pp. 631–636. doi: 10.1109/CCGRID.2010.72.

[4] Ilja Behnke, Lauritz Thamsen, and Odej Kao. "Héctor: A Framework for Testing IoT Applications Across Heterogeneous Edge and Cloud Testbeds". In: Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion. ACM, 2019, pp. 15–20. doi: 10.1145/3368235.3368832.

[5] David Bermbach, Erik Wittern, and Stefan Tai. Cloud Service Benchmarking: Measuring Quality of Cloud Services from a Client Perspective. Springer, 2017.

[6] David Bermbach, Liang Zhao, and Sherif Sakr. "Towards Comprehensive Measurement of Consistency Guarantees for Cloud-Hosted Data Storage Services". In: Performance Characterization and Benchmarking. Ed. by Raghunath Nambiar and Meikel Poess. Red. by David Hutchison et al. Vol. 8391. Springer, 2014, pp. 32–47. isbn: 978-3-319-04935-9, 978-3-319-04936-6.

[7] David Bermbach et al. "A Research Perspective on Fog Computing". In: 2nd Workshop on IoT Systems Provisioning & Management for Context-Aware Smart Cities. Springer, 2018, pp. 198–210. doi: 10.1007/978-3-319-91764-1_16.

[8] Giacomo Brambilla et al. "A Simulation Platform for Large-Scale Internet of Things Scenarios in Urban Environments". In: Proceedings of the First International Conference on IoT in Urban Space. ICST, 2014. doi: 10.4108/icst.urb-iot.2014.257268.

[9] Antonio Coutinho et al. "Fogbed: A Rapid-Prototyping Emulation Environment for Fog Computing". In: Proceedings of the IEEE International Conference on Communications (ICC). IEEE, 2018. doi: 10.1109/ICC.2018.8423003.

[10] Scott Eisele et al. "Towards an architecture for evaluating and analyzing decentralized Fog applications". In: 2017 IEEE Fog World Congress. IEEE, 2017, pp. 1–6. doi: 10.1109/FWC.2017.8368531.

[11] Martin Grambow et al. "Dockerization Impacts in Database Performance Benchmarking". Technical Report MCC.2018.1. Berlin, Germany: TU Berlin & ECDF, Mobile Cloud Computing Research Group, 2018.

[12] Martin Grambow et al. "Is it safe to dockerize my database benchmark?" In: Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing. ACM, 2019, pp. 341–344. doi: 10.1145/3297280.3297545.

[13] Harshit Gupta et al. "iFogSim: A Toolkit for Modeling and Simulation of Resource Management Techniques in the Internet of Things, Edge and Fog Computing Environments". In: Software: Practice and Experience 47.9 (2017), pp. 1275–1296. doi: 10.1002/spe.2509.

[14] Toshihiro Hanawa et al. "Large-Scale Software Testing Environment Using Cloud Computing Technology for Dependable Parallel and Distributed Systems". In: 2010 Third International Conference on Software Testing, Verification, and Validation Workshops. IEEE, 2010, pp. 428–433. doi: 10.1109/ICSTW.2010.59.

[15] Jonathan Hasenburg, Sebastian Werner, and David Bermbach. "FogExplorer". In: Proceedings of the 19th International Middleware Conference (Posters). Middleware '18. Rennes, France: ACM, 2018, pp. 1–2. isbn: 978-1-4503-6109-5. doi: 10.1145/3284014.3284015.
[16] Jonathan Hasenburg, Sebastian Werner, and David Bermbach. "Supporting the Evaluation of Fog-based IoT Applications During the Design Phase". In: Proceedings of the 5th Workshop on Middleware and Applications for the Internet of Things. M4IoT'18. Rennes, France: ACM, 2018, pp. 1–6. isbn: 978-1-4503-6118-7. doi: 10.1145/3286719.3286720.

[17] Jonathan Hasenburg et al. "MockFog: Emulating Fog Computing Infrastructure in the Cloud". In: 2019 IEEE International Conference on Fog Computing. IEEE, 2019, pp. 144–152. doi: 10.1109/ICFC.2019.00026.

[18] Raoufehsadat Hashemian et al. "Contention Aware Web of Things Emulation Testbed". In: Proceedings of the ACM/SPEC International Conference on Performance Engineering. ACM, 2020, pp. 246–256. doi: 10.1145/3358960.3379140.

[19] Raoufeh Hashemian et al. "WoTbench: A Benchmarking Framework for the Web of Things". In: Proceedings of the 9th International Conference on the Internet of Things. ACM, 2019. doi: 10.1145/3365871.3365897.

[20] Alefiya Hussain et al. "Toward Orchestration of Complex Networking Experiments". In: 13th USENIX Workshop on Cyber Security Experimentation and Test. USENIX, 2020.

[21] A. Kajackas and R. Rainys. "Internet Infrastructure Topology Assessment". In: Electronics and Electrical Engineering 7.103 (2010), p. 4.

[22] Vasileios Karagiannis and Stefan Schulte. "Comparison of Alternative Architectures in Fog Computing". In: 2020 IEEE 4th International Conference on Fog and Edge Computing (ICFEC). IEEE, 2020, pp. 19–28. doi: 10.1109/ICFEC50348.2020.00010.

[23] Shweta Khare et al. "Linearize, Predict and Place: Minimizing the Makespan for Edge-based Stream Processing of Directed Acyclic Graphs". In: Proceedings of the 4th ACM/IEEE Symposium on Edge Computing. ACM, 2019, pp. 1–14. doi: 10.1145/3318216.3363315.

[24] Isaac Lera, Carlos Guerrero, and Carlos Juiz. "YAFS: A Simulator for IoT Scenarios in Fog Computing". In: IEEE Access 7 (2019), pp. 91745–91758. doi: 10.1109/ACCESS.2019.2927895.

[25] Ze Li et al. "Gandalf: An Intelligent, End-To-End Analytics Service for Safe Deployment in Cloud-Scale Infrastructure". In: 17th USENIX Symposium on Networked Systems Design and Implementation. USENIX, 2020.

[26] Vilen Looga et al. "MAMMOTH: A massive-scale emulation platform for Internet of Things". In: 2012 IEEE 2nd International Conference on Cloud Computing and Intelligence Systems. IEEE, 2012, pp. 1235–1239. doi: 10.1109/CCIS.2012.6664581.

[27] Redowan Mahmud, Ramamohanarao Kotagiri, and Rajkumar Buyya. "Fog Computing: A Taxonomy, Survey and Future Directions". In: Internet of Everything: Algorithms, Methodologies, Technologies and Perspectives. Springer, 2018, pp. 103–130. arXiv: 1611.05539.

[28] Ruben Mayer et al. "EmuFog: Extensible and Scalable Emulation of Large-Scale Fog Computing Infrastructures". In: 2017 IEEE Fog World Congress. IEEE, 2017. doi: 10.1109/FWC.2017.8368525.

[29] Jonathan McChesney et al. "DeFog: fog computing benchmarks". In: Proceedings of the 4th ACM/IEEE Symposium on Edge Computing. ACM, 2019, pp. 47–58. doi: 10.1145/3318216.3363299.

[30] Kief Morris. Infrastructure as Code: Managing Servers in the Cloud. 1st ed. O'Reilly, 2016.

[31] Rogerio Leao Santos de Oliveira et al. "Using Mininet for Emulation and Prototyping Software-defined Networks". In: 2014 IEEE Colombian Conference on Communications and Computing. IEEE, 2014. doi: 10.1109/ColComCon.2014.6860404.

[32] Manuel Peuster, Johannes Kampmeyer, and Holger Karl. "Containernet 2.0: A Rapid Prototyping Platform for Hybrid Service Function Chains". In: 2018 4th IEEE Conference on Network Softwarization and Workshops. Montreal, QC: IEEE, 2018, pp. 335–337. doi: 10.1109/NETSOFT.2018.8459905.

[33] Manuel Peuster, Holger Karl, and Steven van Rossem. "MeDICINE: Rapid Prototyping of Production-ready Network Services in multi-PoP Environments". In: 2016 IEEE Conference on Network Function Virtualization and Software Defined Networks. IEEE, 2016, pp. 148–153. doi: 10.1109/NFV-SDN.2016.7919490.

[34] Tobias Pfandzelter and David Bermbach. "IoT Data Processing in the Fog: Functions, Streams, or Batch Processing?" In: 2019 IEEE International Conference on Fog Computing (ICFC). IEEE, 2019, pp. 201–206. doi: 10.1109/ICFC.2019.00033.

[35] Tobias Pfandzelter, Jonathan Hasenburg, and David Bermbach. From Zero to Fog: Efficient Engineering of Fog-Based IoT Applications. 2020. arXiv: 2008.07891 [cs.DC].
[36] Alina Quereilhac et al. "NEPI: An Integration Framework for Network Experimentation". In: 19th International Conference on Software, Telecommunications and Computer Networks. IEEE, 2011.

[37] Thierry Rakotoarivelo et al. "OMF: a control and management framework for networking testbeds". In: ACM SIGOPS Operating Systems Review 43.4 (2010), pp. 54–59. doi: 10.1145/1713254.1713267.

[38] Brian Ramprasad, Joydeep Mukherjee, and Marin Litoiu. "A Smart Testing Framework for IoT Applications". In: 2018 IEEE/ACM International Conference on Utility and Cloud Computing Companion. IEEE, 2018, pp. 252–257. doi: 10.1109/UCC-Companion.2018.00064.

[39] Brian Ramprasad et al. "EMU-IoT - A Virtual Internet of Things Lab". In: 2019 IEEE International Conference on Autonomic Computing. IEEE, 2019, pp. 73–83. doi: 10.1109/ICAC.2019.00019.

[40] Thomas Rausch et al. "Synthesizing Plausible Infrastructure Configurations for Evaluating Edge Computing Systems". In: 3rd USENIX Workshop on Hot Topics in Edge Computing. USENIX, 2020.

[41] Maria Salama, Yehia Elkhatib, and Gordon Blair. "IoTNetSim: A Modelling and Simulation Platform for End-to-End IoT Services and Networking". In: Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing. ACM, 2019, pp. 251–261. doi: 10.1145/3344341.3368820.

[42] Lidiane Santos et al. "An Architectural Style for Internet of Things Systems". In: Proceedings of the 35th Annual ACM Symposium on Applied Computing. ACM, 2020, pp. 1488–1497. doi: 10.1145/3341105.3374030.

[43] Lidiane Santos et al. "Identifying Requirements for Architectural Modeling in Internet of Things Applications". In: 2019 IEEE International Conference on Software Architecture Companion. IEEE, 2019, pp. 19–26. doi: 10.1109/ICSA-C.2019.00011.

[44] Mahadev Satyanarayanan et al. "The Case for VM-based Cloudlets in Mobile Computing". In: IEEE Pervasive Computing (2009), p. 9. doi: 10.1109/MPRV.2009.64.

[45] Gerald Schermann et al. "Bifrost: Supporting Continuous Deployment with Automated Enactment of Multi-Phase Live Testing Strategies". In: Proceedings of the 17th International Middleware Conference. ACM, 2016. doi: 10.1145/2988336.2988348.

[46] Shashank Shekhar et al. "URMILA: Dynamically trading-off fog and edge resources for performance and mobility-aware IoT services". In: Journal of Systems Architecture 107 (2020). doi: 10.1016/j.sysarc.2020.101710.

[47] Jonathan de Carvalho Silva et al. "LoRaWAN - A Low Power WAN Protocol for Internet of Things: a Review and Opportunities". In: 2017 2nd International Multidisciplinary Conference on Computer and Energy Science. IEEE, 2017.

[48] Olena Skarlat et al. "A Framework for Optimization, Service Placement, and Runtime Operation in the Fog". In: 2018 IEEE/ACM 11th International Conference on Utility and Cloud Computing (UCC). IEEE, 2018, pp. 164–173. doi: 10.1109/UCC.2018.00025.

[49] Olena Skarlat et al. "Optimized IoT service placement in the fog". In: Service Oriented Computing and Applications 11.4 (2017), pp. 427–443. doi: 10.1007/s11761-017-0219-8.

[50] Ariel Tseitlin. "The Antifragile Organization". In: Communications of the ACM 56.8 (2013), pp. 40–44. doi: 10.1145/2492007.2492022.

[51] Prateeksha Varshney and Yogesh Simmhan. "Demystifying Fog Computing: Characterizing Architectures, Applications and Abstractions". In: 2017 IEEE 1st International Conference on Fog and Edge Computing. IEEE, 2017, pp. 115–124. doi: 10.1109/ICFEC.2017.20.

[52] Philip Wette, Martin Draxler, and Arne Schwabe. "MaxiNet: Distributed emulation of software-defined networks". In: 2014 IFIP Networking Conference. IEEE, 2014. doi: 10.1109/IFIPNetworking.2014.6857078.

[53] Mario Winter et al. Der Integrationstest. Hanser, 2013.