Towards Security and Privacy for Pervasive Computing*

Roy Campbell, Jalal Al-Muhtadi, Prasad Naldurg, Geetanjali Sampemane, and M. Dennis Mickunas
{rhc, almuhtad, naldurg, geta, mickunas}@cs.uiuc.edu
1 Introduction
* This research is supported by a grant from the National Science Foundation, NSF CCR 0086094 ITR
and NSF 99-72884 EQ.
M. Okada et al. (Eds.): ISSS 2002, LNCS 2609, pp. 1–15, 2003.
© Springer-Verlag Berlin Heidelberg 2003
their preferences as well as performing tasks and group activities according to the
nature of the physical space. We term this dynamic, information-rich habitat an
“active space.” Within this space, individuals may interact with flexible applications
that may follow the user, define and control the function of the space, or collaborate
with remote users and applications.
The realization of this computing paradigm is not far-fetched. The average person today already owns numerous consumer devices, electronic gadgets, and gizmos with processors, microcontrollers, and memory chips embedded in them, such as VCRs, TVs, washers, and dryers. The vehicles we use on a daily basis contain a large number of embedded computers handling different subsystems, such as ABS (Anti-lock Braking System) and ESP (Electronic Stability Program). Technologies like Bluetooth [1] and Wi-Fi [2] make it possible to embed networking capabilities into even small devices without hassle. In effect, these technologies make networking far more general and achievable, even on elementary devices like toasters and paperclips.
In this section, we discuss the major challenges and requirements for securing pervasive computing environments.
2.1 Challenges
As mentioned before, the additional features and extended functionality that pervasive computing offers make it prone to additional vulnerabilities and exposures. Below, we outline the features that place an extra burden on the security subsystem.
In particular, because pervasive computing bridges the physical and virtual worlds, intruders in the physical world can threaten users’ data and programs in the virtual world. Therefore, traditional mechanisms that focus merely on digital security become inadequate.
Since most policy management tools deal with these low-level interfaces,
administrators may not have a clear picture of the ramifications of their policy
management actions. Dependencies among objects can lead to unexpected side effects
and undesirable behavior [11]. Further, the disclosure of security policies may be a
breach of security. For example, knowing whether the system is on the lookout for an
intruder could actually be a secret. Thus, unauthorized personnel should not be able to
know what the security policy might become under a certain circumstance.
To deal with the new vulnerabilities introduced by pervasive computing, security and
privacy guarantees in pervasive computing environments should be specified and
drafted early into the design process rather than being considered as add-ons or
afterthoughts. Previous efforts to retrofit security and anonymity into existing systems have proved inefficient and ineffective; the Internet and Wi-Fi are two examples, both of which still suffer from inadequate security. In this section, we
briefly mention the important requirements needed for a security subsystem for
pervasive computing environments.
2.2.2. Multilevel
When it comes to security, one size does not fit all. Hence, the security architecture
deployed should be able to provide different levels of security services based on
system policy, context information, environmental situations, temporal circumstances,
available resources, etc. In some instances, this may go against the previous point.
Scenarios that require a higher level of assurance or greater security may require users to interact with the security subsystem explicitly by, say, authenticating themselves using a variety of means to boost the system’s confidence.
2.2.3. Context-Awareness
Traditional security is often static and context-insensitive. Pervasive computing, by contrast, integrates context and situational information, transforming the computing environment into a sentient space, and security is no exception.
Security services should make extensive use of context information available. For
example, access control decisions may depend on time or special circumstances.
Context data can provide valuable information for intrusion detection mechanisms.
The principle of “need to know” should be applied on a temporal and situational basis.
For instance, security policies should be able to change dynamically to limit the
permissions to the times or situations in which they are needed. However, it should not be possible to view what the security policy might become at a particular time or under a particular situation. In addition, the authenticity and integrity of the acquired context information must be verifiable, in order to thwart false context information originating from rogue or malfunctioning sensors.
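As an illustration of such temporally and situationally scoped permissions, the following minimal Python sketch shows a permission that holds only within a time window and a set of situations. The class, the resource name, and the time window are hypothetical stand-ins, not part of Gaia:

```python
from datetime import time

class ContextualPermission:
    """A permission valid only within a time window and a set of situations."""
    def __init__(self, action, start, end, situations):
        self.action = action
        self.start = start            # earliest time the permission applies
        self.end = end                # latest time the permission applies
        self.situations = situations  # e.g. {"lecture", "meeting"}

    def allows(self, action, now, situation):
        # Grant only when the action, the current time, and the
        # situational context all match the policy.
        return (action == self.action
                and self.start <= now <= self.end
                and situation in self.situations)

# Hypothetical policy: projector control is permitted only during
# lectures between 09:00 and 17:00.
perm = ContextualPermission("use-projector", time(9), time(17), {"lecture"})
print(perm.allows("use-projector", time(10, 30), "lecture"))  # True
print(perm.allows("use-projector", time(10, 30), "break"))    # False
print(perm.allows("use-projector", time(20, 0), "lecture"))   # False
```

A policy engine that re-evaluates such predicates on each request, rather than caching a static decision, is what lets permissions shrink automatically outside their intended times and situations.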
2.2.5. Interoperability
With many different security technologies surfacing and being deployed, the
assumption that a particular security mechanism will eventually prevail is flawed. For
that reason, it is necessary to support multiple security mechanisms and negotiate
security requirements.
2.2.7. Scalability
Pervasive computing environments can host hundreds or thousands of diverse
devices. The security services should be able to scale to the “dust” of mobile and embedded devices present at any particular instant. In addition, the
security services need to be able to support huge numbers of users with different roles
and privileges, under different situational information.
In the following section, we suggest solutions that address some of the issues
mentioned above.
Some means of authentication are less reliable and secure than others. For example, it is easy for smart badges to be misplaced or stolen. On the other hand, biometrics, retinal scans for instance, provide a fairly strong means of authentication that is difficult to forge. Because authentication methods differ in strength, it is sensible to accommodate different levels of confidence and to incorporate context and sensor information to infer more about, or build up additional confidence in, a principal’s identity. Further, the same techniques can assist in detecting intruders and unauthorized accesses and in assessing their threat level.
The various means of authenticating principals and the notion of different
confidence levels associated with authenticated principals constitute additional
information that can enrich the context awareness of smart spaces. In a later section,
we illustrate how such information is inferred and exchanged with other Gaia core
services.
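One simple way to fuse confidence values from several independent authentication methods, offered here purely as an illustrative sketch and not as Gaia’s actual inference rule, is to multiply the residual doubt each method leaves behind:

```python
def combined_confidence(confidences):
    """Fuse confidence values from independent authentication methods:
    the chance that every method is simultaneously wrong shrinks as
    more evidence accumulates."""
    residual_doubt = 1.0
    for c in confidences:
        residual_doubt *= (1.0 - c)
    return 1.0 - residual_doubt

# A smart badge alone gives weak assurance; adding a password and a
# fingerprint scan raises overall confidence in the principal.
print(round(combined_confidence([0.6]), 3))            # 0.6
print(round(combined_confidence([0.6, 0.8]), 3))       # 0.92
print(round(combined_confidence([0.6, 0.8, 0.9]), 3))  # 0.992
```

The individual confidence values (0.6 for a badge, and so on) are hypothetical; in practice they would be assigned per mechanism by the security administrator or inferred from context.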
To meet the stated requirements we propose a federated authentication service that
is based on distributed, pluggable authentication modules. Fig. 1 provides a sketch of
the authentication architecture that incorporates the objectives mentioned above.
PAM (Pluggable Authentication Module) [17] provides an authentication method that
allows the separation of applications from the actual authentication mechanisms and
devices. Dynamically pluggable modules allow the authentication subsystem to
incorporate additional authentication mechanisms on the fly as they become available.
The Gaia PAM (GPAM) is wrapped by two APIs. One interface is made available for
applications, services, and other Gaia components, to request authentication of entities
or inquire about authenticated principals. Since the authentication service can be
running anywhere in the space (possibly federated) we use CORBA facilities to allow
the discovery and remote invocation of the authentication services that serve a
particular smart space. The authentication modules themselves are divided into two types. Gaia Authentication Mechanism Modules (AMMs) implement general authentication mechanisms or protocols that are independent of the actual device being used for authentication; these include a Kerberos authentication module, a SESAME [18] authentication module, a traditional username/password-based module, a challenge-response module based on a shared secret, and so on. Authentication Device Modules (ADMs), by contrast, are independent of the authentication protocol and depend instead on the particular authentication device.
This decoupling enables greater flexibility. When a new authentication protocol is
devised, an AMM module can be written and plugged in to support that particular
protocol. Devices that can capture the information required for completing the
protocol can use the new authentication module with minimal changes to their device
drivers. When a new authentication device is added to the system, a new ADM module is implemented to incorporate the device into the active space.
However, the device can use existing security mechanisms by using CORBA facilities
to discover and invoke authentication mechanisms that are compatible with its
capabilities. In effect, this creates an architecture similar to PAM and is also federated
through the use of CORBA. Many CORBA implementations are heavyweight and
require significant resources. To overcome this hurdle, we used the Universally
Interoperable Core (UIC), which provides a lightweight, high-performance
implementation of basic CORBA services [19].
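The AMM/ADM decoupling can be sketched in a few lines of Python. The class names, the in-memory credential table, and the password mechanism below are illustrative stand-ins for the real CORBA-based modules:

```python
SECRETS = {"alice": "s3cret"}  # hypothetical in-memory credential table

class PasswordAMM:
    """AMM: a device-independent authentication mechanism."""
    def verify(self, identity, secret):
        # A real module would consult Kerberos, SESAME, etc.;
        # here the table above stands in for a credential store.
        return SECRETS.get(identity) == secret

class DeviceADM:
    """ADM: captures raw input from one kind of device and forwards it
    to any compatible mechanism module discovered at runtime."""
    def __init__(self, mechanism):
        self.mechanism = mechanism  # plugged in dynamically
    def authenticate(self, identity, captured_input):
        return self.mechanism.verify(identity, captured_input)

# A keyboard ADM and a PDA ADM can share the same password AMM,
# so adding a device never requires a new mechanism (and vice versa).
password_amm = PasswordAMM()
keyboard = DeviceADM(password_amm)
pda = DeviceADM(password_amm)
print(keyboard.authenticate("alice", "s3cret"))  # True
print(pda.authenticate("alice", "wrong"))        # False
```

The design point the sketch captures is that the ADM holds only a reference to a mechanism interface, so either side can be replaced independently, which is exactly what the dynamically pluggable modules achieve.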
Fig. 1. The Gaia authentication architecture. A CORBA-based GPAM API exposes the service to applications; mechanism modules (AMMs) such as Kerberos, SESAME, password, digital signatures, and challenge-response sit above device modules (ADMs) for smart badges, smart watches, PDAs, and fingerprint scanners, which connect users’ authentication devices to the service.
To address the privacy problems in pervasive computing, we introduce Mist [20, 21], a general communication infrastructure that preserves privacy in pervasive computing environments. Mist facilitates the separation of location from identity. This allows
authorized entities to access services while protecting their location privacy. Here, we
just give a brief overview of how Mist works. Mist consists of a privacy-preserving
hierarchy of Mist Routers that form an overlay network, as illustrated in Fig. 2. This
overlay network facilitates private communication by routing packets using a hop-by-
hop, handle-based routing protocol. We employ public key cryptography in the initial
setup of these handles. These techniques make communication infeasible to trace by
eavesdroppers and untrusted third parties.
A handle is an identifier that is unique per Mist Router. Every incoming packet has
an “incoming handle” that is used by the Mist Router to identify the next hop to
which to forward the packet. The incoming handle is replaced by an outgoing handle
before the packet is transmitted to the next hop. This hop-by-hop routing protocol
allows a Mist Router to forward the packet to the next hop, while hiding the original
source and final destination. In effect, this process creates “virtual circuits” over
which data can flow securely and privately.
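The hop-by-hop handle rewriting described above can be sketched as follows. The router names and handle values are hypothetical, and the sketch omits the public-key handle setup and encryption that the real protocol uses:

```python
class MistRouter:
    """Forwards packets by rewriting per-hop handles, so no single
    router sees both the original source and the final destination."""
    def __init__(self, name):
        self.name = name
        self.table = {}  # incoming handle -> (next router, outgoing handle)

    def register(self, in_handle, next_router, out_handle):
        self.table[in_handle] = (next_router, out_handle)

    def forward(self, in_handle, payload):
        next_hop, out_handle = self.table[in_handle]
        if next_hop is None:       # end of the virtual circuit
            return payload
        # Replace the incoming handle with the outgoing handle
        # before passing the packet to the next hop.
        return next_hop.forward(out_handle, payload)

# A three-hop virtual circuit with made-up handle values.
floor = MistRouter("3rd Floor")
building = MistRouter("CS Building")
campus = MistRouter("Campus Lighthouse")
floor.register(192, building, 789)
building.register(789, campus, 910)
campus.register(910, None, None)
print(floor.forward(192, "hello"))  # the payload arrives end to end
```

Because each table entry maps only a local handle to the next hop, compromising one router reveals a single link of the circuit, which is the source of the untraceability property claimed above.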
Mist introduces Portals that are installed at various locations in the pervasive
computing environment. These Portals are devices capable of detecting the presence
of people and objects through the use of base stations or sensors. However, they are
Fig. 2. The Mist communication hierarchy. Portals (P) detect Alice’s presence, and handle-based hop-by-hop routes (e.g., handles 51, 192, 789) pass through the 3rd Floor’s Mist Router and the CS Building’s Mist Router up to Alice’s Campus Lighthouse Mist Router.
In a secure (or safe) access control matrix, the state transitions in the model add only authorized
access rights to the matrix. In other words, an access control model is secure if all
access rights are authorized. This property has to be preserved by the access control
system even when the state of the system changes dynamically, due to users and
devices entering and leaving an active space. The definition of authorized access right
depends on the type of access control policy. For example, in a MAC (Mandatory Access Control) system, only an administrator is authorized to add a new access right; in a DAC (Discretionary Access Control) system, object owners can add access rights.
In order to enforce the access control safety property, we annotate the specification
we built earlier with authorization proofs. These proofs are subroutines that use
credentials to attest the ownership of an object by a subject (for DAC) or the type of
subject (for MAC). The credentials are cryptographically protected by digital
signatures and/or encryption. All state transitions in the access control specification are rewritten as guarded commands. The guards verify the access control safety condition by validating authorization proofs. The commands (which correspond to methods that change state variables) are executed only if the guard can be validated.
This annotation (the guard) to each dynamic state transition automatically guarantees
the safety properties even when the access matrix is allowed to change dynamically,
preserving the security of the system at all times.
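The guarded-command annotation can be illustrated with a small Python sketch. The credential representation and resource names are hypothetical, and real authorization proofs are cryptographically signed rather than plain tuples:

```python
class AccessMatrix:
    def __init__(self):
        self.rights = set()  # (subject, object, right) triples

def owns(credentials, subject, obj):
    """Authorization proof for DAC: a credential (signed, in the real
    system) attests that subject owns obj."""
    return (subject, obj) in credentials

def grant(matrix, credentials, owner, grantee, obj, right):
    """Guarded command: the state change runs only if the guard
    (the authorization proof) validates, so every right added to
    the matrix is authorized by construction."""
    if not owns(credentials, owner, obj):
        raise PermissionError("guard failed: unauthorized grant")
    matrix.rights.add((grantee, obj, right))

# Hypothetical example: alice owns the projector and delegates use to bob.
creds = {("alice", "projector")}
m = AccessMatrix()
grant(m, creds, "alice", "bob", "projector", "use")
print(("bob", "projector", "use") in m.rights)  # True
```

Because the guard runs on every transition, the safety property (only authorized rights enter the matrix) holds in every reachable state, which is the point made above about dynamically changing access matrices.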
Similar to access control safety, we have developed dynamic policies for some
information flow and availability properties. These security properties are a
combination of safety and liveness properties and include temporal quantifiers. At a
more fundamental level, we argue that dynamic environments require dynamic
security solutions. Dynamic policies enable administrators to react to vulnerabilities
detected by IDS and risk analyzers with greater confidence. By including temporal
properties in our design of security policies, we can change our system
implementations in a controlled manner and turn on restrictive, attack-resilient policies at will, without sacrificing security guarantees. This dynamism also allows us to revert to default policies after an attack has been mitigated, letting us implement minimal security solutions on a need-to-protect basis and amortize performance penalties.
Smart rooms are typically shared by different groups of users for different activities at
different points in time. Each activity-specific incarnation of an active space (such as
a “classroom” or a “meeting”) is called a “Virtual Space”. Access control policies and
mechanisms are necessary to ensure that users only use the resources (both hardware
and software) in an active space in authorized ways, and to allow shared use of the
space. These different virtual spaces have varying access control requirements, and
the access control policies and mechanisms should allow seamless transitions between
virtual spaces, without sacrificing security properties. In addition, the policies should
be easy to configure, enforce, and administer.
We are in the process of developing an access control system [25] for Gaia. Our
access control system supports customizable access control policies, and integrates
physical and virtual aspects of the environment into the access control decision
mechanisms. It can reconfigure an active space dynamically to support different
access control policies for different virtual spaces, depending on the set of users and
the activity being undertaken in the space. We also provide dynamic support for
explicit cooperation between different users, groups of users, and devices.
Our system uses the RBAC model [26] for policy configuration, and implements
both discretionary and mandatory access controls, providing flexibility and
expressiveness. We define three types of roles: system, space, and application roles. Each role can be managed by a different administrator, thus allowing for
decentralized administration.
Within each active space, access control policies are expressed in terms of space
roles. The space administrator sets access control policies for resources within a
particular space. These policies are in the form of access lists for resources within the
space, expressed in terms of space roles and permissions. When users enter a space,
their system role is mapped into an appropriate space role automatically.
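The system-to-space role mapping can be sketched as follows. The mapping table, the access lists, and the default “visitor” role are hypothetical examples; in Gaia the space administrator would define them per active space:

```python
# Hypothetical mapping from system roles to space roles.
SPACE_ROLE_MAP = {"faculty": "presenter", "student": "audience"}

# Access lists are expressed purely in terms of space roles.
ACL = {"projector": {"presenter"}, "whiteboard": {"presenter", "audience"}}

def enter_space(system_role):
    """On entry, a user's system role is mapped to a space role;
    unknown roles fall back to an assumed least-privileged default."""
    return SPACE_ROLE_MAP.get(system_role, "visitor")

def can_use(space_role, resource):
    return space_role in ACL.get(resource, set())

role = enter_space("faculty")
print(role, can_use(role, "projector"))              # presenter True
print(can_use(enter_space("student"), "projector"))  # False
```

Keeping the ACLs in terms of space roles means the same policy file works for any group of visitors: only the mapping table changes as users enter and leave.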
We also build an explicit notion of cooperation into our access control model using
the concept of the space mode. We define four distinct modes of collaboration in our
model: individual, shared, supervised-use, and collaborative, corresponding to
different levels of cooperation between users in the space who do not necessarily trust
each other. The mode of the space depends on the users within the space and the
activity being performed.
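A toy rule for deriving the space mode from occupancy and activity might look like the following sketch. The thresholds and activity names are hypothetical; the real policy that selects the mode is considerably richer:

```python
from enum import Enum

class SpaceMode(Enum):
    INDIVIDUAL = 1
    SHARED = 2
    SUPERVISED_USE = 3
    COLLABORATIVE = 4

def infer_mode(users, activity):
    """Choose a space mode from who is present and what they are doing
    (illustrative rules only)."""
    if len(users) == 1:
        return SpaceMode.INDIVIDUAL      # sole occupant gets full use
    if activity == "lecture":
        return SpaceMode.SUPERVISED_USE  # one principal supervises others
    if activity == "meeting":
        return SpaceMode.COLLABORATIVE   # mutually cooperating peers
    return SpaceMode.SHARED              # co-located but independent users

print(infer_mode(["alice"], "work"))            # SpaceMode.INDIVIDUAL
print(infer_mode(["alice", "bob"], "meeting"))  # SpaceMode.COLLABORATIVE
```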
This model of access control is useful in developing access control policies that are
appropriate for collaborative applications that are common in such environments.
The shift to the pervasive computing paradigm brings forth new challenges to security
and privacy, which cannot be addressed by mere adaptation of existing security and
privacy mechanisms. Unless security concerns are accommodated early in the design
phase, pervasive computing environments will be rife with vulnerabilities and
exposures. In this paper we talked about some of these challenges. We also presented
some solutions in our prototype implementation.
The construction of complete, integrated pervasive environments and their real-life deployment still lie in the future. Security in pervasive computing is expected to be an integral part of the whole system, a goal that has not yet been realized. It should be
noted, however, that there is no single “magical” protocol or mechanism that can
address all the security issues and meet the requirements and expectations of secure
pervasive computing. Moreover, security itself consists of a variety of different and
broad aspects each of which requires detailed research and customized solutions. For
these reasons, our prototype implementations are not meant to be a solution for all
problems. Instead, they represent milestones towards the construction of a full-
fledged security subsystem.
Promising future directions include the development of formal specifications of
desirable behavior in the form of security and privacy properties in pervasive
computing. Access control, information flow, availability, and secure protocols for
authentication, non-repudiation, confidentiality and integrity can be specified in terms
of system properties such as safety and liveness. It is also promising to incorporate
intelligence and automated reasoning into the security architecture. This “intelligent”
security system would be able to make judgments and give assistance in securing the
environment without too much intervention by users or administrators. Therefore, we
are exploring the possibility of incorporating automated reasoning and learning into
the active spaces security architecture, enabling it to perform intelligent inferences
under different contexts despite the uncertainties that arise as a result of bridging the
physical and virtual worlds.
We are also looking into the development of several middleware reflective object-
oriented patterns that can support the different aspects of security, including
authentication, access control, anonymity, and policy management, as well as how to
instantiate them with diverse mechanisms. Finally, because it is difficult to develop
security models that involve people and physical spaces, more studies on integrating
virtual and physical security need to be considered.
References
[16] L. Zadeh, "Fuzzy sets as basis for a theory of possibility," Fuzzy Sets and Systems, vol. 1,
pp. 3–28, 1978.
[17] V. Samar and R. Schemers, "Unified Login with Pluggable Authentication Modules (PAM)," Open Software Foundation RFC 86.0, 1995.
[18] P. Kaijser, T. Parker, and D. Pinkas, "SESAME: The Solution to Security for Open
Distributed Systems," Computer Communications, vol. 17, pp. 501–518, 1994.
[19] M. Roman, F. Kon, and R. H. Campbell, "Reflective Middleware: From Your Desk to
Your Hand," IEEE Distributed Systems Online Journal, Special Issue on Reflective
Middleware, 2001.
[20] J. Al-Muhtadi, R. Campbell, A. Kapadia, D. Mickunas, and S. Yi, "Routing Through the
Mist: Privacy Preserving Communication in Ubiquitous Computing Environments,"
presented at International Conference of Distributed Computing Systems (ICDCS 2002),
Vienna, Austria, 2002.
[21] J. Al-Muhtadi, R. Campbell, A. Kapadia, D. Mickunas, and S. Yi, "Routing through the
Mist: Design and Implementation," UIUCDCS-R-2002-2267, March 2002.
[22] Z. Liu, P. Naldurg, S. Yi, R. H. Campbell, and M. D. Mickunas, "An Agent Based
Architecture for Supporting Application Level Security," presented at DARPA
Information Survivability Conference (DISCEX 2000), Hilton Head Island, South
Carolina, 2000.
[23] P. Naldurg and R. Campbell, "Dynamic Access Control Policies in Seraphim,"
Department of Computer Science, University of Illinois at Urbana-Champaign
UIUCDCS-R-2002-2260, 2002.
[24] P. Naldurg, R. Campbell, and M. D. Mickunas, "Developing Dynamic Security Policies,"
presented at Proceedings of the 2002 DARPA Active Networks Conference and
Exposition (DANCE 2002), San Francisco, CA, USA, 2002.
[25] G. Sampemane, P. Naldurg, and R. Campbell, "Access Control for Active Spaces,"
presented at the Annual Computer Security Applications Conference (ACSAC), Las
Vegas, NV, 2002.
[26] R. Sandhu, E. Coyne, H. Feinstein, and C. Youman, "Role-Based Access Control Models," IEEE Computer, vol. 29, no. 2, pp. 38–47, 1996.