Privacy, Security and Trust
edited by
Philip Robinson
University of Karlsruhe, Germany
Harald Vogt
ETH Zürich, Switzerland
Waleed Wagealla
University of Strathclyde, Glasgow, UK
Springer
eBook ISBN: 0-387-23462-4
Print ISBN: 0-387-23461-6
Acknowledgments
We would like to thank all authors for submitting their work, and
all members of the Program Committee, listed below, for their cooper-
ation and time spent reviewing submissions. Finally, we thank Kelvin Institute, Microsoft Research, and SAP for financially supporting the publication of the proceedings of the SPPC 2004 workshop.
Program Committee
Jochen Haller (SAP Corporate Research, Germany)
Adolf Hohl (University of Freiburg, Germany)
Giovanni Iachello (Georgia Tech, USA)
Roger Kilian-Kehr (SAP Corporate Research, Germany)
Marc Langheinrich (ETH Zürich, Switzerland)
Joachim Posegga (University of Hamburg, Germany)
Alf Zugenmaier (Microsoft Research, Cambridge, UK)
2 Department of Computer Science, ETH Zürich, Switzerland
[email protected]
3 Department of Computer and Information Sciences, University of Strathclyde, Glasgow, UK
[email protected]
Abstract Privacy, security and trust have become high-priority topics on the research agenda of pervasive computing. Recent publications have suggested that there is, or at least needs to be, a relationship between research in these areas and activities in context awareness. The approach of the workshop on which these proceedings report was to investigate the possible interfaces between these different research strands in pervasive computing and to define how their concepts may interoperate. This first article is therefore the introduction to and overview of the workshop, providing some background on pervasive computing and its challenges.
1. Introduction
We are currently experiencing a bridging of human-centered, socially
oriented security concerns with the technical protection mechanisms de-
veloped for computer devices, data and networks. The foundations of
this bridge started with the Internet as people, both purposely and ac-
cidentally, provided gateways to their personal computers and hence in-
formation. With enterprise-scale and even personal firewalls, providing
a rule-controlled entry point into network domains, as well as crypto-
graphic means of ensuring secrecy, many attacks on computer applica-
tions and data were circumvented, given that people behind the virtual
may have their own power supply, memory, custom OS, and network interfaces. Embedded computing has been considered a contributor to pervasive computing, and many pervasive systems are built by creating a distributed network of micro nodes, each with a special purpose. There is still a need, however, to coordinate and make sense of the interaction between these small computers through a more powerful system. Because these embedded systems are so small and resource-limited, they do not support heavyweight cryptographic protocols. Nevertheless, they may store data fragments that can be reconstructed by any system capable of coordinating their interaction. There is therefore some concern that pervasive computing systems may ignore privacy, security and trust requirements at this very low level, either because addressing them is too complex or because it is technically infeasible.
3.2 Privacy
Technical solutions to the privacy problems in ubiquitous computing
cannot stand on their own to protect our privacy rights. Privacy protec-
tion has always been the subject of legislation, since there is an inherent
conflict in service provisioning: personal data must be collected in order
to adapt services to the users’ needs and preferences, but once given
away, there is no technical procedure to revoke it or detain somebody
from passing it on. Technology makes collecting data easy but cannot
help protecting it against abuse. Thus traditionally, solutions rely on
binding the collector of personal data by law to certain procedures, for
example obfuscation (by anonymizing the collected data) or deletion
after a certain time period.
However, data collectors must be enabled to meet the standards set by legislation and market forces, and technology can help in this regard. This potentially leads to systems that are both easy to implement, and therefore cost-efficient and widely usable, and compliant with privacy regulations.
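As an illustration of such technical support, consider the two procedures named above, obfuscation and timed deletion. The following sketch (ours, not from any system in these proceedings; the names and the retention period are illustrative) pseudonymizes identifiers with a salted one-way hash and purges records once they exceed a retention period:

    import hashlib
    import time

    RETENTION_SECONDS = 30 * 24 * 3600  # illustrative 30-day retention period

    def pseudonymize(user_id: str, salt: bytes) -> str:
        """Replace a direct identifier with a salted one-way hash (obfuscation)."""
        return hashlib.sha256(salt + user_id.encode()).hexdigest()

    def purge_expired(records: list) -> list:
        """Keep only records whose 'created' timestamp lies within the retention period."""
        now = time.time()
        return [r for r in records if now - r["created"] < RETENTION_SECONDS]

Such mechanisms do not replace the legal obligation; they merely make compliance with it cheap and auditable.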
3.3 Security
A system is generally called secure if measures are taken to avoid a "bad" outcome, where the definition of bad depends greatly on the application scenario. The accepted concepts of security include availability, authenticity, authority, integrity, confidentiality and reliability, with their relative significance depending on the task at hand. A great many security mechanisms supporting these concepts have been developed, especially since the growth of the Internet, and they have gained wide acceptance in military, business and consumer applications. Examples range from tamper-resistant devices, cryptography and security protocols to intrusion detection systems. All these techniques will be crucial for securing pervasive computing systems, but existing incarnations are not all equally applicable. Security mechanisms for pervasive environments must be
3.4 Trust
Trust is a multidisciplinary concept that has been used in the fields of sociology, psychology, philosophy, and most recently computing. Within these disciplines, trust is defined and treated from different angles that reflect its uses and applications. Although there is no consensus on a definition of trust, there is general agreement on its properties as a subjective and elusive notion. In these proceedings, contributions are concerned with the use of trust in pervasive computing. The application of trust in computing is widely acknowledged under the term trust management [2]. This term has emerged as a new concept in computing, supporting descriptions of how to facilitate trust relationships between entities. The establishment of trust enables systems to exchange information even without the intervention of administrators to authorize these interactions.
The application of trust management systems and models in pervasive computing concerns how to grant users access to resources and information based on their trustworthiness, rather than through conventional techniques that map authorizations to access rights. The view of trust management systems is that trust is used as a measure of how many resources, or what types of information, may be disclosed to others. This fits the domain of pervasive computing quite well, since there is no fixed infrastructure and entities are not attached to specific domains from which information about identities could be obtained. There are also potential interactions with huge numbers of autonomous entities, and these interactions are triggered and established in an ad-hoc manner. Therefore, to facilitate interactions in pervasive computing, trust management is considered the most appealing approach to reasoning about potential users' trustworthiness when granting them access to the required resources. Trust management aids in taking autonomous decisions on whom to trust and to what degree. These decisions embody reasoning about both the trustworthiness of the parties and the risk involved in the interactions between them.
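To make this concrete, a decision of this kind can be reduced to weighing a trust value against the risk of the interaction. The following is a minimal sketch of such a rule, assuming (our assumption, not a model from these proceedings) that trust and risk are scalars in [0, 1]:

    def grant_access(trust: float, risk: float, threshold: float = 0.5) -> bool:
        """Grant access when the requester's trust outweighs the interaction's risk."""
        return trust * (1.0 - risk) >= threshold

    grant_access(trust=0.9, risk=0.3)  # True:  0.63 >= 0.5
    grant_access(trust=0.6, risk=0.5)  # False: 0.30 <  0.5

Real trust management systems replace the scalars with richer evidence structures, but the shape of the decision is the same.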
To illustrate the exploitation of trust, let's consider the example of an interaction between the agents of two users (systems working on the users' behalf) carried out using their PDAs. Assume
4. Outline of Proceedings
In the workshop's call for papers we posed many questions about the possible interfaces between context, security, privacy, and trust. We, as organizers and program committee members, felt that the concerns of security and privacy in pervasive computing would come out clearly if interfaces were defined and considered within the proposed pro-
Figure 3. The view on possible interfaces between context, trust, privacy and security
and to what degree/level entities will be trusted, and what types of security policies could be specified within a specific context. The influence of context shows the need for a defined interface in the domain of pervasive computing. The discussion of context influences raises debatable questions, for example how context information should be combined with systems and applications. The answers to these questions are application-specific.
2 Secure Trust Models and Management in Pervasive Computing. Security in pervasive computing is not about a mapping from authentication to access rights or privileges; it is about how much we trust users, infrastructure and systems. Trust expresses the level of access to resources that can be granted based on the available information and evidence. Trust information is mandatory for reaching decisions. For trust management to be effective, the contextualization of trust is an important step in building an appropriate level of trust in others. Trust management combines the notion of specifying security policy with mechanisms for specifying security credentials. To achieve this we also need information about trust in the infrastructure, and we need to express how confident we are in the authentication and entity recognition systems. Trust can prevent the loss of privacy by striking a balance between users' trustworthiness and how much is revealed. This discussion shows clearly how trust, in combination with context, can adjust and control privacy and security.
3 Evidence, Authentication and Identity. The process of authentication (authentication techniques differ considerably in pervasive computing) involves collecting evidence about the identity of users. Both trust and context information weigh heavily in the process of authentication, because they give insight into a user's identity. The concerns of identity in pervasive computing are much bigger than in other application domains, because in pervasive computing there are huge numbers of potential users about whom we may not have enough information. Therefore, contextual information, trust information, and evidence form the basis for evaluating and reasoning about identity. An adequate level of available information and evidence will facilitate the process of authorization. The relationship between evidence, authentication, and identity can be considered a dependency relationship, in the sense that evidence is required for the process of authentication, which in turn provides a valid identity.
4 Social and Technical Approaches to Privacy Protection. With the advances of technology, privacy solutions have to consider both technical and social approaches. This consideration is important for pervasive computing to be socially acceptable. At the same time, both the confidentiality and the integrity of the information must be controlled. The
References
[1] Ambient Agoras. German Fraunhofer Gesellschaft (FhG), Darmstadt.
http://www.ambient-agoras.org/.
[2] Matt Blaze, Joan Feigenbaum, and Jack Lacy. Decentralized trust management. In Proceedings of the 1996 IEEE Symposium on Security and Privacy, pages 164–173, Los Alamitos, USA, May 1996.
[3] V. Cahill, E. Gray, J.-M. Seigneur, C. Jensen, Y. Chen, B. Shand, N. Dimmock, A. Twigg, J. Bacon, C. English, W. Wagealla, S. Terzis, P. Nixon, G. Serugendo, C. Bryce, M. Carbone, K. Krukow, and M. Nielsen. Using Trust for Secure Collaboration in Uncertain Environments. IEEE Pervasive Computing, 2(3):52–61, 2003.
[4] Anind K. Dey. Understanding and Using Context. Personal and Ubiquitous
Computing Journal, 5(1):4–7, 2001.
[5] H. Ishii and B. Ullmer. Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms. In Proceedings of the Conference on Human Factors in Computing Systems (CHI), pages 234–241, 1997.
[6] Lalana Kagal, Jeffrey L Undercoffer, Filip Perich, Anupam Joshi, and Tim Finin.
A Security Architecture Based on Trust Management for Pervasive Computing
Systems. In Grace Hopper Celebration of Women in Computing, October 2002.
[7] David Kirsh. Adaptive Rooms, Virtual Collaboration, and Cognitive Workflow.
In Cooperative Buildings. Integrating Information, Organizations, and Architec-
ture, number 1670 in LNCS. Springer-Verlag, 1999.
[8] Steve Mann. WearComp.org, WearCam.org, UTWCHI, and Steve Mann’s Per-
sonal Web Page - research, 2004.
[9] Nexus. University of Stuttgart, Germany. http://nexus.informatik.uni-stuttgart.de/.
[10] Bruce Schechter. Seeing the light: IBM’s vision of life beyond the PC, 1999.
[11] Smart Spaces. National Institute of Standards and Technology.
http://www.nist.gov/smartspace/.
[12] SECURE Project Official Website. http://secure.dsg.cs.tcd.ie, 2002.
[13] Mark Weiser. The Computer for the 21st Century. Scientific American, pages
66–75, September 1991.
2 GKEC, TU Darmstadt, Germany
{aheine,terpstra}@gkec.tu-darmstadt.de
Abstract The goal of ubiquitous computing research is to refine devices to the point where their use is transparent. For many applications of mobile devices, transparent operation requires that the device be location-aware. Unfortunately, the location of an individual can be used to infer highly private information. Hence, these devices must be carefully designed, lest they become a ubiquitous surveillance system.
This paper gives an overview of existing location-sensing mobile devices, vectors for a privacy invasion, and proposed solutions. Particular attention is paid to the required infrastructure and the accuracy of the location information which can be stolen. Solutions are examined from the perspective of attacks which can reasonably be expected against these systems.
1. Introduction
The proliferation of portable electronic devices into our day-to-day lives has introduced many unresolved privacy concerns. The principal concern in this paper is that these devices are increasingly equipped with communication capabilities and location awareness. While these features enable a wide array of new quality-of-life enhancing applications, they also present new threats. We must be careful that the potential quality of life lost through the surrender of private information does not overwhelm the benefits.
For indoor use, the Active Badges [23] were developed at AT&T Laboratories Cambridge. These are small devices worn by individuals which actively transmit an identifier via infrared. This information is received by sensors deployed in the environment. The system provides essentially room-level resolution and has problems following individuals due to the infrequency of updates. The environment consolidates this information and can provide the current location of an individual.
A later refinement, the Bat [24], increased the detected resolution. With the increased resolution, the Bat can be used to touch virtual hot spots. Their work reports accuracy as good as 4 cm. These refined devices use ultrasonic pings similar to bat sonar. However, once again the environment measures the Bat's location, as opposed to real bats, which learn about their environment.
The Cricket Location-Support System [18] takes a similar approach. It uses radio and ultrasonic waves to determine distance and thus location. Like the Cambridge Bat, resolution to within inches is possible. Unlike the Cambridge work, however, beacons are placed in the environment rather than on individuals. The Cricket devices carried by individuals listen to their environment in order to determine their location. In this way, the device knows its location, while the environment does not.
An approach to location sensing which does not require new infrastructure is taken by Carnegie Mellon University [21]. Here, the existing wireless LAN is observed by devices to recognize their location. By passively observing the signal strengths of various base stations, a device can determine its location. Though there is no requirement for new infrastructure, there is a training overhead. During training, a virtual map of signals is created which is used by the devices to determine their location.
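A minimal nearest-neighbour sketch of such fingerprinting is shown below; the trained map and base-station names are illustrative, not data from the CMU system:

    # Training map: location -> observed signal strength (dBm) per base station.
    signal_map = {
        "office-101": {"ap-1": -40, "ap-2": -70, "ap-3": -85},
        "hallway-1":  {"ap-1": -60, "ap-2": -55, "ap-3": -75},
        "lab-2":      {"ap-1": -80, "ap-2": -65, "ap-3": -45},
    }

    def locate(observation: dict) -> str:
        """Return the trained location whose fingerprint best matches the
        passively observed signal strengths (squared-error distance)."""
        def distance(fingerprint: dict) -> float:
            common = fingerprint.keys() & observation.keys()
            return sum((fingerprint[ap] - observation[ap]) ** 2 for ap in common)
        return min(signal_map, key=lambda loc: distance(signal_map[loc]))

    locate({"ap-1": -42, "ap-2": -68, "ap-3": -83})  # -> "office-101"

Because the device only listens, the infrastructure learns nothing about its location; the privacy-relevant information stays on the device.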
Cell phones can be abused to provide location information. Although not originally intended for this purpose, the E-911 requirements [19] in the US forced cell phone providers to determine a customer's location when an emergency phone number is dialed. Although this practice was clearly beneficial, the technology has since spread. The underlying problem is the omnipresent possibility of performing triangulation (though with varying accuracy).
In the near future, Radio Frequency Identification (RFID) tags [8] will be found in many consumer goods. Intended as a replacement for barcodes, these tiny devices are placed in products to respond to a wireless query. Unlike barcodes, RFIDs are distinct for every item, even those from the same product line. This allows companies to determine their inventory
3.3 Observation
Attackers may also obtain information by configuring devices to observe their environment. The most obvious problem is the deployment of many nearly-invisible cameras in the environment. However, there are other attacks which are more feasible with current technology.
One of the more interesting attacks that can be launched against mo-
bile communications-equipped devices is triangulation. By measuring
timing delays in a signal, the attacker can determine the location of the
device. This is similar to how the Bat operates, only using electromag-
netic waves instead of sound waves.
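The core of this attack is that a measured propagation delay translates directly into a distance, and distances from several receivers fix a position. A sketch of the distance step (purely illustrative):

    SPEED_OF_LIGHT = 299_792_458.0  # metres per second

    def distance_from_delay(delay_seconds: float) -> float:
        """Distance implied by a one-way signal propagation delay."""
        return SPEED_OF_LIGHT * delay_seconds

    distance_from_delay(100e-9)  # a 100 ns delay already implies ~30 m

Intersecting the resulting circles around the receivers (multilateration) then yields the device's position.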
3.4 Inference
One of the fears about automated privacy invasion is the compilation
of a profile. After gathering large amounts of information via communi-
cation and observation, an automated system combines these facts and
draws inferences. Given enough data, the idea is to build a complete
picture of the victim’s life.
From a more location-centric point of view, location information could
be processed to obtain useful information for discrimination. If a person
regularly visits the location of a group meeting, she is probably a member
of that group. In the consumer arena, the fact that an individual shops
at a particular store at regular intervals may be useful information for
price discrimination [1].
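Such inferences need little sophistication. A sketch, assuming a pseudonymous trace of (timestamp, place) visits; the threshold is an arbitrary illustration:

    from collections import Counter

    def likely_affiliations(visits: list, min_visits: int = 4) -> set:
        """Infer frequented places from a (timestamp, place) trace; regular
        appearance at a meeting venue suggests membership of the group."""
        counts = Counter(place for _, place in visits)
        return {place for place, n in counts.items() if n >= min_visits}

The point is that raw location data, even pseudonymous, carries exploitable structure.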
4. Solutions
In the literature there exist several approaches to protect the loca-
tion of a user. Most of them try to prevent disclosure of unnecessary
information. Here one explicitly or implicitly controls what informa-
tion is given to whom, and when. For the purposes of this paper, this
information is primarily the identity and the location of an individual.
However, other properties of an individual such as interests, behaviour,
or communication patterns could lead to the identity and location by
inference or statistical analysis.
In some cases giving out information cannot be avoided. This can be a threat to personal privacy if an adversary is able to access different sources and link the retrieved data. Unwanted personal profiles may be the result. To prevent this, people request that their information
be treated confidentially. For the automated world of databases and
data mining, researchers developed policy schemes. These may enable
adequate privacy protection, although they similarly rely on laws or
goodwill of third parties.
4.1 Policies
In general, all policy-based approaches must trust the system. If the system betrays a user, his privacy might be lost. Here, the suitable counter-measure is a non-technical one: with the help of legislation, the privacy policy can be enforced.
All policy-based systems have the drawback that a service could simply ignore the individual's privacy preferences and say, "To use this service you have to give up your privacy or go away." This certainly puts the user in a dilemma, and he will probably accept these terms because he wants to use the service.
its location. For example, using a printer implicitly reveals that the device is near the printer.
5. Conclusions
The solutions we have seen can be categorized into policies and information minimization at the source. These approaches aim to address
threats in the areas of first- and second-hand communication, observa-
tion, and inference.
Policies seem to work well wherever consent underlies the transac-
tion. For example, when information is to be provided to a service,
an agreement can be reached regarding the further distribution of the
information. If no agreement can be reached, then the individual will be unwilling to use the service, but the service will likewise not obtain the information or any associated remuneration. Similarly, the individual
can negotiate terms about how his information may be used; this can
address attacks based on inference.
There is no consent in observation. This means that policies cannot be applied to these attacks, since the individual is in no position to negotiate. Here, legal safeguards and countermeasures are required.
Unfortunately, there is currently insufficient discourse between technical
and legal experts.
Accuracy reduction techniques apply primarily to first-hand commu-
nication problems. These schemes aim at reducing the amount of con-
fidential information disclosed to third parties. There are a variety of
techniques which obscure the location information, the timestamp of the
transaction, and the identity of the individual.
As mentioned in the introduction, privacy issues are fundamentally not technical. As ubiquitous devices permeate the everyday lives of ordinary citizens, our privacy protection measures will have an increasing impact on those lives. It is important that research into privacy protection bear in mind what must be protected. This is more the area of the social sciences, and thus requires more inter-disciplinary discourse.
6. Acknowledgements
This work was sponsored in part by the Deutsche Forschungsgemein-
schaft (DFG) as part of the PhD program “Enabling Technologies for
Electronic Commerce”.
Notes
1. For example, the Kazaa Media Desktop.
2. The smallest RFIDs currently measure only 0.4 mm × 0.4 mm.
References
[16] Adam Laurie. Serious Flaws in Bluetooth Security Lead to Disclosure of Per-
sonal Data, http://www.bluestumbler.org, 2003.
[17] Mobiloco - Location Based Services for Mobile Communities. http://www.mobiloco.de/.
[18] Nissanka B. Priyantha, Anit Chakraborty, and Hari Balakrishnan. The Cricket
Location-Support System. In Mobile Computing and Networking, pages 32–43,
2000.
[19] Jeffrey H. Reed, Kevin J. Krizman, Brian D. Woerner, and Theodore S. Rappaport. An Overview of the Challenges and Progress in Meeting the E-911 Requirement for Location Service. IEEE Communications Magazine, 36(4):30–37, April 1998.
[20] Bill Schilit, Anthony LaMarca, Gaetano Borriello, William Griswold, David
McDonald, Edward Lazowska, Anand Balachandran, Jason Hong, and Vaughn
Iverson. Challenge: Ubiquitous Location-Aware Computing and the Place Lab
Initiative. In Proceedings of The First ACM International Workshop on Wire-
less Mobile Applications and Services on WLAN (WMASH 2003). ACM Press,
September 2003.
[21] Asim Smailagic, Daniel P. Siewiorek, Joshua Anhalt, David Kogan, and Yang
Wang. Location Sensing and Privacy in a Context Aware Computing Environ-
ment. In Pervasive Computing, 2001.
[22] Einar Snekkenes. Concepts for Personal Location Privacy Policies. In Proceed-
ings of the 3rd ACM Conference on Electronic Commerce, pages 48–57. ACM
Press, 2001.
[23] Roy Want, Andy Hopper, Veronica Falcão, and Jonathan Gibbons. The Active
Badge Location System. ACM Transactions on Information Systems, 10(1):91–
102, 1992.
[24] Andy Ward, Alan Jones, and Andy Hopper. A New Location Technique for the Active Office. IEEE Personal Communications, 4(5):42–47, 1997.
EXPLORING THE RELATIONSHIP
BETWEEN CONTEXT AND PRIVACY
1. Introduction
Most people agree that privacy protection is an important aspect of
networked and distributed applications, especially in the fields of mobile
and pervasive computing. However, it is hard to agree on a common def-
inition of privacy, for two main reasons: First, the definition depends on
the highly variable preferences of individuals and socio-cultural groups.
Secondly, in contrast to the related security goal of data confidentiality,
privacy is not an all-or-nothing notion. It is often acceptable to divulge
a limited amount of personal data, whereas it may be unacceptable if
large amounts of the same type of data become known.
The problem becomes even harder when one considers the question of
personal privacy with respect to context-aware applications, i.e. appli-
cations that take the context of entities into account. In pervasive com-
puting, the most important entities are individuals. According to [3],
context is information that describes the situation of an individual,
which means that the question of personal privacy arises naturally: The
amount of context information that is personal (such as the location
of a user) or related to personal information (such as the location of a
user’s mobile device) could conceivably grow quite large. Additionally,
2. Motivation
Consider a scenario with an abundant supply of context-based infor-
mation systems: Location-based services track the locations of their
customers and supply information relevant at their current location
(e.g. route planning, public transport timetables etc.) while “infosta-
tions” supply information to anyone in their transmission range. User
Alice uses her personal devices to communicate with such information
systems. Location tracking and communication with the location-based
service is done via a mobile phone network that can provide high loca-
tion resolution (e.g. UMTS). Access to the infostations is gained through
her WLAN-equipped PDA.
We assume that Alice needs to authenticate herself to the location-based information system (LBS) for billing purposes. As a consequence,
she is known to the LBS under the identifier Alice-LBS (which might be
a pseudonym). The PDA uses a wireless LAN adapter with the constant
device ID (MAC address) Alice_PDA.
Now consider an adversary, Eve, who has gained access to the information generated by the transmissions of Alice's devices (for example, a UMTS service provider that also monitors WLAN traffic at some locations). Eve
could then collect two types of location traces for all users. With respect
to Alice, she would obtain location traces under the ID Alice-LBS and
also other location traces under the ID Alice_PDA, using the location
information that comes implicitly with the WLAN transmissions.
Eve's next step would be to correlate both types of location traces in order to see whether a pair of location traces from different sources matches. That way, two different identifiers (Alice-LBS and Alice_PDA) could be linked to the same person (Alice). Furthermore, a single success in this regard is enough to link Alice's identifiers Alice-LBS and Alice_PDA from this point on.
The important point of this scenario is that with increasing amounts
of context information, attempts to penetrate an individual’s privacy
will also be increasingly successful, because the adversary will be able
to leverage the semantics of context data items to infer additional in-
formation that is not explicitly stated in the context information. Even
if data is only stored in pseudonymous form, adversaries will often be
able to link items of context data based on their content. Moreover, the
problem discussed here is not restricted to a specific application scenario,
but remains valid for any form of constant identifier, for instance RFID
tags that can be interrogated remotely.
Note that the amount of data used by Eve in the example above
is comparatively small and restricted to identifiers and spatio-temporal
coordinates. This is an indication that the privacy problems will become
even worse when context-aware computing is used ubiquitously and other
forms of context data are taken into account.
ID The identifier under which this data item was created. In the ex-
ample, this field would contain Alice’s customer ID or the MAC
address for her PDA.
Location Spatial information about a data item. This field would for
example contain Alice’s location when it is stored by the location-
based service.
Time Temporal information about the data item. In the example, this
refers to the time of a communication or the time of an update of
location data.
Time The time at which this record was created. Again, it should be
possible to determine this information in a fairly exact way.
For two locations l1 and l2, we write l1 ≈ l2 if and only if l1 and l2 are less than 50 meters apart. We also define a comparison operator ≈ for times: for two times t1 and t2, t1 ≈ t2 if and only if t1 and t2 are less than one minute apart. For the purpose of this discussion, we assume that location and temporal information can be determined with good enough accuracy for these operators.
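These operators are straightforward to realise; the sketch below assumes planar coordinates in metres and UNIX timestamps (both our assumptions, since the representation is left open above):

    import math

    LOCATION_EPSILON_M = 50.0  # locations match below 50 metres
    TIME_EPSILON_S = 60.0      # times match below one minute

    def loc_close(l1: tuple, l2: tuple) -> bool:
        """l1 ≈ l2 iff the two points are less than 50 metres apart."""
        return math.dist(l1, l2) < LOCATION_EPSILON_M

    def time_close(t1: float, t2: float) -> bool:
        """t1 ≈ t2 iff the two times are less than one minute apart."""
        return abs(t1 - t2) < TIME_EPSILON_S

An adversary like Eve would link two pseudonymous data items precisely when both operators hold for their location and time fields.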
5.4 Remarks
A set of data items derived through application of this definition de-
scribes a person in terms of the device he or she used and the locations
at which that occurred. It is noteworthy that sets derived in this way
do not contain directly identifying information. However, sufficiently
detailed location information would make identification of any person
easy (e.g. because most people spend most of their time at home). Also,
after a person has been identified once, his or her name can always be
linked to the use of his or her personal device.
6. Related Work
Pervasive Computing scenarios [7, 12] are full of privacy issues. How-
ever, much of the current work in this field has, with some exceptions,
not yet progressed much beyond the conceptual stage [10, 11].
The Platform for Privacy Preferences Project (P3P) [1] aims at devel-
oping machine-readable privacy policies and preferences. This approach
is somewhat related to our model component for privacy conditions. An
interesting issue that comes up in both P3P and our work and that
is worth further investigation is how preferences can be described in an
easy to understand and human-readable form and then transformed into
a more formal representation. Marc Langheinrich, one of the authors of
P3P has also extended the P3P concept to ubiquitous computing sce-
narios [6].
The Freiburg Privacy Diamond [13] is more closely related to our
approach. The authors model privacy in mobile computing environments
using relations between the sets of users, devices, actions and locations.
The only inference rule in their model is transitive closure. As a result,
the expressiveness of the model is limited. The authors also discuss the
possibility of including probabilities and time in their model, although
it remains unclear where the probabilities come from and the concept of
time is only mentioned briefly in the paper.
Snekkenes [9] discusses access control policies for personal informa-
tion. He proposes a lattice model to describe the accuracy of information
(e.g. location, time or identifying information). The way we represent
identifying information as sets of data items relating to the same person
amount of data items will be generated, and many classes of adversaries will have access to only a small subset of them. In pervasive computing, the trade-off between privacy and functionality needs to be considered explicitly. If we erroneously assume highly powerful attackers and limit the generation of data items based on this assumption, we might needlessly restrict functionality.
Additionally, two major lines were identified for future work:
1 Usage of the model. Two usage scenarios for a privacy model are possible:
The first one aims at evaluating a given pervasive computing scenario and
determining whether a certain set of privacy requirements can be satisfied.
The second one is based on observing the history of a user’s interactions and
advising whether a certain action of that user would have harmful effects on
his or her privacy.
2 Incorporating probability into the model. Participants at the workshop also
favored the early inclusion of probability and probabilistic inference into the
framework.
The authors would like to thank all participants for the lively discussion and the
excellent feedback given.
References
[1] Lorrie Cranor, Marc Langheinrich, Massimo Marchiori, Martin Presler-Marshall,
and Joseph Reagle. The Platform for Privacy Preferences 1.0 specification. W3C
Recommendation, The World Wide Web Consortium, April 2002.
[2] Dorothy E. Denning. Cryptography and Data Security. Addison-Wesley, 1982.
[3] Anind K. Dey. Understanding and using context. Personal Ubiquitous Comput.,
5(1):4–7, 2001.
[4] Urs Hengartner and Peter Steenkiste. Access control to information in pervasive
computing environments. In Proceedings of the 9th Workshop on Hot Topics in
Operating Systems (HotOS IX), Lihue, Hawaii, May 2003.
[5] Urs Hengartner and Peter Steenkiste. Protecting access to people location in-
formation. In Proceedings of the First International Conference on Security in
Pervasive Computing (SPC 2003), Lecture Notes in Computer Science, Boppard,
Germany, March 2003. Springer-Verlag.
[6] Marc Langheinrich. A privacy awareness system for ubiquitous computing en-
vironments. In Gaetano Borriello and Lars Erik Holmquist, editors, UbiComp
2002: Ubiquitous Computing, 4th International Conference, volume 2498 of Lec-
ture Notes in Computer Science, page 237ff, Göteborg, Sweden, September 29 -
October 1 2002. Springer-Verlag.
[7] Friedemann Mattern. The vision and technical foundations of Ubiquitous Com-
puting. Upgrade, 2(5):75–84, October 2001.
[8] Kurt Rothermel, Martin Bauer, and Christian Becker. Digitale Weltmodelle
– Grundlage kontextbezogener Systeme. In Friedemann Mattern, editor, Total
Vernetzt?! Springer-Verlag, 2003.
[9] Einar Snekkenes. Concepts for personal location privacy policies. In Proceedings
of the 3rd ACM conference on Electronic Commerce. ACM Press, 2001.
[10] Ubicomp 2002. Socially-informed design of privacy-enhancing solutions in ubiquitous computing: Workshop at Ubicomp 2002, September 2002.
[11] Ubicomp 2003. Ubicomp communities: Privacy as boundary negotiation: Workshop at Ubicomp 2003, October 2003.
[12] Mark Weiser. Some computer science issues in ubiquitous computing. Commu-
nications of the ACM, 36(7):75–84, July 1993.
[13] Alf Zugenmaier, Michael Kreutzer, and Günter Müller. The Freiburg Privacy
Diamond: An attacker model for a mobile computing environment. In K. Irm-
scher and K.-P. Fährich, editors, Proceedings KIVS 2003. VDE-Verlag, February
2003.
PRIVACY, SECURITY AND TRUST
ISSUES RAISED BY THE PERSONAL
SERVER CONCEPT
Abstract This paper is a survey of user risks associated with the Personal Server
concept. The Personal Server concept is used as a surrogate for future
mobile devices. The risks are threats involving security, privacy and
trust. An overview of the concept is provided, followed by descriptions
of three usage models: mobile storage, application server, and beacon
receiver. Each usage model description includes a discussion of risks
that result from that usage. No solutions are provided.
This paper explores the issues related to the last question using our learnings from the other questions.
2. Summary of Issues
Because the Personal Server explores an extreme computing model,
it raises unique issues of security, privacy and trust in addition to those
present in any mobile device. We expect aspects of the Personal Server to
make their way into mainstream products in the future, and the Personal
Server project provides a relatively clear view of what those issues may
be.
Any mobile device raises concerns about security (“Can someone mod-
ify or destroy my data?”), privacy (“Can someone read my data?”), and
trust (“Can I count on my data being available when I need it?”). The
way these issues manifest themselves depends on the nature of the de-
vice, the nature of its use, and the expectations of its user.
The Personal Server concept expands on those issues because of its
lack of display and dependence on a wireless connection to the world. For
any computer system, the most severe threats involve external commu-
nication, and all of the Personal Server’s operations involve interaction
with external sources. Moreover, the Personal Server concept proposes
new primary modes of external interaction such as annexing external
User Interaction devices and listening to Information Beacons. Annex-
ation raises new questions for secure authentication, and listening to
beacons raises new issues of privacy.
This paper summarizes the security, privacy and trust issues uncov-
ered by the Personal Server project. We will not explore issues that are
common to all mobile devices, concentrating on those that are unique to the Personal Server concept. We hope that this exposition of issues will add
to the overall picture [4] of what we need to do to make the Pervasive
Computing environment safe.
3. Generic Risks
Any mobile device carries risks involving security, privacy and trust.
Eliminating or mitigating such risks is an ongoing effort of the mobile computing community. The Personal Server project assumes
that those efforts will be successful and expects to benefit from them.
We will survey them quickly to provide a more complete picture of the
issues.
At one extreme of the mobile device playing field are the smart card
and USB Flash storage device, sometimes called a USB dongle. Both
have a primary purpose of carrying information safely from one place to
another. Both are implemented with storage and a processor sufficient
to interface them to other computing devices, and that is their primary
purpose. In one case the storage and device size are very small (smart
card), and in the other case (USB dongle) the storage capacity can be
quite large in a package not much bigger. The biggest difference is that
the smart card is designed to only talk with trusted readers while a USB
dongle can connect with nearly any computer.
At another extreme is the notebook computer. Some are barely mo-
bile, and they typically include large amounts of storage. Most have
many I/O mechanisms, but I/O other than the keyboard, display and
pointer is usually of secondary importance. The primary purpose of
most notebook computers is as a more or less complete, self-contained
computing environment. A notebook computer may be just as vulner-
able to risks of security, privacy and trust, but many of those risks can
be mitigated by working without connection to the external world until
a safe venue is attained.
Most mobile devices fit between those extremes, but they all share some common concerns.
Some of these concerns are bigger problems for some devices than
others. The likelihood of being stolen is a complex function of perceived
value versus perceived risk on the part of a potential thief. A device
that is often put down on surfaces is more likely to be stolen or lost.
Moreover, the availability of effective (and used) security and privacy
technologies can make the loss of data less of a problem. The avail-
ability (and use) of backup or synchronization services can mitigate the
replacement problem.
The Personal Server and other devices with Personal Server capabili-
ties are vulnerable to these same risks. We expect products with these
capabilities will use the best known practices to deal with these and
other generic risks. The rest of the paper discusses risks that are intro-
duced or emphasized by the Personal Server concept, which we will refer
to as incremental risks.
the hands of someone who wants to steal the contents, through either
theft or loss of the device. Furthermore, the normal usage of the device
exposes its contents to theft whenever it is connected with a host not
directly controlled by the storage device owner. This is true whether
the host is operated by the user (a rented computer) or not (a customer
computer). Most current devices expose all their contents whenever they
are plugged in, and the few with authentication methods expose all their
contents after authentication succeeds. Ideally, only the data relevant
to a transaction would be accessible at any one time.
A mobile storage device must provide reasonable access to its held
data. The word “reasonable” refers to a tradeoff between the user’s risk
and effort. Security often deals with such tradeoffs, but the need to
include untrusted hosts in the security equation makes solutions more
difficult. For example, common security methods such as typed pass-
words are less effective in the common usage model since they expose
the passwords themselves to theft. This can lead to more complicated
security measures, which may discourage using either the device or the
security measures. It is not sufficient to prove that a procedure is secure; you must also be able to show that people will use it. This problem encourages the development of alternative authentication methods.
A mobile storage device can act as a vector for worms, viruses and
other forms of malware. Because it promiscuously connects to multiple
devices and connects quite directly (typically as a mapped file system), it
is an ideal vector for malware. All such devices are currently vulnerable
to existing viruses, and we expect malware to be written specifically
for mobile storage devices as the use of such devices proliferates. Since
the current crop of mobile storage devices are seen as big floppy disks,
this problem is being treated as a host issue, but it is not practical to
scan all the contents of a multi-gigabyte storage device every time it is
plugged into a host. The device itself must be involved in supporting the
protection process, and the host must be able to trust that involvement.
The Personal Server project has explored solutions for some of these
problems, using the device’s processing power to counter its vulnerabil-
ity. For example, we have considered structured availability of data, new
forms of authentication [2], and access journaling. The Personal Server
can also present its contents in the form of a Web site, which reduces
some threats to the Personal Server but not the host. Discussion of these
solutions is not within the scope of this paper.
How do you know which display you are annexing? This may seem
obvious, but if you annex an interaction device that someone else
controls, they might steal or destroy your information before you
even know there is a problem.
How do you know the interaction device isn’t recording your ses-
sion? There are lots of nefarious uses for a session recording.
directly from its source (who owns the information beacon) to its desti-
nation (who owns the receiver). The Personal Server can run software
agents that process the incoming beacon messages and act on or archive
them without direct user involvement.
Any form of location-aware computing raises issues of privacy and
trust. We believe the use of information beacons raises fewer such issues
than other forms of location-aware computing since it doesn’t involve
third parties such as cellular vendors or location-database web sites and
it doesn’t require traceable radio activity on an ongoing basis. Compar-
ing the use of information beacons with other forms of location-aware
computing is not in the scope of this paper. We will summarize the
privacy and trust issues of this new approach.
Information beacons offer information and services to passing re-
ceivers. The information might be as simple as a store description, or
it might include a full menu for a restaurant or a coupon for a clothing
store. It could offer to sell something to the user, and the transaction
might be able to take place immediately. Previous forms of location-
aware computing have concentrated on immediate notification of “inter-
esting” events because of the high cost of maintaining and processing
significant state in a centralized resource for each user. We believe the
cost of handling state can be much lower in a distributed approach. The
new approach concentrates on building a personalized location database
for the user, providing a useful source of context and state computa-
tions and reducing the need for interruptions commonly seen in other
approaches. An agent running in the receiver might interrupt the user,
but it would be based on considerably more context than is available to
some other approaches.
Three classes of privacy and trust issues arise with the new approach.
One class involves external tracking. If a user is communicating contin-
uously with a series of information sources as she passes through an en-
vironment, software with a global view of the information sources could
track her location and path. This is similar to the concern that the cel-
lular network can track you while you carry a cell phone. This problem
can be mitigated by avoiding use of a traceable identifier in communi-
cations with the information beacons. The problem can be eliminated
entirely if the transmissions are entirely unidirectional. That is, if the
receiver doesn’t have to send any radio message in order to receive the
beacon information, then there is essentially no way for the receiver to
be tracked.
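What a purely receive-only listener might look like is sketched below (illustrative types, not the Personal Server API); note that the local archive it builds is exactly the self-tracking record discussed next:

    from dataclasses import dataclass
    import time

    @dataclass
    class BeaconMessage:
        beacon_id: str  # identifies the transmitter, never the listener
        payload: str    # e.g. a store description or restaurant menu

    local_log = []

    def on_beacon(msg: BeaconMessage) -> None:
        """Purely passive: archive the broadcast with a local timestamp.
        Nothing is transmitted, so the listener cannot be tracked externally."""
        local_log.append((time.time(), msg))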
Another class of issues involves self-tracking. As the receiver collects information from beacons, it likely creates a time-stamped record of locations in its persistent storage. This record can be a major source of
7. Summary
Because the Personal Server defines a new computing model and
new usage models, it exposes new risks to security, privacy and trust.
Whether the Personal Server as presented here ever becomes a product
is not important, but it is clear to us that various capabilities of the
concept will become part of other mobile devices. The Personal Server
project provides an opportunity for us to identify these risks at an early
stage and provide solutions before they are needed. This paper describes
what has been learned so far about risks facing any mobile device that
incorporates aspects of the Personal Server concept.
References
[1] J. Light, E. Pattison, T. Pering, M. Sundar, and R. Want. Fully Distributed
Location-Aware Computing. UbiComp 2003 workshop, 2003.
[2] T. Pering, M. Sundar, J. Light, and R. Want. Photographic Authentication
through Untrusted Terminals. IEEE Pervasive Computing, 2(1):30–36, 2003.
[3] J. S. Pierce, H. E. Mahaney, and G. D. Abowd. Opportunistic Annexing for
Handheld Devices: Opportunities and Challenges. In Proceedings of the Human-
Computer Interaction Consortium, 2004.
[4] B. Schneier. Secrets and Lies: Digital Security in a Networked World. Wiley,
2000.
[5] R. Want and T. Pering. New Horizons for Mobile Computing. In Proceedings of
PerCom, 2003.
[6] R. Want, T. Pering, G. Danneels, M. Kumar, M. Sundar, and J. Light. The
Personal Server: Changing the Way We Think about Ubiquitous Computing. In
UbiComp 2002: Ubiquitous Computing, 4th International Conference, volume
2498 of LNCS, pages 194–209. Springer-Verlag, 2002.
II
2 Technical University of Denmark, Denmark
[email protected]
1. Introduction
Weiser's vision of ubiquitous/pervasive computing [28] will only become true when computing capabilities are woven into the fabric of everyday life, indistinguishable from it. The goal is to enhance the environ-
ment and help people in their daily activities. However, the current state
of the art in pervasive computing does not properly address security and
privacy [2]. For example, illegitimate monitoring can arise in such an
environment due to the proliferation of sensor technology. The ability of
computing systems to identify and adapt to their environmental context
* This work is sponsored by the European Union through the Information Society Technologies
(IST) programme, which funds the IST-2001-32486 SECURE project and the IST-2001-34910
iTrust Working Group.
5. Related Work
One of the main issues for the management of multiple dependable
identities is the support of trust levels [6]. We indeed demonstrate in
this paper that the SECURE project addresses this issue. Wagealla et
al. [27] use trustworthiness of an information receiver to make the deci-
sion on whether private information should be disclosed or not, which is
another way to envisage the relation between trust and privacy. Kobsa and Schreck [15] highlighted the fact that reputation systems do not necessarily require an explicit link with real-world identities. We added that too much evidence can lead to the disclosure of the implicit link [25]. Others [10, 11, 15] have presented how pseudonyms can be used for privacy protection and shown that different levels of pseudonymity and configurations exist. Their work is valuable for choosing the right type of configuration and pseudonymity. Previous work on identity manage-
ment in ubicomp environments [12, 18] demonstrates that the model
of switching identities according to context is appealing and meaning-
ful for users. Our own prototype [24], where pseudonyms are disclosed
based on location, confirms the usefulness of context. Different TSFs
have been used for sharing personal information in ubicomp environ-
ments [9, 26]. However, these TSFs do not use pseudonyms and their
focus is not on identity matters. Another related work, although this
one only focuses on recommendation, is the OpenPrivacy platform [16].
The user can create many pseudonyms linked with specific information.
Langheinrich’s work [17] is valuable to understand privacy in context-
aware pervasive computing. Robinson and Beigl [22] investigate one of
the first real trust/context-aware spaces based on the Smart-Its context
sensing, computation and communication platform, which could also be
used for an ER scheme based on context.
6. Conclusion
Identity is a central element of computational trust. In pervasive
computing, where there is no central authority legitimate for all enti-
ties, more or less trustworthy technical infrastructure between parties
facilitates attacks (e.g., the Sybil attack) on trust/risk-based security
frameworks. However, this weakness can be used for privacy protection.
Different alternatives are possible for the implementation of identity in a TSF. There is a trade-off between the desired levels of trust, privacy, interoperability and scalability. We argue for a solution that explicitly takes these different levels into account and so can be used in a diversity of applications (as is to be expected in pervasive computing). We
propose the following generic mechanisms to engineer this solution. The
Notes
1. In this paper, we use the following terms as synonyms: level of trust and trustworthiness. In a TSF, they are represented as a trust value. This is different from trust itself, which is the concept.
2. “No more evidence than needed should be linked.”
References
[1] B. D. Brunk, Understanding the Privacy Space, in First Monday, vol. 7, no. 10,
Library of the University of Illinois, Chicago, 2002.
[2] R. Campbell, J. Al-Muhtadi, P. Naldurg, G. Sampermane, and M. D. Mickunas,
Towards Security and Privacy for Pervasive Computing, in Proceedings of the
International Symposium on Software Security, 2002.
[3] T. M. Cooley, A Treatise on the Law of Torts, Callaghan, Chicago, 1888.
[4] S. Creese, M. Goldsmith, B. Roscoe, and I. Zakiuddin, Authentication for Per-
vasive Computing, in Proceedings of Security in Pervasive Computing, LNCS,
Springer, 2003.
[5] J. L. Crowley, J. Coutaz, G. Rey, and P. Reignier, Perceptual Components for
Context Aware Computing, in Proceedings of Ubicomp, LNCS, Springer, 2002.
[6] E. Damiani, S. D. C. d. Vimercati, and P. Samarati, Managing Multiple and Dependable Identities, IEEE Internet Computing, 7(6), pp. 29-37, 2003.
[7] A. K. Dey, Understanding and Using Context, in Personal
and Ubiquitous Computing Journal, vol. 5 (1), pp. 4-7, 2001,
http://www.cc.gatech.edu/fce/ctk/pubs/PeTe5-1.pdf.
[8] J. R. Douceur, The Sybil Attack, in Proceedings of the
1st International Workshop on Peer-to-Peer Systems, 2002,
http://research.microsoft.com/sn/farsite/IPTPS2002.pdf.
[9] J. Goecks and E. Mynatt, Enabling Privacy Management in Ubiquitous Comput-
ing Environments through Trust and Reputation Systems, in Proceedings of the
Conference on Computer Supported Cooperative Work, ACM, 2002.
[10] I. Goldberg, A Pseudonymous Communications Infrastructure for the Internet, PhD Thesis, University of California, 2000, http://www.isaac.cs.berkeley.edu/~iang/thesis-final.pdf.
[11] R. Hes and J. Borking, Privacy Enhancing Technologies: The Path to Anonymity,
ISBN 90 74087 12 4, 2000, http://www.cbpweb.nl/downloads_av/AV11.PDF.
Yücel Karabulut
SAP Research, CEC Karlsruhe, Vincenz-Priessnitz-Str. 1, 76131 Karlsruhe
Abstract Basically, there are two intertwined kinds of security mechanisms: monitoring, including access control, and cryptographic protocols. The purpose of an access control system is to enforce security policies by gating
access to, and execution of, processes and services within a computing
system. Specification and enforcement of permissions can be based on
asymmetric cryptography. In order to employ asymmetric cryptogra-
phy in open computing environments we need appropriate trust man-
agement infrastructures that enable entities to establish mutual trust.
Management of trust is organized within a public key infrastructure,
PKI for short. Credentials assert a binding between a principal, rep-
resented by a public key, and some property. Current proposals inves-
tigating the definition of PKI and the application of credential-based
access control treat existing PKI models (e.g. X.509) and trust man-
agement approaches (e.g. SPKI/SDSI) as competing technologies. We
take a different position. We argue here that a trust management in-
frastructure for open computing environments has to use and to link
existing approaches. We explain which requirements a next-generation
trust management approach has to fulfill. After presenting an applica-
tion scenario, we finally outline the design of a next-generation trust management approach that we believe would prove worthwhile for a broad spectrum of applications.
1. Introduction
The proper administration of computing systems requires specifying which clients are allowed to access which services, and effectively and efficiently enforcing such specifications. In a local computing system, a
specification can be represented by traditional access rights granted to
Acknowledgments
It is a pleasure to thank Joachim Biskup with whom I’ve had extensive
discussions about trust management, PKI models and secure mediation.
References
[1] C. Altenschmidt, J. Biskup, U. Flegel and Y. Karabulut: Secure Mediation: Re-
quirements, Design and Architecture. Journal of Computer Security, 11(3):365-
398, 2003.
[2] J. Biskup and Y. Karabulut: A Hybrid PKI Model with an Application for Secure
Mediation. In 16th Annual IFIP WG 11.3 Working Conference on Data and Appli-
cation Security, pages 271-282, Cambridge, England, July 2002. Kluwer Academic
Press.
[3] J. Biskup and Y. Karabulut: Mediating Between Strangers: A Trust Management
Based Approach. In 2nd Annual PKI Research Workshop, pages 80-95, Gaithers-
burg, Maryland, USA, April 2003.
[4] B. Chinowsky: Summary of the panel discussions Dueling Theologies. In 1st An-
nual PKI Workshop, Gaithersburg, Maryland, USA, Apr. 2002.
[5] Y. Karabulut: Investigating Trust Management Approaches for Trustworthy Busi-
ness Processing for Dynamic Virtual Organizations. Special Session on Security
and Privacy in E-Commerce within the 7th International Conference on Elec-
tronic Commerce Research, INFOMART, Dallas, June 2004.
[6] Y. Karabulut: Developing a Trust Management Based Secure Interoperable In-
formation System. Special Session on Security and Privacy in E-Commerce
within the 6th International Conference on Electronic Commerce Research, IN-
FOMART, Dallas, October 2003.
[7] Y. Karabulut: Implementation of an Agent-Oriented Trust Management Infras-
tructure Based on a Hybrid PKI Model. In 1st International Conference on Trust
Management, LNCS 2692, pages 318-331, Crete, Greece, May 2003.
[8] Y. Karabulut: Secure Mediation Between Strangers in Cyberspace, Ph.D. Thesis,
University of Dortmund, 2002.
[9] SPKI/SDSI. http://theworld.com/~cme/html/spki.html.
[10] X.509. http://www.ietf.org/html.charters/pkix-charter.html.
[11] A. S. Tanenbaum and M. van Steen. Distributed Systems: Principles and
Paradigms. Prentice Hall, Upper Saddle River, NJ, Sept. 2002.
[12] B. Neuman and T. Ts’o: Kerberos: An Authentication Service for Computer
Networks. IEEE Communications, 32(9):33-38, Sept. 1994.
RESEARCH DIRECTIONS FOR
TRUST AND SECURITY IN
HUMAN-CENTRIC COMPUTING*
2
Formal Systems (Europe) Ltd.
www.fsel.com
[email protected]
3
Oxford University Computing Laboratory
[email protected]
4
Distributed Technology Group,
QinetiQ, Malvern Technology Centre, UK.
[email protected]
* This research is being conducted as part of the FORWARD project which is supported by
the U.K. Department of Trade and Industry via the Next Wave Technologies and Markets
programme. www.forward-project.org.uk
1. Introduction
The ubiquitous paradigm foresees devices capable of communication
and computation embedded in every aspect of our lives and throughout
our environment. This will increase the complexity of both information
infrastructures and the networks which support them. New forms of
interaction are envisaged, which aim to push the technology into the
background, making information services human-centric in delivery.
Computing devices will be less and less noticeable, creating a feeling of
being surrounded by “ambient intelligence”.
As these pervasive computing technologies become deeply intertwined
in our lives we will become increasingly dependent on them, implicitly
trusting them to offer their services without necessarily understanding
their trustworthiness. Undoubtedly the timely provision of bespoke ser-
vices will require personal or valuable data to be digitally stored and
made available. The increased digitalisation of our assets, coupled with
the increasingly intangible way that networks use information, will make
it difficult to ensure that trusted services are indeed trustworthy. Will
users have to decide how to interact with systems without understanding
the associated risks?
This paper presents our thoughts on a particularly important, often
critical, property that will be required of such systems, namely Infor-
mation Security. We consider both the technical requirements for se-
cure pervasive computing environments and the human centric proper-
ties users are likely to demand of such systems. We highlight the issues
we feel the research community needs to address. The thoughts
that we present in this short article are guided by our previous work on
pervasive computing security: [2] and [3].
an identity (if identity can be proved, then this is a basis for authorisa-
tion). In [2] we provided a critique of traditional identity authentication,
arguing its unsuitability for pervasive networks because:
Interaction would be between devices and it does not seem plau-
sible that the identity of an arbitrary device, in an arbitrary envi-
ronment, can be reliably determined. Furthermore in some appli-
cations mass-produced devices might not have unique identities.
The value of authenticating an identity depends on the trustwor-
thiness of the owner of the identity. If we do not know, either
beforehand or by other means, that the owner of the identity is
trustworthy, then little is gained by authenticating that identity.
Thus, simply proving the identity of a device would be of limited
value, since it provides little assurance that the device will behave
in a trustworthy manner.
There were subsidiary reasons for doubting the value of identity authen-
tication, such as the viability of certification infrastructures to support
authenticating the identities of the huge numbers of devices that are
likely to exist.
After presenting the above deconstruction we proposed that authen-
tication for pervasive computing be revised to mean attribute authenti-
cation. Any device will have a range of attributes, such as its location,
its name, its manufacturer, aspects of its state, its service history, and
so forth. In a given situation some attributes will need authenticating
and the attributes should be chosen to achieve assurance about which
devices are the subject of interaction, and what those devices will do.
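As a minimal sketch of this proposal (the attribute names and verifiers below are our own invented example, not the authors' mechanism), the situation determines which attributes must be authenticated, and a device is accepted only if each required attribute claim is independently verified:

```python
from typing import Callable, Dict, List

# One verifier per attribute; each checks a single claim, e.g. by probing
# location beacons or validating a manufacturer certificate.
Verifiers = Dict[str, Callable[[str], bool]]

def authenticate_attributes(claims: Dict[str, str],
                            required: List[str],
                            verifiers: Verifiers) -> bool:
    """Accept the device for this interaction iff every attribute the
    situation requires is both claimed and successfully verified."""
    return all(attr in claims and attr in verifiers
               and verifiers[attr](claims[attr])
               for attr in required)

# For a printing task we might care only where the device is and who
# made it, not what its "identity" is.
authenticate_attributes(
    claims={"location": "room-101", "manufacturer": "Acme"},
    required=["location", "manufacturer"],
    verifiers={"location": lambda v: v == "room-101",
               "manufacturer": lambda v: v == "Acme"})
```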
Protocols for authentication and authenticated key exchange have
been the subject of intense study [1]. Moreover, the subject of verify-
ing such protocols has seen significant advances [5]. For analysis and
formal verification it is vital to be precise about the threat model which a
given protocol must resist. The standard model of the attacker is due to
Dolev and Yao [4], and it underpins a large portion of the research com-
munity’s efforts. However, the Dolev-Yao threat model (as it is referred
to) significantly predates the promulgation and widespread acceptance
of the pervasive computing vision. In [3] we proposed that such a threat
model was too simplistic and unable to capture the authenticated key
agreement protocols that might be required for pervasive networks. The
principal amendment was to propose a “two-channel” threat model, as
follows:
1 An E-channel which captures human or other “external” partici-
pation in bootstrapping an authenticated link. On the one hand,
Notes
1. www.w3.org/2001/sw/Europe/
2. www.trustedcomputing.org
3. www.gridforum.org
4. www.semanticweb.org
References
[1] Colin Boyd and Anish Mathuria. Protocols for Authentication and Key Estab-
lishment. Springer-Verlag, 2003.
[2] S. Creese, M. H. Goldsmith, Bill Roscoe, and Irfan Zakiuddin. Authentication in
Pervasive Computing. In D. Hutter, G. Müller, W. Stephan, and M. Ullmann, ed-
itors, First International Conference on Security in Pervasive Computing, volume
2802 of LNCS. Springer-Verlag, 2003.
[3] S. Creese, M. H. Goldsmith, Bill Roscoe, and Irfan Zakiuddin. The Attacker in
Ubiquitous Computing Environments: Formalising the Threat Model. In Formal
Aspects of Security and Trust, Pisa, 2003. Springer-Verlag.
[4] D. Dolev and A. C. Yao. On the Security of Public Key Protocols. IEEE Trans-
actions on Information Theory, 29(2), 1983.
[5] P. Y. A. Ryan, M. H. Goldsmith, S. A. Schneider, G. Lowe, and A. W. Roscoe.
The Modelling and Analysis of Security Protocols: the CSP Approach. Addison-
Wesley, 2001.
III
EVIDENCE, AUTHENTICATION,
AND IDENTITY
OVERVIEW
Mario Hoffmann
Fraunhofer Institute for Secure Telecooperation, Germany
Abstract Two levels of identity management can be distinguished. The first level
considers Enterprise Identity Management, which is currently on the
roadmap of most companies dealing with huge knowledge bases of em-
ployees and/or customers. At this level, identity management means (1)
providing employees with role-based access to documents and resources
and (2) consolidating and concatenating partial customer identities to
simplify customer administration.
Almost at the same time, the second level of identity management
emerged. Personalised context-aware services have begun to enter the
mobile communication market in particular, and detailed user profiles
are obviously essential for providing reasonable personalised services.
These services are based on the user's current location, his environ-
ment, and personal preferences. Here, identity management becomes a
key technology for keeping this additional information under control.
However, this pursuit of control ultimately leads to severe privacy
implications.
Hence, a third level of identity management has to be introduced:
User-centric Identity Management. User-centric identity management
allows the user to keep at least some control over his personal data;
several different approaches to it are discussed in this paper. Specif-
ically, a framework will be described which adds user-centric identity
management to a context-aware mobile services platform. This platform
has already been designed to support and dynamically combine services,
especially those of small- and medium-sized independent service providers.
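As a rough illustration of what user-centric control over personal data could mean in practice, consider the sketch below; the profile fields and disclosure policy are our own invented example, not the framework described in this paper. The user holds one full profile, but each service class only ever receives the situation-specific partial identity that the user's own policy allows.

```python
# Hypothetical user-centric identity management sketch.
FULL_PROFILE = {
    "name": "Alice Example",
    "location": "cell-4711",            # current location from the platform
    "preferences": {"cuisine": "thai"},
    "payment": "XXXX-XXXX-XXXX-0000",
}

DISCLOSURE_POLICY = {                    # chosen and editable by the user
    "restaurant-finder": {"location", "preferences"},
    "payment-service": {"name", "payment"},
}

def partial_identity(service_class: str) -> dict:
    """Return only the attributes the user releases to this service class."""
    allowed = DISCLOSURE_POLICY.get(service_class, set())
    return {k: v for k, v in FULL_PROFILE.items() if k in allowed}

partial_identity("restaurant-finder")    # no name, no payment data
```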
1. Introduction
With the roll-out of UMTS and public WiFi hotspots in several Euro-
pean countries the usability and acceptance of Location Based Services
References
[1] Johann Bizer, Dirk Fox, and Helmut Reimer, editors. DuD - Datenschutz und
Datensicherheit. Schwerpunkt: Identitätsmanagement. Vieweg Verlag 09/2003.
[2] Mario Hoffmann, Jan Peters, and Ulrich Pinsdorf. Multilateral Security in Mo-
bile Applications and Location Based Services. In ISSE - Information Security
Solutions Europe, Paris, France, October 2002.
[3] Birgit Pfitzmann. Privacy in enterprise identity federation - policies for Liberty 2
single signon. Elsevier Information Security Technical Report (ISTR), 9(1):45–58,
2004.
[4] Birgit Pfitzmann and Michael Waidner. Federated Identity-Management Proto-
cols - Where User Authentication Protocols May Go. In 11th Cambridge Inter-
national Workshop on Security Protocols. Springer-Verlag, 2003.
[5] Kai Rannenberg. Multilateral Security - A Concept and Examples for Balanced
Security. In Proceedings of the 2000 Workshop on New Security Paradigms, pages
151–162, Ballycotton, Ireland, 2000. ACM Press.
[6] Jeroen van Bemmel, Mario Hoffmann, and Harold Teunissen. Privacy and 4G
Services: Who do you trust? 10th Meeting - Wireless World Research Forum,
New York, NY, USA, Oct 27-28, 2003.
[7] Stephen A. Weis, Sanjay Sarma, Ronald L. Rivest, and Daniel W. Engels. Security
and Privacy Aspects of Low-Cost Radio Frequency Identification Systems. In
Proc. of First International Conference on Security in Pervasive Computing (SPC
2003), volume 2802 of LNCS, Boppard, Germany, March 2003. Springer-Verlag.
PRE-AUTHENTICATION
USING INFRARED
1
, Michael Kreutzer2, Martin Kähmer2,
Sumith Chandratilleke2
1
Faculty of Organization and Informatics
University of Zagreb
Pavlinska 2, 42000 Croatia
[email protected]
2
Institute of Computer Science and Social Studies
Dept. of Telematics
University of Freiburg
Friedrichstraße 50, D-79098 Freiburg, Germany
{kreutzer, kaehmer, sumith}@iig.uni-freiburg.de
1. Introduction
Using complex authentication and verification methods is not always
feasible in application fields with time and resource restrictions. How-
ever, fast and configuration-less authentication methods are required in
many pervasive computing applications using wireless connectivity. For
2. Attacker Model
The endpoints of the communication, i.e. both devices in consider-
ation, are assumed to be trustworthy. Since the technology used is
exclusively wireless, the attacker model focuses on the air interface.
Given the application fields in infrastructure-less environments and
the dynamics of ad-hoc usage, we assume eavesdropping as the inten-
tional attack (originated by a man in the middle who is also capable of
mounting a subsequent replay attack) and identification of the wrong
device (misdirection) as the unintentional attack. Denial of service
attacks are not considered in this paper.
3. Related Work
Even as wireless technology becomes more and more important, two
devices that are in range of each other should not in every case
“talk” to each other: this imposes not only scalability problems but also
security problems, especially related to authentication (cf. [9]). However,
as suggested in [3], an authentication mechanism is needed to explicitly
“marry” two formerly mutually unknown devices, i.e. two devices which
do not have any (even partial) knowledge of each other’s existence.
Such an authentication mechanism has been proposed by [1] and has
been called “ad-hoc authentication” by [2].
As the focus of [1] lies in asymmetric cryptography with PKIs, its
mechanisms even protect against active attacks like impersonation dur-
ing authentication establishment. However, it is questionable whether
this attacker model is realistic for the majority of the application scenar-
ios and whether there are lightweight mechanisms to deploy and main-
after the secret of phase 1 has been verified as this procedure protects
against denial of service attacks that take place on the radio channel
(this saves energy as well: depending on the scenario, the radio link
only needs to be activated after a successful run of phase 1).
5. Pre-Authentication Mechanism
5.1 Design decisions
Besides the desired security properties, our guiding design criteria for
the mechanism are: fast, cheap, simple, lightweight, and no pre-existing
mutual knowledge of the devices.
As context is used for pre-authentication, a location-limited channel
must be used (cf. [1]). The communication technologies used should
have physical limitations on their transmissions, for example the
necessity of line of sight and a limited range, as in “the PDAs are directed
at each other and are less than 20 cm apart”. The reasons for the
need for a location-limited channel are twofold:
These three steps are the basis for the subsequent phases.
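As a rough sketch of the two-phase idea, under our own simplifying assumptions (the concrete IrEx message formats are not reproduced here): a fresh secret crosses only the location-limited infrared channel in phase 1, and the radio link is then authenticated by proving knowledge of that secret.

```python
import hashlib
import hmac
import os

def phase1_ir_exchange() -> bytes:
    # Phase 1: two devices aligned within line of sight (< 20 cm) exchange
    # a fresh secret over infrared; an eavesdropper on the radio channel
    # never sees it, and the IR cone physically limits who can.
    return os.urandom(16)

def phase2_radio_auth(secret: bytes, challenge: bytes,
                      response: bytes) -> bool:
    # Phase 2: over radio, the peer proves knowledge of the phase-1 secret
    # with a keyed response to a fresh challenge; a replayed response fails
    # because the challenge is fresh each time.
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

secret = phase1_ir_exchange()
challenge = os.urandom(16)
response = hmac.new(secret, challenge, hashlib.sha256).digest()  # peer side
assert phase2_radio_auth(secret, challenge, response)
```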
6. Implementation
The PDAs used have some limitations, such as low processor power and
restricted energy resources. In the following we call our implementa-
tion of the pre-authentication phase IrEx; it is implemented as a client-
server model. At least one of the partners must have a stand-alone
application running, called the irexserver. The client part is also a stand-
alone application, called the irexclient. The device which starts the
irexclient is the client which initiates the exchange.
We first developed IrEx primarily for the Pocket PC platform, but now
we have a solution for Palm devices too. This means we can use Pocket
PC - Pocket PC, Palm - Palm and Pocket PC - Palm connections.
right button is pressed the irexclient (and with it the secret key creation)
is started.
When an incoming request is noted, the server closes all other server
ports until the exchange procedure is done. Pressing the left button (on
the second device) starts the client program, and the exchange procedure
begins. This procedure consists of the following steps:
8. Acknowledgments
We thank Kerry McGawley for her helpful comments on readability
and Prof. Dr. Günter Müller and Prof. for encour-
aging support. This research has been supported by the Kolleg “Living
in a smart environment” of the Gottlieb Daimler- and Karl Benz-Stiftung
and by the TEMPUS project of the European Commission.
Notes
1. http://www.nfc-forum.org/
References
[1] D. Balfanz, D. Smetters, P. Stewart, and H. Wong. Talking to strangers: Authen-
tication in adhoc wireless networks. In Symposium on Network and Distributed
Systems Security (NDSS ’02), San Diego, California, 2002.
[2] S. Chandratilleke and M. Kreutzer. Credential-basierte ad-hoc-authentifikation
(engl.: Credential-based ad-hoc-authentication). In netzwoche Netzguide E-
Security, Netzmedien AG, Basel, 2003.
[3] L. M. Feeney, B. Ahlgren, and A. Westerlund. Spontaneous networking: an
application-oriented approach to ad hoc networking. In IEEE Communications
Magazine, 2001.
[4] L. E. Holmquist, F. Mattern, B. Schiele, P. Alahuhta, M. Beigl, and H. W.
Gellersen. Smart-its friends: A technique for users to easily establish connections
between smart artefacts. In Proc. of UBICOMP 2001, Atlanta, GA, USA, 2001.
[5] T. Kindberg and K. Zhang. Secure spontaneous device association. In Proc. of
UbiComp 2003, Seattle, Washington, 2003.
[6] J. Light. Security, privacy and trust issues raised by the personal server concept.
In this book., 2004.
[7] J. Rekimoto, T. Miyaki, and M. Kohno. Proxnet: Secure dynamic wireless connec-
tion by proximity sensing. In Proc. of Pervasive 2004, Linz/Vienna, 2004.
[8] P. Robinson. Architecture and protocol for authorized transient control. In this
book., 2004.
[9] F. Stajano and R. J. Anderson. The resurrecting duckling: Security issues for ad-
hoc wireless networks. In Lecture Notes in Computer Science, Vol. 1796, Springer,
pages 172–194, 2000.
ARCHITECTURE AND PROTOCOL FOR
AUTHORIZED TRANSIENT CONTROL
Philip Robinson
Teco, University of Karlsruhe & SAP Corporate Research. Vincenz-Prießnitz-Str. 1,
76131 Karlsruhe, Germany
1. Introduction
A system with static interrelationships and purely atomic interactions
is simple to manage. However, this is not a practical assumption for real-
world systems, where items in an environment have multiple relation-
ships with users, including shared ownership, and hence multiple oper-
ating modes. Corner & Noble make a similar observation in their work
on transient authentication, where they discuss the fallacy of infrequent
and persistent authentication between people and their devices [8]. Sta-
jano also addresses this theme by defining techniques and mechanisms
for asserting and ending the transient association between people and
devices in his Resurrecting Duckling protocol [15]. Therefore, it is often
argued that a major requirement for security in pervasive computing
is dynamic adaptation as opposed to rigid prescription of system con-
trols [4]. This however requires a richer model of interaction between
the security management system and the real world environment of the
resources it monitors and controls. Advances in context awareness and
sensor networking facilitate this form of interaction even if the manage-
ment system and resources are distributed. The protection goals of a
system seem, however, to act against the goals of awareness, usability and
This paper addresses the above issues by combining them under the
heading “authorized transient control”. The intent was to emphasize
operational matters as opposed to design and implementation issues that
appear to be already well addressed by existing work. The analysis of
the problem therefore commenced with a consideration of operational
roles as opposed to system components, as is the approach in the area
of control systems [17, 13, 9, 10].
Authorized transient control suggests that a user of a system is grant-
ed provisional access to a resource, given that certain conditions cur-
rently hold. The user is therefore referred to as an “authorized transient
a system switches from one operational mode to the next [13]. This second
notion of transience is not discussed in this paper, but is marked as an
issue that should be addressed, since adaptive security does entail switching
the operating modes of a target system. Thirdly, if a subject A controls a
target resource B, A monitors a set of control properties Rn that refer
to B and its operational environment, compares them to a set of con-
trol reference properties R*, and generates an action O that counteracts
the comparative error between Rn and R*. This definition of control is
derived from Powers’ work on “Perceptual Control Theory” [10], which
forms a part of the approach discussed later in this section. Other useful
descriptions of the term “control” come from Petersen, who states that
the role of a human operator [controller] is to bring about desired state
changes (or non-changes) in a controlled system [9]. Therefore, if A is
an authorized transient controller of B, A is permitted to perform an
action resulting from comparing the properties Rn and R*, in order to
control the operational state of B and bring about Rn = R* for the validity
of a situation S.
Considering the above definitions, resources in pervasive environments
can be said to have multiple controllers with different references or oper-
ational goals. However, only two types of controllers are considered for
the purposes of this paper - the “Interaction Controller” and the “Ac-
cess Controller”. The Access Controller carries out control operations
on behalf of a fulltime controller or administrator, while the Interaction
Controller acts on behalf of a transient controller or user of a resource.
The two controllers therefore have different perceptions of the target
resource, its situation and that of its environmental signals. Figure 1
depicts how these two different controllers are seen to operate on the
same target resource.
Figure 1 has introduced new terms that may lack intuitive meaning
for readers unfamiliar with perceptual control theory (PCT) [10]. PCT
is based on the premise that dynamic systems do not plan and process
repeatable actions; rather, they plan and process perceptions (or desired
views of a system), and hence produce repeatable results under varied
conditions. The principles of PCT adopted in the controller model in
figure 1 are defined below:
[Access/Interaction] Perception: this is the relevant view that a hu-
man controller has of a resource dependent on its operational state. The
operator need not know every detail of the resource’s operational state
but sufficient detail for the support of effective control decision-making.
The human controller may receive this directly from a resource, but in
the model used here, there is an intermediate controller module or agent
that automatically adjusts the perception in order that in the best cases
the human operator constantly receives an “ideal view” of the resource.
[Access/Interaction] Perceptual Reference: this represents the “ideal
view” that the controller wishes to receive from the resource. In the
case of the fulltime controller (FTC) and Access Controller (AC), the
source of the perceptual reference is authorization and obligation poli-
cies. These policies are specified by the FTC and enforced by the AC.
In the case of the transient controller (ATC) and interaction controller
(IC), the source of the perceptual reference is the tasks the ATC wishes
to carry out as well as the credentials that certify some set of rights.
[Access/Interaction] Perceptual Signal: this is the input that a con-
troller receives from a sensor system, which represents the control state
of the target resource, with respect to its observable properties, as well
as that of its environment.
[Access/Interaction] Perceptual Error: this is the calculated compar-
ative error between a perceptual signal and a perceptual reference. That
is, this is the controller’s calculation of how much the actual perception
of the target resource deviates from the ideal perception as defined by
the perceptual reference.
Environment Disturbance & Feedback: these are both property sets
sensed by a sensor system. “Feedback” is the actualized value of explic-
itly monitored properties of the target resource, while the “environment
disturbance” comprises monitored properties of the environment. The
environment disturbance may have either an indirect or a direct effect on
the target resource’s control state and hence on its perceptual signals.
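The loop formed by these definitions can be summarized in a few lines. The following is a minimal sketch with invented property names, not the paper's implementation: the controller compares a perceptual signal against its perceptual reference and emits a control action that counteracts the perceptual error.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Controller:
    reference: Dict[str, str]   # the "ideal view" of the resource

    def perceptual_error(self, signal: Dict[str, str]) -> Dict[str, str]:
        # How much the actual perception deviates from the ideal one.
        return {prop: ideal for prop, ideal in self.reference.items()
                if signal.get(prop) != ideal}

    def control(self, signal: Dict[str, str]) -> Dict[str, str]:
        # The action drives every deviating property back toward its
        # reference value; a zero error produces no action.
        return self.perceptual_error(signal)

# An Access Controller whose policies demand a locked screen and a
# closed session on a workstation:
ac = Controller(reference={"screen": "locked", "session": "closed"})
ac.control({"screen": "unlocked", "session": "closed"})
# -> {"screen": "locked"}: command to restore the reference state
```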
From the above model it is observed that feedback from the target
resource simultaneously results in two classes of perceptual signals, and
that the resource may also simultaneously receive two forms of per-
ceptual errors and control commands. Breemen and Vries discuss and
reference a number of problems that arise in systems with multiple con-
trollers [17], which also apply to the multi-controller interpretation
used here. Three of these multi-controller problems addressed by the
architecture and protocol are conflicts, deadlocks and coordination of
switching between controllers. Conflicts may arise as a result of con-
trary perceptual references or if the controllers attempt to simultane-
ously enforce a control on the target resource. In the context of the
access and interaction controllers, a conflict arises if the authorizations
and obligations specified at the AC do not support the tasks and cre-
dentials of the IC or if the AC tries to perform an access control at
the same time the IC performs an interaction control (and vice versa).
Deadlocks refer to exceptional control situations which none of the con-
trollers are prepared to handle. There could therefore be a case where an
irresolvable perceptual error occurs at both the interaction and access
controllers - e.g. hardware or software failure - which may render the
target resource unavailable. The coordination of switching between
controllers means that rules have to be defined for when and how control
is to be exchanged. Although the AC typically has a higher controller
priority than the IC, there may be situations, such as the emergency re-
sponse scenario, where this priority should be overridden to allow the IC
to work more efficiently. This means that the AC will in this case need
to adapt its perceptual reference to accept the new controllability of the
target resource. The architecture provides more details on the design of
a management system to computationally support the controller model,
giving consideration to the issues discussed.
Figure 2. UML diagram of the four conceptual models for the management system
and their integration.
Capabilities and possibilities are modeled in the situation and controller models
respectively. The class design principles were nevertheless inspired by work
from Scott et al., where they describe a spatial policy framework for mo-
bile agents based on the ambient calculus [12]. One of the goals was to
model the “world” inclusive of both physical entities and virtual agent
entities. They began by defining a set of entity sorts/types and then
defined a calculus for describing their behavior, relationships and interac-
tion rules. The top-level entities in their model are however immediately
specialized (e.g. workstation and laptop are two different top-level en-
tities), as opposed to defining object-oriented inheritance relationships
between the sorts - this could have served to enhance the semantics
used in the calculus. The approach in this paper is therefore to define
more high-level resource types; each resource is either of type “Space”
or “Solid”, where a Space may contain 0 to n resources of either resource
type. Secondly, the mobility of a resource also has an influence on its
behavior and controllability. Each resource type therefore has a “Mo-
bility” type of either “Mobile” or “Fixed”. A cargo container could be
classified as either a “mobile space” or a “mobile solid” dependent on
the perspective and allowed detail of a controller. That is, a controller
who is not allowed to view the contents of the cargo container would
perceive it as a solid. Therefore, changes in the structure, location and
policy of resources are represented by reassignment of resource types
and transfer to different spaces. The norms of the environment are the
semantic relationships of the resource types. For example, “a ‘fixed’
resource cannot be moved” is a norm processed at the lowest level of
behavior monitoring. The model also applies to information resources,
where e.g. a “namespace” would be considered a “Fixed Space”, a folder
in the namespace a “Mobile Space” and the electronic documents in
folders would be “Mobile Solids”. However, a folder may be repre-
sented as a solid to a user if he has no rights to read its contents and
can only be aware of its existence.
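The resource typing just described can be encoded compactly; the sketch below is our own encoding of it, not code from the paper. Note how a Space is perceived as a Solid by a controller who may not view its contents.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Union

class Mobility(Enum):
    MOBILE = "mobile"
    FIXED = "fixed"

@dataclass
class Solid:
    name: str
    mobility: Mobility

@dataclass
class Space:
    name: str
    mobility: Mobility
    contents: List[Union["Space", Solid]] = field(default_factory=list)

Resource = Union[Space, Solid]

def perceive(resource: Resource, may_view_contents: bool) -> Resource:
    # A cargo container is a "mobile space" to a controller allowed to see
    # inside it, and merely a "mobile solid" to any other controller.
    if isinstance(resource, Space) and not may_view_contents:
        return Solid(resource.name, resource.mobility)
    return resource

container = Space("cargo-container", Mobility.MOBILE,
                  [Solid("crate-1", Mobility.MOBILE)])
perceive(container, may_view_contents=False)   # -> Solid, contents hidden
```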
4. Protocol
The protocol description is in two parts; firstly, it describes the order-
ing and processing of messages passed between the architecture models.
The models are realized as four controller components but the interfaces
are actually between subcomponents of the models, as represented in
the architecture diagram (Figure 2). Secondly, the protocol is defined
as a state model for authorized transient control, which corresponds to
controller states as different control situations and events occur. Each
controller is composed of two threads - one for “listening” and one for
“controlling”. The listening thread is interfaced with the resource and
situation component models, such that it is responsible for coordinating
the reception of situation data and coordinating the output of control
commands to the resource. It is hence the interface between the re-
source and the controller. The “control thread” is interfaced with the
controller and interaction models, and is therefore responsible for co-
ordinating forwarding of control situations to the controller logic and
receiving subsequent controller commands.
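The sketch below illustrates this thread structure under our own assumptions about message and function names; only the queue plumbing between the two threads is shown, not the controller logic itself.

```python
import queue

situations = queue.Queue()   # listening thread -> control thread
commands = queue.Queue()     # control thread -> listening thread

def listening_thread(sensor_events, actuate):
    # Interface to the resource: forward situation data inward and output
    # any pending control commands back to the resource.
    for event in sensor_events:
        situations.put(event)
        while not commands.empty():
            actuate(commands.get())
    situations.put(None)                 # shutdown marker

def control_thread(decide):
    # Interface to the controller logic: forward control situations to the
    # logic and queue the resulting controller commands.
    while True:
        situation = situations.get()
        if situation is None:
            break
        commands.put(decide(situation))

# Example wiring (decide/actuate are supplied by controller and resource):
# threading.Thread(target=control_thread, args=(decide,)).start()
# listening_thread(sensor_events, actuate)
```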
The labels a.0 - a.10 are particular message types. These are listed
below and their flow described in the subsequent paragraph:
0: Initialization / 1: Infon / 2: Situation / 3: Awareness / 4: Control
Situation / 5: Perceptual Signal / 6: Perceptual Error / 7: Control
Figure 3. The operational model for controllers using the four architecture com-
ponent models. The figure shows messages being exchanged between the component
interfaces. The diagram is repeated for both the access and interaction controllers,
but the message types are essentially the same. Messages “a.n” refer to those of the
Access Controller, while “b.n” are the Interaction Controller’s messages
Figure 4. The local state model of the overall control system as coordination and
interaction messages are passed between the controllers and their threads respectively.
Each state is also labeled with its designated coordinator - AC: Access Controller; IC:
Interaction Controller
However, there is a more complex state model (see figure 4) for the overall
management system, which forms the basis for coordinating the con-
trollers. In the model presented, the current assumption is that the
Access and Interaction controllers are in the same control domain and
therefore their behavior can be mutually trusted. In this case trust re-
lates to their cooperation based on local coordination messages. Each
controller is therefore said to be “autonomously reactive”, matching the
goals of “adaptive security”. As the overall system state changes, each
controller makes a decision based on its perceptual reference. While
the access and interaction control activities are “closed-loop”, the co-
ordination activities are treated as “open-loop”, in that there is no
imposition that either controller should provide explicit feedback. Nev-
ertheless, if it is recognized that the state models are out-of-sync this is
considered a “vulnerability window” or “imbalance” in the system [11],
which the controllers must reconcile. Coordination is therefore the task
of asserting that a particular state either exists or is transitioning. The
controller model (CM) is responsible for maintaining the state machine
of each controller; however, one of the controllers plays the role of coordi-
nator per state. The states are described below by defining the controller
that coordinates the relevant state, as well as the events that trigger re-
verse transitions and forward transitions respectively. The applicability
of the model to a traditional PC workstation environment is also used
as a practical example.
To draw a clear connection between the state model and the earlier
discussions about perception, note that each phase indicates a new
perceptual view or class of perceptions, such that human operators can
perceive the state of the system according to their interests.
5. Conclusion
This paper has addressed two important themes in security for per-
vasive computing. Firstly, the issue of coordinating complexity with
regard to adaptive security and access controls, and, secondly, the is-
sue of conflicting usability and security goals. The approach was to
design the system as a dual-controller system, where one controller was
responsible for mediating interactions and the other for authorizations
and access controls. The system architecture of each controller was de-
signed to explicitly address the four challenges stated in the introduction
of resource modeling, context awareness, adaptive control and dynamic
interaction. The protocol considers that the dual controllers need to be
coordinated, and also includes how the coordination between these two
controllers is managed. The properties of authorization, transience and
control are therefore captured by the interaction of controllers and their
human operators.
Although a fair amount of work related to transient and adaptive security
has been identified throughout the text, it was found that the focus
was either on design aspects, a particular requirement (policies, context
awareness, resource modeling, user modeling etc), or described a specific
6. Acknowledgements
The ideas with regard to coordination came about during the
problem definition phase of the ongoing TrustCoM project www.eu-
trustcom.com. I also thank my colleagues at TecO - Uni. Karlsruhe
and SAP Research for their useful comments and time to discuss the
ideas of this paper, as well as the feedback from reviewers and general
discussion during the workshop. Finally, thanks to Nicole Robinson for
her assessment of “real world” relevance of the paper.
Appendix: Remarks
Note that the contents and structure of the paper changed significantly after dis-
cussion in the workshop as well as subsequent discussions with colleagues about the
direction of this work. The original workshop paper presented the architecture and
protocol in a still very preliminary, requirements-gathering manner. The major
problem cited in reviews of the first draft concerned the explicit focus on a
particular problem and contribution. After the workshop, the idea arose of
addressing operational matters as opposed to purely the design and
implementation of adaptive security.
References
[1] J. Barwise and J. Perry. Situations and Attitudes. MIT Press, Cambridge, MA,
1983.
items into and out of his home: the tags are automatically enabled when
he brings them home and disabled when he takes them out. Thus, he
can benefit from the RFID functionality in the secure environment his
home offers without being subject to surveillance in the public.
Smith’s presentation was followed by questions regarding profiles that
could help define what data should be disclosed in which situations. Pro-
files are made up of rules that provide the basis for such decisions based
on situational input. The presenter, however, rejected this idea, stating
that rules may be suitable for average-case scenarios and situations, but
not for extreme cases, which are, after all, much more important.
Hohl’s presentation triggered concerns about the actual implementa-
tion of a DRM-based public terminal. It seems that there may be issues
with standard soft- and hardware that could make hardening such a pub-
lic terminal quite challenging. For example, swapping of main memory
contents to the hard disk could leave traces of private information that
could be accessed later by an unauthorized person. Another problem
is controlling access to personal information. The hardware token that
issues the license to the public terminal is itself a security risk.
MAINTAINING PRIVACY IN RFID
ENABLED ENVIRONMENTS
Proposal for a disable-model
2
Department of Computer Science
[email protected]
1. Introduction
RFID technology is a major enabler of ubiquitous computing envi-
ronments or the pervasive Internet as described and researched by tech-
nologists. Today, the technology is introduced to facilitate supply chain
Figure 1. Example of how the privacy debate can impact the brand
of other researchers (e.g. [15]). This type of more cost-intensive privacy
enhancement only makes sense in the context of high-value goods.
Type 1 privacy enhancements
We envision the disabling process to flow as follows: instead of storing
the kill password and function, the RFID tag stores a 24-bit
enable/disable password and function. When a consumer pays for his
products, all tags are disabled automatically by default. The disabling
process is handled by the cash register in order to avoid any time cost
to the consumer. Upon disablement, a new password is randomly set on
all tags. This one password is printed on the customer’s receipt2.
It can be used by the new product owner to re-enable the EPC if
needed for recycling, reclamations or intelligent home applications.
If an unauthorized reader device requests the EPC from a disabled tag
without the correct password, the tag denies access to the EPC stored
on it. From a layman’s perspective, this means that by default purchased
objects do not communicate with any reading device except at one’s
personal request. The approach thus lends itself to calming those privacy
concerns related to unauthorized tracking and spying. At the same time,
all economically driven intelligent home appliances and future consumer
information needs are supported. Trust in the back-end reader architecture
is not required. Control resides completely with the user.
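A minimal sketch of this disable model follows, under our own simplifying assumptions (a real tag implements this in hardware logic, not software): the tag is disabled at checkout with a fresh 24-bit password, denies its EPC to readers without that password, and can be re-enabled by the new owner.

```python
import os
from typing import Optional

class Tag:
    def __init__(self, epc: str):
        self.epc = epc
        self.enabled = True
        self.password: Optional[int] = None  # 24-bit enable/disable password

    def checkout_disable(self) -> int:
        """Cash register: disable the tag and set a fresh random password,
        which is then printed on the customer's receipt."""
        self.password = int.from_bytes(os.urandom(3), "big")   # 24 bits
        self.enabled = False
        return self.password

    def read_epc(self, password: Optional[int] = None) -> Optional[str]:
        # A disabled tag denies access to the EPC stored on it.
        if self.enabled or password == self.password:
            return self.epc
        return None

    def enable(self, password: int) -> bool:
        if password == self.password:
            self.enabled = True
        return self.enabled

tag = Tag("urn:epc:id:sgtin:0614141.812345.6789")
receipt_password = tag.checkout_disable()
assert tag.read_epc() is None                      # unauthorized reader: nothing
assert tag.read_epc(receipt_password) == tag.epc   # owner: full functionality
```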
From a technical perspective, of course, the tag still reacts to and processes
re-enable requests. At this point several issues can arise from a security
perspective. The most important one is that it is possible for an adver-
sary not to decipher the password, but instead to mimic an anti-collision
procedure. Anti-collision is a function used to uniquely recognize and
communicate with one tag when several tags respond at the same time.
If anti-collision were based on the EPC - the structure of which is
standardized - our disable proposal could be circumvented. Our so-
lution therefore relies on the fact that the EPC itself is not used for
anti-collision. At first sight, this may be considered a major drawback
of our solution. Yet requirements in logistics suggest that full EPCs are
not suited as a numbering scheme for anti-collision anyway: working
through a full EPC is too time-consuming. Therefore, other numbering
schemes have been proposed for anti-collision, including EPC-dependent
hash values, a random number pre-integrated in the tag, an RNG inte-
grated into the tag, or a 12- to 14-bit serial number extracted from the
full 96-bit EPC [5]. For all these suggestions, our solution is feasible.
The second security weakness that may be argued is that a 24-bit
password scheme is not a ‘good-enough’ protection. We argue that the
4. Discussion
Obviously, both types of privacy enhancements imply additional cost
for tag manufacturers. The most important cost driver is that the pri-
vacy enhancements we propose require tag manufacturers to use non-
volatile and re-writable memory (e.g. EEPROM) instead of ROM for
all item-level tags. Even though this is generally foreseen for tags of
class 2 and upwards, the current specification does not include it for
the low-cost tag classes 0 and 1. In addition to this memory cost, the
tags would need to be able to integrate two (or even five) additional
functions3.
Disabling a tag as we propose here only from time to time would not
make sense. Our proposal therefore integrates the requirement that the
disable process itself takes place automatically when goods are checked
out at the cash register. While the disable model allows for privacy by
default and is therefore superior to the kill function, industry players
will argue that integrating disablement into cash registers is costly. We
argue that this may be true, but privacy needs justify the investment.
With RFID, cash registers will undergo considerable technical changes
in any case; disabling will only be an additional requirement.
Password management can be a challenge in moving goods through
the supply chain as well as in the user domain. Yet, as far as logis-
tics is concerned, our proposal is identical to the kill model: password
information would probably be transferred along with EPC information.
Once consumers take products home, future scenarios foresee home agents
and identity management systems [2] which manage people’s assets, data
and access rights4. In our thinking, such an agent could check new goods
into the home system and set all devices to one common home password.
Consequently, future consumers would not have to remember a myriad
of passwords, one for each product. We believe that a common password
architecture for home readers or smart homes makes sense, as consumers
can access their devices more easily. A back-end database containing all
tag data, as proposed by Weis [15], as well as a processing infrastructure
to test all possible passwords [7], is not required. In the short and mid
term, passwords printed on receipts also do not increase consumer trans-
action cost, since proof of purchase for recycling and reclamation has been
based on receipts for decades.
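Continuing the Tag sketch above, a hypothetical home agent could implement the common-password idea in a few lines; the check-in flow below is our own illustration, not part of the proposal's specification.

```python
HOME_PASSWORD = 0x5A5A5A   # one 24-bit password shared by the household

def check_in(tag, receipt_password: int) -> bool:
    """Check a newly bought item into the home system: prove ownership
    with the receipt password, then rekey the tag to the common home
    password so home appliances can use a single secret."""
    if not tag.enable(receipt_password):   # wrong receipt -> not our item
        return False
    tag.password = HOME_PASSWORD           # home readers use one password
    return True

check_in(tag, receipt_password)            # tag now answers home appliances
```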
Finally, from a security perspective, our proposal does of course not
offer the highest-level protection needed in some application areas.
However, we also do not believe that military-level security is required for
yogurt cans or even stereos. Even if the ‘one common home password’
we suggest were cracked, what more would a thief learn about
my belongings than if he just unlocked the window and stepped in?
5. Conclusion
RFID technology will be a ubiquitous reality in everyday life in the
future. This paper argues that economic interest seeks to maintain an
RFID tag’s functionality after a purchase has been made. On this basis
it is argued that killing RFID tags is an unrealistic solution to preserve
default privacy in the long run. The authors conclude that mass market
RFID should be enhanced with privacy functionality which in our pro-
posal implies write-enhanced memory. Two types of privacy protection
are suggested implying different cost and sophistication.
The major benefit of the solution outlined is that the disable-model
puts RFID communication into the sole control of the user. With this,
the solution embraces current thinking in the development of PET tech-
nologies which takes a user-centric view. Secondly, a compromise is made
between state-of-the-art security and what is economically feasible. Only
‘good-enough’ security is used to develop a proposition that will meet
the privacy needs in a majority of situations. Finally, the model is the
only proposition to our knowledge which allows for a realistic compro-
mise between RFID-based market aspirations and business models on
one side and peoples’ desire for privacy on the other. Consequently, we
believe that the disable-model is a good road to take.
Notes
1. Similar to the bar code, the Electronic Product Code, EPC, contains a serial number
that can be related to a product category and a manufacturer. However, the EPC also
contains a unique serial number associated with more detailed and comprehensive back-end
data. This allows for retrieving an object’s detailed characteristics, history and potentially
other related data [1].
2. Long-term, the password will probably be transferred to an identity device such as a
PDA owned by the consumer.
3. In fact, the low-cost RFID tag “Philips I-CODE SL2 ICS10/11” already contains all
components needed for type 1 privacy enhancements, requiring only a few design changes.
4. For a reference on agent solutions currently developed to address the challenge of in-
creasingly complex password management see e.g. HP’s work on the ‘e-person’:
http://www.hpl.hp.com/research/iil/themes/eperson/eperson.htm
References
[1] Auto-ID Center. Technical memo - physical mark-up language update, p.5,
2002.
[2] S. Clauß and M. Köhntopp. Identity management and its support of multilateral
security. Computer Networks, (37):205–219, 2001.
[3] Ivan Bjerre Damgård. Collision free hash functions and public key signature
schemes. In Eurocrypt ’87, volume 304 of LNCS, pages 203–216. Springer-Verlag,
1988.
[4] FoeBuD e.V. Positionspapier über den Gebrauch
von RFID auf und in Konsumgütern, Presseerklärung.
http://www.foebud.org/texte/aktion/rfid/positions-papier.pdf, 2003.
[5] EPC Global. Specifications for 900 MHz Class 0 RFID Tags, page
15. http://www.epcglobalinc.org/standards_technology/Secure/v1.0/UHF-
class0.pdf, 2003.
[6] EPC Global. Version 1.0 Specifications for RFID Tags.
http://www.epcglobalinc.org/standards_technology/specifications.html,
2003.
[7] A. Juels. Privacy and Authentication in Low-Cost RFID Tags. Submission to
RFID Privacy Workshop @ MIT, 2003.
[8] Shingo Kinosita, Fumitaka Hoshino, Tomoyuki Komuro, Akiko Fujimura, and
Miyako Ohkubo. Nonidentifiable Anonymous-ID Scheme for RFID Privacy Pro-
tection. To appear in CSS 2003 in Japanese, 2003.
[9] Meg McGinity. RFID: Is This Game of Tag Fair Play? Communications of the
ACM, 47(1):15, 2004.
[10] Miyako Ohkubo, Koutarou Suzuki, and Shingo Kinoshita. Cryptographic Ap-
proach to “Privacy-Friendly” Tags. Submission to RFID Privacy Workshop @
MIT, 2003.
[11] Gregory J. Pottie. Privacy in the Global E-Village. Communications of the
ACM, 47(2):21, 2004.
[12] Peter Schüler. Dem Verbraucher eine Wahl schaffen - Risiken der RFID-Technik
aus Bürgersicht. c’t, (9), 2004.
[13] C. E. Shannon. Communication Theory of Secrecy Systems. The Bell System
Technical Journal, 28(4):656–715, 1949.
[14] S. Spiekermann and U. Jannasch. RFID in the retail outlet: implications for
marketing and privacy. IWI Working Paper, 2004.
[15] S. Weis. Security and Privacy in Radio-Frequency Identification Devices. PhD
thesis, Massachusetts Institute of Technology (MIT), 2003.
SAFEGUARDING PERSONAL
DATA USING TRUSTED COMPUTING
IN PERVASIVE COMPUTING
2
Microsoft Research Cambridge
[email protected]
1. Introduction
The paradigm of ubiquitous and pervasive computing [16] leads to
a much greater intrusion of information and communication technology
into the personal life of everyone than what we experience today. The
users of pervasive computing will use many smart personal objects; in
addition, many services will be provided by a smart environment that
will surround us. However, users’ fears about the misuse of their per-
sonal data prevent the acceptance of these services and technologies.
This is especially the case when an agent running on a personal digital
assistant acts on behalf of the user and can autonomously release
sensitive information to communicating partners such as service-provid-
ing devices in the environment. Nearly everybody has had experience of
misused personal information on the Internet, such as unwanted adver-
tisements and spam. This is only the tip of the iceberg. More serious
abuse of the information may involve selling it to rating agencies, re-
sulting in unwanted “personalization” of prices, interest rates, denial of
credit, etc. Therefore, it is essential that devices providing services han-
dle their users’ personal data with care. If it is not possible to ensure
this, fear of misuse and privacy concerns remain with the user.
2. Attacker Model
The aim of an attacker in this scenario would be to gain access to
private health information. The attacker may gain control over some of
the software on the public terminal, or gain complete control over the
terminal after the user has left it. He may read and insert communi-
between the smartcard and the terminal, or read and insert communi-
cation between the terminal and the backend database. In addition, the
attacker may also introduce a fake terminal. An attack that involves an
attacker looking at the display of the terminal is not considered.
3. The Approach
Our approach closely follows the idea presented by Korba and Kenny
in [8]. Its solution to the problem of how a user can keep control over
transmitted personal data is based on the following observation: the
interests a service or application user has in dealing with sensitive data
are similar to those of providers of copyrighted digital content. Both the copy-
righted content provider and the patient, i.e., the personal data provider,
are interested in making the supplied data available only for limited use
and processing. Furthermore, unauthorized onward transmission and
use should be prevented. Consequently, control over transmitted data or
content has to be enforced.
This parallelism of interests between content providers and patients
(service users) with regard to the processing of data makes rights man-
agement mechanisms a suitable toolset for the protection of sensitive
personal data. Personal data is sent in a protected way to the service-
providing device, preventing unauthorized usage and information leakage.
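As a minimal sketch of this rights-management view (the licence format below is our own invention, not the one used in the implementation): personal data travels only together with a licence naming the permitted uses, and a trusted terminal gates every operation on that licence.

```python
import time
from dataclasses import dataclass, field
from typing import Set

@dataclass
class Licence:
    rights: Set[str] = field(default_factory=set)   # e.g. {"view", "print"}
    expires: float = 0.0                            # epoch seconds

@dataclass
class ProtectedRecord:
    payload: bytes      # the patient's record (encrypted in a real system)
    licence: Licence

def use(record: ProtectedRecord, operation: str) -> bytes:
    """Release the data for one operation iff the licence allows it now."""
    if operation not in record.licence.rights:
        raise PermissionError(f"licence does not grant '{operation}'")
    if time.time() > record.licence.expires:
        raise PermissionError("licence has expired")
    return record.payload

record = ProtectedRecord(b"x-ray #42",
                         Licence({"view"}, expires=time.time() + 600))
use(record, "view")      # permitted for ten minutes
# use(record, "print")   # raises PermissionError: not granted
```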
4. Technical Solution
Successful deployment of a system which enforces the processing of
personal data under given usage restrictions requires an independent pro-
cessing or reporting component. This component can ensure or report
that the applications which are executed are untampered with and pro-
vide a safe execution environment. The Trusted Computing Group [15]
is developing extensions to computing platforms to ensure this. Because
major industry players, including hardware and software manufacturers
and content providers, are involved in specifying this platform, one can
assume that devices with Trusted Computing or DRM capabilities will
become pervasive. The TCG platform can produce signed attestations
of the integrity of the software, and thereby report the execution of an
untampered application.
Technically the TCG specifies hardware extensions by which different
stages of starting and running a platform can be verified by measure-
ment functions and reported to the TPM. By this, the trusted domain
is extended with every successfully verified component (BIOS, firmware of
devices, bootloader, operating system). This extension of trust is illus-
trated in figure 2. If the platform has successfully started and all the
hash values of the measured components match the expected values of
a known platform state, the TPM unlocks signing functions to be able to
prove its known state. Microsoft proposes an operating system with the
so-called Next Generation Secure Computing Base (NGSCB) [10], which
extends the existing context in which a process can be executed with a
secure context environment. Only verified code can be executed in this
protected context. Debugging or accessing other processes’ memory is
not possible, and in future this isolation is to be supported by a special
processor mode. ARM, the well-known microprocessor designer, likewise
proposes a secure extension to its processor architecture, TrustZone [2].
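The measurement chain can be illustrated with a small sketch (our own simplification of the TCG mechanism, not its specification): each stage hashes the next component and extends a PCR-like register, so the final value matches the expected one only if every component in the chain was untampered.

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # PCR extension: new = H(old || H(component)). The result depends on
    # both the contents and the order of all measured components.
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

boot_chain = [b"BIOS image", b"device firmware", b"bootloader", b"OS kernel"]

pcr = bytes(32)                 # the register starts zeroed at power-on
for component in boot_chain:
    pcr = extend(pcr, component)

expected = pcr                  # recorded once for the known-good platform
# At attestation time the TPM unlocks its signing functions only on a match:
unlock_signing = (pcr == expected)
```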
5. Discussion
The implementation represents a first step towards using cooperative
mechanisms to protect the privacy of users, mechanisms which are
technically reported and enforced at the public terminals. The operating
system used supports verified execution, but in itself cannot represent the
same core root of trust as trusted computing hardware. The PDA can issue
the right to view and print. Printing is a digital transfer of sensitive data
to another device, the printer. This means that the printer itself would
have to respect the terms of the licence. Currently, a printer without
permanent storage is used.
The implementation described in the previous section does not ad-
dress the threat that the browser may be tricked into posting sensitive
information to untrusted sites. To this end, further isolation of the net-
work environment is required, similar to the isolation of the filesystem
provided by the chrooted ramdisk.
The use of stunnel [14] and HTTPS is very computation-intensive
for the user’s device. Using NGSCB-like enforcement mechanisms could
reduce this load and lead to a solution closer to the capabilities of a real
smartcard.
6. Related Work
There is some work that is related to the approach presented here.
As stated before, the idea of using DRM like mechanisms and rights
expression languages for the protection of personal data was discussed
by Korba and Kenny [8]. However, they did not present a working
system architecture or proof of concept implementation. Bussard et
al. [4] demonstrate how to display sensitive information in federations
of devices. However, their approach does not work if the information is
too complex to be displayed on a limited screen (e.g. x-ray pictures).
Kohl [7] pointed out that privacy is in fact a big issue in a hospital
environment, but assumed a central organization for data storage and
processing. Privacy through the use of identity management in a mobile
computing environment is proposed in [6]. It is based on the retention
of personal data, which can no longer be controlled once it has been given
into foreign hands. Agrawal et al. [1] attach a licence to data in a database. This
approach is a good way of ensuring privacy as long as the data does not
cross administrative domain boundaries.
Closer to the method presented here is the suggestion of Langheinrich
in [9]. A policy is attached to personal data to create a sense of account-
ability. The approach of Mont et al. in [12] uses a third party to trace
and audit the use of personal information.
Notes
1. We would like to point out that these issues are not limited to the hospital environment
and also appear in other areas, for instance in e-commerce and web-services in general.
2. Smartcards like this are currently being specified and will be used in the near future
in the German health system under the name “Krankenkarte”.
3. While every patient will be supplied with a smartcard, not every person will own a
PDA.
4. Here, trust is defined as the patient being confident that her data is not misused.
5. Current terminals have to be considered untrusted as long as a user of the terminal
has no way to convince himself otherwise. It is easy to tamper with a terminal given its
public location, while it is very hard to administer it such that it remains tamper resistant.
References
[1] Rakesh Agrawal, Jerry Kiernan, Ramakrishnan Srikant, and Yirong Xu. Hippo-
cratic Databases. In 28th Int’l Conf. on Very Large Databases (VLDB), Hong
Kong, 2002.
[2] ARM. TrustZone Technology - Secure extension to the ARM architecture. 2004.
[3] G. Brose, M. Koch, and K.-P. Löhr. Entwicklung und Verwaltung von Zugriffss-
chutz in verteilten Objektsystemen - eine Krankenhausfallstudie. 2003. Praxis
der Informationsverarbeitung und Kommunikation (PIK).
[4] Laurent Bussard, Yves Roudier, Roger Kilian-Kehr, and Stefano Crosta. Trust
and Authorization in Pervasive B2E Scenarios. In Proceedings of the 6th Infor-
mation Security Conference (ISC’03) Bristol, United Kingdom, October 1st-3rd,
2003.
[5] G. Iachello and G. D. Abowd. Security requirements for environmental sensing
technology. 2003. 2nd Workshop on Ubicomp Security, Oct. 2003, Seattle, WA,
USA.
[6] Uwe Jendricke, Michael Kreutzer, and Alf Zugenmaier. Mobile Identity Man-
agement. Technical Report 178, Institut für Informatik, Universität Freiburg,
October 2002. Workshop on Security in Ubiquitous Computing, UBICOMP.
[7] Ulrich Kohl. From Social Requirements to Technical Solutions - Bridging the
Gap with User-Oriented Data Security. In Proceedings IFIP/Sec ’95, Cape
Town, South Africa, 9-12 May, 1995.
[8] Larry Korba and Steve Kenny. Towards Meeting the Privacy Challenge: Adapt-
ing DRM. 2002. ACM Workshop on Digital Rights Management.
[9] Marc Langheinrich. A Privacy Awareness System for Ubiquitous Computing
Environments. 2001.
[10] Microsoft Corporation. NGSCB: Trusted Computing Base and Software Au-
thentication, 2003.
[11] Günter Müller, Michael Kreutzer, Moritz Strasser, et al. Geduldige Technologie
für ungeduldige Patienten, führt Ubiquitous Computing zu mehr Selbstbestimmung?
(Patient technology for impatient patients: does ubiquitous computing lead to
more self-determination?). In Total vernetzt, pages 159-186. Springer: Berlin,
Heidelberg, New York, 2003.
[12] M. Mont, S. Pearson, and P. Bramhall. Towards Accountable Management of
Identity and Privacy: Sticky Policies and Enforceable Tracing Services. Technical
Report HPL-2003-49, HP Laboratories, 2003.
[13] NetBSD. http://www.netbsd.org. 2004.
[14] stunnel. www.stunnel.org. 2004.
[15] Trusted Computing Group. TCG Backgrounder. May 2003.
[16] Marc Weiser. The Computer of the 21st Century. Scientific American, vol. 265,
no. 3, September 1991, pp. 66-75.
Part of this work was funded by the DFG / Gottlieb Daimler and
Carl Benz Foundation.
A SOCIAL APPROACH TO PRIVACY IN
LOCATION-ENHANCED COMPUTING
School of Information and Computer Science, U.C. Irvine, Irvine, CA 92697, USA
[email protected]
Abstract Place Lab is a system for positioning a user based on passive monitoring
of 802.11 access points. Place Lab seeks to preserve the user’s privacy
by preventing disclosures, even to “trusted” systems in the infrastructure.
We are pursuing two avenues to explore these and other privacy issues in
the domain of socially-oriented applications. We are doing fieldwork to
understand user needs and preferences as well as developing applications
with significant, fundamental privacy concerns in order to expose the
strengths and weaknesses in our approach.
1. Introduction
Privacy has long been recognized as a central concern for the effective
development and deployment of ubiquitous systems [2, 3, 13, 15, 17].
As both a technical problem and a social problem, it is difficult to deal
with, to design for, and to model coherently.
The traditional frame within which privacy arguments are cast is a
trade-off between risk and reward. This is a popular approach in a
range of fields from public policy to cryptography. The risk/reward
framework, in the pervasive computing context, suggests that individ-
uals make decisions about technology use by balancing perceived risks
against anticipated benefits; that is, in a fundamentally economic ap-
proach, they trade off costs against benefits and adopt technologies in
which the benefits outweigh the costs, while rejecting those in which the
costs outweigh the benefits. Therefore, many have argued, creating suc-
cessful location-enhanced computing requires finding the most effective
balance between risks and rewards [10, 25].
This approach has a number of problems, though, both as a conceptual
framework and, consequently, as a model for design. Studies of actual
practice fail to display the sort of rational trade-off that this model would
suggest. There are a number of possible reasons.
First, it is likely that the model is over-simplified and neglects a num-
ber of related factors that are important for decision-making about tech-
nology adoption and use. For example, we have found that naturally-
occurring accounts of privacy behaviors depend on recourse as much as
risk and reward. By recourse, we are referring to the actions that can
be taken by users in the event that others misbehave.
Second, recent research in the area of behavioral economics suggests
that traditional rational actor approaches fail to adequately account for
everyday behavior even within their own fairly limited terms of refer-
ence [22]. The notion of stable exchange-values for goods, services, and
labor upon which conventional economic modeling is based seems to fare
poorly when applied to human actors who are meant to embody these
principles. Instead, psychological and social factors seem to interfere
with the mathematical principles of neoclassical economics. In a sim-
ple example, while you might pay a neighborhood kid $20 to mow your
lawn, you would be less likely to mow your neighbor’s lawn for $20. Re-
cent approaches that attempt to incorporate psychological elements into
economics models, such as prospect theory, revise traditional notions of
commodity and value.
Third, and perhaps more fundamentally, studies of technological prac-
tice suggest that technology adoption and use should be seen not simply
in terms of individual decisions about costs and benefits, but rather in
terms of broader patterns of participation in cultural and social life. For
example, in Harper’s (1992) study [11] of the use of active badges in
research laboratories, it is telling that a number of people report partic-
ipating in the use of the system in order to be seen as team players, in
order to provide support to others, etc. In other words, social actions
have symbolic value here, and these are frequently the more salient ele-
ments of adoption decisions. Ito’s studies of mobile messaging amongst
Japanese teens [14], or the studies by Grinter and colleagues of the use of
SMS and Instant Messaging amongst teens in the US and the UK [8, 9]
describe the use of messaging technologies as cultural practices, essen-
tially casting the adoption of these technologies as forms of participation
in social life. To use the technologies is simply part and parcel of ap-
propriate social practice. As technologies become increasingly integrated
ganization) in return for some service. Active Badge systems [27] and
related context-based services operate according to this model; infor-
mation about location is relayed to a central server, which then makes
contextualized services available to clients and users. This architectural
approach made sense when both client-side computation and network
bandwidth were limited, and so has been a common structure in pro-
totype ubicomp systems. However, given the relentless march of time
and Moore’s Law, alternative technical approaches are now more fea-
sible, and avoid the sorts of privacy commitments being made in this
architecture.
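As an illustration of the client-side alternative, consider this minimal
Python sketch (our own simplification, not Place Lab's actual code): the
device holds a locally stored database mapping 802.11 access-point MAC
addresses to known coordinates, and estimates its position by averaging
the positions of whatever beacons it overhears. No observation ever
leaves the device.

# Hypothetical local beacon database: AP MAC address -> (lat, lon).
# In a Place Lab-style system this table is obtained in bulk beforehand,
# so individual position fixes reveal nothing to the infrastructure.
AP_DATABASE = {
    "00:11:22:33:44:55": (47.6205, -122.3493),
    "66:77:88:99:aa:bb": (47.6210, -122.3488),
}

def estimate_position(observed_macs):
    """Average the known coordinates of the currently visible access points."""
    known = [AP_DATABASE[m] for m in observed_macs if m in AP_DATABASE]
    if not known:
        return None  # no recognized beacons; position unknown
    lat = sum(p[0] for p in known) / len(known)
    lon = sum(p[1] for p in known) / len(known)
    return (lat, lon)

Real systems would weight beacons by signal strength and cope with stale
database entries, but the privacy property is already visible here:
position is a purely local function of passively overheard beacons.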
It should be noted that it is possible to build the same institutional
application with varying degrees of disclosure on the part of users. For
example, if Google made their index of web pages publicly available,
one could turn Google into a personal application since a user could
do their searches while disclosing little to no personal information. In
this scenario, one could download the entire medical index and then
search locally for a specific condition; the bulk download reveals at most
a possible interest in medical matters, but nothing beyond that. However, in most cases,
institutional applications have substantial commercial, public interest,
or intellectual property barriers that prevent them from being organized
in this open fashion.
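A toy version of this "download the index, query locally" pattern, with
hypothetical data, might look as follows in Python; only the one-time
bulk download is observable, never the query itself.

# Hypothetical downloaded inverted index: term -> set of document ids.
medical_index = {
    "migraine":  {"doc12", "doc40"},
    "treatment": {"doc12", "doc77"},
}

def local_search(index, terms):
    """Intersect posting sets locally; the query never leaves the device."""
    results = None
    for term in terms:
        postings = index.get(term, set())
        results = postings if results is None else results & postings
    return results or set()

print(local_search(medical_index, ["migraine", "treatment"]))  # {'doc12'}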
The final class of applications in our taxonomy is social. These appli-
cations require disclosure to people, rather than institutions, in order to
work effectively. Many ubiquitous-computing services, such as friend finder [26]
or context-aware chat [23], are examples of social applications. A friend
finder is an application that alerts you when one of your “friends” is
nearby, facilitating serendipitous social interaction. Clearly, this requires
at least that the user’s and her friend’s locations be exchanged in some
way.
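The core of such an exchange can be sketched in a few lines of Python
(our own illustration; names and the distance threshold are hypothetical,
and it anticipates the mutual-acceptance question raised below):

import math

# Friendship must be declared by both parties before any location is shared.
friendships = {("alice", "bob"), ("bob", "alice")}

def are_mutual_friends(a, b):
    return (a, b) in friendships and (b, a) in friendships

def distance_m(p1, p2):
    """Approximate planar distance in metres between (lat, lon) pairs."""
    dlat = (p1[0] - p2[0]) * 111_000
    dlon = (p1[1] - p2[1]) * 111_000 * math.cos(math.radians(p1[0]))
    return math.hypot(dlat, dlon)

def friend_nearby(me, other, my_pos, other_pos, radius_m=200):
    """Alert only if both sides have consented and the friend is within range."""
    return are_mutual_friends(me, other) and distance_m(my_pos, other_pos) <= radius_m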
There are risks in social applications, although they are not as clear as
some other scenarios. In the friend-finder example, by what mechanism
should “friends” be designated? Certainly, it should require some type
of mutual acceptance, otherwise the system can and will be abused by
anyone with the technology. Avenues for recourse are also unclear. Are
the forces of recourse, such as social isolation or embarrassment, strong
enough to affect user behavior? With due respect to considerations of
risks and recourse, we are more interested in how this technology will
be adopted by social actors. It is easy to imagine that being on someone’s
“friends list” in a friend-finder application might be as important as
being in someone’s cell-phone address book. Studies of the gift-giving
practices of teens [28] have revealed the social impact of being “in” the
social space of someone’s cell-phone address book to be significant.
while at the same time presents significant privacy risks. In this way,
we hope to attack the privacy issue “head on” by experimenting with
privacy strategies and mechanisms.
Our application is called “ambush” and is based on the work of My-
natt and Tullio. In [18], Mynatt and Tullio describe an ambush as the
use of a shared calendaring system to infer a person’s probable location
in the future with the intent of “ambushing” them for a quick face-to-face
meeting. This process is used frequently in larger organizations, partic-
ularly by subordinates, to have brief conversations with senior managers
who are between meetings.
We have generalized the notion of ambush to be any location, not
just conference rooms visible in a shared calendar system at work. Our
ambush application allows a user Alice to define a geographic region, say
a public park, and ask to be notified any time Bob enters that region. If
Alice lives near the park and wants to visit with Bob, clearly both can
benefit from the possible serendipitous, social encounter in the park.
Another use of ambush is micro-coordination, a task common
in urban environments: “Let me know when Charles or DeeDee
get to the subway station so I can go meet them.” Another use of ambush
is the creation of social capital [21] through discovery of shared interests
that are demarcated by places, such as bookstores, music venues, or civic
organizations. It should be noted that current “friend finder” systems
offered by cell-phone providers are actually corner-cases of our ambush
application in which the only location that can be specified is “near me.”
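A minimal sketch of the core ambush mechanism, under our own assumptions
(a region represented as an axis-aligned bounding box, hypothetical names),
shows how little machinery the application itself requires; the difficulty
lies entirely in who may register such a watch.

def in_region(pos, region):
    """Bounding-box test; region = (min_lat, min_lon, max_lat, max_lon)."""
    lat, lon = pos
    min_lat, min_lon, max_lat, max_lon = region
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

class Ambush:
    """Alice's standing request to be told when Bob enters a region."""
    def __init__(self, watcher, target, region, notify):
        self.watcher, self.target = watcher, target
        self.region, self.notify = region, notify
        self.inside = False

    def update(self, target_pos):
        """Fire on the transition from outside to inside the region."""
        now_inside = in_region(target_pos, self.region)
        if now_inside and not self.inside:
            self.notify("%s entered the region watched by %s"
                        % (self.target, self.watcher))
        self.inside = now_inside

# Example: Alice watches a park; the notification fires once, on entry.
watch = Ambush("alice", "bob", (47.61, -122.36, 47.63, -122.33), print)
watch.update((47.62, -122.34))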
The potential for nefarious activity with ambush is rife, making
risk a significant issue. As previously stated, we chose ambush as a test
application because it forces us to come to grips with the privacy concerns.
As an aside, we are not concerning ourselves right now with the secu-
rity and authenticity issues of ambush. We are not addressing questions
like, “How do I know that no malicious entity modified or hacked the
users’ devices to steal their location information?” or “How can I be sure
that this geographic region is Green Lake Park as Alice purports and is
not my home as I suspect?” Although these are interesting questions,
we are focusing our initial investigations on the privacy issues.
We have devised several concrete strategies to help us address the
privacy concerns in ambush. First, our field study of privacy concerns
using ESM, mentioned above, will include questions that are specifically
tailored to an ambush-style application. This can help us craft our
technical strategies to be sensitive to the social norms and perceptions
of our user community.
4. Conclusion
Despite being in the early stages of the Place Lab project, we know
that accurately recognizing and addressing privacy concerns is critical to
the success of our system as a platform for location-enhanced computing.
Unfortunately, the relationship between disclosure of users’ information
and an application’s success is difficult to understand and predict. This is
especially true in the domain of social applications in which users disclose
personal data to other individuals. To increase our understanding of
Notes
1. A number of other technologies, including GPS, share this advantage that location
is computed locally.
References
[1] P. Bahl and V. Padmanabhan. RADAR: An in-building RF-based user location
and tracking system. In Proceedings of IEEE INFOCOM, vol. 2, pp. 775–784, 2000.
[2] L. Barkhuus and A. Dey. Location-based services for mobile-telephony: a study
of users’ privacy concerns. In Proceedings of INTERACT 2003, 9th IFIP TC13
International Conference on Human-Computer Interaction, 2003.
[3] V. Bellotti and A. Sellen. Design for Privacy in Ubiquitous Computing Environ-
ments. In Proceedings of The Third European Conference On Computer Sup-
ported Cooperative Work (ECSCW ’93), Milan, Italy, 1993. Kluwer Academic
Publishers.
[4] P. Castro, P. Chiu, T. Kremeneck, and R. Muntz. A Probabilistic Room Loca-
tion Service For Wireless Networked Environments. In Proceedings of Ubicomp,
Atlanta, GA, 2001.
[5] S. Consolvo and M. Walker. Using the Experience Sample Method to Evalu-
ate Ubicomp Applications. IEEE Pervasive Computing Mobile and Ubiquitous
Systems: The Human Experience, 2(2):24–31, Apr-June 2003.
[6] K. Crowston and M. Williams. Reproduced and Emergent Genres of Commu-
nication on the World Wide Web. The Information Society, 16:201–205, 2000.
[7] P. Dourish. What We Talk About When We Talk About Context. Personal
and Ubiquitous Computing, 8(1):19–30, 2004.
[8] R. Grinter and M. Eldridge. Y do tngrs luv 2 txt msg? In Proceedings of
the European Conference On Computer Supported Collaborative Work (ECSCW
2001), Bonn, Germany, 2001.
[9] R. Grinter and L. Palen. Instant Messaging In Teen Life. In Proceedings of the
ACM Conference On Computer Supported Cooperative Work (CSCW 2002),
New Orleans, LA, 2002.
[10] M. Gruteser and D. Grunwald. A methodological assessment of location privacy
risks in wireless hotspot networks. In Proceedings of the First International
Conference on Security in Pervasive Computing, 2003.
[11] R. Harper. Looking at Ourselves: An Examination of the Social Organization of
Two Research Laboratories. In Proceedings of the ACM Conference Computer-
Supported Cooperative Work, Toronto, Canada, 1992.