Mobile Crowdsensing
Cristian Borcea • Manoop Talasila • Reza Curtmola
CRC Press, 2017
This book contains information obtained from authentic and highly regarded sources. Reasonable
efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication
and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any
future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information
storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access
www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc.
(CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization
that provides licenses and registration for a variety of users. For organizations that have been granted
a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and
are used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
To my wife, Mina, for her support during the writing of this book.
Cristian Borcea
Contents

Preface
1 Introduction
1.1 Evolution of Sensing
1.2 Bridging the Sensing Gap with Mobile Crowdsensing
1.3 Organization of the Book
2 Mobile Sensing
2.1 Introduction
2.2 How Did We Get Here?
2.2.1 Static Wireless Sensor Networks
2.2.2 The Opportunity of Mobile People-Centric Sensing
2.2.3 Mobile Sensing Background
2.2.4 Crucial Components that Enable Mobile Sensing
2.2.4.1 Apps
2.2.4.2 Sensors
2.2.4.3 Software APIs
2.3 Where Are We?
2.3.1 Current Uses
2.3.2 Known Hurdles
2.4 Conclusion
3 Crowdsourcing
3.1 Introduction
3.2 Applications
3.3 Crowdsourcing Platforms
3.4 Open Problems
3.5 Conclusion
4.2.2 Infrastructure
4.2.3 Social
4.3 How Can Crowdsourcing be Transformed into a Fun Activity for Participants?
4.4 Evolution of Mobile Crowdsensing
4.4.1 Emerging Application Domains
4.4.2 Crowdsensing in Smart Cities
4.5 Classification of Sensing Types
4.5.1 Participatory Manual Sensing
4.5.2 Opportunistic Automatic Sensing
4.6 Conclusion
Index
1
Introduction
sensors were introduced in the late 1990s to replace and expand the use of
wired sensing. Although they are called sensors, these devices are small com-
puters with sensing and networking capabilities. In addition, they are battery
powered. Two main factors have driven the research and development of wire-
less sensors: cost and scale. Since they do not need wires for power and control,
they can be deployed across large areas and, thus, lead to ubiquitous sensing.
Instead of having complex and costly control equipment to manage the sensors
and collect their data, wireless sensors contain intelligence that drives their
functionality and self-organize into networks that achieve the required tasks
in a distributed fashion.
Wireless sensors and sensor networks have been deployed by many or-
ganizations for monitoring purposes. For example, they have been used to
monitor the structural integrity of buildings or bridges, the energy consump-
tion in buildings, the parking spaces in cities, or the behavior of plants and animals in different environmental conditions. However, after about a decade of using wireless sensor networks, it became clear that the vision of ubiquitous sensing was not going to be achieved by this technology alone.
Three problems have precluded wireless sensor networks from widespread deployment: the battery power limitations of sensors, the closed-loop and application-specific nature
of most network deployments, and the cost. Unlike computing and networking
technologies, which have improved exponentially over the last half century, the
battery capacity has increased only linearly. Therefore, the lifetime of wireless
sensor networks is limited by the available battery power. This makes heavy-
duty sensor networks impractical in real life because changing the batteries of
sensors deployed in areas that are not easily accessible is difficult and costly.
The alternative to replacing the sensor batteries would be to just replace
the sensors, but this is also costly because the cost per sensor did not become
as low as predicted. The reason is mostly due to economics: Without mass
adoption, the cost per sensor cannot decrease substantially.
The cost per wireless sensor could be reduced if each sensor platform had several types of sensors and multiple organizations shared a sensor network for different applications. In practice, however, most deployments are application specific, contain only one type of sensor, and belong to a single organization. Security and privacy are two major issues that push organizations to deploy closed networks. The cost of providing strong security and privacy guarantees is reflected in higher battery power consumption and, consequently, lower network lifetime. Furthermore, sharing the networks
for multiple applications also results in more battery consumption.
Mobile sensing has appeared in the last 10 years as a complementary so-
lution to the static wireless sensor networks. Mobile sensors such as smart-
phones, smart watches, and vehicular systems represent a new type of ge-
ographically distributed sensing infrastructure that enables mobile people-
centric sensing. These mobile sensing devices can be used to enable a broad
spectrum of applications, ranging from monitoring pollution or traffic in cities
to epidemic disease monitoring or real-time reporting from disaster situations.
Privacy Issues and Solutions This chapter tackles privacy issues in mo-
bile crowdsensing. Unlike crowdsourcing where the participants do not
disclose highly sensitive personal data, crowdsensing data includes infor-
mation such as user location and activity. The chapter presents privacy-
preserving architectures, privacy-aware incentives, and solutions for loca-
tion and context privacy.
2
Mobile Sensing
2.1 Introduction
Ubiquitous mobile devices such as smartphones are nowadays an integral part
of people’s daily lives for computing and communication. Millions of mobile
apps are made available to smartphone users through app stores such as the Google Play Store and the Apple App Store. These mobile apps leverage the
cameras, microphones, GPS receivers, accelerometers, and other sensors avail-
able on the phones to sense the physical world and provide personalized alerts
and guidance to the smartphone users. This chapter presents the background
of mobile sensing and its applications to everyday life.
schemes [35], [162] to address this problem. However, such solutions are not
good enough for long-running applications.
FIGURE 2.1
Combination of components that led to mobile sensing: mobile apps, smartphone sensors, and software APIs.
2.2.4.1 Apps
The types of mobile applications that involve sensing are 1) personal sens-
ing apps (such as personal monitoring and healthcare applications), 2) home
sensing apps (such as HVAC or surveillance applications for smart homes),
3) city-wide sensing apps (vehicular traffic and infrastructure-monitoring ap-
plications for smart cities), 4) vehicle sensing apps (phone-to-car and car-to-
phone communication sensing apps for smart cars), and 5) games (sensing
for augmented reality games). Initially, mobile application developers and researchers shared the common difficulty of reaching a large number of users with their sensing apps. Fortunately, companies such as Apple and
Google introduced their own app stores, which created app ecosystems that
made it easy for developers to publish their apps and for users to access them.
Now, the sensing applications can easily be found, downloaded, and in-
stalled by the users who take advantage of the app store ecosystems. Before
TABLE 2.1
The broad categories of sensors, radios, and other hardware available on smartphones for mobile sensing.

Motion/Position sensors | Environmental sensors | Radios    | Other hardware
Accelerometer           | Ambient light sensor  | GPS       | Microphone
Magnetometer            | Barometer             | Bluetooth | Camera
Gyroscope               | Temperature sensor    | WiFi      | Camera flash
Proximity sensor        | Air humidity sensor   | Cellular  | Touch sensor
Pedometer               | Radiation sensor      |           | Fingerprint
app stores, researchers used to perform user studies that required sensing data
from people in controlled environments with small numbers of users. Today,
with the success of app stores, researchers can provide their sensing apps to
large numbers of users and, thus, increase the scale of their studies substan-
tially. For instance, developers and researchers who are interested in building
accurate activity recognition models can now easily collect huge amounts of
sensor data from many user smartphones through their sensing apps.
2.2.4.2 Sensors
The indispensable component of mobile sensing is the sensors. The typical
sensors available in most of the current smartphones are listed in Table 2.1.
The motion sensors measure acceleration forces and rotational forces along
the three axes. The position sensors, such as magnetometers, measure the
physical position or direction of a device. The environmental sensors mea-
sure various environmental parameters, such as ambient air temperature and
pressure, illumination, and humidity.
The radios available in smartphones are also used to sense the location,
speed, and distance traveled by users. The other hardware available in smart-
phones for sensing are microphones and cameras for multimedia sensing, cam-
era flashes for sensing heart-rate, touch screen sensors for activity recognition,
and fingerprint sensors for security. In addition to these hardware sensors, the
phones provide software-based sensors that rely on one or more hardware
sensors to derive their readings (one such software-based sensor is the orien-
tation sensor, which relies on data from the accelerometer and magnetometer
to derive the phone’s orientation data).
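To make the idea of a software-based sensor concrete, the following minimal Java sketch (using Android’s SensorManager, since Android sensor APIs are discussed later in this section) derives the phone’s orientation from the accelerometer and magnetometer readings; the class name and update-rate choice are illustrative, not taken from the book.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class OrientationEstimator implements SensorEventListener {
        private final float[] accel = new float[3];   // latest accelerometer reading
        private final float[] magnet = new float[3];  // latest magnetometer reading

        public void start(SensorManager sm) {
            sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                    SensorManager.SENSOR_DELAY_NORMAL);
            sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                    SensorManager.SENSOR_DELAY_NORMAL);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
                System.arraycopy(event.values, 0, accel, 0, 3);
            } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
                System.arraycopy(event.values, 0, magnet, 0, 3);
            }
            float[] rotation = new float[9];
            float[] orientation = new float[3]; // azimuth, pitch, roll (radians)
            if (SensorManager.getRotationMatrix(rotation, null, accel, magnet)) {
                SensorManager.getOrientation(rotation, orientation);
                // orientation[0] is the azimuth relative to magnetic north,
                // which is essentially what the software orientation sensor exposes.
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }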
three physical axes (x, y, z) to detect the planet’s geomagnetic north pole,
which is useful for determining the direction of motion of a phone. These
readings are also used by some apps for metal detection.
The gyroscope is another motion sensor that detects orientation with
higher precision by measuring a phone’s rate of rotation around each of
the three physical axes (x, y, and z). These sensor readings are commonly
used for rotation detection (spin, turn, etc.), which is employed, for exam-
ple, in mobile games. In some phones, the orientation is also derived based
on data from accelerometer and magnetometer sensors. The combination
of orientation readings and location readings can help location-based ap-
plications position the user’s location and direction on a map in relation
to the physical world.
The proximity sensor measures the proximity of an object relative to the
view screen of a device. This sensor is typically used to determine whether
a handset is being held up to a person’s ear. This simple form of con-
text recognition using proximity sensor readings can help in saving the
phone’s battery consumption by turning off the phone screen when the
user is on call. The pedometer is a sensor used for counting the number
of steps that the user has taken since the last reboot while the sensor was
activated. In some devices, this data is derived from the accelerometer,
but the pedometer provides greater precision and is energy-efficient.
using WiFi or cellular radios, which use less energy compared to GPS. There-
fore, fine-grained control over the sensors, with support from a robust sensor API, can help save the phone’s battery power and can enforce standards or best practices for mobile application developers to access sensors efficiently.
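As a rough illustration of such fine-grained control, the Java sketch below requests coarse, low-power location fixes from Android’s network provider instead of GPS; the update interval, minimum displacement, and class name are illustrative assumptions, and the app would need the ACCESS_COARSE_LOCATION permission.

    import android.content.Context;
    import android.location.Location;
    import android.location.LocationListener;
    import android.location.LocationManager;
    import android.os.Bundle;

    public class CoarseLocationSampler {
        // Requests low-power location fixes from WiFi/cellular instead of GPS.
        public void start(Context context) {
            LocationManager lm =
                    (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
            LocationListener listener = new LocationListener() {
                @Override public void onLocationChanged(Location loc) {
                    // Less accurate, but much cheaper in energy than a GPS fix;
                    // e.g., attach it to the next sensed-data report.
                    double lat = loc.getLatitude();
                    double lon = loc.getLongitude();
                }
                @Override public void onStatusChanged(String p, int s, Bundle e) { }
                @Override public void onProviderEnabled(String p) { }
                @Override public void onProviderDisabled(String p) { }
            };
            // 60-second interval, 50-meter minimum displacement (illustrative values).
            lm.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 60_000L, 50f, listener);
        }
    }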
In recent years, smartphone operating systems have improved their sensor
API support for third-party sensing applications to efficiently access the avail-
able smartphone sensors. However, the operating systems have not yet evolved
to seamlessly support a sensing paradigm where sensor data collection, data
aggregation, and context analysis can be done at the operating system level or
even in a distributed fashion among co-located smartphones in a given area.
There are many research studies that proposed such sensing systems, but
they have not yet been adopted by any operating system. Another challenge
for developers programming sensing applications is that it is not easy to port
their applications from one operating system to another, as different operating
systems offer different sensor APIs to access the smartphone sensors. This is
another reason why it is necessary to propose sensing abstractions and the
standardization of sensor APIs.
Some of the prominent smartphone operating systems that provide sensor
APIs are Google’s Android, Apple’s iOS, and Microsoft’s Windows Mobile.
Apple’s iOS APIs: Apple’s iOS provides the Core Motion framework in the Objective-C programming language for accessing sensors. The
Core Motion framework lets the sensing application receive motion data
from the device hardware sensors and process that data. The framework
supports accessing both raw and processed accelerometer data using block-
based interfaces. For mobile devices with a built-in gyroscope, the app
developer can retrieve the raw gyroscope data as well as the processed
data reflecting the attitude and rotation rate of the device. The developer
can use both the accelerometer and gyroscope data for sensing apps that
use motion as input or as a way to enhance the overall user experience.
The available sensor classes are CMAccelerometerData, CMAltitudeData,
CMGyroData, CMMagnetometerData, CMPedometer, CMStepCounter,
CMAltimeter, CMAttitude, CMMotionManager, and CMDeviceMotion.
implementation that has the optimum performance for her sensing appli-
cation. The app has to monitor sensor events to acquire raw sensor data.
A sensor event occurs every time a sensor detects a change in the param-
eters it is measuring. A sensor event provides the app developer with four
pieces of information: the name of the sensor that triggered the event, the
timestamp of the event, the accuracy of the event, and the raw sensor data
that triggered the event.
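On Android, these four pieces of information are delivered together in a SensorEvent object; the minimal Java listener below simply logs them (the log tag is an illustrative choice).

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.util.Log;

    public class RawSensorLogger implements SensorEventListener {
        @Override
        public void onSensorChanged(SensorEvent event) {
            // The event bundles the four pieces of information mentioned above.
            String sensorName = event.sensor.getName(); // which sensor fired
            long timestamp = event.timestamp;           // when it fired (nanoseconds)
            int accuracy = event.accuracy;              // reported accuracy level
            float[] rawValues = event.values;           // the raw sensor readings
            Log.d("RawSensorLogger", sensorName + " @" + timestamp
                    + " acc=" + accuracy + " first value=" + rawValues[0]);
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
            // Called when the sensor's reported accuracy changes.
        }
    }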
etc.). For example, one question in such a survey could ask people attending a
football game which other team games they would like to attend in the future.
Millions of people participate daily in online social networks, which provide
a potential platform to utilize and share mobile sensing data. For example, the
CenceMe project [119] uses the sensors in the phone to automatically sense
the events in people’s daily lives and selectively share this status on online
social networks such as Twitter and Facebook, replacing manual actions that
people now perform regularly.
Mobile sensing can be used in the government sector for measuring and re-
porting environmental pollution from a region or an entire city. Environment-
protection agencies can use pollution sensors installed in phones to map with
high accuracy the pollution zones around the country [26, 17]. The availabil-
ity of ambient temperature sensors will soon enable the monitoring of weather
from the smartphones for weather reporting organizations. Municipalities may
collect data about noise pollution, and then make an effort to reroute vehicular traffic at night away from residential areas significantly affected by noise.
Furthermore, governments in a few countries may soon insist on embedding
radiation sensors [27] in all phones for detecting harmful radiation levels.
The cameras available on smartphones have improved greatly in recent
years, and this allows news organizations to take advantage of these high-
quality smartphone cameras to enable citizen journalism [4, 163, 25]. The
citizens can report real-time data in the form of photos, videos, and text
from public events or disaster areas. In this way, real-time information from
anywhere across the globe can be shared with the public as soon as the events
happen.
2.4 Conclusion
In this chapter, we discussed the origin of mobile sensing from static WSNs to
mobile people-centric sensing. We first reviewed the nature of static WSNs and
the background of mobile sensing. We then presented the crucial components
that enable mobile sensing. Subsequently, we presented the current uses of
mobile sensing. Finally, we discussed the known hurdles facing mobile sensing
in practice.
3
Crowdsourcing
3.1 Introduction
Crowdsourcing is the use of collective intelligence to solve problems in a cost-
effective way. Crowdsourcing means that a company or institution has out-
sourced a task, which used to be performed by employees, to another set of
people (i.e., crowd) [89]. Certain tasks could, of course, be done by computers
in a more effective way. Crowdsourcing focuses on tasks that are trivial for
humans, such as image recognition or language translation, which continue to
challenge computer programs. In addition, crowdsourcing is used to perform
tasks that require human intelligence and creativity.
People work on crowdsourcing tasks for payment, for the social good, or
for other social incentives (e.g., competing against other people in a game).
While the labor is not free, it costs significantly less than paying traditional
employees. As a general principle, anyone is allowed to attempt to work on
crowdsourcing tasks. However, certain tasks require expert knowledge (e.g.,
software development).
In addition to tasks done by individual users, crowdsourcing could employ
groups or teams of users to perform complex tasks. For example, the book
titled The Wisdom of Crowds [152] reveals a general phenomenon that the
aggregation of information in groups results in decisions that are often better
than those made by any single member of the group. The book identifies four
key qualities that make a crowd smart: diversity of opinion, independence of
thinking, decentralization, and opinion aggregation. This concept of crowd
wisdom is also called “collective intelligence” [111].
In the rest of this chapter, we present the main categories of crowdsourcing
applications, describe a number of crowdsourcing platforms that mediate the
interaction between the task providers and task workers, and discuss open
problems related to crowdsourcing such as recruitment, incentives, and quality
control.
3.2 Applications
In the past decade, crowdsourcing has been used for many types of appli-
cations such as scientific applications, serious games/games with a purpose,
research and development, commercial applications, and even public safety
applications.
Some major beneficiaries of crowdsourcing are scientific applications. For
example, Clickworkers [24] was a study that ran for one year to build an
age map of different regions of Mars. Over 100,000 workers participated and
they volunteered 14,000 work hours. Overall, the workers performed routine
science analysis that would normally be done by scientists working for a very
long time. It is important to notice that their analysis was of good quality:
The age map created by workers agrees closely with what was already known
from traditional crater counting.
Another project that employed crowdsourcing for science was Galaxy
Zoo [8], which started with a data set made up of a million galaxies im-
aged by the Sloan Digital Sky Survey. The volunteers were asked to split the
galaxies into ellipticals, mergers, and spirals, and to record the arm direc-
tions of spiral galaxies. Many different participants saw each galaxy in order
to have multiple independent classifications of the same galaxy for high reli-
ability classification. By the end of the first year of the study, more than 50
million classifications were received, contributed by more than 150,000 people.
The scientists concluded that the classifications provided by Galaxy Zoo were
as good as those from professional astronomers, and were subsequently used
in many astronomy research papers.
While scientific applications rely purely on volunteers, other applications
require the users to do some work in exchange for a service. For instance, re-
CAPTCHA [13] is a real-world service that protects websites from spam and
abuse generated by bots (e.g., programs that automatically post content on
websites). To be allowed to access a website, users need to solve a “riddle.”
In the original version of reCAPTCHA, the “riddle” took the form of deci-
phering the text in an image. The text was taken from scanned books, and
thus crowdsourcing was used to digitize many books. Specifically, the service
supplies subscribing websites with images of words that are hard to read for
optical character recognition software, and the websites present these images
for humans to decipher as part of their normal validation procedures. As the
aggregate results for the same image converge, the results are sent to dig-
itization projects. More recently, reCAPTCHA allows the users to perform
image classification with mouse clicks. Hundreds of millions of CAPTCHAs
are solved by people every day. This allows the building of annotated image
databases and large machine learning datasets.
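The convergence step can be thought of as a majority vote over the answers submitted for the same image. The short Java sketch below illustrates that general idea only; it is not reCAPTCHA’s actual algorithm, and the agreement threshold is an assumed parameter.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class AnswerAggregator {
        // Returns the majority transcription once at least `minVotes` workers agree,
        // or null if no answer has converged yet.
        public static String aggregate(List<String> workerAnswers, int minVotes) {
            Map<String, Integer> counts = new HashMap<>();
            for (String answer : workerAnswers) {
                counts.merge(answer.trim().toLowerCase(), 1, Integer::sum);
            }
            String best = null;
            int bestCount = 0;
            for (Map.Entry<String, Integer> e : counts.entrySet()) {
                if (e.getValue() > bestCount) {
                    best = e.getKey();
                    bestCount = e.getValue();
                }
            }
            return bestCount >= minVotes ? best : null;
        }
    }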
Serious games or games with a purpose represent another significant type
of crowdsourcing applications. The workers in these games compete with each
is a core of users who repeatedly propose and win. The large numbers of new
users ensure many answers, while also providing new members for the stable
core.
decision or using a control group to re-check tasks can be used. However, both
solutions increase the cost per task.
Finally, other interesting problems in crowdsourcing include intellectual
property protection, balancing the allocated budget for a task with the out-
come quality, and project management automation.
3.5 Conclusion
This chapter presented an overview of crowdsourcing, with a focus on applica-
tions, platforms, and open issues. The examples described here demonstrate
the success of crowdsourcing in a wide variety of domains from science to com-
merce and from software development to knowledge exchange. Crowdsourcing
has an even bigger potential if its open problems such as recruitment, in-
centives, and quality control are solved. In this chapter, we also saw a few
attempts to use crowdsourcing in the physical world (i.e., not online). These
attempts represent a precursor to mobile crowdsensing, which is presented in the next chapter.
4
What Is Mobile Crowdsensing?
4.1 Introduction
In Chapter 2, we discussed two types of sensing: personal sensing and public sensing. Personal mobile sensing is used mostly to monitor a single individual in order to provide customized alerts or assistance. Public mobile sensing, commonly termed mobile crowdsourcing, requires the active participation of smartphone users to contribute sensor data, mostly by reporting data manually from their smartphones. The reports could cover road accidents, new traffic patterns, photos for citizen journalism, etc. The aggregated
useful public information can be shared back with the users on a grand scale.
The research advancements in crowdsourcing, and specifically mobile
crowdsourcing, have set the course for mass adoption and automation of mo-
bile people-centric sensing. The resulting new technology, as illustrated in
Figure 4.1, is “Mobile Crowdsensing.” This new type of sensing can be scal-
able and cost-effective for dense sensing coverage across large areas. In many
situations, there is no need for organizations to own a fixed and expensive
sensor network; they can use mobile people-centric sensing on demand and
just pay for the actual usage (i.e., collected data).
This chapter looks first at mobile crowdsourcing and its advantages
through the lens of illustrating applications. Then, it describes emerging
crowdsensing applications in various domains and defines two types of crowd-
sensing, namely participatory manual sensing and opportunistic automatic
sensing.
FIGURE 4.1
Mobile crowdsensing enables scalable and cost-effective sensing coverage of
large regions.
4.2.1 Environmental
Mobile crowdsourcing helps improve the environment over large areas by measuring pollution levels in a city, monitoring water levels in creeks, or tracking wildlife habitats. A smartphone user can participate in such applications to enable large-scale environmental mapping. Common Sense [33] is
one such example, deployed for pollution monitoring. In the Common Sense
application, the participant carries the external air quality sensing device,
which links with the participant’s smartphone using Bluetooth to record var-
ious air pollutants like CO2, NOx, etc. When large numbers of people collectively sense the air quality, it helps the environmental organizations take appropriate measures across large communities.
FIGURE 4.2
Advantages of mobile crowdsourcing seen in the environmental, infrastructure, and social domains (examples in the figure include habitat monitoring and citizen journalism).
Similarly, the IBM Almaden Research Center developed an applica-
tion, “CreekWatch,” to monitor the water levels and quality in creeks by col-
lecting the data reported by individuals. The sensed data consist of pictures
of various locations across the creeks and text messages about the trash in
them. The water utility services can employ such data to track the pollution
levels in the nearby water resources.
MobGeoSen [99] allows the participants to monitor the local environment
for pollution and other characteristics. The application utilizes the micro-
phone, the camera, the GPS, and external sensing devices (body wearable
health sensors, sensors in vehicles, etc.). The application prompts the partici-
pants on the smartphone screen to add the text annotations and the location
markers on the map during the participant’s journey. The participants can
share the photos with location and appropriate text tags, such that the col-
lected data can be visualized easily on a spatial-temporal visualization tool. To
accommodate the concurrent collection of sensing data from the external sens-
ing devices, an advanced component was developed, which establishes multiple
Bluetooth connections for communication. Furthermore, the MobGeoSen project worked closely with science teachers and children at schools to demonstrate the use of the application by monitoring pollution levels during their daily commutes to school.
NoiseTube [110] is a crowdsourcing application that helps assess noise pollution by involving regular citizens. The participants in the applica-
tion detect the noise levels they are exposed to in their everyday environment
by using their smartphones. Every participant shares the noise measurements
by adding personal annotations and location tags, which help to yield a collec-
tive noise map. NoiseTube requires participants to install the app on smart-
phones and requires a backend server to collect, analyze, and aggregate the
data shared by the smartphones.
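A rough, uncalibrated version of such a noise measurement can be obtained on Android by sampling the microphone’s peak amplitude and converting it to a relative decibel value, as in the Java sketch below; this is a simplification for illustration, not NoiseTube’s calibrated method, and it requires the RECORD_AUDIO permission.

    import android.media.MediaRecorder;

    public class NoiseSampler {
        private MediaRecorder recorder;

        public void start(String tmpFile) throws Exception {
            recorder = new MediaRecorder();
            recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
            recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
            recorder.setOutputFile(tmpFile); // the audio itself is discarded; only amplitude is read
            recorder.prepare();
            recorder.start();
        }

        // Relative sound level in dB, computed from the peak amplitude since the last call.
        public double currentLevelDb() {
            int amplitude = recorder.getMaxAmplitude();  // 0..32767
            double reference = 1.0;                      // uncalibrated reference (assumption)
            return 20.0 * Math.log10(Math.max(amplitude, 1) / reference);
        }

        public void stop() {
            recorder.stop();
            recorder.release();
        }
    }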
4.2.2 Infrastructure
The monitoring or measuring of public infrastructure can leverage the mobile
crowdsourcing model, such that government organizations can collect status
data for the infrastructure at low cost. Smartphone users participating in
mobile crowdsourcing can report traffic congestion, road conditions, available
parking spots, potholes on the roads, broken street lights, outages of utility
services, and delays in public transportation. Examples of detecting traffic
congestion levels in cities include MIT’s CarTel [93] and Microsoft Research’s
Nericell [120]. The location and speed of cars are measured in the CarTel application by using special sensors installed in the car; these data are communi-
cated to a central server using public WiFi hotspots. In Nericell, smartphones
are used to determine the average speed, traffic delays, noise levels by honks,
and potholes on roads.
Similarly, ParkNet [114] is an application that informs drivers about on-
street parking availability using a vehicular sensing system running over a
mobile ad hoc sensor network consisting of vehicles. This system collects and
disseminates real-time information about vehicle surroundings in urban areas.
To improve the transportation infrastructure, the TrafficSense [121] crowd-
sourcing application helps in monitoring roads for potholes, road bumps, traf-
fic jams, and emergency situations. In this application, participants use their
smartphones to report the sensed data collected from various locations in their
daily commutes. Furthermore, in developing countries, TrafficSense detects the
honks from the vehicles using the audio samples sensed via the microphone.
PetrolWatch [66] is another crowdsourcing system in which fuel prices are
collected using camera phones. The main goals of the system are to collect
fuel prices and allow users to query for the prices. The camera lens is pointed
toward the road by mounting it on the vehicle dashboard and the smartphone
is triggered to capture the photograph of the roadside fuel price boards when
the vehicle approaches the service stations. To retrieve the fuel prices, com-
puter vision algorithms are used to scan these images. Each service station
can have different style and color patterns on display boards. To reduce com-
plexity, the computer vision algorithms are given the location information to
know the service station and its style of the display board. While processing
the image on the smartphone, the algorithms use the location coordinates,
the brand, and the time. The prices determined after analyzing the image are
uploaded to the central server and stored in a database that is linked to a GIS
road network database populated with service station locations. The server
updates fuel prices of the appropriate station if the current price has a newer
timestamp. The system also retains the history of price changes to analyze
pricing trends.
A pilot project in crowdsourcing, Mobile Millennium [86], allows the gen-
eral public to unveil the traffic patterns in urban environments. These patterns
are difficult to observe using sparse, dedicated monitoring sensors in the road
infrastructure. The main goal of the project is to estimate the traffic on all
major highways at specific targeted areas and also on the major interior city
roads. The system architecture consists of GPS-enabled smartphones inside
the vehicles, network provider, cellular data aggregation module, and traffic
estimator. Each participant installs the application on the smartphone for col-
lecting the traffic data and the backend server aggregates the data collected
from all the participants. The aggregated data is sent to the estimation engine,
which will display the current traffic estimates based on traffic flow models.
4.2.3 Social
There are interesting social applications that can be enabled by mobile crowd-
sourcing, where individuals share sensed information with each other. For ex-
ample, individuals can share their workout data, such as how much time one
exercises in a single day, and compare their exercise levels with those of the
rest of the community. This can lead to competition and improve daily exer-
cise routines. BikeNet [69] and DietSense [142] are crowdsourcing applications
where participants share personal analytics with social networks or with a
few private social groups. In BikeNet, smartphone users help in measuring
the location and the bike route quality (e.g., route with less pollution, less
bumpy ride) and aggregate the collected data to determine the most com-
fortable routes for the bikers. In DietSense, smartphone users take pictures of
their lunch and share it within social groups to compare their eating habits.
This is a good social application for the community of diabetics, who can
watch what other diabetics eat and can even provide suggestions to others.
Party Thermometer [59] is a social crowdsourced application in which
queries are sent to the participants who are at parties. For example, a query
could be, “how hot is a particular party?” Similar to the citizen journalism
applications, location is an important factor used to target the queries. But,
unlike in the citizen journalism application, location alone is not enough for
targeting because there is a significant difference between a person who is ac-
tually at a party and a person who is just outside, possibly having nothing to
do with the party. Therefore, in addition to location, party music detection is
also considered by employing the microphone to establish the user’s context
more accurately. To save energy on the phone, the sensing operations should
happen only when necessary. Thus, the application first detects the location of
the party down to a building, and only after that, it performs music detection
using the microphone.
LiveCompare [63] is a crowdsourcing application that leverages the smart-
phone’s camera to allow participants to hunt grocery bargains. The application
utilizes a two-dimensional barcode decoding function to automatically identify
grocery products, as well as localization techniques to automatically pinpoint
store locations. The participants use their camera phones to take a photo of
the price tag of their product of interest and the user’s smartphone extracts
the information about the product using the unique UPC barcode located on
the tag. The price-tag barcodes in most grocery stores are identical to the
barcodes on the actual products, which help in global product identification.
The numerical UPC value and the photo are sent to LiveCompare’s central
server, once the barcode has been decoded on the smartphone. These data are
stored in LiveCompare’s database for use in future queries on price compar-
isons. The application provides high-quality data through two complementary
social mechanisms to ensure data integrity: 1) it relies on humans, rather than
machines, to interpret complex sale and pricing information; and 2) each query
returns a subset of the data pool for a user to consider. If an image does not seem relevant, the user can quickly flag it. This lets users collectively identify
malicious data, which can then be removed from the system.
life-stage data. The main goal is “floracaching,” for which players gain points
and levels within the game by finding and making qualitative observations on
plants. This game is also an example of motivating participatory sensing.
Another participatory sensing game, Who [80], is used to extract relation-
ships and tag data about employees. It was found useful for rapid collection
of large volumes of high-quality data from “the masses.”
FIGURE 4.3
Mobile crowdsensing: People are both consumers and providers of sensed data (the figure shows example alerts such as a tornado approaching, local fog patches, a traffic jam, a free parking spot, and a 50% discount at a shopping mall).
tion and speed data provided by smartphones. The same information could
be used to provide individualized traffic re-routing guidance for congestion
avoidance [129] or to direct drivers toward free parking spots [114]. Data
about the quality of the roads could also be collected to help municipalities
quickly repair the roads [70]. Similarly, photos (i.e., camera sensor data)
taken by people during/after snowstorms can be analyzed automatically
to prioritize snow cleaning and removal.
• Healthcare and Wellbeing: Wireless sensors worn by people for heart rate
monitoring [9] and blood pressure monitoring [21] can communicate their
information to the owners’ smartphones. Typically, this is done for both
real-time and long-term health monitoring of individuals. Crowdsensing
can leverage these existing data into large-scale healthcare studies that
seamlessly collect data from various groups of people, which can be se-
lected based on location, age, etc. A specific example involves collecting
data from people who regularly eat fast food. The phones can perform
activity recognition and determine the level of physical exercise done by
people, which was proven to directly influence people’s health. For exam-
ple, as a result of such a study in a city, the municipality may decide to
create more bike lanes to encourage people to do more physical activities.
Similarly, the phones can determine the level of social interaction of certain
groups of people (e.g., using Bluetooth scanning, GPS, or audio sensors).
For example, a university may discover that students (or students from
certain departments) are not interacting with each other enough; conse-
quently, it may decide to organize more social events on campus. The same
mechanism coupled with information from “human sensors” can be used
to monitor the spreading of epidemic diseases.
distributes these alerts to other drivers. In this way, drivers on the other
roads can benefit from real-time traffic information.
then makes a soft inference on passenger queues. On the taxi side, taxis
periodically update their status, GPS location, and instantaneous speed.
Meanwhile, the passenger side adopts a crowdsensing strategy to detect
the personal-scale queuing activities. The extensive empirical experiments
demonstrated that the system can accurately and effectively detect the
taxi queues and then validate the passenger queues.
Indoor localization In mobile applications, location-based services are becoming increasingly popular for providing services such as targeted advertisements, geosocial networking, and emergency notifications. Although
GPS provides accurate outdoor localization, it is still challenging to accu-
rately provide indoor localization even by using additional infrastructure
support (e.g., ranging devices) or extensive training before system de-
ployment (e.g., WiFi signal fingerprinting). Social-Loc [97] is designed to
improve the accuracy of indoor localization systems with crowdsensing.
Social-Loc takes as its input the potential locations of individual users,
which are estimated by any underlying indoor localization system, and
exploits both social encounters and non-encounter events to cooperatively
calibrate the estimation errors. Social-Loc was implemented on the Android platform, and its performance was demonstrated over two underlying indoor localization systems: dead reckoning and WiFi fingerprinting.
Furthermore, in most situations the lack of floor plans makes it difficult
to provide indoor localization services. Consequently, the service providers
have to go through exhaustive and laborious processes with building oper-
ators to manually gather such floor-plan data. To address such challenges,
Jigsaw [71], a floor-plan reconstruction system, is designed such that it
leverages crowdsensed data from mobile users. It extracts the position,
size, and orientation information of individual landmark objects from im-
ages taken by participants. It also obtains the spatial relation between
adjacent landmark objects from inertial sensor data and then computes
the coordinates and orientations of these objects on an initial floor plan.
By combining user mobility traces and locations where images are taken,
it produces complete floor plans with hallway connectivity, room sizes, and
shapes.
4.6 Conclusion
This chapter discussed the origins of mobile crowdsensing, which borrows
techniques from people-centric mobile sensing and crowdsourcing. We first
discussed the advantages of collective sensing in various domains such as en-
vironmental, infrastructure, and social. Then, we investigated non-monetary
incentives for mobile sensing, with a focus on gaming. The chapter contin-
5
Systems and Platforms
5.1 Introduction
This chapter describes several mobile crowdsensing systems that are based on
a centralized design, such as McSense [156], Medusa [134], and Vita [92]. The
chapter also describes the prototype implementation and the sensing tasks
developed for each mobile crowdsensing system.
FIGURE 5.1
McSense architecture.
fully completed tasks under the Earnings tab. If the accepted task expires
before being completed successfully according to its requirements, it is moved
to the Completed tasks tab and marked as unsuccessfully completed. The
providers do not earn money for the tasks that are completed unsuccessfully.
Background services on phone: When the network is not available, a com-
pleted task is marked as pending upload. A background service on the phone
periodically checks for the network connection. When the connection becomes
available, the pending data is uploaded and finally, these tasks are marked as
successfully completed. If the provider phone is restarted manually or due to
a mobile OS crash, then all the in-progress sensing tasks are automatically
resumed by the Android’s BroadcastReceiver service registered for the Mc-
Sense application. Furthermore, the Accepted and the Completed tabs’ task
lists are cached locally and are synchronized with the server. If the server is
not reachable, the users can still see the tasks that were last cached locally.
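A minimal version of this retry logic can be written as a connectivity-change receiver that flushes a queue of pending results, as sketched below in Java; the storage layout and the upload placeholder are assumptions for illustration, not McSense’s published code.

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.net.ConnectivityManager;
    import android.net.NetworkInfo;
    import java.io.File;

    public class PendingUploadReceiver extends BroadcastReceiver {
        @Override
        public void onReceive(Context context, Intent intent) {
            ConnectivityManager cm =
                    (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
            NetworkInfo info = cm.getActiveNetworkInfo();
            if (info == null || !info.isConnected()) {
                return; // still offline; try again on the next connectivity change
            }
            // Pending results are assumed to be stored as files in a "pending" directory
            // (a simplification of the app's internal bookkeeping).
            File pendingDir = new File(context.getFilesDir(), "pending");
            File[] pending = pendingDir.listFiles();
            if (pending == null) return;
            for (File taskData : pending) {
                if (uploadToServer(taskData)) {
                    taskData.delete(); // mark the task as successfully completed
                }
            }
        }

        private boolean uploadToServer(File taskData) {
            // Placeholder for the actual HTTP upload to the crowdsensing server.
            return false;
        }
    }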
Manual Photo Sensing Task: Registered users are asked to take photos
from events on campus. Once the user captures a photo, she needs to click on
the “Complete Task” button to upload the photo and to complete the task.
Once the photo is successfully uploaded to the server, the task is considered
successfully completed. These uploaded photos can be used by the university
news department for their current news articles. On clicking the “Complete
Task” button, if the network is not available, the photo task is marked
as completed and waiting for upload. This task is shown with a pending
icon under the completed tasks tab. A background service would upload the
pending photos when the network becomes available. If a photo is uploaded
to the server after the task expiration time, then the photo is not useful to
the client. Therefore, the task will be marked as “Unsuccessfully completed,”
and the user does not earn money for this task.
Automated Sensing Task using Accelerometer and GPS Sensors:
The accelerometer sensor readings and GPS location readings are collected
at 1-minute intervals. The sensed data is collected along with the userID
and a timestamp, and it is stored into a file in the phone’s internal storage,
which can be accessed only by the McSense application. This data will be
uploaded to the application server on completion of the task (which con-
sists of many data points). Using the collected sensed data of accelerometer
readings and GPS readings, one can identify users’ activities like walking,
running, or driving. By observing such daily activities, one can find out how
much exercise each student is getting daily and derive interesting statistics,
such as which department has the most active and healthy students in a
university.
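The kind of activity inference mentioned above can be approximated with a simple heuristic over accelerometer magnitude and GPS speed; the thresholds in the Java sketch below are illustrative guesses, not values used by McSense.

    public class ActivityClassifier {
        public enum Activity { STILL, WALKING, RUNNING, DRIVING }

        // accelMagnitude: average magnitude of the acceleration vector (m/s^2) over a
        // sampling window, with gravity already subtracted; speedMps: GPS speed in m/s.
        public static Activity classify(double accelMagnitude, double speedMps) {
            if (speedMps > 7.0) {
                return Activity.DRIVING;   // too fast for a pedestrian
            } else if (accelMagnitude > 6.0) {
                return Activity.RUNNING;   // large, frequent accelerations
            } else if (accelMagnitude > 1.5 || speedMps > 0.7) {
                return Activity.WALKING;
            }
            return Activity.STILL;
        }
    }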
Automated Sensing Task using Bluetooth Radio: In this automated
sensing task, the user’s Bluetooth radio is used to perform periodic (every 5
minutes) Bluetooth scans until the task expires; upon completion, the task
reports the discovered Bluetooth devices with their location back to the Mc-
Sense server. The sensed data from Bluetooth scans can provide interesting
social information, such as how often McSense users are near each other.
Also, it can identify groups who are frequently together, to determine the
level of social interaction among certain people.
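A stripped-down version of one such scan can be written with Android’s BluetoothAdapter, as in the Java sketch below; scheduling the scan every 5 minutes (e.g., with an AlarmManager), unregistering the receiver when discovery finishes, and attaching the current location to each discovery are omitted for brevity, and the appropriate Bluetooth permissions are assumed.

    import android.bluetooth.BluetoothAdapter;
    import android.bluetooth.BluetoothDevice;
    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;

    public class BluetoothScanTask {
        public void runOnce(Context context) {
            BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
            if (adapter == null || !adapter.isEnabled()) return;

            BroadcastReceiver receiver = new BroadcastReceiver() {
                @Override
                public void onReceive(Context ctx, Intent intent) {
                    if (BluetoothDevice.ACTION_FOUND.equals(intent.getAction())) {
                        BluetoothDevice device =
                                intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
                        // Record the discovered device (and the current location) for the report.
                        String address = device.getAddress();
                    }
                }
            };
            context.registerReceiver(receiver, new IntentFilter(BluetoothDevice.ACTION_FOUND));
            adapter.startDiscovery(); // asynchronous; results arrive via ACTION_FOUND broadcasts
        }
    }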
Automated Resource Usage Sensing Task: In this automated sensing
task, the usage of a user’s smartphone resources is sensed and reported back
to the McSense server. Specifically, the report contains the mobile applica-
tions usage, the network usage, the periodic WiFi scans, and the battery
level of the smartphone. While logging the network usage details, this au-
tomated task also logs overall device network traffic (transmitted/received)
and per-application network traffic.
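Device-wide and per-application network counters of this kind are exposed on Android through the TrafficStats class, as the short Java sketch below shows; how often to sample the counters and where to log them is left to the task implementation.

    import android.net.TrafficStats;

    public class NetworkUsageSampler {
        // Total bytes transmitted and received by the whole device since boot.
        public static long[] deviceTotals() {
            return new long[] { TrafficStats.getTotalTxBytes(), TrafficStats.getTotalRxBytes() };
        }

        // Bytes transmitted and received by one application, identified by its Linux UID.
        public static long[] appTotals(int uid) {
            return new long[] { TrafficStats.getUidTxBytes(uid), TrafficStats.getUidRxBytes(uid) };
        }
    }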
1. The authors posted automated tasks only between 6am and 12pm.
Users can accept these tasks when they are available. When a user
accepts the automated sensing task, then the task starts running
in the background automatically. Furthermore, to be able to accept
TABLE 5.1
Participants demographic information
Total participants 58
Males 90%
Females 10%
Age 16–20 52%
Age 21–25 41%
Age 26–35 7%
automated sensing tasks, users must have WiFi and GPS radios
switched on.
2. Automated sensing tasks expire each day at 10pm. The server com-
pares the total sensing time of the task to a threshold of 6 hours. If
the sensing time is below 6 hours, then the task is marked as “Un-
successfully Completed,” otherwise it is marked as “Successfully
Completed.”
3. Automated sensing tasks always run as a background service. On
starting or resuming this service, the service always retrieves the
current time from the server. Thus, even when the user sets an
incorrect time on the mobile, the task will always know the correct
current time and will stop sensing after 10pm.
4. Long-term automated sensing tasks are posted for multiple days.
Users are paid only for the number of days they successfully com-
plete the task. The same threshold logic is applied to each day for
these multi-day tasks.
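The completion rule described in items 2 and 4 reduces to a threshold comparison applied by the server at expiration time; the plain-Java sketch below captures that check (class and method names are illustrative).

    import java.util.concurrent.TimeUnit;

    public class TaskCompletionChecker {
        private static final long THRESHOLD_MILLIS = TimeUnit.HOURS.toMillis(6);

        // totalSensingMillis: accumulated sensing time reported for one day of the task.
        public static String evaluateDay(long totalSensingMillis) {
            return totalSensingMillis >= THRESHOLD_MILLIS
                    ? "Successfully Completed"
                    : "Unsuccessfully Completed";
        }

        // For multi-day tasks, the same rule is applied to each day and the user is
        // paid only for the days marked as successfully completed.
        public static int paidDays(long[] dailySensingMillis) {
            int paid = 0;
            for (long day : dailySensingMillis) {
                if (day >= THRESHOLD_MILLIS) paid++;
            }
            return paid;
        }
    }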
For manual tasks such as photo tasks, users have to complete the task
manually from the Accepted Tasks Tab by taking the photo at the requested
location. Users were asked to take general photos from events on a university
campus. Once the photos are successfully uploaded to the application server
and a basic validation is performed (photos are manually validated for ground
truth), the task is considered successfully completed.
runtime system that coordinates the execution of these tasks between smart-
phones and a cluster on the cloud. The Medusa architecture and design is
evaluated using a prototype that uses ten crowdsensing tasks.
unteers rates the videos. Finally, the selected videos are uploaded to the cloud, and Alice is notified once all the stages are completed.
mobile devices. These parameters are measured when the mobile users fin-
ish crowdsensing and concurrent computation tasks, as these parameters have
great impact on the experience of mobile users when they are participating in
mobile crowdsensing.
The time delay refers to the periods between the time that the smartphone
initiates a crowdsensing request to the cloud platform and the time that it
receives the responses from the servers on the cloud platform of Vita. The
average time delay observed in the experiments using Vita’s system is 11
seconds. This time delay is very low compared to Medusa’s average time delay,
which is about 64 seconds.
The Vita system uses a service state synchronization mechanism
(S3M) [125] to detect and recover the possible service failures of mobile de-
vices when they are running tasks and collaborating with the cloud platform
of Vita. Service failures could be detected and recovered with the help of S3M,
since S3M includes the function to store the stage execution state on both the
mobile device and the cloud platform of Vita. The experimental results show
that the average time delay with S3M loaded is 12.8 seconds, whereas the
increases in battery consumption and network overhead are relatively higher,
at about 75% and 43%, respectively. Based on these results, developers can
choose whether or not to integrate the S3M model according to their specific
purposes when developing mobile crowdsensing applications on Vita.
The ParticipAct platform and its living lab [49] form an ongoing experiment at the University of Bologna, involving 170 students for one year in several crowdsensing campaigns that can passively access smartphone sensors and also prompt for active user collaboration.
Cardone et al. [48] proposed an innovative geo-social model to profile users
along different variables, such as time, location, social interaction, service us-
age, and human activities. Their model also provides a matching algorithm to
autonomously choose people to involve in sensing tasks and to quantify the
performance of their sensing. The core idea is to build time-variant resource
maps that could be used as a starting point for the design of crowdsensing
ParticipActions. In addition, it studies and benchmarks different matching
algorithms aimed to find, according to specific urban crowdsensing goals geo-
localized in the Smart City, the “best” set of people to include in the collec-
tive ParticipAction. The technical challenge here is to find, for the specific geo-socially modeled region, the right dimensioning of the number and profiles of the involved people and of the sensing accuracy.
Tuncay et al. [165] propose a participant recruitment and data collection
framework for opportunistic sensing, in which the participant recruitment and
data collection objectives are achieved in a fully distributed fashion and op-
erate in DTN (Delay Tolerant Network) mode. The framework adopts a new
approach to match mobility profiles of users to the coverage of the sensing
mission. Furthermore, it analyzes several distributed approaches for both par-
ticipant recruitment and data collection objectives through extensive trace-
based simulations, including epidemic routing, spray and wait, profile-cast,
and opportunistic geocast. The performances of these protocols are compared
using realistic mobility traces from wireless LANs, various mission coverage
patterns, and sink mobility profiles. The results show that the performances
of the considered protocols vary, depending on the particular scenario, and
suggest guidelines for future development of distributed opportunistic sensing
systems.
5.6 Conclusion
In this chapter, we discussed existing mobile crowdsensing platforms. We first
described the McSense system, which is a mobile crowdsensing platform that
allows clients to collect many types of sensing data from users’ smartphones.
We then presented Medusa, a programming framework for crowdsensing that
provides support for humans-in-the-loop to trigger sensing actions or review
results, recognizes the need for participant incentives, and addresses their
privacy and security concerns. Finally, we discussed Vita, a mobile cyber-
physical system for crowdsensing applications, which enables mobile users
6
General Design Principles and Example of Prototype
6.1 Introduction
We believe that mobile crowdsensing with appropriate incentives and with
a secure architecture can achieve real-world, large-scale, dependable, and
privacy-abiding people-centric sensing. However, we are aware that many chal-
lenges have to be overcome to make the vision a reality. In this chapter, we
describe a general architecture for a mobile crowdsensing system considering
the existing architectures presented previously in Chapter 5. We then point
out important principles that should be followed when implementing the MCS
prototypes in order to address known challenges in data collection, resource
allocation, and energy conservation.
local analytics if they want to run the application on a wide variety of devices
running on different operating systems.
Second, this approach is inefficient. Applications performing sensing and
processing activities independently without understanding each other’s high-
level context will result in low efficiency when these applications start sensing
similar data from the resource-constrained smartphones. Moreover, there is
no collaboration or coordination across devices, so not all devices may be needed when the device population is dense. The current architecture is therefore not scalable: only a small number of applications can be accommodated on each device, and the data gathered from large crowds of the public may overwhelm the network and the back-end server capacities.
FIGURE 6.1
McSense Android Application showing tabs (left) and task screen for a photo
task (right).
accounts and assigned task details. The server side Java code is deployed on
the Glassfish Application Server, which is an open-source application server.
tically generates virtual tasks with different locations, areas, and durations.
It emulates their execution based on the user profile stored in the data back-
end, in particular, based on location traces: a virtual task is considered to be successfully completed if the location trace of a user comes within its range. To make the model more realistic, Talasila et al. also assume that participants
whose device battery level is very low (e.g., less than 20 percent) will never
execute any task, while if the battery level is high (e.g., 80 percent or more),
they will always execute any task they can; the probability of executing a task
increases linearly between 20 and 80 percent.
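This battery model translates directly into a small piecewise-linear function that the emulator can sample from; the Java sketch below reproduces the rule stated above (class and method names are illustrative).

    import java.util.Random;

    public class BatteryParticipationModel {
        private static final Random RNG = new Random();

        // Probability that a participant executes a task, given battery level in [0, 1]:
        // 0 below 20%, 1 at 80% or above, and linear in between.
        public static double executionProbability(double batteryLevel) {
            if (batteryLevel < 0.2) return 0.0;
            if (batteryLevel >= 0.8) return 1.0;
            return (batteryLevel - 0.2) / 0.6;
        }

        // Stochastic decision used when emulating whether a virtual task gets executed.
        public static boolean willExecute(double batteryLevel) {
            return RNG.nextDouble() < executionProbability(batteryLevel);
        }
    }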
In particular, given a task, its duration, and the set of participants to
whom it has been assigned, the emulator looks for a participant within the
task area by iterating participant position records in the task duration period.
When it finds one, it stochastically evaluates whether the participant will be
able to complete the task, and then updates the statistics about the policy
under prediction by moving to the next participant location record. In the
future, additional user profile parameters can be added, such as task com-
pletion rate or quality of data provided. The predicted situations are run for
each assignment policy implemented in McSense; city managers can exploit
these additional data to compare possible assignment policies and to choose
the one that better suits their needs, whether it has a high chance of success-
ful completion, minimizes the completion time, or minimizes the number of
participants involved.
• One of the survey questions was: “I tried to fool the system by providing
photos from other locations than those specified in the tasks (the answer does
not influence the payment).” By analyzing the responses for this specific
question, it can be seen that only 23.5% of the malicious users admitted
that they submitted the fake photos (4 admitted out of 17 malicious). This
shows that the problem stated in the article on data reliability is real and
it is important to validate the sensed data;
• One survey question related to user privacy was: “I was concerned about my
privacy while participating in the user study.” The survey results show that
78% of the users are not concerned about their privacy. This shows that
many participants are willing to trade off their location privacy for paid
tasks. The survey results are correlated with the collected McSense data
points. The authors posted a few sensing tasks during weekends, which is
considered to be private time for the participants, who are mostly not on
the campus at that time. Talasila et al. observed that 33% of the participants completed the sensing and photo tasks even while spending their personal time on the weekends. The authors conclude that the task price plays a crucial role in persuading participants to trade their location privacy, making it possible to collect quality sensing data from any location and at any time;
• Another two survey questions are related to the usage of phone resources
(e.g., battery) by sensing tasks: 1) “Executing these tasks did not consume
too much battery power (I did not need to re-charge the phone more often
than once a day);” 2) “I stopped the automatic tasks (resulting in incom-
plete tasks) when my battery was low.” The responses to these questions
are interesting. Most of the participants reported that they were carrying
chargers to charge their phone battery as required while running the sens-
ing tasks and were always keeping their phone ready to accept more sensing
tasks. This provides evidence that phone resources, such as the battery, are not a major obstacle to continuously collecting sensing data from different users and locations. Next, we present the battery consumption measurements in detail:
• With Bluetooth and Wi-Fi radios ON, the battery life of the “Droid 2”
phone is over 2 days (2 days and 11 hours);
• With Bluetooth OFF and Wi-Fi radio ON, the battery life of the “Droid 2”
phone is over 3 days (3 days and 15 hours);
• For every Bluetooth discovery, the energy consumed is 5.428 joules. The total capacity of the "Droid 2" phone battery is 18.5 kJ. Hence, over 3,000 Bluetooth discoveries can be performed from different locations using a fully charged phone.
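As a quick check of the last figure, dividing the battery capacity by the per-discovery cost gives

\[
\frac{18{,}500\ \text{J}}{5.428\ \text{J per discovery}} \approx 3{,}408\ \text{discoveries},
\]

which is consistent with the claim of over 3,000 Bluetooth discoveries on a full charge.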
Because the optimal task allocation problem is NP-hard, the authors have proposed two other allocation models: offline allocation and online allocation. The offline allocation model relies on an efficient approximation algorithm that has an approximation ratio of 2 − 1/m, where m is
the number of participating smartphones in the system. The online allocation
model relies on a greedy online algorithm, which achieves a competitive ratio
of at most m. Simulation results show that these models achieve high energy
efficiency for the participants’ smartphones: The approximation algorithm re-
duces the total sensing time by more than 81% when compared to a baseline
that uses a random allocation algorithm, whereas the greedy online algorithm
reduces the total sensing time by more than 73% compared to the baseline.
The energy-efficient allocation framework [187] mainly focuses on the col-
lection of location-dependent data in a centralized fashion and without any
time constraints. However, there are scenarios where the service provider aims
to collect time-sensitive and location-dependent information for its customers
through distributed decisions of mobile users. In that case, the mobile crowd-
sensing system needs to balance the resources in terms of rewards and move-
ment costs of the participants for completing tasks. Cheung et al. [53] pro-
posed a solution to such a distributed time-sensitive and location-dependent
task-selection problem. They modeled the interactions among users as a
non-cooperative task-selection game, and designed an asynchronous and dis-
tributed task-selection algorithm for each user to compute her task selection
and mobility plan. Each user only requires limited information on the aggre-
gate task choices of all users, which is publicly available in many of today’s
crowdsourcing platforms.
Furthermore, a primary bottleneck in crowdsensing systems is the high burden placed on participants, who must manually collect sensor data in response to even simple queries (e.g., in photo crowdsourcing applications such as grassroots jour-
nalism [74], photo tourism [151], and even disaster recovery and emergency
management [107]). The Compressive CrowdSensing (CCS) framework [177]
was designed to lower such user burden in mobile crowdsourcing systems. CCS
enables each participant to provide significantly reduced amounts of manually
collected data, while still maintaining acceptable levels of overall accuracy for
the target crowd-based system. Compressive sensing is an efficient technique for sampling data with an underlying sparse structure. For data that can be sparsely represented, compressive sensing makes it possible to sample at a rate much lower than the Nyquist rate and still accurately reconstruct the signal via a linear projection into a suitable subspace. For exam-
ple, when applied to citywide traffic speeds that have been demonstrated to
have a sparse structure, compressive sensing can reconstruct a dense grid of
traffic speeds from a relatively small vector that roughly approximates the
traffic speeds taken at key road intersections. Naive applications of compres-
sive sensing do not work well for common types of crowdsourcing data (e.g.,
user survey responses) because the necessary correlations that are exploited
by a sparsifying basis are hidden and non-trivial to identify. CCS comprises a
series of novel techniques that enable such challenges to be overcome. Central
to the CCS design is the Data Structure Conversion technique that is able to
search a variety of representations of the data in an effort to find one that
is then suitable for learning a custom sparsifying basis (for example, to mine
temporal and spatial relationships). By evaluating CCS with four represen-
tative large-scale datasets, the authors find that CCS is able to successfully
lower the quantity of user data needed by crowd systems, thus reducing the
burden on participants’ smartphone resources. Likewise, SmartPhoto [174] is
another framework that uses a resource-aware crowdsourcing approach for
image sensing with smartphones.
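For reference, the standard compressive sensing formulation that underlies CCS can be written as

\[
y = \Phi x, \qquad x = \Psi s \ \ (s \ \text{sparse}), \qquad
\hat{s} = \arg\min_{s} \|s\|_1 \ \ \text{subject to} \ \ y = \Phi \Psi s, \qquad \hat{x} = \Psi \hat{s},
\]

where $y \in \mathbb{R}^m$ holds the $m \ll n$ collected samples, $\Phi$ is the measurement (sampling) matrix, and $\Psi$ is the sparsifying basis that, in CCS, the Data Structure Conversion step attempts to learn; the notation here is generic rather than specific to CCS.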
CrowdTasker [176, 185] is another task allocation framework for mobile
crowdsensing systems. CrowdTasker operates on top of the energy-efficient
Piggyback Crowdsensing (PCS) task model, and aims to maximize the cov-
erage quality of the sensing task while satisfying the incentive budget constraints. In order to achieve this goal, CrowdTasker first predicts
the call and mobility of mobile users based on their historical records. With a
flexible incentive model and the prediction results, CrowdTasker then selects
a set of users in each sensing cycle for PCS task participation, so that the
resulting solution achieves near maximal coverage quality without exceeding
the incentive budget.
6.5 Conclusion
In this chapter, we discussed the general design and implementation principles
for prototypes of mobile crowdsensing. We first presented a general architec-
ture based on the current systems. Subsequently, we discussed the general im-
plementation principles that are needed to build a robust mobile crowdsensing
system. Finally, we presented implementation details for a mobile crowdsens-
ing system prototype, observations from a user study based on this prototype,
and mechanisms for resource management in mobile crowdsensing systems.
7
Incentive Mechanisms for Participants
7.1 Introduction
A major challenge for broader adoption of the mobile crowdsensing systems
is how to incentivize people to collect and share sensor data. Many of the
proposed mobile crowdsensing systems provide monetary incentives to smart-
phone users to collect sensing data. There are solutions based on micro-
payments [141] in which small tasks are matched with small payments. Social
incentive techniques such as sharing meaningful aggregated sensing informa-
tion back to participants were also explored, to motivate individuals to par-
ticipate in sensing. In addition, there are gamification techniques proposed for
crowdsourced applications [115, 81]. In this chapter, we discuss in detail each
of these incentive techniques.
Users only benefit from the application when the server compares their sub-
mitted price to other related data points. Therefore, the data pool can only
be queried as users simultaneously contribute. LiveCompare relies on hu-
mans, rather than machines, to interpret complex sale and pricing informa-
tion. The only part of a price tag that must be interpreted by a computer is
the barcode to retrieve a Universal Product Code (UPC). Because of this,
LiveCompare does not need to rely on error-prone OCR algorithms to ex-
tract textual tokens or on linguistic models to make sense of sets of tokens.
Furthermore, each LiveCompare query returns a subset of the data pool for
a user to consider. If an image does not seem relevant, the user can quickly
flag it. This allows users to collectively identify faulty or malicious data.
FIGURE 7.1
Area coverage over time while performing crowdsensing with micro-payments.
Illustration of the coverage overlayed on the campus map. Some areas have
been removed from the map as they are not accessible to students.
Area coverage increases steadily over time in the first three weeks, especially during weekdays. Toward the end
of the study, the rate of coverage decreases as most of the common areas have
been covered. The authors speculate that users did not find the price of the
tasks enticing enough to go to areas located far away from their daily routine.
Overall, the micro-payments-based approach achieves a 46% area coverage of
the campus over the four-week time period of the study.
Since the goal of the game is to uniformly cover a large area with sensing
data, it is essential to link the game story to the physical environment. In the
game “Alien vs. Mobile User,” the players must find aliens throughout an area
and destroy them using bullets that can be collected from the target area. The
players collect sensing data as they move through the area. Although the game
could collect any type of sensing data available on the phones, the implemen-
tation collects WiFi data (BSSID, SSID, frequency, signal strength) to build
a WiFi coverage map of the targeted area. The motivation to play the sensing
game is twofold: 1) The game provides an exciting real-world gaming experi-
ence to the players, and 2) The players can learn useful information about the
environment, such as the WiFi coverage map, which lists the locations having
the best WiFi signal strength near the player’s location.
Game story: The aliens in the game are hiding at different locations across
the targeted area. Players can see the aliens on their screens only when
they are close to the alien positions. This is done in order to encourage the
players to walk around to discover aliens; in the process, the CGS collects
sensing data. At the same time, this makes the game more unpredictable
and potentially interesting. The game periodically scans for nearby aliens
and alerts the players when aliens are detected; the player locates the alien
on the game screen and starts shooting at the alien using the game buttons.
When an alien gets hit, there are two possible outcomes: if this is the first or second time the alien is shot, the alien escapes to a new location to hide from the player; to destroy the alien completely, the player has to find and shoot it three times, with hints about the alien's location provided after each shot. In this way, players are given an incentive to cover more locations (a simplified sketch of this hit-and-escape logic appears after Figure 7.2). Players are rewarded with points for shooting the aliens, and all players can see the current overall player ranking.
The sensing side of the game: Sensing data is collected periodically when
the game is on. The placement of aliens on the map seeks to ensure uniform
sensing coverage of the area. The challenge, thus, is how to initially place
and then move the aliens to ensure fast coverage, while at the same time
maintaining a high player interest in the game.
In the initial phases of sensing, CGS moves each alien to a location that is
not yet covered, but later on it moves the alien intelligently from one loca-
tion to another by considering a variety of factors (e.g., less visited regions,
regions close to pedestrian routes, or regions that need higher sensing accu-
racy). In this way, the game manages to entice users from popular regions to
unpopular ones with a reasonable coverage effort. Generally, the alien will
escape to farther-away regions, and the players might be reluctant to follow
despite the hints provided by CGS. To increase the chances that players
follow the alien, the game provides more points for shooting the alien for a
second time, and even more for the third (fatal) shot.
Game difficulty and achievements: The game was designed with diffi-
culty levels based on the number of killed aliens, the bullets collected from
around the player’s location, and the total score of the player. In this way,
players have extra incentives to cover more ground. A player has to track
and kill a minimum number of aliens to unlock specific achievements and
to enter the next levels in the game. We leverage the achievements APIs
provided in the Android platform as part of Google Play Game Services,
which allow the players to unlock and display achievements, as shown in
Figure 7.2 (right).
Prototype Implementation: A game prototype was implemented for
Android-based smartphones and was deployed on Google Play. An alien
appears on the map when the player is close to the alien’s location, as
shown in Figure 7.2 (left). The player can target the alien and shoot it using
the smartphone’s touch screen. When the alien escapes to a new location,
its “blood trail” to the new location is provided to the player as a hint
to track it down (as shown in Figure 7.2 (middle)). The server side of the
game is implemented in Java using one of the Model View Controller frame-
works involving EJBs/JPA models, JSP/HTML views, and servlets, and it
is deployed on the Glassfish Application Server.
FIGURE 7.2
Alien vs. Mobile User app: finding and shooting the alien (left); alien “blood
trail” (middle); player achievements (right).
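A minimal sketch of the hit-and-escape and alien-placement logic described above follows; the class names, point values, and greedy placement weights are illustrative assumptions on our part, not details of the deployed game.

import java.util.List;

final class AlienGameSketch {
    /** A 10x10m grid cell of the target area with its current number of sensing samples. */
    record Cell(double centerLat, double centerLon, int sampleCount) {}

    static final class Alien {
        int hits = 0;
        double lat, lon;

        Alien(double lat, double lon) { this.lat = lat; this.lon = lon; }

        /** Handles one successful shot and returns the points awarded (point values are illustrative). */
        int onHit(List<Cell> grid) {
            hits++;
            if (hits >= 3) {
                return 300;                          // third (fatal) shot: the alien is destroyed
            }
            Cell next = leastCoveredNearbyCell(grid, lat, lon);
            lat = next.centerLat();                  // escape toward a poorly covered region;
            lon = next.centerLon();                  // the "blood trail" hint points here
            return hits == 1 ? 100 : 200;            // escalating reward to keep the player following
        }
    }

    /** Greedy placement: prefer cells with few sensing samples, lightly penalizing distant cells. */
    static Cell leastCoveredNearbyCell(List<Cell> grid, double fromLat, double fromLon) {
        Cell best = null;
        double bestScore = Double.MAX_VALUE;
        for (Cell c : grid) {
            double d = Math.hypot(c.centerLat() - fromLat, c.centerLon() - fromLon);
            double score = c.sampleCount() + 0.1 * d;   // the weights here are arbitrary
            if (score < bestScore) { bestScore = score; best = c; }
        }
        return best;
    }
}

The greedy rule captures the intuition from the text: escape toward poorly covered cells while keeping the next hiding spot close enough that players keep following the hints.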
Outdoor area coverage. Figure 7.3 shows the area coverage efficiency. To
make the results as consistent as possible with the results from the micro-
payments-based approach, the authors investigated the coverage based on
the collected WiFi data from both studies (McSense and game). The authors
observed that players get highly engaged in the game from the first days,
which leads to high coverage quickly (50% of the target area is covered in less
than 3 days). The coverage progress slows down after the initial phase due
to several reasons. First, the results show only the coverage of ground level.
However, starting in the second week, aliens have also been placed at higher
floors in buildings; this coverage is not captured in the figure. Second, the
slowdown is expected to happen after the more common areas are covered,
as the players must go farther from their usual paths. Third, the authors
observe that the coverage remains mostly constant over the weekends as the
school has a high percentage of commuters, and thus mobile users are not
on campus (as is seen on days 4 to 6, and 11 to 13).
Figure 7.3 also overlays the collected WiFi data over the campus map. The
WiFi signal strength data is plotted with the same color coding as in the
McSense study. Overall, the sensing game achieved 87% area coverage of the
campus in a four-week period.
Indoor area coverage. Figure 7.4 plots the correlation of active players
and the number of squares covered at upper floors over time (the authors
started to place aliens on upper floors on day 12). Indoor localization was
achieved based on WiFi triangulation and the barometric pressure sensor in
FIGURE 7.3
Area coverage over time while performing crowdsensing with mobile gaming
for the first four weeks of the user study. Illustration of the coverage overlayed
on the campus map. Some areas have been removed from the map as they are
not accessible to students.
the phones. The authors observe that indoor coverage correlates well with
the number of active players, and the pattern is similar to outdoor cover-
age. Overall, the game achieved a 35% coverage of the upper floors. Despite
apparently being low, this result is encouraging: The players covered many
hallways and open spaces in each building, but could not go into offices and
other spaces that are closed to students; however, aliens were placed there
as well. To avoid placing aliens in such locations and wasting players’ ef-
fort, the authors plan to investigate a crowdsourcing approach in which the
players mark the inaccessible places while playing the game.
Player activity. Figure 7.5 presents the impact of the number of registered
players and the number of active players on area coverage over time. The
results show the improvement in area coverage with the increase in the
number of registered players in the game. This proves that the players are
interested in the game and are involved in tracking the aliens. The players are
consistently active on weekdays over the period of the study and less active on weekends. For additional insights into the individual
contribution of the players, Figure 7.6 presents the players’ ranks based on
number of covered squares in the area. We observe a power-law distribution
of the players’ contribution to the area coverage.
FIGURE 7.4
Correlation of active players and the number of squares covered at different
floor levels over time in the last two weeks of the user study.
FIGURE 7.5
Impact of the number of registered players and the number of active players
on area coverage over time in the user study.
FIGURE 7.6
Ranking of players based on the number of covered squares in the area (area is
divided into 10x10m squares). 48 out of 53 players covered ground on campus.
However, it may not be the right fit for participant activity sensing, because gaming would change the participants' activities and thus would not capture the desired sensing data. Instead, the micro-payments-based solution is more suitable for capturing the expected personal analytics.
In the mobile game study, the focus was mainly on automatically collecting
sensing data, where players are not annoyed with any manual sensing tasks.
In principle, micro-payments are a better fit for manual sensing. However,
mobile games can also be used for this type of sensing if the sensing task does
not have tight time constraints and its requirement can be translated into a
game action. For example, a player could receive an in-game request to take
a photo of the location where a game character was destroyed.
The gaming approach is well suited for certain sensing tasks, such as mapping a region with sensor readings. For
example, uniform area coverage is not expected to be strongly dependent on
the demographics of the participants and does not require a very large number
of players in the gaming approach. Our results showed that a relatively small
number of passionate players quickly covered a large region.
Ideally, a wider exploration of different alternative designs of the experi-
ments would have provided additional insights. For example, one could imag-
ine a scenario in which each user is asked to perform data collection tasks based
on micro-payments and to play the game alternatively during the study. This
was not feasible given the resources available for the project. Finally, for other
types of sensing tasks, such as collecting personal analytics data, the results
may vary as a function of the area size as well as the population type and size.
7.6 Conclusion
In this chapter, we discussed incentive mechanisms for participants of mobile
crowdsensing. We first discussed social incentives such as sharing meaningful
aggregated sensing information back to participants in order to motivate indi-
viduals to participate in sensing. We then discussed monetary incentives such
as micro-payments in which small tasks are matched with small payments.
Subsequently, we discussed mobile game-based incentives in which partici-
pants play the sensing game on their smartphones while collecting the sensed
data. Finally, we compared the incentive mechanisms in order to derive general
insights.
8
Security Issues and Solutions
8.1 Introduction
This chapter examines security-related issues introduced by the mobile crowd-
sensing platforms, such as ensuring the reliability, quality, and liveness of the
sensed data. By leveraging smartphones, we can seamlessly collect sensing data
from various groups of people at different locations using mobile crowdsens-
ing. As the sensing tasks are associated with monetary incentives, participants
may try to fool the mobile crowdsensing system to earn money. The partici-
pants may also try to provide faulty data in order to influence the outcome
of the sensing task. Therefore, there is a need for mechanisms to validate the
quality of the collected data efficiently. This chapter discusses these issues in
detail and presents solutions to address them.
FIGURE 8.1
Correlation of earnings and fake photos.
around the country. Participants may report fake pollution levels to hurt business competitors by associating the submitted pollution data with incorrect locations.
In Section 6.3.2, we presented observations from the "McSense" user survey, collected from users at the end of the field study to understand the participants' opinions on location privacy and the usage of phone resources. We now present insights on data reliability based on the analysis of the data collected from the "McSense" field studies.
FIGURE 8.2
Correlation of user location and fake photos.
The authors then asked: do users who spend less time on campus submit more fake photos? As suspected, the users who spent less time on campus submitted more fake photos. This behavior can be observed in Figure 8.2.
Figure 8.2 shows the number of fake photos submitted by each user, with
the users sorted by the total hours spent on the New Jersey Institute of Tech-
nology (NJIT) campus. The participants' total hours at the NJIT campus are accumulated from the sensed data collected by the "Automated Sensing" task described in Section 5.2.2 ("Tasks Developed for McSense"). The NJIT campus is modeled as a circle with a radius of 0.5 miles; if a user is inside this circle, she is considered to be at NJIT. For most of the submitted fake photos with false location claims, the users claimed to be at the campus location where the photo task was requested, although they are in fact not frequent visitors to the campus.
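The on-campus test described above reduces to a point-in-circle check; a minimal sketch follows, with placeholder campus-center coordinates.

public final class CampusCheck {
    private static final double CAMPUS_LAT = 40.742;    // placeholder campus-center latitude
    private static final double CAMPUS_LON = -74.179;   // placeholder campus-center longitude
    private static final double RADIUS_MILES = 0.5;
    private static final double EARTH_RADIUS_MILES = 3958.8;

    /** Haversine distance test against the 0.5-mile campus circle. */
    public static boolean isOnCampus(double lat, double lon) {
        double dLat = Math.toRadians(lat - CAMPUS_LAT);
        double dLon = Math.toRadians(lon - CAMPUS_LON);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(CAMPUS_LAT)) * Math.cos(Math.toRadians(lat))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        double distanceMiles = 2 * EARTH_RADIUS_MILES * Math.asin(Math.sqrt(a));
        return distanceMiles <= RADIUS_MILES;
    }
}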
FIGURE 8.3
Photo counts of 17 cheating people.
is trying to submit the sensing data location. Unfortunately, this solution re-
quires infrastructure support and adds a very high overhead on users’ phones
if it is applied for each sensed data point.
Assumptions: Before going into the details of the scheme, the authors as-
sume that the sensed data is already collected by the McSense system from
providers at different locations. However, this sensed data is awaiting vali-
dation before being sent to the actual clients who requested this data.
For ILR, Talasila et al. assume that the sensed data includes location, time,
and a Bluetooth scan performed at the task’s location and time. The main
idea of this scheme is to corroborate data collected from manual (photo)
tasks with co-location data from Bluetooth scans. We describe next an ex-
ample of how ILR uses the photo and co-location data.
Adversarial Model: Talasila et al. assume all the mobile devices are capable
of determining their location using GPS. The authors also assume McSense
is trusted and the communication between mobile users and McSense is
secure. In the threat model, the authors consider that any provider may act
maliciously and may lie about their location.
A malicious provider can program the device to spoof a GPS location [94]
and start providing wrong location data for all the crowdsensing data re-
quested by clients. Accordingly, the authors consider three threat scenarios: 1) The provider does not submit the location and Bluetooth
scan with a sensing data point; 2) The provider submits a Bluetooth scan
associated with a sensing task, but claims a false location; 3) The provider
submits both a false location and a fake Bluetooth scan associated with a
sensing data point. In Section 8.3.1.3, we will discuss how these scenarios
are addressed by ILR.
The authors do not consider colluding attack scenarios, where a malicious
provider colludes with other providers to show that she is present in the
Bluetooth co-location data of others. It is not practically easy for a mali-
cious provider to employ another colluding user at each sensing location.
Additionally, these colluding attacks can be reduced by increasing the min-
imum node degree requirement in co-location data of each provider (i.e., a
provider P must be seen in at least a minimum number of other providers’
Bluetooth scans at her claimed location and time). Therefore, it becomes dif-
ficult for a malicious provider to create a false high node degree by colluding
with real co-located people at a given location and time.
Finally, the other class of attacks that are out of scope are attacks in which a
provider is able to “fool” the sensors to create false readings (e.g., using the
flame of a lighter to create the false impression of a high temperature), but
submits the right location and Bluetooth scan associated with this sensing
task.
• (Step 1) Mark co-located data points as trusted: For each task co-
located with a validated photo task, mark the task’s location as trusted.
• (Step 2) Repeat Step 1 for each newly validated task until all co-located
tasks are trusted or no other task is found.
Validation Process: After executing the two phases of the ILR scheme,
all the co-located data points are validated successfully. If any malicious
provider falsely claims one of the validated tasks’ location at the same time,
then the false claim will be detected in the validation step. Executing the
validation process shown in Algorithm 1 helps detect wrong location
validationProcess():
Run to validate the location of each task in TList.
1: for each task T in TList do
2:   if hasValidator(L, t) == TRUE then
3:     Update task T with false location claim at (L, t)
claims around the already validated location data points. For instance, if
we consider task 12 from Figure 8.4 as a malicious provider claiming a
false location exactly at photo task A’s location and time, then task 12 will
be detected in the validationProcess() function as it is not co-located in
the Bluetooth scans of photo task A. In addition to the validation process,
McSense will also do a basic spatio-temporal correlation check to ensure that
the provider is not claiming locations at different places at the same time.
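The two-phase trust propagation and the subsequent validation can be sketched as follows. The data model (one location-time cell per task plus the set of scanned device IDs) is a simplification we introduce for illustration; it is not the authors' exact data structure or algorithm.

import java.util.*;

final class IlrSketch {
    record Task(String id, String deviceId, String locTimeCell, Set<String> bluetoothScan) {}

    /** Phase 2: extend trust transitively from the manually validated seed tasks. */
    static Set<String> propagateTrust(List<Task> tasks, Set<String> seedTrustedTaskIds) {
        Map<String, Task> byId = new HashMap<>();
        tasks.forEach(t -> byId.put(t.id(), t));
        Deque<String> queue = new ArrayDeque<>(seedTrustedTaskIds);
        Set<String> trusted = new HashSet<>(seedTrustedTaskIds);
        while (!queue.isEmpty()) {
            Task anchor = byId.get(queue.poll());
            for (Task t : tasks) {
                // A task becomes trusted if it shares the (location, time) cell with a trusted
                // task and its device appears in that trusted task's Bluetooth scan.
                if (!trusted.contains(t.id())
                        && t.locTimeCell().equals(anchor.locTimeCell())
                        && anchor.bluetoothScan().contains(t.deviceId())) {
                    trusted.add(t.id());
                    queue.add(t.id());
                }
            }
        }
        return trusted;
    }

    /** Validation: flag tasks that claim a trusted (location, time) cell without being co-located. */
    static Set<String> detectFalseClaims(List<Task> tasks, Set<String> trustedIds) {
        Set<String> falseClaims = new HashSet<>();
        for (Task t : tasks) {
            if (trustedIds.contains(t.id())) continue;
            for (Task v : tasks) {
                if (trustedIds.contains(v.id())
                        && v.locTimeCell().equals(t.locTimeCell())
                        && !v.bluetoothScan().contains(t.deviceId())) {
                    falseClaims.add(t.id());   // claimed the same place and time but was not seen there
                }
            }
        }
        return falseClaims;
    }
}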
TABLE 8.1
Photo task reliability
Total photos                                                          1784
Number of photos with Bluetooth scans (manually validated in ILR)      204
Trusted data points added by ILR                                       148
TABLE 8.2
Number of false location claims
                                   Detected by ILR scheme   Total   Percentage detected
Tasks with false location claim               4               16          25%
Cheating people                               4               10          40%
TABLE 8.3
Simulation setup for the ILR scheme
Parameter Value
Number of nodes 200
% of tasks with false location claims 10, 15, 30, 45, 60
Bluetooth transmission range 10m
Simulation time 2hrs
User walking speed 1m/sec
Node density 2, 3, 4, 5
Bluetooth scan rate 1/min
Simulation Setup:
The simulation setup parameters are presented in Table 8.3. Given a simu-
lation area of 100m x 120m, the node degree (i.e., average number of neighbors
per user) is slightly higher than 5. Talasila et al. varied the simulation area to
achieve node degrees of 2, 3, and 4. The authors consider low walking speeds
(i.e., 1m/sec) for collecting photos. In these simulations, the authors consid-
ered all tasks as photo tasks. A photo task is executed every minute by each
node. Photo tasks are distributed evenly across all nodes. Photo tasks with
false location claims are also distributed evenly across several malicious nodes.
The authors assume the photo tasks in ILR’s phase 1 are manually validated.
After executing the simulation scenarios described below, the authors col-
lected each photo task’s time, location, and Bluetooth scan. As per simulation
settings, there will be 120 completed photo tasks per node at the end of the
simulation (i.e., 24,000 total photo tasks for 200 nodes). Over this collected
data, Talasila et al. applied the ILR validation scheme to detect false location
claims.
Simulation Results:
The results are plotted to gain insight into the right percentage of photo tasks needed in Phase 1 to bootstrap the ILR scheme; Figure 8.6 shows this analysis. Based on it, Talasila et al. concluded that the right percentage of photo tasks needed to bootstrap the ILR scheme is proportional to the expected number of false location claims (which can be predicted using the history of the users' participation).
Node density impact on the ILR scheme. In this set of experiments,
the authors assume that 10% of the total photo tasks are submitting false
locations. In Figure 8.7, Talasila et al. analyzed the impact of node den-
sity on the ILR scheme. The authors seek to estimate the minimum node
density required to achieve highly connected graphs, to extend the location
trust transitively to more co-located nodes.
Therefore, the authors concluded that the ILR scheme can efficiently de-
tect false claims with a low number of manual validations, even for low node
densities.
Assumptions:
This section defines the interacting entities in the environment, and the
assumptions the authors make about the system for the LINK protocol. The in-
teracting entities in the system are:
• Claimer: The mobile user who claims a certain location and subsequently
has to prove the claim’s authenticity.
• Verifier: A mobile user in the vicinity of the claimer (as defined by the
transmission range of the wireless interface, which is Bluetooth in the im-
plementation). This user receives a request from the claimer to certify the
claimer’s location and does so by sending a message to the LCA.
• Location Certification Authority (LCA): A service provided in the Internet
that can be contacted by location-based services to authenticate claimers’
location. All mobile users who need to authenticate their location are regis-
tered with the LCA.
• Location-Based Service (LBS): The service that receives the location in-
formation from mobile users and provides responses as a function of this
location.
Talasila et al. assume that each mobile device has means to determine its
location. This location is considered to be approximate, within the typical limits of GPS or other localization systems. The authors assume the LCA is trusted
and the communication between mobile users and the LCA occurs over secure
channels, e.g., the communication is secured using SSL/TLS. The authors also
assume that each user has a pair of public/private keys and a digital certificate
from a PKI. Similarly, the authors assume the LCA can retrieve and verify the
certificate of any user. All communication happens over the Internet, except
the short-range communication between claimers and verifiers.
The authors chose Bluetooth for short-range communication in LINK be-
cause of its pervasiveness in cell phones and its short transmission range (10m),
which provides good accuracy for location verification. However, LINK can
leverage WiFi during its initial deployment in order to increase the network
density. This solution trades off location accuracy for the number of verifiers.
LCA can be a bottleneck and a single point of failure in the system. To
address these issues, standard distributed systems techniques can be used to
improve the LCA’s scalability and fault tolerance. For example, an individual
LCA server/cluster can be assigned to handle a specific geographic region,
thus reducing the communication overhead significantly (i.e., communication
between LCA servers is only required to access a user’s data when she travels
away from the home region). LINK also needs significant memory and storage
space to store historic data about each pair of users who interact in the sys-
tem. To alleviate this issue, a distributed implementation of the LCA could
use just the recent history (e.g., past month) to compute trust score trends,
use efficient data-intensive parallel computing frameworks such as Hadoop [14]
to pre-compute these trends offline, and employ distributed caching systems
such as Memcached [20] to achieve lower latency for authentication decisions.
Adversarial Model:
Any claimer or verifier may be malicious. When acting individually, ma-
licious claimers may lie about their location. Malicious verifiers may refuse
to cooperate when asked to certify the location of a claimer and may also lie
about their own location in order to slander a legitimate claimer. Additionally,
malicious users may perform stronger attacks by colluding with each other in
order to verify each other’s false claims. Colluding users may also attempt two
classic attacks: mafia fraud and terrorist fraud [64].
Talasila et al. do not consider selfish attacks, in which users seek to reap
the benefits of participating in the system without having to expend their own
resources (e.g., battery). These attacks are solved by leveraging the centralized
nature of LCA, which enforces a tit-for-tat mechanism, similar to those found
in P2P protocols such as BitTorrent [57], to incentivize nodes to participate in
verifications. Only users registered with the LCA can participate in the system
as claimers and verifiers. The tit-for-tat mechanism requires the verifiers to
submit verifications in order to be allowed to submit claims. New users are
allowed to submit a few claims before being requested to perform verifications.
Finally, the authors rely on the fact that a user cannot easily obtain mul-
tiple user IDs because the user ID is derived from a user certificate, and
obtaining digital certificates is not cheap; this deters Sybil attacks [67]. Fur-
ther, techniques such as [127, 133], complementary to the LINK protocol, can
be used to address these attacks.
the LBS. The verifiers' IDs consist of the list of verifiers discovered by the
claimer’s Bluetooth scan; in this way, LCA will ignore the certification replies
received from any other verifiers (the purpose of this step is to defend against
mafia fraud attacks as detailed in Section 8.3.2.3). Furthermore, the LCA
timestamps and stores each newly received claim.
The claimer then starts the verification process by broadcasting to its
neighbors a location certification request over the short-range wireless inter-
face (step 4). This message is signed and consists of (userID, serviceID, lo-
cation, seq-no), with the same sequence number as the claim in step 3. The
neighbors who receive the message, acting as verifiers for the claimer, will send
a signed certification reply message to LCA (step 5) (Verifier Pseudo-code, line
8). This message consists of (userID, location, certification-request), where the
userID and location are those of the verifier and certification-request is the
certification-request broadcasted by the claimer. The certification-request is
included to allow the LCA to match the claim and its certification messages.
Additionally, it proves that indeed the certification-reply is in response to the
claimer’s request.
The LCA waits for the certification reply messages for a short period of
time and then starts the decision process (described next in Section 8.3.2.2).
Finally, the LCA informs the LBS about its decision (step 6) (LCA Pseudo-
code, line 9), causing the LBS to provide or deny service to the claimer (LBS
Pseudo-code, line 8).
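The message contents described above can be summarized as plain data types; the record names and the byte-array representation of signatures are our assumptions for illustration.

import java.util.List;

final class LinkMessages {
    /** Signed by the claimer and broadcast over Bluetooth to nearby verifiers (step 4). */
    record CertificationRequest(String claimerId, String serviceId, String location,
                                long seqNo, byte[] claimerSignature) {}

    /** Sent by the claimer to the LCA (step 3); includes the verifier IDs from the Bluetooth scan. */
    record Claim(String claimerId, String serviceId, String location, long seqNo,
                 List<String> verifierIdsFromBluetoothScan, byte[] claimerSignature) {}

    /** Sent by each verifier to the LCA (step 5); embeds the claimer's certification request. */
    record CertificationReply(String verifierId, String verifierLocation,
                              CertificationRequest request, byte[] verifierSignature) {}
}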
Claimer lies:
The LCA first checks the user’s spatio-temporal correlation by compar-
ing the currently claimed location with the location of the user’s previously
recorded claim (lines 1–3 in the algorithm). If it is not physically possible to
move between these locations in the time period between the two claims, the
new claim is rejected.
If the claimer’s location satisfies the spatio-temporal correlation, the LCA
selects only the “good” verifiers who responded to the certification request
and who are in the list of verifiers reported by the claimer (lines 5–12).
These verifiers must include in their certification reply the correct certifi-
cation request signed by the claimer (not shown in the code) and must
satisfy the spatio-temporal correlation themselves. Additionally, they must
have trust scores above a certain threshold. The authors only use “good”
verifiers because verifiers with low scores may be malicious and may try to
slander the claimer. Nevertheless, the low-score verifiers still respond to certification requests.
Notation of Algorithm 3:
c: claimer
V = {v_0, v_1, ..., v_n}: set of verifiers for claimer c
N_set: set of verifiers who do not agree with c's location claim
v_i: the i-th verifier in V
T_vi: trust score of verifier v_i
T_c: trust score of the claimer
W_vi: weighted trust score of verifier v_i
L_c: location claimed by the claimer
L_vi: location claimed by verifier v_i
IND_tr: individual threshold used to eliminate low-scored verifiers
AVG_tr: average threshold to ensure enough difference in averages
VRF_cnt = 0: variable holding the recursive call count
INC = 0.1: additive increment
DEC = 0.5: multiplicative decrement
secVer[]: array holding the responses of second-level verifications
Contradictory verifications:
If the LCA does not detect collusion between the claimer and verifiers, it
accepts or rejects the claim based on the difference between the sums of the
trust scores of the two sets of verifiers (lines 16–23): those who agree with the location submitted by the claimer (Y_sum) and those who do not (N_sum).
Of course, the decision is easy as long as all the verifiers agree with each
other. The difficulty comes when the verifiers do not agree with each other.
This could be due to two causes: malicious individual verifiers, or verifiers
colluding with the claimer, who have escaped detection.
If the difference between the trust score sums of two sets of verifiers is above
a certain threshold, the LCA decides according to the “winning” set. If it is
low, the LCA does not make a decision yet. It continues by checking the trust
score trend of the claimer (lines 24–27): if this trend is poor, with a pattern
of frequent score increases and decreases, the claimer is deemed malicious
and the request rejected. Otherwise, the LCA checks the score trends of the
verifiers who disagree with the claimer (line 28). If these verifiers are deemed
malicious, the claim is accepted. Otherwise, the claim is ignored, which forces
the claimer to try another authentication later.
Note that even if the claim is accepted in this phase, the trust score of
the claimer is preventively decremented by a small value (lines 32–33). In this
way, a claimer who submits several claims that are barely accepted will receive
a low trust score over time; this trust score will prevent future “accepts” in
this phase (lines 29–30) until her trust score improves.
If the trend scores of both the claimer and the verifiers are good, the
verifiers are challenged to authenticate their location (lines 34–44). This
second level verification is done through a recursive call to the same deci-
sionProcess() function. This function is invoked for all verifiers who do not
agree with the claimer (lines 37–38). If the majority of these verifiers can-
not authenticate their location (i.e., Ignore or Reject answers), the claim is
accepted (lines 39–41). Otherwise, the claim is rejected. The VRFcnt vari-
able is used to keep track of the recursive call count. Since only one addi-
tional verification level is performed, the function returns when its value is 2.
Other functions or learning methods could be used, but this simple function works
well for many types of attacks, as demonstrated by the experiments.
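The trust-weighted vote at the core of this decision can be sketched as follows; the type names are illustrative, the comparison uses sums of trust scores as described in the text, and the trend analysis and second-level verification that follow in Algorithm 3 are omitted.

import java.util.List;

final class TrustVote {
    enum Decision { ACCEPT, REJECT, UNDECIDED }

    record Verifier(double trustScore, boolean agreesWithClaimer) {}

    /** Compares the trust-score sums of agreeing and disagreeing "good" verifiers. */
    static Decision decide(List<Verifier> goodVerifiers, double differenceThreshold) {
        double ySum = 0.0, nSum = 0.0;
        for (Verifier v : goodVerifiers) {
            if (v.agreesWithClaimer()) ySum += v.trustScore(); else nSum += v.trustScore();
        }
        if (Math.abs(ySum - nSum) < differenceThreshold) {
            return Decision.UNDECIDED;      // fall through to the score-trend analysis
        }
        return ySum > nSum ? Decision.ACCEPT : Decision.REJECT;
    }
}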
Colluding users verification. Groups of users may use out-of-band com-
munication to coordinate attacks. For example, they can send location certi-
fication messages to LCA on behalf of each other with agreed-upon locations.
To mitigate such attacks, the LCA maintains an NxN matrix M that tracks
users certifying each other’s claims (N is the total number of users in the
system). M[i][c] counts how many times user i has acted as verifier for user
c. The basic idea is that colluding users will frequently certify each other’s
claims compared with the rest of the users in the system. However, identi-
fying colluding users based solely on this criterion will not work, because a
spouse or a colleague at the office can very frequently certify the location of
certain users. Furthermore, a set of colluding malicious users can use various
permutations of subsets of malicious verifiers to reduce the chances of being
detected.
Therefore, Talasila et al. propose two enhancements. First, the LCA algorithm uses weighted trust scores for verifiers with at least two verifications for a claimer. The weighted trust score of a verifier v is W_v = T_v / log_2(M[v][c]), where T_v is the actual trust score of v. The more a user certifies another user's claims, the less her certifying information contributes to the LCA decision. Talasila et al. chose a log function to induce a slower decrease of the trust score as the count increases. Nevertheless, a small group of colluding users can quickly end up with all their weighted scores falling below the threshold for "good" users, thus stopping the attack.
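A direct transcription of this weighting rule (method and parameter names are ours):

final class WeightedTrust {
    /** W_v = T_v / log_2(M[v][c]) once the verifier has certified this claimer at least twice. */
    static double weightedScore(double trustScore, int timesVerifiedForClaimer) {
        if (timesVerifiedForClaimer < 2) {
            return trustScore;                                   // too little history to discount
        }
        return trustScore / (Math.log(timesVerifiedForClaimer) / Math.log(2.0));
    }
}

With this rule, a verifier who has certified the same claimer four times contributes only half of her trust score, and eight repeated certifications divide it by three.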
This enhancement is used until enough verification data is collected. Then,
it is used in conjunction with the second enhancement, which discriminates
between colluding malicious users and legitimate users who just happen to
verify often for a claimer. LINK rejects a claim if the following conditions are
satisfied for the claimer:
FIGURE 8.4
Example of McSense collected photo tasks [A–I] and sensing tasks [1–15] on
the campus map, grouped using Bluetooth discovery co-location data.
demonstrate the advantage of punishing the verifiers as well vs. a method that
would punish only the claimer.
A higher percentage of users verifying often for the same claimer is a
strong indication of malicious behavior (the parameter α, set to 10% in the
implementation, is used for this purpose). The underlying assumption is that
a legitimate user going about her business is verified by many users over time,
and only a few of them would verify often (e.g., family, lab mates).
Lines 7–13 show how the decision is made. If the number of potentially
colluding verifiers is greater than α, the claimer and those verifiers are pun-
ished. Note that the authors do not punish a verifier who did not participate
in verifications for this claimer since the last time she was punished (line 10).
FIGURE 8.5
The phases of the ILR scheme.
FIGURE 8.6
ILR performance as a function of the percentage of photos manually validated
in Phase 1. Each curve represents a different percentage of photos with fake
locations.
In this way, the verifiers can redeem themselves, but at the same time, their
contribution is still remembered in M. Finally, as shown in lines 14–18, if the
percentage of potentially colluding users is less than α, the counts for those
users are reset to allow them to have a greater contribution in future verifica-
tions for the claimer (this is correlated with the weighted trust score described
previously).
FIGURE 8.7
ILR performance as a function of the percentage of photos manually validated
in Phase 1. Each curve represents a different network density, shown as the
average number of neighbors per node.
FIGURE 8.8
Basic protocol operation (where C = claimer, Vi = verifiers, LBS = location-
based service, LCA = location certification authority).
not focus on preventing such attacks. Instead, the authors focus on deterring users who systematically exhibit malicious behavior. Up to a certain amount
of adversarial presence, the simulation results in Section 8.3.2.4 show that
the protocol is able to decrease, over time, the scores of users that exhibit
malicious behavior consistently, and to increase the scores of legitimate users.
All certification requests and replies are digitally signed, thus the attacker
cannot forge them, nor can she deny messages signed under her private key.
Attackers may attempt simple attacks such as causing the LCA to use the
wrong certification replies to verify a location claim. LINK prevents this attack
by requiring verifiers to embed the certification request in the certification
reply sent to the LCA. This also prevents attackers from arbitrarily creating
certification replies that do not correspond to any certification request, as they
will be discarded by the LCA.
In another class of attacks, the attacker claims a location too far from the previously claimed location. In LINK, the LCA prevents these attacks by detecting that it is
not feasible to travel such a large distance in the amount of time between the
claims.
The LCA’s decision-making process is facilitated when there is a clear
difference between the trust scores of legitimate and malicious users. This
corresponds to a stage in which the user scores have stabilized (i.e., malicious users have low scores and legitimate users have high scores). However, there
may be cases when this score difference is not significant and it becomes
challenging to differentiate between a legitimate verifier vouching against a
malicious claimer and a malicious verifier slandering a legitimate claimer. In
this case, the LCA’s decision relies on several heuristic rules. The true nature
of a user (malicious or legitimate) may be reflected in the user’s score trend
and the LCA can decide based on the score trends of the claimer and verifiers.
The LCA may also potentially require the verifiers to prove their location.
This additional verification can reveal malicious verifiers that are certifying
a position claim (even though they are not in the vicinity of the claimed
position), because the verifiers will not be able to prove their claimed location.
Replay Attack. Attackers may try to slander other honest nodes by
intercepting their certification requests and then replaying them at a later
time in a different location. However, the LCA is able to detect that it has
already processed a certification request (extracted from a certification reply)
because each such request contains a sequence number and the LCA maintains
a record of the latest sequence number for each user. Thus, such duplicate
requests will be ignored.
Individual Malicious Claimer or Verifier Attacks. We now consider
individual malicious claimers that claim a false location. If the claimer follows
the protocol and broadcasts the certification request, the LCA will reject
the claim because the claimer’s neighbors provide the correct location and
prevail over the claimer. However, the claimer may choose not to broadcast
the certification request and only contact the LCA. If the attacker has a good
trust score, she will get away with a few false claims. The impact of this attack
is limited because the attacker's trust score is decreased by a small decrement for
each such claim, and she will soon end up with a low trust score; consequently,
all future claims without verifiers will be rejected. Accepting a few false claims
is a trade-off the authors adopted in LINK in order to accept location claims
from legitimate users that occasionally may have no neighbors.
An individual malicious verifier may slander a legitimate user who claims
a correct location. However, in general, the legitimate user has a higher trust
score than the malicious user. Moreover, the other (if any) neighbors of the
legitimate user will support the claim. The LCA will thus accept the claim.
Colluding Attack. A group of colluding attackers may try to verify each
other’s false locations using out-of-band channels to coordinate with each
other. For example, one attacker claims a false position and the other attackers
in the group support the claim. LINK deals with this attack by recording the
history of verifiers for each claimer and gradually decreasing the contribution
of verifiers that repeatedly certify for the same claimer (see Section 8.3.2.2).
Even if this attack may be successful initially, repeated certifications from the
same group of colluding verifiers will eventually be ignored (as shown by the
simulations in Section 8.3.2.4).
Mafia Fraud. In this attack, colluding users try to slander honest claimers
without being detected, which may lead to denial-of-service. For example, a
malicious node M1 overhears the legitimate claimer’s certification request and
relays it to a remote collaborator M2; M2 then re-broadcasts this certification
request pretending to be the legitimate claimer. This results in conflicting
certification replies from honest neighbors of the legitimate claimer and honest
neighbors of M2 from a different location. This attack is prevented in LINK
because the LCA uses the list of verifiers reported by the legitimate claimer
from its Bluetooth scan. Therefore, LCA ignores the certification replies of
the extra verifiers who are not listed by the legitimate claimer. These extra
verifiers are not punished by LCA, as they are being exploited by the colluding
malicious users. Furthermore, it is difficult for colluding users to follow certain
users in order to succeed in such an attack.
Limitations and Future Work. The thresholds in the protocol are set
based on the expectations of normal user behavior. However, they can be
modified or even adapted dynamically in the future.
LINK was designed under the assumption that users are not alone very
often when sending the location authentication requests. As such, it can lead
to significant false positive rates for this type of scenario. Thus, LINK is best
applicable to environments in which user density is relatively high.
Terrorist fraud is another type of attack in which one attacker relays the
certification request to a colluding attacker at a different location, in order to
falsely claim presence at that different location. For example, a malicious node M1 located at location L1 relays its certification request for location L2 to a collaborator M2 located at L2. M2 then broadcasts M1's request to nearby verifiers. Verifiers certify this location request, and as a result the LCA falsely believes that M1 is located at L2. This attack is less useful in practice and is hard to mount, as it requires one of the malicious users to be located at the falsely claimed location.
8.3.2.4 Simulations
This section presents the evaluation of LINK using the ns-2 simulator.
The two main goals of the evaluation are: (1) Measuring the false nega-
tive rate (i.e., percentage of accepted malicious claims) and false positive
rate (i.e., percentage of denied truthful claims) under various scenarios, and
(2) verifying whether LINK’s performance improves over time as expected.
Simulation Setup:
The simulation setup parameters are presented in Table 8.4. The average
number of neighbors per user considering these parameters is slightly higher
than 5. Since the authors are interested in measuring LINK’s security perfor-
mance, not its network overhead, they made the following simplifying changes
in the simulations. Bluetooth is emulated by WiFi with a transmission range of
10m. This results in faster transmissions as it does not account for Bluetooth
discovery and connection establishment. However, the impact on security of
TABLE 8.4
Simulation setup for the LINK protocol
Parameter                                 Value
Simulation area                           100m x 120m
Number of nodes                           200
% of malicious users                      1, 2, 5, 10, 15
Colluding user group size                 4, 6, 8, 10, 12
Bluetooth transmission range              10m
Simulation time                           210 min
Node speed                                2 m/sec
Claim generation rate (uniform)           1/min, 1/2min, 1/4min, 1/8min
Trust score range                         0.0 to 1.0
Initial user trust score                  0.5
"Good" user trust score threshold         0.3
Low trust score difference threshold      0.2
Trust score increment                     0.1
Trust score decrement (common case)       0.5
Trust score decrement (no neighbors)      0.1
this simplification is limited due to the low walking speeds considered in these
experiments. Section 8.3.2.6 will present experimental results on smartphones
that quantify the effect of Bluetooth discovery and Piconet formation. The
second simplification is that the communication between the LCA and the
users does not have any delay; the same applies to the out-of-band commu-
nication between colluding users. Finally, a few packets can be lost due to
wireless contention because the authors did not employ reliable communica-
tion in their simulation. However, given the low claim rate, the impact of these
packets is minimal.
To simulate users’ mobility, Talasila et al. used the Time-variant Com-
munity Mobility Model (TVCM model) [90], which has the realistic mobility
characteristics observed from wireless LAN user traces. Specifically, TVCM
selects frequently visited communities (areas that a node visits frequently)
and different time periods in which the node periodically re-appears at the
same location. Talasila et al. use the following values of the TVCM model
in the simulations: 5 communities, 3 periods, and randomly placed commu-
nities represented as squares with an edge length of 20m. The TVCM fea-
tures help in providing a close approximation of real-life mobility patterns
compared to the often-used random waypoint mobility model (RWP). Never-
theless, to collect additional results, the authors ran simulations using both
TVCM and RWP. For most experiments, they have seen similar results be-
tween the TVCM model and the RWP model. Therefore, they omit the
RWP results. There is one case, however, in which the results for TVCM
are worse than the results for RWP: the "always malicious individual verifiers" case; this difference will be pointed out when we discuss it.
Simulation Results:
Always malicious individual claimers. In this set of experiments, a
certain number of non-colluding malicious users send only malicious claims;
however, they verify correctly for other claims.
If malicious claimers broadcast certification requests, the false negative
rate is always 0. These claimers are punished and, because of low trust scores,
they will not participate in future verifications. For higher numbers of ma-
licious claimers, the observed false positive rate is very low (under 0.1%),
but not 0. The reason is that a small number of good users remain without
neighbors for several claims and, consequently, their trust score is decreased;
similarly, their trust score trend may seem malicious. Thus, their truthful
claims are rejected if they have no neighbors. The users can overcome this
rare issue if they are made aware that the protocol works best when they have
neighbors.
If malicious claimers do not broadcast certification requests, a few of their
claims are accepted initially because it appears that they have no neighbors.
If a claimer continues to send this type of claim, her trust score falls below the
“good” user threshold and all her future claims without verifiers are rejected.
Thus, the false negative rate will become almost 0 over time. The false positive
rate remains very low in this case.
Sometimes malicious individual claimers. In this set of experiments,
a malicious user attempts to “game” the system by sending not only mali-
cious claims but also truthful claims to improve her trust score. Talasila et al.
have evaluated two scenarios: (1) malicious users alternate one truthful claim and one false claim throughout the simulation, and (2) malicious users send one false claim for every four truthful claims. For the first 10 minutes
of the simulation, they send only truthful claims to increase their trust score.
Furthermore, these users do not broadcast certification requests to avoid being
proved wrong by others.
Figure 8.9 shows that LINK quickly detects these malicious users. Initially,
the false claims are accepted because the users claim to have no neighbors and
have good trust scores. After a few such claims are accepted, LINK detects
the attacks based on the analysis of the trust score trends and punishes the
attackers.
Figure 8.10 illustrates how the average trust score of the malicious users
varies over time. For the first type of malicious users, the multiplicative de-
crease followed by an additive increase cannot bring the score above the
“good” user threshold; hence, their claims are rejected even without the trust
score trend analysis. However, for the second type of malicious users, the
average trust score is typically greater than the “good” user threshold. Nevertheless, these users are detected based on the trust score trend analysis. In these simulations, the trust score ranges between 0 and 1: the additive increase is applied until the score reaches 1, after which it is not incremented anymore; it stays at 1 until there is a claim rejection or a colluding-verifier punishment.

FIGURE 8.9
False negative rate over time for individual malicious claimers with mixed behavior. The claim generation rate is 1 per minute, 15% of the users are malicious, and average speed is 1m/s.
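As a minimal sketch of the trust-score bookkeeping whose effect Figure 8.10 plots, the update rules below cap the score at 1 as described above. The additive step, the multiplicative penalty factor, and the threshold value are illustrative assumptions, since the text does not quote them here:

    ADDITIVE_STEP = 0.05     # reward per accepted claim/verification (assumed value)
    PENALTY_FACTOR = 0.5     # multiplicative decrease on punishment (assumed value)
    GOOD_THRESHOLD = 0.5     # "good" user threshold (assumed value)

    def reward(score):
        # Additive increase, capped at 1 as described in the text.
        return min(1.0, score + ADDITIVE_STEP)

    def punish(score):
        # Multiplicative decrease after a claim rejection or colluding-verifier punishment.
        return score * PENALTY_FACTOR

    def is_good(score):
        return score >= GOOD_THRESHOLD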
Always malicious individual verifiers. The goal of this set of exper-
iments is to evaluate LINK’s performance when individual malicious veri-
fiers try to slander good claimers. In these experiments, there are only good
claimers, but a certain percentage of users will always provide malicious veri-
fications.
Figure 8.11 shows that LINK is best suited for city environments, where a user density of at least 5 can easily be found. The authors observed that for user densities below 4, LINK cannot tolerate more than 10% malicious verifiers, and for user densities of 3 or less, LINK sees high false positive rates, either because there are no verifiers or because the verifiers around the claimer are malicious.
From Figure 8.12, we observe that LINK performs well even for a relatively
high number of malicious verifiers, with a false positive rate of at most 2%.
The 2% rate happens when a claimer has just one or two neighbors and those
neighbors are malicious. However, a claimer can easily address this attack
by re-sending a claim from a more populated area to increase the number of
verifiers.
FIGURE 8.10
Trust score of malicious users with mixed behavior over time. The claim gen-
eration rate is 1 per minute, 15% of the users are malicious, and average speed
is 1m/s. Error bars for 95% confidence intervals are plotted.
FIGURE 8.11
False positive rate as a function of the percentage of malicious verifiers for
different node densities. The claim generation rate is 1 per minute and average
speed is 1m/s. Error bars for 95% confidence intervals are plotted.
works well for these group sizes (up to 6% of the total nodes collude with each
other). After a short period of high false negative rates, the rates decrease
sharply and subsequently no false claims are accepted.
In LINK, all colluding users are punished when they are found to be ma-
licious (i.e., the claimer and the verifiers). This decision could result in a few
“good” verifiers being punished once in a while (e.g., family members). Figures 8.15 and 8.16 show the false negative and false positive rates, respectively, when punishing and when not punishing the verifiers (the claimers are always punished). The authors observed that LINK takes a little longer to catch the colluding users when the verifiers are not punished; at the same time, a small increase in the false positive rate is observed when they are punished. Since this increase in the false positive rate is not significant, the authors prefer to punish the verifiers in order to detect malicious claims sooner.
8.3.2.5 Implementation
The LINK prototype has been implemented and tested on Motorola Droid 2 smartphones running Android OS 2.2. These phones have 512 MB of RAM, a 1 GHz processor, Bluetooth 2.1, WiFi 802.11 b/g/n, 8 GB of on-board storage, and 8 GB of microSD storage. Since the authors did not have data plans
on their phones, all experiments were performed by connecting to the Internet
over WiFi.
FIGURE 8.12
False positive rate as a function of the percentage of malicious verifiers for
different claim generation rates. The average speed is 1m/s. Error bars for
95% confidence intervals are plotted.
Client API:
We present the client API in the context of the Coupon LBS and its cor-
responding application. This service distributes location-based electronic dis-
count coupons to people passing by a shopping mall. To prevent users located
farther away from receiving these coupons, the service has to authenticate
their location.
The corresponding application is implemented as an Android Application
Project. The user is provided with a simple “Request Coupon” button as
shown in Figure 8.17. The application submits the current location of the user
to the LBS and waits for an answer. Upon receiving the location authentication request from the LBS, the application invokes the submit claim LINK API. An optimization in the LINK package implementation was to limit the duration of the Bluetooth discovery to 5.12s in order to reduce the claim latency.
FIGURE 8.13
False positive rate over time for different percentages of malicious verifiers.
The claim generation rate is 1 per minute and the average speed is 1m/s.
Error bars for 95% confidence intervals are plotted.
FIGURE 8.14
False negative rate over time for colluding users. Each curve is for a differ-
ent colluding group size. Only 50% of the colluding users participate in each
verification, thus maximizing their chances to remain undetected. The claim
generation rate is 1 per minute and the average speed is 1m/s. Error bars for
95% confidence intervals are plotted.
LCA Server:
The LCA is a multi-threaded server that maintains the claim transaction hashmap, a list of all users’ details (ID, Bluetooth device address, RSA public key, trust score, etc.), and all users’ weight matrices used in the decision process. One of the important implementation decisions is how long a thread that received a claim should wait for verifications to arrive (the LCA knows the number of expected verifiers from the submitted claim message). Such a bound is necessary because some verifier phones may be turned off during the verification process, may move out of Bluetooth transmission range before the connection with the claimer is made, or may even act maliciously and refuse to answer. This last case could lead to a denial-of-service attack on the LCA. Thus, the LCA cannot wait (potentially forever) until all expected verification messages arrive. It needs a timeout, after which it makes its decision based on the verifications received up to that moment.

FIGURE 8.15
False negative rate over time when punishing and not punishing colluding verifiers. The size of the colluding group is 12, and 50% of these users participate in each verification. The claim generation rate is 1 per minute and the average speed is 1m/s.

FIGURE 8.16
False positive rate over time when punishing and not punishing colluding verifiers. All parameters are the same as in Figure 8.15.
FIGURE 8.17
Coupon application on Android phone.

Talasila et al. considered a waiting function that is linear in the number of verifiers. The linear increase is due to the sequential establishment of Bluetooth connections between the claimer and the verifiers (i.e., they cannot be done in parallel). Since such a connection takes about 1.2s, the authors defined the waiting time as w = number of verifiers * 2s, where 2s is an upper bound for the connection latency. However, this fixed waiting time could lead to long delays when there are many verifiers and one or two of them do not answer at all. Therefore, the authors decided to adapt (i.e., reduce) the waiting time as a function of the number of received verifications: upon each received verification, w = w * 4/5. The value of the reduction factor can be tuned further, but it has worked well in the experiments so far.
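As a minimal Python sketch of this waiting policy: the 2s per-verifier budget and the 4/5 reduction factor come from the text, while the helper receive_verification and the exact way the reduced budget is re-armed after each verification are assumptions made for illustration:

    import time

    CONNECTION_UPPER_BOUND_S = 2.0   # upper bound for one Bluetooth connection (from the text)
    REDUCTION_FACTOR = 4.0 / 5.0     # applied after each received verification (from the text)

    def collect_verifications(num_verifiers, receive_verification):
        # receive_verification(remaining) is a hypothetical helper that blocks for
        # at most `remaining` seconds and returns one verification, or None on timeout.
        w = num_verifiers * CONNECTION_UPPER_BOUND_S
        deadline = time.monotonic() + w
        received = []
        while len(received) < num_verifiers:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break                          # timeout: decide with what arrived so far
            v = receive_verification(remaining)
            if v is None:
                break
            received.append(v)
            w *= REDUCTION_FACTOR              # shrink the waiting budget
            deadline = time.monotonic() + w    # re-arm with the reduced budget
        return received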
Once all the verification replies are received or the timeout expires, the
LCA processes the claim through the decision process algorithm. Finally, the
LCA informs the claimer and the LBS about its decision.
TABLE 8.5
Latency table for individual LINK tasks
Task Total time taken (s)
WiFi communication RTT 0.350
Bluetooth discovery 5.000
Bluetooth connection 1.200
Signing message 0.020
Verifying message 0.006
Measurements Methodology:
For latency, the round-trip time of the entire LINK protocol (LINK RTT) is measured in seconds at the coupon application running on the claimer’s phone. For battery consumption, PowerTutor [186], available in the Android market, is used to collect power readings every second. The log files generated by PowerTutor are parsed to extract the CPU and WiFi power usage for the application’s process ID. Separate tests are performed to benchmark the Bluetooth tasks because PowerTutor does not report the Bluetooth radio power usage in its logs. All reported values are averages over 50 claims for each test case.
Micro-Benchmark Results:
In these experiments, the authors used just two phones, one claimer and
one verifier. Table 8.5 shows the latency breakdown for each individual task
in LINK. Bluetooth discovery and Bluetooth connection account for most of the response time. Note that the authors limited Bluetooth discovery to 5.12s, as explained in Section 8.3.2.5, to reduce the latency.
From these results, they estimated that LINK latency is around 7s for one
verifier; the latency increases linearly with the number of verifiers because the
Bluetooth connections are established sequentially.
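As a rough cross-check using the numbers in Table 8.5, a single-verifier claim adds up to about 0.350 + 5.000 + 1.200 + 0.020 + 0.006 ≈ 6.6 s, consistent with the roughly 7s estimate above; since the connections are sequential, the total for n verifiers grows approximately as 0.35 + 5.0 + 1.2n seconds, plus the small cryptographic costs.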
Table 8.6 shows the energy consumption breakdown for each task in LINK.
The results show that Bluetooth discovery consumes the most energy, while the remaining tasks consume comparatively little.
TABLE 8.6
Energy consumption for individual LINK tasks
Task                                      Energy consumed (Joules)
WiFi communication RTT 0.100
Bluetooth discovery 5.428
Bluetooth connection (Claimer side) 0.320
Bluetooth connection (Verifier side) 0.017
Signing message 0.010
Verifying message 0.004
FIGURE 8.18
LINK RTT and total energy consumed by the claimer per claim as a function of the number of verifiers.
TABLE 8.7
Battery life for different WiFi and Bluetooth radio states
                Bluetooth and WiFi off    Bluetooth off and WiFi on    Bluetooth and WiFi on
Battery life    10 days 16 hrs            3 days 15 hrs                2 days 11 hrs
Finally, let us recall that LINK is robust to situations in which verifiers move out of transmission range before the Bluetooth connection is established.
To understand LINK’s feasibility from an energy point of view (i.e., to answer the second question posed above), Talasila et al. performed an analysis to see how many claims and verifications a smartphone can execute before it runs out of battery power. The total capacity of the Motorola Droid 2 battery is 18.5 kJ. Since LINK requires WiFi and Bluetooth to be on, the authors
first measured the effect of these wireless interfaces on the phone lifetime.
Table 8.7 shows the results for different interface states (without running any
applications). The authors observed that even when both are on all the time,
the lifetime is still over 2 days, which is acceptable (most users recharge their phones at night). The lifetime is even better in reality because Android
puts WiFi to sleep when there is no process running that uses WiFi (in the
experiments, the authors forced it to be on all the time).
Next, using this result, the authors estimated how many claims and veri-
fications LINK can do with the remaining phone energy:
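As a rough back-of-the-envelope illustration (our own arithmetic, using Tables 8.6 and 8.8): with on the order of 6–7 J consumed by the claimer per claim, the 18.5 kJ battery would support roughly 18,500 / 7 ≈ 2,600 claims if spent on LINK alone, ignoring the baseline drain of keeping WiFi and Bluetooth on; verifications, whose measured components total only a few tenths of a joule each (Table 8.6), are far cheaper.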
TABLE 8.8
Average RTT and energy consumption per claim for multi-claimer case vs.
single claimer case
                        Average RTT (s)    Energy consumed (Joules)
Multi-Claimer case 15.31 9.25
Single-Claimer case 8.60 7.04
simultaneous claims are performed by three phones, with the values measured for the case of one single claimer and two verifiers. With the new settings, the claimers were able to discover all verifiers. However, as expected, this robustness comes at the cost of higher latency (the energy consumption also increases, but not as much as the latency). The good news is that, according to the guidelines for LINK claimers, all three claimers are expected to be static. Therefore, the increased latency should not impact the successful
completion of the protocol.
Through extensive experiments, the authors show that Movee can efficiently differentiate fraudulent videos from genuine ones, achieving a detection accuracy between 68% and 93% on a Samsung Admire smartphone and between 76% and 91% on a Google Glass device.
The Movee system has several limitations. First, it is not transparent to
its users: The user is required to perform a verification step at the beginning of video capture, during which the user needs to move the camera of the device in a certain direction for 6 seconds. Second, Movee is vulnerable to a “stitch” attack, in which the attacker creates a fraudulent video by first live-recording a genuine video and then pointing the camera at a pre-recorded target video. These limitations are addressed by Rahman et al. [136], who proposed Vamos, a Video Accreditation through Motion Signatures system. Vamos provides liveness verification for videos of arbitrary length, is resistant to a wide range of attacks, and is completely transparent to the users; in particular, it requires no special user interaction or change in user behavior.
To eliminate the initial verification step that was required in Movee, Vamos
uses the entire video and acceleration streams for verification purposes. Vamos
consists of a three-step process. First, it divides the input sample into equal
length chunks. Second, it classifies each chunk as either genuine or fraudulent.
Third, it combines the results of the second step with a suite of novel features
to produce a final decision for the original sample.
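A minimal sketch of this three-step structure is shown below; the chunk length, the per-chunk classifier (passed in as a function), and the majority-vote aggregation are placeholders standing in for the motion-signature features and trained classifiers of the actual system [136]:

    def chunks(frames, accel, chunk_len):
        # Step 1: divide the (video, acceleration) sample into equal-length chunks.
        n = min(len(frames), len(accel))
        for i in range(0, n, chunk_len):
            yield frames[i:i + chunk_len], accel[i:i + chunk_len]

    def verify_sample(frames, accel, classify_chunk, chunk_len=150):
        # Step 2: classify each chunk as genuine (True) or fraudulent (False).
        verdicts = [classify_chunk(v, a) for v, a in chunks(frames, accel, chunk_len)]
        # Step 3: the real system combines per-chunk results with additional
        # features; a simple majority vote stands in for that final decision here.
        return sum(verdicts) > len(verdicts) / 2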
Vamos was validated based on two sets of videos: The first set consists of
150 citizen journalism videos collected from YouTube; the second set consists
of 160 free-form videos collected from a user study. The experimental results
show that the classification performance depends on the type of video. This leads the
authors to conclude that the success rate of attacks against video liveness
depends on the type of motions encoded in the video. The authors propose
a general classification of videos captured on mobile devices, based on user
motion, camera motion, and distance to subject.
Other work in this space includes InformaCam [15], which provides mech-
anisms to ensure that the media was captured by a specific device at a certain
location and time. A limitation of InformaCam is that it is vulnerable to
projection attacks.
8.4 Conclusion
This chapter examined several security-related issues that may affect the qual-
ity of the data collected by mobile crowdsensing systems. We first presented
general reliability issues associated to sensed data. We then discussed solu-
tions to ensure the reliability and quality of the sensed data, such as the ILR,
LINK, and SHIELD schemes. Finally, we examined solutions to ensure data
liveness and truth discovery.
9
Privacy Concerns and Solutions
9.1 Introduction
The increase in user participation and the rich data collection associated with
it are beneficial for mobile crowdsensing systems. However, sensitive participant information may be revealed in this process, such as daily routines, social context, or location. This raises privacy concerns and, depending on the sensitivity of the collected information, participants may refuse to engage in sensing activities, which is a serious problem for mobile
crowdsensing systems. This chapter discusses potential solutions to the pri-
vacy issues introduced by mobile crowdsensing platforms.
9.2.1 Anonymization
A popular approach for preserving privacy of the data is anonymization [153].
Anonymization can be used to remove the identifying information collected
by crowdsensing applications, but it raises two problems. First, the mere removal of identifying information such as names and addresses cannot guarantee anonymity. For example, when crowdsensing applications collect location data, anonymization cannot prevent the identification of individuals: the anonymized location data still reveal a person’s frequently visited locations, which in turn may lead to that person’s identification. Second, in the context of data anonymization, data utility and data privacy are conflicting goals. As a result, anonymizing the data enhances privacy protection but decreases data utility.
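A small illustration of the first problem, under assumed data: even after identifiers are stripped, the most frequently visited grid cell in a location trace often corresponds to a person's home or workplace and can therefore re-identify them:

    from collections import Counter

    def most_frequent_cell(trace, cell_size=0.001):
        # trace: list of (lat, lon) points with names/addresses already removed.
        cells = Counter((round(lat / cell_size), round(lon / cell_size))
                        for lat, lon in trace)
        return cells.most_common(1)[0]   # ((cell_lat, cell_lon), visit_count)

    # Example with a made-up trace: the dominant cell likely reveals "home".
    trace = [(40.7421, -74.1785)] * 50 + [(40.7430, -74.1700)] * 5
    print(most_frequent_cell(trace))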
9.2.2 Encryption
The privacy of users’ personal data can be protected by using encryption techniques [183]. By encrypting the data submitted by the users, unauthorized third
parties will not be able to use personal data, even if they acquire access to
the encrypted data. However, such cryptographic techniques may be compute-
intensive, which leads to increased energy consumption, and may not be scal-
able because they require the generation and maintenance of multiple keys.
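For illustration, a minimal example of encrypting a sensed report before submission, using symmetric encryption from the Python cryptography package; the key generation and management, which the text identifies as the scalability bottleneck, is exactly what is not shown here:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # one key per user; generating and managing
    f = Fernet(key)                      # many such keys is the scalability concern
    report = b'{"lat": 40.742, "lon": -74.179, "pm25": 12.3}'
    token = f.encrypt(report)            # what an unauthorized third party would see
    assert f.decrypt(token) == report    # only key holders recover the data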
Overall, mobile crowdsensing systems have to find a way to protect the data of individuals while at the same time enabling the operation of the sensing applications. The following sections discuss solutions proposed to address the privacy concerns of participants.
In this case, only an adversary compromising the application server will learn the sensed data, but it will not be able to link the data back to the participant who originated it.
The important question now is when to use the POI mechanism and when to use the double-encryption mechanism. To answer this question, a POI-determining algorithm, run by the Points of Interest generator server, decides which mechanism needs to be applied in each cell. Before going into the details of the algorithm, the reader needs to understand the inference techniques applied to the sensed data to generate a map of estimated values (M_Rt) that is used as the input to the POI-determining algorithm.
Inference techniques are utilized in MCS systems to estimate the variables
of interest in those places where data are not available. Kriging [123] is one
of the most widely used techniques on spatio-temporal datasets. Kriging is
a BLUE (Best Linear Unbiased Estimator) interpolator. That means it is a
linear estimator that matches the correct expected value of the population
while minimizing the variance of the observations [77]. In the inference stage, the application server applies the Kriging technique to the data reported by the participants. The result is a map of estimated values (M_Rt) for round R_t, covering each point in the area of interest. This map presents to the final user the values of the variables of interest over the entire area. Then, based on the current M_Rt map, the Points of Interest generator server runs the POI-determining algorithm to define the new set of points of interest and Voronoi spaces (or cells) for the next round R_t+1.
The POI-determining algorithm is the most important part of the new hybrid mechanism. The main idea is to calculate the variability of the variable of interest (e.g., pollution or temperature data) after each round and then adjust the size of the cells accordingly. If the measurements present very low
variability (measurements are within a small range of variation) inside a cell,
the cell will very likely remain of the same size and the mobile device selects
the Points of Interest mechanism to obfuscate the data, i.e., the users will
send the reports using the location of the POI of their respective cells. On
the other hand, if the variable of interest presents high variability (e.g., very
different pollution or temperature values in different zones of the same cell),
the algorithm will find those zones with different values and create new cells.
If these new cells are smaller than a minimum cell size S_min, the mobile
device selects the double-encryption technique to encrypt the data because
otherwise the user might be recognized considerably more easily (the user will
be confined to a smaller area). In this manner, when the mechanism obfuscates
the data, it protects the privacy of the user more and saves energy, and when
it encrypts the data, it provides more accurate information about the variable
of interest but it spends more energy.
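A minimal sketch of the per-cell decision follows. Only the overall rule comes from the text (low variability keeps the cell and uses POI obfuscation; cells that would shrink below S_min switch to double encryption); the representation of a cell as an axis-aligned square, the quadrant subdivision, and the use of the value range as the variability measure are illustrative assumptions:

    def subdivide(cell):
        # cell = (x, y, size): split a square cell into four quadrants.
        x, y, size = cell
        h = size / 2.0
        return [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]

    def decide_cell(cell, values_in_cell, low_var_threshold, s_min):
        # values_in_cell(cell) is a hypothetical accessor returning the Kriging
        # estimates of the map M_Rt that fall inside the given cell.
        vals = values_in_cell(cell)
        if max(vals) - min(vals) <= low_var_threshold:
            return [(cell, "POI")]                 # keep the cell; obfuscate to its POI
        decisions = []
        for sub in subdivide(cell):                # high variability: create new cells
            mode = "DOUBLE_ENCRYPTION" if sub[2] < s_min else "POI"
            decisions.append((sub, mode))
        return decisions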
At the same time, Gisdakis et al. [75] aim for an MCS system that is resilient to abusive users and guarantees privacy protection even against multiple misbehaving MCS entities (servers). They address these seemingly contradictory requirements with the SPPEAR architecture (Security & Privacy-Preserving Architecture for Participatory-Sensing Applications).
To protect the privacy of the parties querying mobile nodes, PEPPeR [65] decouples the
process of node discovery from the access control mechanisms used to query
these nodes.
PEPSI [60] is a centralized solution that provides privacy to data queriers
and, at the same time, prevents unauthorized entities from querying the
results of sensing tasks. However, PEPSI does not consider accountability
and privacy-preserving incentive mechanisms and it does not ensure privacy
against cellular Internet Service Providers (ISPs).
Each participant is given one report token for each task. The participant consumes the token when submitting a report for the task and thus cannot submit additional reports. To satisfy the last condition, when the service provider receives a report, it issues pseudo-credits to the reporting participant, which can be transformed into c credit tokens. The participant deposits these tokens into its credit account.
Second, to achieve the privacy goals, all tokens are constructed in a privacy-
preserving way, such that a request (report) token cannot be linked to a par-
ticipant and a credit token cannot be linked to the task and report from which
the token is earned.
Therefore, the scheme precomputes privacy-preserving tokens for partic-
ipants, which are used to process future tasks. To ensure that participants
will use the tokens appropriately on their smartphones (i.e., they will not abuse
the tokens), commitments to the tokens are also precomputed such that each
request (report) token is committed to a specific task and each credit token
is committed to a specific participant’s smartphone.
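The token flow can be summarized with the plain bookkeeping sketch below (one report token per task, consumed on submission, and c credit tokens earned per accepted report); the cryptographic commitments and unlinkability properties described above are precisely the parts this sketch does not capture:

    class Participant:
        def __init__(self, c):
            self.c = c                      # credit tokens earned per submitted report
            self.report_tokens = set()      # one report token per assigned task
            self.credits = 0                # credit account

        def assign_task(self, task_id):
            self.report_tokens.add(task_id)

        def submit_report(self, task_id):
            if task_id not in self.report_tokens:
                raise ValueError("no report token left for this task")
            self.report_tokens.remove(task_id)   # consumed: only one report per task
            self.credits += self.c               # pseudo-credits become credit tokens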
Casper maintains the locations of the clients using a pyramid data structure, similar to a quadtree. Upon reception of a query, the anonymizer first hashes the user’s location to the leaf node and then moves up the tree, if necessary, until enough neighbors are included. Hilbert cloaking [98] uses the Hilbert space-filling curve to map the 2-D space into 1-D values. These values are then indexed by an
annotated B+-tree, which supports efficient search by value or by rank (i.e.,
position in the 1-D sorted list). The algorithm partitions the 1-D sorted list
into groups of K users. Hilbert cloaks, though achieving K-anonymity, do not
always preserve locality, which leads to large cloak size and high server-side
complexity. Recognizing that Casper does not provide K-anonymity, Ghinita
et al. [73] proposed a framework for implementing reciprocal algorithms using
any existing spatial index on the user locations. Once the anonymous set
(AS) is determined, the cloak region can be represented by rectangles, disks,
or simply the AS itself.
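A minimal sketch of Hilbert cloaking in Python: positions are mapped to 1-D Hilbert indices, sorted, and partitioned into groups of K. The grid resolution and the plain sorted list (standing in for the annotated B+-tree of [98]) are simplifications for illustration:

    def xy2d(n, x, y):
        # Hilbert index of the integer cell (x, y) on an n x n grid (n a power of two).
        d, s = 0, n // 2
        while s > 0:
            rx = 1 if (x & s) > 0 else 0
            ry = 1 if (y & s) > 0 else 0
            d += s * s * ((3 * rx) ^ ry)
            if ry == 0:                       # rotate/flip the quadrant
                if rx == 1:
                    x, y = n - 1 - x, n - 1 - y
                x, y = y, x
            s //= 2
        return d

    def hilbert_cloaks(users, k, n=1024):
        # users: list of (user_id, x, y) with integer 0 <= x, y < n.
        # Returns groups of K users along the curve; each group's bounding box can
        # serve as the cloaked region (a real system merges a short final group).
        ranked = sorted(users, key=lambda u: xy2d(n, u[1], u[2]))
        return [ranked[i:i + k] for i in range(0, len(ranked), k)]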
In another solution [102], the authors proposed to protect the location privacy of mobile users through cloud-based agents, which obfuscate user location and enforce the sharing practices of their owners. The cloud agents organize themselves into a quadtree that enables queriers to directly contact the mobile users in the area of interest and, based on their own criteria, select the ones from which to get sensing data. The tree is kept in a decentralized manner, stored and maintained by the agents themselves, thus avoiding the bottlenecks and the privacy implications
of centralized approaches.
The authors in [72] use balanced trees to enforce k-anonymity in the spatial
domain and conceal user locations. The emphasis here is on users issuing
location-based queries, as in the case, for example, where a user asks for all hospitals close to her current location. As these queries may compromise user privacy, the use of trees is necessitated by the need to partition the geometric space and quickly answer queries about the location of other users in order to guarantee k-anonymity. In the solution of [102], however, the users want to hide their location from a querier who is asking for data provided by the users. Through an appropriate decomposition of the space, maintained in a distributed quadtree, as opposed to previous approaches that assume
a centralized anonymizer, the users can easily obfuscate their exact location
according to their own privacy preferences, without relying on other users as
in the k-anonymity approaches.
Context-aware applications use smartphone sensors, such as the microphone, to infer the context of the smartphone user. Many such applications aggressively collect sensing data but do not offer users clear statements on how the collected data will be used.
Approaches designed to provide location privacy will not protect the users’
context privacy due to the dynamics of user behaviors and temporal corre-
lations between contexts. For example, consider a context-aware application
that learns that a user follows a typical trajectory, i.e., the user goes to a
coffee shop and then goes to a hospital. If the user discloses that she is at
the coffee shop (i.e., a non-sensitive context), this may reveal that the user is
likely to go to the hospital next (i.e., a sensitive context).
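The leak can be made concrete with a tiny first-order transition model; the probabilities below are made up purely for illustration:

    # Transition probabilities learned by a context-aware application (illustrative).
    transitions = {
        "coffee_shop": {"hospital": 0.7, "office": 0.2, "home": 0.1},
        "office":      {"home": 0.6, "coffee_shop": 0.4},
    }

    def most_likely_next(context):
        nxt = transitions.get(context, {})
        return max(nxt, key=nxt.get) if nxt else None

    # Releasing the non-sensitive context "coffee_shop" lets an adversary predict
    # the sensitive context "hospital" with probability 0.7.
    print(most_likely_next("coffee_shop"))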
MaskIt [78] protects the context privacy against an adversary whose strat-
egy is fixed and does not change over time. Such an adversarial model only
captures offline attacks, in which the attacker analyzes a user’s fixed personal
information and preferences.
Wang and Zhang [172] consider stronger adversaries, whose strategy adapts over time depending on the user’s context. For example, a context-aware application may sell users’ sensing data, and unscrupulous advertisers may push
context-related ads to users. The adversary is able to obtain the released sens-
ing data at the time when an untrusted application accesses the data. The
adversary can only retrieve a limited amount of data due to computational
constraints or limited bandwidth. As a result, the adversary can adaptively
choose different subsets of sensors to maximize its long-term utility. All of this
is captured by modeling a strategic adversary, i.e., a malicious adversary that
seeks to minimize users’ utility through a series of strategic attacks.
The overall goal of the authors is to find the optimal defense strategy
for users to preserve privacy over a series of correlated contexts. As the user
and the adversary have opposite objectives, their dynamic interactions can
be modeled as a zero-sum game. Moreover, since the context keeps changing
over time and both the user and the adversary perform different actions at
different times, the zero-sum game is in a stochastic setting.
The authors model the strategic and dynamic competition between a
smartphone user and a malicious adversary as a zero-sum stochastic game,
where the user preserves context-based service quality and context privacy
against strategic adversaries. The user’s action is to control the released data
granularity of each sensor used by context-aware applications in a long-term
defense against the adversary, while the adversary’s action is to select which sensing data to use as the source for its attacks. The user’s optimal defense strategy is
obtained at a Nash Equilibrium point of this zero-sum game.
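For a single stage of such a game, the user's optimal mixed strategy (and the game value) can be computed with a standard linear program, as in the sketch below; the payoff matrix is hypothetical, and the full stochastic game in [172] additionally iterates over context states rather than solving one matrix game:

    import numpy as np
    from scipy.optimize import linprog

    def solve_zero_sum(payoff):
        # payoff[i, j]: user's utility when the user plays action i (data granularity
        # choice) and the adversary plays action j (which sensor data to attack).
        m, n = payoff.shape
        # Variables: x_1..x_m (user's mixed strategy) and v (game value); maximize v.
        c = np.zeros(m + 1)
        c[-1] = -1.0
        # For every adversary action j:  v - sum_i x_i * payoff[i, j] <= 0
        A_ub = np.hstack([-payoff.T, np.ones((n, 1))])
        b_ub = np.zeros(n)
        A_eq = np.zeros((1, m + 1))
        A_eq[0, :m] = 1.0                       # strategy probabilities sum to 1
        b_eq = np.array([1.0])
        bounds = [(0, None)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        return res.x[:m], res.x[-1]             # (user strategy, game value)

    # Example with a 2x2 hypothetical payoff matrix.
    strategy, value = solve_zero_sum(np.array([[1.0, -1.0], [-0.5, 0.5]]))
    print(strategy, value)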
The efficiency of the algorithm proposed by the authors to find the optimal
defense strategy was validated on smartphone context traces from 94 users.
The evaluation results can provide some guidance in the design of future
context privacy-preserving schemes.
9.7 Conclusion
In this chapter, we discussed privacy issues associated with mobile crowd-
sensing and reviewed several solutions to address them. We first examined the privacy-preserving architectures based on their system model and adversary setup. Subsequently, we presented privacy-aware incentives, which are crucial for maintaining sufficient participation in mobile crowdsensing. Finally, we
discussed solutions for location and context-specific privacy, and for anony-
mous reputation management. All of these provide privacy assurances to the
participants of mobile crowdsensing systems.
10
Conclusions and Future Directions
10.1 Conclusions
The three components of the ubiquitous computing vision are computing,
wireless communication, and wireless sensing. With the widespread use of
smartphones, computing and wireless communication are on their way to
achieving ubiquity. Wireless sensing has been a niche technology until very re-
cently. However, this situation is changing. Encouraged by the ever-expanding
ecosystem of sensors embedded in mobile devices and context-aware mobile
apps, we believe that our society is on the verge of achieving ubiquitous sens-
ing, and mobile crowdsensing is the enabling technology. In the very near
future, crowdsensing is expected to enable many new and useful services for
society in areas such as transportation, healthcare, and environmental protection.
The main outcome of this book will hopefully be to set the course for mass
adoption of mobile crowdsensing. Throughout the book, we have systemati-
cally explored the state of the art in mobile crowdsensing and have identified
its benefits and challenges. The main advantages of this new technology are
its cost-effectiveness at scale and its flexibility to tailor sensing accuracy to
the needs and budget of clients (people or organizations) collecting the data.
The current crowdsensing systems and platforms have already proved these
benefits.
location at the same time? Or how can we balance incentives and resource
management when multiple clients attempt to collect data from the same set
of participants? New protocols and algorithms will need to be designed and implemented to address these types of questions.
A different type of remaining challenge is the relation between global sens-
ing tasks, as seen by clients, and the individual sensing tasks executed by par-
ticipants. Issues such as global task specification, global task decomposition,
and individual task scheduling and fault tolerance will need to be investigated
in order to allow clients to execute more complex global sensing tasks. For ex-
ample, in an emergency situation, an application may need access to multiple
types of sensors, and these types are defined as a function of region and time.
Furthermore, the application should be allowed to specify the desired sensing
density, sensing accuracy, or fault tolerance.
In the next 5 to 10 years, we expect smart cities to fully incorporate crowd-
sensing in their infrastructure management systems. In addition, we believe
that crowdsensing and the Internet of Things will complement each other
in sensing the physical world. Furthermore, public cloud and cloudlets de-
ployed at the edge of the networks are expected to make crowdsensing and
the Internet of Things more effective and efficient. Substantial research and
development is needed to take advantage of this infrastructure, which incor-
porates mobility, sensing, and the cloud. Nevertheless, we are optimistic and
fully expect to see complex applications and services running over this type
of infrastructure during the next decade.
Bibliography
[73] Gabriel Ghinita, Keliang Zhao, Dimitris Papadias, and Panos Kalnis.
A reciprocal framework for spatial k-anonymity. Information Systems,
35(3):299–314, 2010.
[74] Dan Gillmor. We the media: Grassroots journalism by the people, for
the people. O’Reilly Media, Inc., 2006.
[75] Stylianos Gisdakis, Thanassis Giannetsos, and Panos Papadimitratos.
SPPEAR: security & privacy-preserving architecture for
participatory-sensing applications. In Proceedings of the 2014 ACM
conference on Security and privacy in wireless & mobile networks,
pages 39–50. ACM, 2014.
[76] Stylianos Gisdakis, Thanassis Giannetsos, and Panos Papadimitratos.
Shield: A data verification framework for participatory sensing
systems. In Proceedings of the 8th ACM Conference on Security &
Privacy in Wireless and Mobile Networks (WiSec ’15), pages
16:1–16:12. ACM, 2015.
[77] Pierre Goovaerts. Geostatistics for natural resources evaluation.
Oxford University Press, 1997.
[78] Michaela Götz, Suman Nath, and Johannes Gehrke. Maskit: Privately
releasing user context streams for personalized mobile applications. In
Proceedings of the 2012 ACM SIGMOD International Conference on
Management of Data, SIGMOD ’12, pages 289–300. ACM, 2012.
[79] Bin Guo, Zhiwen Yu, Xingshe Zhou, and Daqing Zhang. From
participatory sensing to mobile crowd sensing. In Pervasive Computing
and Communications Workshops (PERCOM Workshops), 2014 IEEE
International Conference on, pages 593–598. IEEE, 2014.
[80] Ido Guy. Crowdsourcing in the enterprise. In Proceedings of the 1st
international workshop on Multimodal crowd sensing, pages 1–2. ACM,
2012.
[81] Kyungsik Han, Eric A. Graham, Dylan Vassallo, and Deborah Estrin.
Enhancing Motivation in a Mobile Participatory Sensing Project
through Gaming. In Proceedings of 2011 IEEE 3rd international
conference on Social Computing (SocialCom’11), pages 1443–1448,
2011.
[82] T. He, C. Huang, B.M. Blum, J.A. Stankovic, and T. Abdelzaher.
Range-free localization schemes for large scale sensor networks. In
Proceedings of the 9th annual international conference on Mobile
computing and networking, page 95. ACM, 2003.
[83] T. He, S. Krishnamurthy, L. Luo, T. Yan, L. Gu, R. Stoleru, G. Zhou,
Q. Cao, P. Vicaire, and J.A. Stankovic. VigilNet: An integrated sensor
[132] Galen Pickard, Iyad Rahwan, Wei Pan, Manuel Cebrian, Riley Crane,
Anmol Madan, and Alex Pentland. Time Critical Social Mobilization:
The DARPA Network Challenge Winning Strategy. Technical Report
arXiv:1008.3172v1, MIT, 2010.
[133] C. Piro, C. Shields, and B. N. Levine. Detecting the Sybil attack in
mobile ad hoc networks. In Proc. of SecureComm’06, 2006.
[142] Sasank Reddy, Andrew Parker, Josh Hyman, Jeff Burke, Deborah
Estrin, and Mark Hansen. Image browsing, processing, and clustering
for participatory DietSense prototype. In Proceedings of the 4th
workshop on Embedded networked sensors, pages 13–17. ACM, 2007.
[143] Sasank Reddy, Katie Shilton, Gleb Denisov, Christian Cenizal,
Deborah Estrin, and Mani Srivastava. Biketastic: sensing and mapping
for better biking. In Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems, pages 1817–1820. ACM, 2010.
[144] John Rula, Vishnu Navda, Fabian Bustamante, Ranjita Bhagwan, and
Saikat Guha. No “one-size fits all”: Towards a Principled Approach for
Incentives in Mobile Crowdsourcing. In Proceedings of the 15th
Workshop on Mobile Computing Systems and Applications
(HotMobile), pages 3:1–3:5, 2014.
[145] N. Sastry, U. Shankar, and D. Wagner. Secure verification of location
claims. In Proc. of the 2nd ACM Workshop on Wireless Security
(Wise’03), pages 1–10, Sep. 2003.
[146] L. Selavo, A. Wood, Q. Cao, T. Sookoor, H. Liu, A. Srinivasan, Y. Wu,
W. Kang, J. Stankovic, D. Young, et al. Luster: wireless sensor
network for environmental research. In Proceedings of the 5th
international conference on Embedded networked sensor systems, page
116. ACM, 2007.
[147] Katie Shilton, Jeff Burke, Deborah Estrin, Ramesh Govindan, Mark
Hansen, Jerry Kang, and Min Mun. Designing the personal data
stream: Enabling participatory privacy in mobile personal sensing.
TPRC, 2009.
[148] Minho Shin, Cory Cornelius, Dan Peebles, Apu Kapadia, David Kotz,
and Nikos Triandopoulos. AnonySense: A system for anonymous
opportunistic sensing. Pervasive and Mobile Computing, 7(1):16–30,
2011.
[149] Daniel P. Siewiorek, Asim Smailagic, Junichi Furukawa, Andreas
Krause, Neema Moraveji, Kathryn Reiger, Jeremy Shaffer, and
Fei Lung Wong. SenSay: A Context-Aware Mobile Phone. In
International Symposium on Wearable Computers, volume 3, page 248,
2003.
[150] D. Singelee and B. Preneel. Location verification using secure distance
bounding protocols. In Proc. of the 2nd IEEE International
Conference on Mobile Ad-hoc and Sensor Systems (MASS’05), pages
834–840, Nov. 2005.
[151] Noah Snavely, Steven M. Seitz, and Richard Szeliski. Photo tourism:
exploring photo collections in 3d. In ACM transactions on graphics
(TOG), volume 25, pages 835–846. ACM, 2006.
[152] J. Surowiecki. The Wisdom of Crowds: Why the Many are Smarter
Than the Few and how Collective Wisdom Shapes Business,
Economies, Societies, and Nations. Doubleday, 2004.
[153] Latanya Sweeney. k-anonymity: A model for protecting privacy.
International Journal of Uncertainty, Fuzziness and Knowledge-Based
Systems, 10(05):557–570, 2002.
[154] M. Talasila, R. Curtmola, and C. Borcea. LINK: Location verification
through Immediate Neighbors Knowledge. In Proceedings of the 7th
International ICST Conference on Mobile and Ubiquitous Systems,
(MobiQuitous’10), pages 210–223. Springer, 2010.
[155] Manoop Talasila, Reza Curtmola, and Cristian Borcea. ILR: Improving
Location Reliability in Mobile Crowd Sensing. International Journal of
Business Data Communications and Networking, 9(4):65–85, 2013.
[156] Manoop Talasila, Reza Curtmola, and Cristian Borcea. Improving
Location Reliability in Crowd Sensed Data with Minimal Efforts. In
WMNC’13: Proceedings of the 6th Joint IFIP/IEEE Wireless and
Mobile Networking Conference. IEEE, 2013.
[157] Manoop Talasila, Reza Curtmola, and Cristian Borcea. Alien vs.
Mobile User Game: Fast and Efficient Area Coverage in Crowdsensing.
In Proceedings of the Sixth International Conference on Mobile
Computing, Applications and Services (MobiCASE ’14). ICST/IEEE,
2014.
[158] Manoop Talasila, Reza Curtmola, and Cristian Borcea. Collaborative
Bluetooth-based Location Authentication on Smart Phones. Elsevier
Pervasive and Mobile Computing, 2014.
[159] Manoop Talasila, Reza Curtmola, and Cristian Borcea. Crowdsensing
in the Wild with Aliens and Micro-payments. IEEE Pervasive
Computing Magazine, 2016.
[160] R. Tan, G. Xing, J. Wang, and H.C. So. Collaborative target detection
in wireless sensor networks with reactive mobility. City University of
Hong Kong, Tech. Rep, 2007.
[161] Evangelos Theodoridis, Georgios Mylonas, Veronica Gutierrez
Polidura, and Luis Munoz. Large-scale participatory sensing
experimentation using smartphones within a Smart City. In
Proceedings of the 11th International Conference on Mobile and
Ubiquitous Systems: Computing, Networking and Services, pages
[172] Wei Wang and Qian Zhang. A stochastic game for privacy preserving
context sensing on mobile phone. In 2014 IEEE Conference on
Computer Communications (INFOCOM), pages 2328–2336. IEEE,
2014.
[173] Xinlei Wang, Wei Cheng, P. Mohapatra, and T. Abdelzaher.
ARTSense: Anonymous reputation and trust in participatory sensing.
In Proc. of INFOCOM ’13, pages 2517–2525, 2013.
[174] Yi Wang, Wenjie Hu, Yibo Wu, and Guohong Cao. Smartphoto: a
resource-aware crowdsourcing approach for image sensing with
smartphones. In Proceedings of the 15th ACM international
symposium on Mobile ad hoc networking and computing, pages
113–122. ACM, 2014.
[175] J. White, C. Thompson, H. Turner, B. Dougherty, and D.C. Schmidt.
WreckWatch: automatic traffic accident detection and notification with
smartphones. Mobile Networks and Applications, 16(3):285–303, 2011.
[176] Haoyi Xiong, Daqing Zhang, Guanling Chen, Leye Wang, and Vincent
Gauthier. CrowdTasker: maximizing coverage quality in piggyback
crowdsensing under budget constraint. In Proceedings of the IEEE
International Conference on Pervasive Computing and
Communications (PerCom’15). IEEE, 2015.
[177] Liwen Xu, Xiaohong Hao, Nicholas D. Lane, Xin Liu, and Thomas
Moscibroda. More with less: lowering user burden in mobile
crowdsourcing through compressive sensing. In Proceedings of the 2015
ACM International Joint Conference on Pervasive and Ubiquitous
Computing, pages 659–670. ACM, 2015.
[178] N. Xu, S. Rangwala, K.K. Chintalapudi, D. Ganesan, A. Broad,
R. Govindan, and D. Estrin. A wireless sensor network for structural
monitoring. In Proceedings of the 2nd international conference on
Embedded networked sensor systems, pages 13–24. ACM New York,
NY, USA, 2004.
[179] T. Yan, M. Marzilli, R. Holmes, D. Ganesan, and M. Corner. mCrowd:
a platform for mobile crowdsourcing. In Proceedings of the 7th ACM
Conference on Embedded Networked Sensor Systems (SenSys’09),
pages 347–348. ACM, 2009.
[180] Tingxin Yan, Vikas Kumar, and Deepak Ganesan. Crowdsearch:
exploiting crowds for accurate real-time image search on mobile
phones. In Proceedings of the 8th international conference on Mobile
systems, applications, and services, pages 77–90. ACM, 2010.
[181] Dejun Yang, Guoliang Xue, Xi Fang, and Jian Tang. Crowdsourcing to
smartphones: incentive mechanism design for mobile phone sensing. In
Proceedings of the 18th annual international conference on Mobile
computing and networking, pages 173–184. ACM, 2012.
[182] Jiang Yang, Lada A. Adamic, and Mark S. Ackerman. Crowdsourcing
and Knowledge Sharing: Strategic User Behavior on Taskcn. In
Proceedings of the 9th ACM Conference on Electronic Commerce, EC
’08, pages 246–255. ACM, 2008.
[183] Andrew Chi-Chih Yao. Protocols for secure computations. In FOCS,
volume 82, pages 160–164, 1982.
[184] Man Lung Yiu, Christian S. Jensen, Xuegang Huang, and Hua Lu.
Spacetwist: Managing the trade-offs among location privacy, query
performance, and query accuracy in mobile services. In Data
Engineering, 2008. ICDE 2008. IEEE 24th International Conference
on, pages 366–375. IEEE, 2008.
[185] Daqing Zhang, Haoyi Xiong, Leye Wang, and Guanling Chen.
CrowdRecruiter: selecting participants for piggyback crowdsensing
under probabilistic coverage constraint. In Proceedings of the 2014
ACM International Joint Conference on Pervasive and Ubiquitous
Computing, pages 703–714. ACM, 2014.
[186] L. Zhang, B. Tiwana, Z. Qian, Z. Wang, R.P. Dick, Z.M. Mao, and
L. Yang. Accurate online power estimation and automatic battery
behavior based power model generation for smartphones. In
Proceedings of the eighth IEEE/ACM/IFIP international conference
on Hardware/software codesign and system synthesis
(CODES/ISSS’10), pages 105–114. ACM, 2010.
[187] Qingwen Zhao, Yanmin Zhu, Hongzi Zhu, Jian Cao, Guangtao Xue,
and Bo Li. Fair energy-efficient sensing task allocation in participatory
sensing with smartphones. In INFOCOM, 2014 Proceedings IEEE,
pages 1366–1374. IEEE, 2014.