IEEE COMST 2013 Cloud Computing
attackers, the threat models, as well as existing defense strategies in a cloud scenario. Future research directions are determined for each attribute.

Index Terms—cloud computing, security, privacy, trust, confidentiality, integrity, accountability, availability.

[Figure: PaaS service stack — Execution Environment (e.g., Google App Engine, Django), Higher Infrastructure Services (e.g., Google Bigtable), Basic Infrastructure Service.]

I. INTRODUCTION
B. Cloud Characteristics and Security Challenges

The Cloud Security Alliance has summarized five essential characteristics [6] that illustrate the relation to, and differences from, the traditional computing paradigm.

• On-demand self-service – A cloud customer may unilaterally obtain computing capabilities, such as the use of various servers and network storage, on demand, without interacting with the cloud provider.
• Broad network access – Services are delivered across the Internet via a standard mechanism that allows customers to access them through heterogeneous thin or thick client tools (e.g., PCs, mobile phones, and PDAs).
• Resource pooling – The cloud provider employs a multi-tenant model to serve multiple customers by pooling computing resources: different physical and virtual resources are dynamically assigned or reassigned according to customer demand. Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
• Rapid elasticity – Capabilities may be rapidly and elastically provisioned to quickly scale out, and rapidly released to quickly scale in. From the customers' point of view, the available capabilities should appear to be unlimited and purchasable in any quantity at any time.
• Measured service – The service purchased by customers can be quantified and measured. For both the provider and the customers, resource usage is monitored, controlled, metered, and reported.

Cloud computing has become a successful and popular business model due to these attractive features. In addition to the benefits at hand, however, the same features also result in serious cloud-specific security issues. People concerned about cloud security still hesitate to transfer their business to the cloud; security issues have been the dominant barrier to the development and widespread use of cloud computing. There are three main challenges in building a secure and trustworthy cloud system:

• Outsourcing – Outsourcing brings down both capital expenditure (CapEx) and operational expenditure for cloud customers. However, outsourcing also means that customers physically lose control of their data and tasks. The loss-of-control problem has become one of the root causes of cloud insecurity. To address outsourcing security issues, first, the cloud provider shall be trustworthy, providing trusted and secure computing and data storage; second, outsourced data and computation shall be verifiable to customers in terms of confidentiality, integrity, and other security services. In addition, outsourcing can incur privacy violations, because sensitive or classified data leaves the owners' control.
• Multi-tenancy – Multi-tenancy means that the cloud platform is shared and utilized by multiple customers. Moreover, in a virtualized environment, data belonging to different customers may be placed on the same physical machine by a certain resource allocation policy. Adversaries, who may themselves be legitimate cloud customers, may exploit this co-residence, incurring a series of security issues such as data breaches [5], [17], [29], computation breaches [5], and flooding attacks [26]. Although multi-tenancy is the natural choice of cloud vendors due to its economic efficiency, it introduces new vulnerabilities into the cloud platform. Without changing the multi-tenancy paradigm, it is imperative to design new security mechanisms to deal with the potential risks.
• Massive data and intense computation – Cloud computing is capable of handling massive data storage and intense computing tasks; therefore, traditional security mechanisms may not suffice, due to unbearable computation or communication overhead. For example, to verify the integrity of remotely stored data, it is impractical to hash the entire data set. To this end, new strategies and protocols are expected.

C. Supporting techniques

Cloud computing leverages a collection of existing techniques, such as Data Center Networking (DCN), virtualization, distributed storage, MapReduce, and web applications and services.

The modern data center has in practice become an effective carrier of cloud environments. It provides massive computation and storage capability by composing thousands of machines with DCN techniques.

Virtualization technology is widely used in cloud computing to provide dynamic resource allocation and service provisioning, especially in IaaS. With virtualization, multiple OSs can co-reside on the same physical machine without interfering with each other.

MapReduce [53] is a programming framework that supports distributed computing on massive data sets. It breaks a large data set into small blocks that are distributed to cloud servers for parallel computing. MapReduce speeds up batch processing on massive data, which has made it the computation model of choice for cloud vendors.

Apart from the benefits, the same techniques also present new threats that can jeopardize cloud security. For instance, the modern data center suffers from bandwidth under-provisioning [24], which may be exploited to mount a new DoS attack [20] against the shared infrastructure of cloud environments. Virtual Machine (VM) technology likewise enables adversaries to perform cross-VM attacks [17] and timing attacks [60] due to VM co-residence. Further details are discussed in Section II and Section III.

D. Attribute-driven Methodology

Security and privacy issues in cloud environments have been studied and surveyed in the prior literature. To better understand these issues and their connections, researchers have employed various criteria to build a comprehensive picture.

Gruschka et al. [32] suggest modeling the security ecosystem based on the three participants of a cloud system: the service user, the service instance, and the cloud provider. The authors classify attacks into six categories (i.e., user to service, service to user, user to cloud, cloud to user, service to cloud,
XIAO and XIAO: SECURITY AND PRIVACY IN CLOUD COMPUTING 845
concerns with regards to cloud computing. This is largely due to the fact that customers outsource their data and computation tasks to cloud servers, which are controlled and managed by potentially untrustworthy cloud providers.

A. Threats to Cloud Confidentiality

1) T1.1 – Cross-VM attack via Side Channels: Ristenpart et al. [17] demonstrate the existence of cross-VM attacks on the Amazon EC2 platform. A cross-VM attack exploits the nature of multi-tenancy, which enables VMs belonging to different customers to co-reside on the same physical machine. Aviram et al. [60] regard timing side channels as an insidious threat to cloud computing security, because a) timing channels pervasively exist and are hard to control, given the massive parallelism and shared infrastructure; and b) malicious customers are able to steal information from other customers without leaving a trail or raising alarms. There are two main steps to practically mounting such an attack:

• Step 1: placement. An adversary needs to place a malicious VM on the physical server where the target client's VM is located. To achieve this, the adversary should first determine where the target VM instance is located; this can be done with network probing tools such as nmap, hping, and wget. The adversary should also be able to check whether two VM instances are co-resident: 1) comparing Domain0 IP addresses to see whether they match, and 2) measuring the small-packet round-trip time can both perform this check. The correctness of co-residence checks can be verified by transmitting messages between instances via a covert channel. After this preparatory work, a malicious VM instance must be created on the target physical machine by specifying a set of parameters (e.g., zone, host type). There are two basic strategies to launch such a VM: 1) a brute-force strategy, which simply launches many instances and checks co-residence with the target; 2) exploiting the tendency of EC2 to launch new instances on the same small set of physical machines. The second strategy takes advantage of EC2's VM assignment algorithm by starting a malicious VM shortly after a victim VM is launched, so that the two will likely be assigned to the same physical server; this approach has a better placement success rate.
• Step 2: extraction. After step 1, a malicious VM co-resides with the victim VM. Since the malicious VM and the victim share certain physical resources, such as the data cache, network access, CPU branch predictors, and CPU pipelines, there are many attacks an adversary can employ: 1) measuring cache usage, which can estimate the current load of the server; 2) estimating the traffic rate, which can reveal the visitor count or even the frequently requested pages; 3) a keystroke timing attack, which can steal a victim's password by measuring the time between keystrokes.

As follow-up work, various covert channels have been investigated and analyzed in depth. Attackers can easily exploit the L2 cache, due to its high bandwidth. Xu et al. have particularly explored the L2 cache covert channel with a quantitative assessment [71]. It has been demonstrated that, even though the channel bit rate is higher than in the former work [17], the channel's ability to exfiltrate useful information is still limited, and it is only practical for leaking small secrets such as private keys. Okamura et al. developed a new attack which demonstrates that CPU load can also be used as a covert channel to encode information [72]. The memory disclosure attack [81], [82] is another type of cross-VM attack. In a virtualized environment, memory deduplication is a technique that reduces physical memory utilization by sharing memory pages with the same contents. A memory disclosure attack is capable of detecting the existence of an application or a file on a co-residing VM by measuring the write access time, which differs between deduplicated pages and regular ones.

2) T1.2 – Malicious SysAdmin: The cross-VM attack shows how confidentiality may be violated by other cloud customers co-residing with the victim, although they are not the only threat. A privileged sysadmin of the cloud provider can perform attacks by accessing the memory of a customer's VMs. For instance, XenAccess [30] enables a sysadmin to directly access VM memory at run time by running a user-level process in Domain0.

B. Defense Strategies

Approaches to addressing the cross-VM attack fall into six categories: a) placement prevention, which intends to reduce the success rate of placement; b) physical isolation enforcement [80]; c) new cache designs [75], [76], [77], [78]; d) fuzzy time, which weakens a malicious VM's ability to receive the signal by eliminating fine-grained timers [73]; e) forced VM determinism [60], which ensures that no timing or other non-deterministic information leaks to adversaries; and f) cryptographic implementation of a timing-resistant cache [79]. Since c), d), e), and f) are not cloud-specific defense strategies, we do not include their details in this section.

1) D1.1.1 – Placement Prevention: In order to reduce the risk caused by shared infrastructure, a few suggestions for defending against each step of the attack are given in [17]. For instance, cloud providers may obfuscate co-residence by having Dom0 not respond in traceroute, and/or by randomly assigning internal IP addresses to launched VMs. To reduce the success rate of placement, cloud providers might let users decide where to place their VMs; however, this method does not prevent a brute-force strategy.

2) D1.1.2 – Co-residency Detection: The ultimate solution to the cross-VM attack is to eliminate co-residency. Cloud customers (especially enterprises) may require physical isolation, which can even be written into the Service Level Agreements (SLAs). However, cloud vendors may be reluctant to abandon virtualization, which is beneficial to cost saving and resource utilization. One remaining option is to share the infrastructure only with "friendly" VMs, which are owned by the same customer or by other trustworthy customers. To ensure physical isolation, a customer should be able to verify its VMs' exclusive use of a physical machine. HomeAlone [80] is a system that detects co-residency by employing a side channel (in the L2 memory cache) as a detection tool. The idea is to
silence the activity of "friendly" VMs in a selected portion of the L2 cache for a certain amount of time, and then measure the cache usage to check whether there is any unexpected activity, which would indicate that the physical machine is co-resided by another customer.

3) D1.1.3 – NoHype: NoHype ([83], [84]) attempts to minimize the degree of shared infrastructure by removing the hypervisor while still retaining the key features of virtualization. The NoHype architecture provides a few features: i) the "one core per VM" feature prevents interference between VMs, eliminates side channels such as the L1 cache, and retains multi-tenancy, since each chip has multiple cores; ii) memory partitioning restricts each VM's memory access to an assigned range; iii) dedicated virtual I/O devices enable each VM to be granted direct access to a dedicated virtual I/O device. NoHype significantly reduces the hypervisor attack surface and increases the level of VM isolation. However, NoHype requires hardware changes, making it less practical to apply to current cloud infrastructures.

4) D1.2.1 – Trusted Cloud Computing Platform: Santos et al. [29] present a trusted cloud computing platform (TCCP), which offers a closed-box execution environment for IaaS services. TCCP guarantees confidential execution of guest virtual machines. It also enables customers to attest to the IaaS provider and to determine whether the service is secure before their VMs are launched into the cloud.

The design goals of TCCP are: 1) to confine VM execution inside the secure perimeter; and 2) to guarantee that a sysadmin with root privileges is unable to access the memory of a VM hosted on a physical node. TCCP leverages existing techniques to build trusted cloud computing platforms. It focuses on solving confidentiality problems for clients' data and for computation outsourced to the cloud. With TCCP, the sysadmin is unable to inspect or tamper with the content of running VMs.

5) Other opinions: returning data control to the customer: Considering the customer's fear of losing data control in cloud environments, Descher et al. [40] propose to retain data control for cloud customers by simply storing encrypted VMs on the cloud servers. Encrypted VM images guarantee rigorous access control, since only authorized users, known as key-holders, are permitted access. Due to the encryption, the data cannot be mounted and modified within the cloud without an access key, assuring both confidentiality and integrity. This approach offers security guarantees before a VM is launched; however, there are ways to attack the VM at run time [30] and to jeopardize the data and computation.

C. Summary and Open issues

Regarding confidentiality, the cross-VM attack and the malicious sysadmin are the main threats to a cloud system; both take advantage of the vulnerabilities of virtualization and co-residence. The cross-VM attack is performed by other tenants, whereas the malicious sysadmin is an inside attack from the cloud vendor. Defending against these threats is not a trivial task, due to the following facts: 1) various side channels and other shared components can be exploited, and defending each of them is not an easy job; 2) there are a few open issues to be explored:

• Co-residency detection is considered a promising technique, since customers should be able to check whether physical isolation is well enforced. HomeAlone [80] can achieve accurate detection on L2 cache side channels. However, besides the L2 cache, other side channels may be exploited as well. Therefore, in order to provide thorough detection of co-residence, a suite of detection methods targeting various side channels should be developed.
• NoHype has opened another window for dealing with the cross-VM threat. However, current commodity hardware imposes limitations on implementing NoHype. Additionally, live VM migration is not well supported by this new architecture. Therefore, before a real step forward can be made, researchers need to address the hardware changes required to accommodate NoHype and to maintain more features for VM management.

III. CLOUD INTEGRITY

Similar to confidentiality, the notion of integrity in cloud computing concerns both data integrity and computation integrity. Data integrity implies that data should be honestly stored on cloud servers, and that any violations (e.g., data being lost, altered, or compromised) are to be detected. Computation integrity implies that programs are executed without being distorted by malware, cloud providers, or other malicious users, and that any incorrect computing will be detected.

A. Threats to Cloud Integrity

1) T2.1 – data loss/manipulation: In cloud storage, applications deliver storage as a service. Servers keep large amounts of data, which may be accessed only on rare occasions. Cloud servers are distrusted in terms of both security and reliability [14], which means that data may be lost or modified, maliciously or accidentally. Administration errors may cause data loss (e.g., during backup and restore, data migration, or membership changes in P2P systems [11]). Additionally, adversaries may initiate attacks by taking advantage of data owners' loss of control over their own data.

2) T2.2 – dishonest computation in remote servers: With outsourced computation, it is difficult to judge whether the computation has been executed with high integrity. Since the computation details are not transparent enough to cloud customers, cloud servers may behave unfaithfully and return incorrect computing results; that is, they may not follow the semi-honest model. For example, for computations that require a large amount of computing resources, there are incentives for the cloud to be "lazy" [85]. On the other hand, even if the semi-honest model is followed, problems may arise when a cloud server uses outdated, vulnerable code, has misconfigured policies or services, or has previously been attacked with a rootkit triggered by malicious code or data [86].

B. Defense Strategies

1) D2.1.1 – Provable Data Possession (PDP): Integrity checking on data is a long-term research topic [62], [68]. However, traditional methods cannot be properly adopted
848 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 15, NO. 2, SECOND QUARTER 2013
expensive as well as bandwidth consuming. Neither drawback is acceptable in cloud environments.

Provable Data Possession (PDP) [11], [12], [14], [15] has been employed to check data integrity in cloud storage, in order to answer the question: "Is it possible for customers to be sure that the outsourced data is honestly stored in the cloud?"

a) A Naive Method
For comparison purposes, a naive method is given in [12]. The client computes a hash value for file F with a key k (i.e., h(k, F)) and subsequently sends F to the server. Whenever the client needs to check the file, it releases k to the server, which is then asked to re-compute the hash value based on F and k; the server replies to the client with the hash result for comparison. The client can initiate multiple checks by keeping different keys and hash values. This approach provides strong proof that the server still retains F. Its drawback, however, is the high overhead it produces: each verification requires the server to run a hashing process over the entire file, which is computationally costly even for lightweight hashing operations.

b) Original Provable Data Possession (PDP)
The original PDP model [11] requires that the data be pre-processed in a setup phase, leaving some metadata on the client side for later verification, before the data is sent to the cloud server. When the client needs to check the data integrity at a later time, he/she sends a challenge to the cloud server, which responds with a message based on the data content. By combining the reply with the local metadata, the client is able to determine whether the integrity of the data has been violated. The probabilistic guarantee of PDP shows that it can achieve a high probability of detecting server misbehavior with low computational and storage overhead.

PDP is only applicable to static (i.e., append-only) files, meaning that the data may not be changed once uploaded to the server. This limitation reduces its applicability to cloud computing, which features dynamic data management.

c) Proof of Retrievability (POR)
Proof of Retrievability (POR) [12] employs an auditing protocol to solve a problem similar to PDP's: both enable clients to check the integrity of outsourced data without having to retrieve it. POR is designed to be lightweight; in other words, it attempts to minimize the storage on the client and server sides, the communication complexity of an audit, and the number of data blocks accessed during an audit [12].

[Fig. 3. Schematic of POR System: KeyGen generates a key; the Verifier sends a challenge to the Prover and checks its response.]

The POR protocol is depicted in Fig. 3. The user stores only a key, which is used to encode the file F in order to get the encrypted file F'. A set of sentinel values is embedded into F', and the server stores F' without knowing where the sentinels are, since the sentinels are indistinguishable from regular data blocks. In the challenge-and-response protocol, the server is asked to return a certain subset of the sentinels in F'. If the server has tampered with or deleted F', there is a high probability that certain sentinels have also been corrupted or lost; this leaves the server unable to generate a complete proof for the original file, so the client has evidence to prove that the server has corrupted the file. Because the number of sentinels is limited, POR adopts error-correcting codes to recover the file when only a small fraction has been corrupted.

Similar to PDP, POR can only be applied to static files. A subtle change to the file will ruin the protocol and completely confuse the clients.

d) Scalable PDP
Scalable PDP [14] is an improved version of the original PDP. The differences between the two are as follows: 1) scalable PDP adopts symmetric-key encryption instead of public-key encryption to reduce computation overhead, but as a result it does not support public verification; 2) scalable PDP adds dynamic operations on remote data. One limitation of scalable PDP is that all challenges and answers are pre-computed, and the number of updates is limited and fixed a priori.

e) Dynamic PDP
The goal of Dynamic PDP (DPDP) [15] is to support fully dynamic operations (e.g., append, insert, modify, and delete). Dynamic operations are enabled through authenticated insert and delete functions, using rank-based authenticated dictionaries built on a skip list. The experimental results show that, although the support for dynamic updates costs certain computational complexity, DPDP is practically efficient. For instance, to generate a proof for a 1 GB file, DPDP produces only 415 KB of proof data and 30 ms of computational overhead.

The DPDP protocol introduces three new operations, known as PrepareUpdate, PerformUpdate, and VerifyUpdate. PrepareUpdate is run by a client in order to generate an update request that includes the updates to perform (i.e.,
TABLE I
APPROACHES OF DATA INTEGRITY CHECKING IN CLOUD STORAGE
modify block i, delete block i, etc.). PerformUpdate is run by a server to perform the actual file update, and it subsequently returns an update proof to the client, who, in turn, verifies the server's behavior during the update.

f) HAIL: A High-Availability and Integrity Layer for cloud storage
HAIL [16] differs from the prior work in that it considers a distributed setting, in which a client must spread a file across multiple servers with redundancy while storing only a small constant state on the local machine. The main threats that HAIL combats are mobile adversaries, which may corrupt file F by undermining multiple servers.

g) Summary of PDP
PDP is a class of problems that provides efficient and practical approaches to determining whether outsourced data is honestly stored. We have summarized and compared several newly emerging approaches in Table I. The evolution of PDP shows the improvements from static data to dynamic data, as well as from the single-server setting to the distributed-server setting.

2) D2.1.2 – Third Party Auditor: Instead of letting customers verify data integrity, it is also possible to offload the task of integrity checking to a third party that can be trusted by both the cloud provider and the customers. Wang et al. [45] propose to adopt a third-party auditor (TPA) to check the integrity of outsourced data in cloud environments. The TPA ensures the following: 1) cloud data can be efficiently audited without a local data copy, and cloud clients suffer no on-line overhead for auditing; 2) no new vulnerabilities are introduced that jeopardize data privacy. The key technique is a public-key-based homomorphic authenticator, which has been utilized in the existing literature [46]. By combining a homomorphic authenticator with random masking, the TPA becomes unable to access the data content while it is performing auditing.

3) Combating dishonest computing: The outsourcing feature of cloud computing motivates researchers to revisit a classic problem concerning the integrity of external computations: how may a machine outsource a computation to another machine and then, without running the computation locally, verify the correctness of the result output by the other machine [87]? Conventional strategies to check external computation integrity fall into four categories:

• D2.2.1: Re-computation requires the local machine to re-do the computation and then compare the results. Re-computation guarantees 100% accuracy of mistake detection and does not require trusting the cloud vendor. However, the cost is usually unbearable, because each verification requires at least as much time as the original computation. Consequently, customers may have no incentive to verify computation integrity in this manner. A variation of re-computation is sampling [66], which offers probabilistic guarantees of mistake detection, depending on the degree of sampling. Sampling trades accuracy for efficiency.
• D2.2.2: Replication assigns one computation task to multiple machines and then compares the results. Majority voting may be employed to determine correctness. Replication assumes semi-trust in the cloud vendor, because both computation and verification are conducted remotely.
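The replication-with-majority-voting idea of D2.2.2 can be sketched in a few lines. This is only an illustration: `replicated_compute` and the replica callables are hypothetical stand-ins for remote cloud workers, not an API from any cited system.

```python
from collections import Counter

def replicated_compute(task, replicas):
    # Send the same task to every replica (here: plain callables that
    # stand in for remote machines) and collect their answers.
    results = [replica(task) for replica in replicas]
    # Majority voting: accept a result only if more than half agree.
    winner, votes = Counter(results).most_common(1)[0]
    if votes <= len(results) // 2:
        raise RuntimeError("no majority; result cannot be trusted")
    return winner

honest = lambda t: t[0] * t[1]   # faithful worker computes the product
faulty = lambda t: -1            # lazy or compromised worker
print(replicated_compute((7, 6), [honest, honest, faulty]))  # prints 42
```

Note that if two colluding faulty replicas face a single honest one, they out-vote it and the wrong answer -1 is accepted, which is exactly the weakness of replication against coordinated adversaries.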
Intelligent adversaries that control a certain number of machines may bypass replication checking by returning the same incorrect results.
• D2.2.3: Auditing [51], [89] usually works together with logging. During the execution of a computation, a logging component records all critical events into a log file, which is subsequently sent to one or multiple auditors for review. Auditing is a typical approach to forensic investigation. One drawback of auditing is that, if the attacker understands the computation better than the auditor, the attacker may manipulate data bits without being detected.
• D2.2.4: Trusted Computing [29], [58] enforces that the computer behaves consistently in expected ways, with hardware and software support. The key technique for integrity checking is remote attestation, which works by having the hardware generate a certificate stating what software is running. The certificate can then be sent to the verifier to show that the software is unaltered. One assumption of trusted computing is that some components, such as the hardware and the hypervisor, are not physically altered.

Some verification methods ([85], [90], [91], [92]) are domain- or application-specific. For example, Wang et al. have designed a practical mechanism for linear programming (LP) outsourcing [85]; by exploring the duality theorem of LP computation and deriving the conditions that a correct result must satisfy, the verification mechanism incurs close-to-zero additional cost on both cloud servers and customers. Freivalds' method [89] verifies an m × m matrix multiplication in O(m²) time with a randomized algorithm. Blanton et al. [92] provide an approach for outsourcing large-scale biometric computation with reasonable overhead.

The above approaches either rely on various restrictive assumptions or incur high computation cost. The ultimate goal, however, is to provide both practical and unconditional integrity verification for remote computation. Towards this goal, Setty et al. [95] suggest seeking help from early research results such as interactive proofs [93] and probabilistically checkable proofs (PCPs) [94], which attempt to design an ideal method that enables a customer to check an answer's correctness in constant time, with a suitably encoded proof and a negligible chance of false positives [94]. To date, PCP-based verification is not yet practical for general purposes, according to Setty's research. Although its application to a particular problem (i.e., matrix multiplication) seems encouraging, the authors have also pointed out some serious issues, such as expensive setup time.

C. Summary and Open issues

Cloud integrity becomes vulnerable because the customers do not physically control their data and software. For data integrity, there are two challenges: i) the huge data volume makes conventional hashing schemes not viable; ii) integrity checking can only be practical when additional requirements, which increase the difficulty (for instance, dynamic operations), are taken into consideration. On the other hand, computation integrity is far tougher. The main challenge is the lack of knowledge of the computation internals, if the verification method is to apply to generic computations. A well-designed integrity checking method satisfies the following conditions: i) for practical concerns, the workload of the local verification should be less than that of the original computation, otherwise outsourcing is not efficient; ii) the proof can be verified by any party, to ensure non-repudiation; iii) no or few assumptions are imposed.

There is great research potential in this area:

• Combining integrity checking with other realistic requirements is a promising research trend. State-of-the-art research has studied support for dynamic operations on cloud data, and the single-machine setting has been extended to the distributed setting. However, no prior work investigates integrity checking with both dynamic operations and a distributed setting. Moreover, other traditional features of distributed systems, such as fault tolerance, can be considered as well.
• A significant advance will be made if a practical and unconditional verification method is developed for computation integrity. An important attempt [95] reduces the computation complexity of applying probabilistically checkable proofs, with matrix multiplication as a case study. However, even in this one case study, a few serious problems arise. The room for improvement is still large.
• On the other hand, domain-specific methods can achieve satisfying results, because these methods usually take advantage of the inner features of the computation. Some scientific computations, such as linear programming and matrix operations, have outsourcing versions. However, other types, such as non-linear optimization, remain to be outsourced to the cloud with strong integrity assurance.

IV. CLOUD AVAILABILITY

Availability is crucial, since the core function of cloud computing is to provide on-demand services at different levels. If a certain service is no longer available, or the quality of service cannot meet the Service Level Agreement (SLA), customers may lose faith in the cloud system. In this section, we study two kinds of threats that impair cloud availability.

A. Threats to Cloud Availability

1) T3.1 – Flooding Attack via Bandwidth Starvation: In a flooding attack, which can cause Denial of Service (DoS), a huge number of nonsensical requests are sent to a particular service to hinder it from working properly. In cloud computing, there are two basic types [34] of flooding attacks:

• Direct DoS – the attack target is determined, and the availability of the targeted cloud service will be fully lost.
• Indirect DoS – the meaning is twofold: 1) all services hosted on the same physical machine as the target
support on remote data is non-trivial with integrity guarantees, victim will be affected; 2) the attack is initiated without
and in distributed setting, both integrity and consistency are a specific target.
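Stepping back to the computation-integrity discussion in Section III, Freivalds' randomized check [89] is simple enough to sketch in a few lines. This is a minimal illustration with invented function names, not the LP protocol of [85] or the PCP machinery of [95]:

```python
import random

def freivalds_check(A, B, C, rounds=20):
    """Randomized check that A @ B == C for n x n matrices.

    Each round multiplies by a random 0/1 vector r and compares
    A(Br) with Cr, costing O(n^2) per round instead of the O(n^3)
    needed to recompute the product. A wrong product survives all
    rounds with probability at most 2**-rounds.
    """
    n = len(A)

    def matvec(M, v):
        # plain O(n^2) matrix-vector product
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # caught an incorrect product
    return True  # consistent with A @ B == C with high probability
```

Each round costs only vector arithmetic, so a customer can gain 1 − 2⁻ᵏ confidence in an outsourced product far more cheaply than recomputing it locally.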
XIAO and XIAO: SECURITY AND PRIVACY IN CLOUD COMPUTING 851
The authors in [34] also point out that one consequence of a flooding attack is that if a certain cloud service is unavailable or the quality of service is degraded, the subscribers of all affected services may need to continue paying the bill. However, we argue that since cloud providers must have previously signed a Service Level Agreement (SLA) with their clients, a responsible party must be determined once the service level degrades below some threshold, since clients will be aware of that degradation. We will elaborate upon this problem (i.e., cloud accountability) in the next section.

The nature of under-provisioning and public openness in a cloud system brings a new vulnerability that can be exploited to carry out a new DoS attack, jeopardizing cloud service provision by saturating the limited network bandwidth. As shown in Fig. 4, links A, B, and C are the uplinks of routers R5, R1, and R2, respectively. Suppose that link B is the active link and link C is the fail-over link (i.e., a link that will be activated when the active link is down). Due to under-provisioning, the aggregate capacity of H1, H2, H3, and H4 (which form subnet 1) is a few times larger than the capacity of link A, B, or C. In order to saturate link B, attackers (which may be a few hosts controlled by the adversary) in subnet 1 only need to generate enough traffic targeting the hosts in another subnet (e.g., subnet 2). Once link B is saturated by the nonsense traffic, hosts in subnet 1 are unable to deliver services to cloud users.

To initiate such a DoS attack (bandwidth starvation) effectively, there are a few steps:
1) Topology identification – Since only hosts in different subnets are connected by bottleneck links, an adversary needs to first identify the network topology. By exploiting the multiplexing nature of a router, the number of routers between two hosts can be determined; this helps selected hosts picture the topology.
2) Gaining access to enough hosts – The number of hosts needed to perform the attack is determined by the uplink's capacity, which can be estimated by tools such as Pathload [26], Nettimer [27], or Bprobe [25].
3) Carrying out the attack – with the topology known and enough hosts under control, the attacker generates cross-subnet traffic until the bottleneck link is saturated, as described above.

B. Defense Strategy

1) D3.1.1 – Defending the New DoS Attack: This new type of DoS attack differs from traditional DoS or DDoS [24] attacks in that traditional DoS sends traffic to the targeted application/host directly, while the new DoS attack does not; therefore, some techniques and counter-measures [21], [22] for handling traditional DoS are no longer applicable.

A DoS avoidance strategy called service migration [20] has been developed to deal with the new flooding attack. A monitoring agent located outside the cloud is set up to detect possible bandwidth starvation by constantly probing the cloud applications. When bandwidth degradation is detected, the monitoring agent performs application migration, which may stop the service temporarily and resume it later. The migration moves the current application to another subnet of which the attacker is unaware. Experimental results show that it takes only a few seconds to migrate a stateless web application from one subnet to another.

2) D4.1.1 – FRC Attack Detection: The key to FRC detection is to distinguish FRC traffic from normal activity traffic. Idziorek et al. propose to exploit the consistency and self-similarity of aggregate web activity [96]. To achieve this goal, three detection metrics are used: i) Zipf's law [97] is adopted to measure the relative frequency and self-similarity of web page popularity; ii) Spearman's footrule is used to find the proximity between two ranked lists, which determines the similarity score; iii) the overlap between the reference list and the comparator list measures the similarity between the training data and the test data. Combining the three metrics yields a reliable way of detecting FRC.

C. Summary and Open Issues

Service downgrade can result from both internal and external threats. An internal threat comes from malicious cloud customers who take advantage of the bandwidth under-provisioning property of current DCN architectures to starve legitimate service traffic. An external threat, on the other hand, refers to the EDoS attack, which degrades the victim's long-term economic availability. Both DoS and EDoS have appeared in other scenarios; however, the ways they are employed to attack the
852 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 15, NO. 2, SECOND QUARTER 2013
cloud platform are novel and worth investigating. The following issues could be future research directions:
• D3.1.1 gives an avoidance strategy that adopts service migration. DoS avoidance alone, however, is not sufficient to defend against this attack, because the adversaries have not yet been identified. The issue can be further addressed by accountability: to track the malicious behavior, the key is to identify the origin of the nonsense traffic that tries to saturate the connection link.
• D4.1.1 describes a mechanism for FRC detection. However, it is not clear how a victim should react to the attack, and the identification of attackers is not presented either. New research in this area is expected to complement D4.1.1.
• The FRC attack is carried out by consuming bandwidth, which is one of the billable resources. However, other resources, such as computing capability and storage, are also potentially vulnerable to EDoS attacks. Therefore, it is imperative to discover viable threat models and defense strategies towards a comprehensive study of the EDoS attack.

V. CLOUD ACCOUNTABILITY

While accountability has been studied in other systems [124], [125], [126], [127], [128], [129], [130], [131], it is essential for building trust relationships in a cloud environment [35], [47], [48], [50], [52], [88]. Accountability implies the capability of identifying, with undeniable evidence, the party that is responsible for specific events [124], [125], [126], [127], [128], [129], [130], [131]. When dealing with cloud computing, multiple parties may be involved; a cloud provider and its customers are the two basic ones, and the public clients who use applications (e.g., a web application) outsourced by cloud customers may be another party. A fine-grained identity, moreover, may be employed to identify a specific machine or even the faulty/malicious program that is responsible.

A. Threats to Cloud Accountability

1) T2.2 – SLA Violation: A. Haeberlen addresses the importance of accountability in cloud computing [47], where the loss of data control is problematic when something goes awry. For instance, the following problems may arise: 1) the machines in the cloud can be mis-configured or defective and can consequently corrupt the customer's data or cause his computation to return incorrect results; 2) the cloud provider can accidentally allocate insufficient resources for the customer, an act which can degrade the performance of the customer's services and thus violate the SLA; 3) an attacker can embed a bug into the customer's software in order to steal valuable data or to take over the customer's machines for spamming or DoS attacks; 4) the customer may not have access to his data, either because the cloud loses it or simply because the data is unavailable at an inconvenient time. If something goes wrong, for example, data leaks to a competitor or the computation returns incorrect results, it can be difficult for a customer and provider to determine which of them has caused the problem; and, in the absence of solid evidence, it is nearly impossible for them to hold each other responsible for the problem if a dispute arises.

2) T2.3 – Dishonest MapReduce: MapReduce [53] is a parallel computing paradigm that is widely employed by major cloud providers (Google, Yahoo!, Facebook, etc.). MapReduce splits a large data set into multiple blocks, each of which is subsequently input to a single worker machine for processing. However, worker machines may be mis-configured or malicious; as a result, the processing results returned by the cloud may be inaccurate. In addition, it is difficult for customers to verify the correctness of the results other than by running the same task again locally. Dishonest MapReduce may be viewed as a concrete case of the computation integrity problem discussed in Section III (i.e., cloud integrity). The issue can be further addressed by accountability, because even after customers have verified the correctness of the MapReduce output, there is still a need to identify the faulty machines or any other possible reasons that resulted in wrong answers.

3) T2.4 – Hidden Identity of Adversaries: Due to privacy concerns, cloud providers should not disclose cloud customers' identity information. Anonymous access is employed to deal with this issue; although anonymity increases privacy, it also introduces security problems. Full anonymity requires that a customer's information be completely hidden from absolutely anyone or anything else. In this case, malicious users can jeopardize data integrity without being detected, since it becomes easier to hide their identities.

4) T4.2 – Inaccurate Billing of Resource Consumption: The pay-as-you-go model enables customers to decide how to outsource their business based on their needs as well as their financial situations. However, it is quite difficult for customers to verify the expense of resource consumption due to the black-box and dynamic nature of cloud computing. From the cloud vendor's perspective, in order to achieve maximum profitability, cloud providers choose to multiplex applications belonging to different customers to keep utilization high. The multiplexing may cause providers to incorrectly attribute resource consumption to customers, or to implicitly bear additional costs, therefore reducing their cost-effectiveness [18]. For example, I/O time and internal network bandwidth are not metered, even though each incurs non-trivial cost. Additionally, metering sharing effects, such as shared memory usage, is difficult.

B. Defense Strategies

1) D2.2.1 – Accountability on Service Level Agreement (SLA): To deal with disputes over an SLA violation, a primitive AUDIT(A, S, t1, t2) is proposed in [47] to allow customers to check whether the cloud provider has fulfilled the SLA (denoted by A) for service S within the time interval [t1, t2]. AUDIT will return OK if no fault is detected; otherwise, AUDIT will provide verifiable evidence to expose the responsible party. The author in [47] does not detail the design of AUDIT; instead, the author provides a set of building blocks that may contribute, including 1) tamper-evident logs that can record the entire action history of an application, 2) virtualization-based replays that can audit the actions of
other applications by replaying their logs, 3) trusted time stamping that can be used to detect performance faults (i.e., latency or throughput that cannot match the level in the SLA), and 4) sampling that can provide probabilistic guarantees and can improve the efficiency of replay.

2) D2.2.2 – Accountable Virtual Machine: The Accountable Virtual Machine (AVM) [50] is follow-up work of [47]. The intent of AVM is to enable users to audit software execution on remote machines. AVM is able to 1) detect faults, 2) identify the faulty node, and 3) provide verifiable evidence of a particular fault and point to the responsible party. AVM is applicable to cloud computing, in which customers outsource their data and software to distrusted cloud servers. AVM allows cloud users to verify the correctness of their code in the cloud system. The approach is to wrap any running software in a virtual machine, which keeps a tamper-evident log [51] to record the entire execution of the software. If we assume there is a reference implementation, which defines the correct execution of the software, the cloud users have enough information to verify software correctness by replaying the log file and comparing it with the reference copy. If there is an inconsistency, mismatches will be detected. The log is tamper-evident, meaning that nobody may tamper with the log file without being detected. Once the integrity of the log file is ensured, the evidence obtained from it is trustworthy. The evidence is provable by any external party. One limitation of AVM is that it can only detect faults caused by network operations, since it only logs network input/output messages.

3) D2.2.3 – Collaborative Monitoring: A solution similar to AVM was developed in [48] by maintaining an external state machine whose job is to validate the correctness of the data and the execution of business logic in a multi-tenancy environment. The authors in [48] define the service endpoint as the interface through which the cloud services are delivered to end users. It is assumed that the data may only be accessed through endpoints that are specified according to the SLA between the cloud provider and the users. The basic idea is to wrap each endpoint with an adapter that is able to capture the input/output of the endpoint and record all the operations performed through the endpoint. The log is subsequently sent to the external state machine for authentication purposes. To perform the correctness verification, the Merkle B-tree [49] is employed to authenticate the data stored in the cloud system. An update operation on the data will also update the MB-tree. A query operation is authenticated by a range query on the MB-tree. Once the correctness check fails, the state machine will report problems and provide verifiable evidence based on the query result of the MB-tree.

4) D2.3.1 – Accountable MapReduce: In [54], this problem has been addressed with SecureMR, which adopts full task duplication to double-check the processing result. SecureMR requires that each task be executed twice by two different machines, which doubles the total processing time. Additionally, SecureMR suffers from false positives when an identical faulty program processes the duplicated tasks.

Xiao et al. [66] propose to build an Accountable MapReduce to detect malignant nodes. The basic idea is as follows: the cloud provider establishes a trust domain, which consists of multiple regular worker machines referred to as Auditors. An auditor makes use of the determinism of Map/Reduce functions in order to apply an Accountable Test (A-Test) to each task on each working machine. The auditor picks up a task that has been completed by a machine M, then re-executes it and compares the output with M's. If an inconsistency shows up, then M is proved to be malicious. The A-Test stops when all tasks are tested. A full duplication of an execution requires large computation costs; instead of pursuing a 100% detection rate, the authors decided to provide probabilistic guarantees in order to accelerate the A-Test. The general idea is to re-execute only a part of each task. By carefully selecting the parameters, a high detection rate (e.g., 99%) may be achieved with low computation costs.

5) D2.4.1 – Secure Provenance: Secure provenance is introduced with the aim of ensuring that verifiable evidence can be provided to trace the real data owner and the records of data modification. Secure provenance is essential for improving data forensics and accountability in cloud systems. Lu et al. [41] have proposed a secure provenance scheme based on bilinear pairing techniques, first bringing provenance problems into cloud computing. Consider a file stored in the cloud: when there is a dispute over that file, the cloud can provide all provenance information, with the ability to plot all versions of the file and the users that modified it. With this information, a specific user identity can be tracked.

6) D4.2.1 – Verifiable Resource Accounting: Sekar and Maniatis [18] have proposed verifiable resource accounting, which enables cloud customers to be assured that i) their applications indeed consumed the resources they were charged for, and ii) the consumption was justified based on an agreed policy. The scheme in [18] considers three roles: the customer C, the provider P, and the verifier V. First, C asks P to run task T; then, P generates a report R describing what resources P thinks C consumed. C then sends the report R and some additional data to V, who checks whether R is a valid consumption report. By implementing a trusted hardware layer along with other existing technologies such as offloaded monitoring, sampling, and snapshots, it can be ensured that a) the provider does not overcharge/undercharge customers, and b) the provider correctly assigns the consumption of a resource to the principal responsible for using that resource.

7) Other Opinions: In order to practically incorporate accountability into the cloud environment, Ko et al. [69] present the Cloud Accountability Life Cycle (CALC), describing the key phases to build cloud accountability. The CALC contains seven phases: 1) policy planning, 2) sense and trace, 3) logging, 4) safe-keeping of logs, 5) reporting and replaying, 6) auditing, and 7) optimizing and rectifying.

C. Summary and Open Issues

Accountability is a significant attribute of cloud computing because the computing paradigm increases the difficulty of holding an entity responsible for some action. Following a pay-as-you-go billing model, the cloud vendor provides resources rented by customers, who may host their web content open to public clients. Even a simple action (e.g., a web request) will involve multiple parties. On the other hand, accountability not only handles security threats, it also deals with various non-malicious faults (e.g., misconfiguration or resource mis-allocation).
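The partial re-execution idea behind the A-Test above can be made concrete with a small sketch. The task list, the `recompute` oracle, and the count-based sampling below are illustrative assumptions of this sketch, not the design of [66]:

```python
import random

def spot_check(tasks, reported, recompute, sample_rate=0.2, rng=random):
    """Re-execute a random sample of tasks and compare with reported outputs.

    tasks:     list of task inputs
    reported:  worker-reported output for each task (same order as tasks)
    recompute: trusted local re-execution, recompute(task) -> output
    Returns the indices where the worker's output was wrong.

    If a worker cheats on a fraction c of n tasks, checking a random
    fraction s of them misses every cheat with probability roughly
    (1 - s) ** (c * n), so modest sampling already detects cheating
    with high probability.
    """
    n = len(tasks)
    k = max(1, int(sample_rate * n))  # how many tasks to re-check
    mismatches = []
    for i in rng.sample(range(n), k):  # sample without replacement
        if recompute(tasks[i]) != reported[i]:
            mismatches.append(i)
    return mismatches
```

With 1,000 tasks, a 20% sample, and a worker that cheats on 2% of them, the chance of missing every cheat is roughly 0.8^20, about 1%, which illustrates the kind of parameter trade-off described above.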
854 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 15, NO. 2, SECOND QUARTER 2013
TABLE II
A PPROACHES OF P RIVACY E NFORCEMENT
In the online phase, only symmetric cryptographic operations are performed in the cloud, without requiring further interaction with the token.

1) Other Opinions: Cryptography is Not All-Purpose: Van Dijk et al. [63] argue that cryptography alone cannot provide complete solutions to all privacy issues in cloud computing, even with powerful tools like FHE. The authors formally define a class of privacy problems in terms of various application scenarios. It has been proved that when data is shared among customers, no cryptographic protocol can be implemented to offer privacy assurance. Let us define the following notation [63]:
• S – the cloud, which is a highly resourced, monolithic entity.
• C = C1, C2, ..., Cn – a set of customers/tenants of S.
• xi – a static, private value belonging to Ci but stored in S.
The task of S is to run different applications/functions over {xi}.

a) Class one: private single-client computing: These applications only process data xi owned by a single client Ci. No other parties should be able to learn any information about the internal process. A typical example is a tax-preparation program taking as input financial data that belongs to a single client and should be hidden from both S and other clients. This class can be properly solved by an FHE scheme that meets all the security requirements.

b) Class two: private multi-client computing: These applications operate on a data set {xi} owned by multiple clients {Ci}. Since more clients are involved, data privacy among clients is preserved in a more complicated way. There are access-control policies that must be followed when processing the data. A real-world application is a social networking system, in which xi is the personal profile of a client Ci; Ci is able to specify which friends can view which portions/functions of her data (i.e., gives an access-control policy). It has been proved that private multi-client computing is unachievable using cryptography.

c) Class three: stateful private multi-client computing: This class is a restricted version of class two. The difference is that the access-control policy on a client's data is stateful, meaning that it depends on the application execution history. This class is not discussed thoroughly in the paper [63], but the authors do believe it has an important position in cloud applications.

2) Other Opinions: Privacy-Preserving Frameworks: Lin et al. presented a general data protection framework [56] to address the privacy challenges in cloud service provision. The framework consists of three key building blocks: 1) a policy ranking strategy to help cloud customers identify a cloud provider that best fits their privacy requirements; 2) an automatic policy generation mechanism that integrates the policies and requirements from both participants and produces a specific policy agreed by them; and 3) policy enforcement that ensures the policy will be fulfilled. A straightforward path to policy ranking is to compare the customer's requirements with the policies of multiple service providers and subsequently pick the provider with the highest rank. The comparison may happen on the client side, on the cloud provider side, or through a broker. Policy algebra can be employed to carry out the policy generation; each policy should first be formalized and then integrated with a fine-grained policy algebra [57]. Pearson [28] suggests that privacy should be taken into account from the outset and should be considered in every phase of cloud service design.

C. Open Issues

Regarding cloud privacy, there are open issues to be studied in future research:
• The authors think that accountability and privacy may conflict with each other. The enforcement of accountability will violate privacy to some degree, and extreme privacy protection (e.g., full anonymity to hide users' identities) will make accountability more challenging. As an extreme example, a shared file may be accessed by multiple users who hide their identities due to anonymity for the purpose of privacy protection; however, malicious users are then tracked with difficulty because of the anonymous access. From the viewpoint of accountability, general approaches include information logging, replay, tracing [22], etc. These operations may not be completed without revealing some private information (e.g., account name, IP address). We must seek a trade-off in which the requirement of one attribute can be met while simultaneously maintaining some degree of the other attribute.
• The assessment of attributes is another important issue since it provides a quantitative way to evaluate them. The goal is to determine how secure a cloud is, or how much privacy can be offered. The meaning is twofold: 1) it will be helpful to compare different security approaches; for example, suppose that to achieve 100% privacy, scheme A costs 100, while scheme B achieves 99% privacy at a cost of 10. Scheme B is apparently more practically efficient, although it sacrifices one percent of privacy. Without an assessment, it is difficult to compare two strategies quantitatively. 2) The quantitative clauses of the security/privacy requirements can be drafted into Service Level Agreements (SLAs).
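The policy-ranking building block described above can be sketched as a simple scoring loop. The attribute names and the count-based score are invented for illustration; a real system would use the fine-grained policy algebra of [57] rather than exact matching:

```python
def rank_providers(requirements, provider_policies):
    """Score each provider by how many customer requirements its policy meets.

    requirements:      dict of attribute -> required value,
                       e.g. {"retention_days": 30, "encryption_at_rest": True}
    provider_policies: dict of provider name -> policy dict
    Returns provider names sorted from best to worst match.
    """
    def score(policy):
        # one point per requirement the policy satisfies exactly
        return sum(1 for attr, required in requirements.items()
                   if policy.get(attr) == required)

    return sorted(provider_policies,
                  key=lambda name: score(provider_policies[name]),
                  reverse=True)
```

As noted above, this comparison could run on the client side, on the provider side, or inside a broker that mediates between the two.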
Fig. 5. Attribute-driven taxonomy (recovered labels): V2: Loss of Physical Control; V4: Cloud Pricing Model; T1.1: Cross-VM attack via side channels [17], [60], [71], [72], [81], [82]; T1.2: Malicious SysAdmin [30]; T2.1: data loss/manipulation [14]; T2.2: dishonest computation in remote servers [85], [86]; T3.1: flooding attack via bandwidth starvation [20]; T4.1: FRC attack [96], [97]; D2.1.1: PDP; D2.2.1: re-computation.
VII. CONCLUSIONS

Throughout this paper, the authors have systematically studied the security and privacy issues in cloud computing based on an attribute-driven methodology, shown in Fig. 5. We have identified the most representative security/privacy attributes (e.g., confidentiality, integrity, availability, accountability, and privacy-preservability), as well as discussing the vulnerabilities that may be exploited by adversaries in order to perform various attacks. Defense strategies and suggestions were discussed as well. We believe this review will help shape future research directions in the areas of cloud security and privacy.

ACKNOWLEDGMENT

This work is supported in part by the U.S. National Science Foundation (NSF) under grants CNS-0716211, CCF-0829827, CNS-0737325, and CNS-1059265.

REFERENCES

[1] I. Foster, Y. Zhao, I. Raicu, and S. Lu, "Cloud computing and grid computing 360-degree compared," Grid Computing Environments Workshop (GCE'08), 2008, pp. 1-10.
[2] J. Geelan, "Twenty-one experts define cloud computing," Virtualization, Electronic Magazine, August 2008. Article available at http://virtualization.sys-con.com/node/612375.
[3] R. Buyya, C. S. Yeo, and S. Venugopal, "Market-oriented cloud computing: Vision, hype, and reality for delivering IT services as computing utilities," CoRR, abs/0808.3558, 2008.
[4] L. M. Vaquero, L. Rodero-Merino, and D. Morán, "Locking the sky: a survey on IaaS cloud security," Computing, 2010, DOI: 10.1007/s00607-010-0140-x.
[5] Google Docs experienced data breach during March 2009. http://blogs.wsj.com/digits/2009/03/08/1214/
[6] Cloud Security Alliance (CSA), "Security Guidance for Critical Areas of Focus in Cloud Computing V2.1," released December 17, 2009. http://www.cloudsecurityalliance.org/guidance/csaguide.v2.1.pdf
[7] Cloud Security Alliance (CSA), "Top Threats to Cloud Computing V1.0," released March 2010.
[8] The security-as-a-service model. http://cloudsecurity.trendmicro.com/the-security-as-a-service-model/
[9] S. D. Di Vimercati, S. Foresti, S. Jajodia, S. Paraboschi, and P. Samarati, "A data outsourcing architecture combining cryptography and access control," in Proc. 2007 ACM Workshop on Computer Security Architecture, 2007, pp. 63-69.
[10] P. Mell and T. Grance, "The NIST Definition of Cloud Computing (Draft)," [Online] Available: www.nist.gov/itl/cloud/upload/cloud-def-v15.pdf, Jan. 2011.
[11] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable data possession at untrusted stores," in Proc. ACM CCS, 2007, pp. 598-609.
[12] A. Juels and B. S. Kaliski, "PORs: Proofs of retrievability for large files," in Proc. ACM CCS, 2007, pp. 584-597.
[13] Y. Dodis, S. Vadhan, and D. Wichs, "Proofs of retrievability via hardness amplification," in Proc. TCC, 2009.
[14] G. Ateniese, R. Di Pietro, L. V. Mancini, and G. Tsudik, "Scalable and efficient provable data possession," in Proc. SecureComm, 2008.
[15] C. Erway, A. Küpçü, C. Papamanthou, and R. Tamassia, "Dynamic provable data possession," in Proc. 16th ACM Conference on Computer and Communications Security, 2009, pp. 213-222.
[16] K. D. Bowers, A. Juels, and A. Oprea, "HAIL: A high-availability and integrity layer for cloud storage," in Proc. 16th ACM Conference on Computer and Communications Security, 2009, pp. 187-198.
[17] T. Ristenpart, E. Tromer, H. Shacham, and S. Savage, "Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds," in Proc. 16th ACM Conference on Computer and Communications Security, 2009, pp. 199-212.
[18] V. Sekar and P. Maniatis, "Verifiable resource accounting for cloud computing services," in Proc. 3rd ACM Cloud Computing Security Workshop, 2011, pp. 21-26.
[19] C. Hoff, "Cloud computing security: From DDoS (distributed denial of service) to EDoS (economic denial of sustainability)," [Online] Available: http://www.rationalsurvivability.com/blog/?p=66, 2008.
[20] H. Liu, "A new form of DOS attack in a cloud and its avoidance mechanism," in Proc. ACM Cloud Computing Security Workshop, 2010.
[21] S. Kandula, D. Katabi, M. Jacob, and A. Berger, "Botz-4-sale: Surviving organized DDoS attacks that mimic flash crowds," In Proc. NSDI (2005).
[22] A. Yaar, A. Perrig, and D. Song, "Fit: Fast internet traceback," In Proc. IEEE INFOCOM (March 2005).
[23] S. Staniford, V. Paxson, and N. Weaver, "How to own the internet in your spare time," In Proc. USENIX Security (2002).
[24] Cisco data center infrastructure 2.5 design guide. http://www.cisco.com/univercd/cc/td/doc/solution/dcidg21.pdf.
[25] R. Carter and M. Crovella, "Measuring bottleneck link speed in packet-switched networks," Tech. rep., Performance Evaluation, 1996.
[26] C. Dovrolis, P. Ramanathan, and D. Moore, "What do packet dispersion techniques measure?" In Proc. IEEE INFOCOM (2001), pp. 905-914.
[27] K. Lai and M. Baker, "Nettimer: a tool for measuring bottleneck link bandwidth," In USITS'01: Proc. 3rd conference on USENIX Symposium on Internet Technologies and Systems (Berkeley, CA, USA, 2001), USENIX Association, pp. 11-11.
[28] S. Pearson, "Taking account of privacy when designing cloud computing services," Software Engineering Challenges of Cloud Computing, 2009. CLOUD '09. ICSE Workshop on, 2009, pp. 44-52.
[29] N. Santos, K.P. Gummadi, and R. Rodrigues, "Towards trusted cloud computing," Proc. 2009 conference on Hot topics in cloud computing, 2009.
[30] B. D. Payne, M. Carbone, and W. Lee, "Secure and Flexible Monitoring of Virtual Machines," In Proc. ACSAC'07, 2007.
[31] A. Squicciarini, S. Sundareswaran, and D. Lin, "Preventing Information Leakage from Indexing in the Cloud," 2010 IEEE 3rd International Conference on Cloud Computing, 2010, pp. 188-195.
[32] N. Gruschka and M. Jensen, "Attack Surfaces: A Taxonomy for Attacks on Cloud Services," 2010 IEEE 3rd International Conference on Cloud Computing, 2010, pp. 276-279.
[33] P. Saripalli and B. Walters, "QUIRC: A Quantitative Impact and Risk Assessment Framework for Cloud Security," 2010 IEEE 3rd International Conference on Cloud Computing, 2010, pp. 280-288.
[34] M. Jensen, J. Schwenk, N. Gruschka, and L.L. Iacono, "On technical security issues in cloud computing," Cloud Computing, 2009. CLOUD'09. IEEE International Conference on, 2009, pp. 109-116.
[35] M. Jensen and J. Schwenk, "The accountability problem of flooding attacks in service-oriented architectures," in Proc. IEEE International Conference on Availability, Reliability and Security (ARES), 2009.
[36] J. Oberheide, E. Cooke, and F. Jahanian, "CloudAV: N-version antivirus in the network cloud," Proc. 17th conference on Security symposium, 2008, pp. 91-106.
[37] F. Lombardi and R. Di Pietro, "Transparent security for cloud," Proc. 2010 ACM Symposium on Applied Computing, 2010, pp. 414-415.
[38] C. Henrich, M. Huber, C. Kempka, J. Müller-Quade, and M. Strefler, "Brief Announcement: Towards Secure Cloud Computing," Stabilization, Safety, and Security of Distributed Systems, 2009, pp. 785-786.
[39] S. Subashini and V. Kavitha, "A survey on security issues in service delivery models of cloud computing," J. Network and Computer Applications, vol. 34, Jan. 2011, pp. 1-11.
[40] M. Descher, P. Masser, T. Feilhauer, A. Tjoa, and D. Huemer, "Retaining Data Control to the Client in Infrastructure Clouds," Availability, Reliability and Security, 2009. ARES '09. International Conference on, 2009, pp. 9-16.
[41] R. Lu, X. Lin, X. Liang, and X.S. Shen, "Secure provenance: the essential of bread and butter of data forensics in cloud computing," Proc. 5th ACM Symposium on Information, Computer and Communications Security, 2010, pp. 282-292.
[42] S. Pearson, Y. Shen, and M. Mowbray, "A privacy manager for cloud computing," Cloud Computing, 2009, pp. 90-106.
[43] M. Mowbray and S. Pearson, "A client-based privacy manager for cloud computing," Proc. Fourth International ICST Conference on COMmunication System softWAre and middlewaRE, 2009, pp. 1-8.
[44] W. Itani, A. Kayssi, and A. Chehab, "Privacy as a Service: Privacy-Aware Data Storage and Processing in Cloud Computing Architectures," IEEE International Conference on Dependable, Autonomic and Secure Computing, 2009, pp. 711-716.
[45] C. Wang, Q. Wang, K. Ren, and W. Lou, "Privacy-preserving public auditing for data storage security in cloud computing," IEEE INFOCOM 2010, San Diego, CA, March 2010.
[46] Q. Wang, C. Wang, J. Li, K. Ren, and W. Lou, "Enabling public verifiability and data dynamics for storage security in cloud computing," in Proc. ESORICS'09, Saint Malo, France, Sep. 2009.
[47] A. Haeberlen, "A Case for the Accountable Cloud," 3rd ACM SIGOPS International Workshop on Large-Scale Distributed Systems and Middleware (LADIS '09), Big Sky, MT, October 2009.
[48] C. Wang and Y. Zhou, "A Collaborative Monitoring Mechanism for Making a Multitenant Platform Accountable," HotCloud 2010.
[49] F. Li, M. Hadjieleftheriou, G. Kollios, and L. Reyzin, "Dynamic authenticated index structures for outsourced databases," Proc. 2006 ACM SIGMOD international conference on Management of data, June 2006, pp. 27-29.
[50] A. Haeberlen, P. Aditya, R. Rodrigues, and P. Druschel, "Accountable virtual machines," 9th OSDI, 2010.
[51] A. Haeberlen, P. Kuznetsov, and P. Druschel, "PeerReview: Practical accountability for distributed systems," In Proc. ACM Symposium on Operating Systems Principles (SOSP), Oct. 2007.
[52] S. Pearson and A. Charlesworth, "Accountability as a Way Forward for Privacy Protection in the Cloud," Proc. 1st International Conference on Cloud Computing, Beijing, China: Springer-Verlag, 2009, pp. 131-144.
[53] J. Dean and S. Ghemawat, "MapReduce: simplified data processing on large clusters," In OSDI'04: Proc. 6th conference on Symposium on Operating Systems Design and Implementation, USENIX Association, Berkeley, CA, USA.
[54] W. Wei, J. Du, T. Yu, and X. Gu, "SecureMR: A Service Integrity Assurance Framework for MapReduce," Proc. 2009 Annual Computer Security Applications Conference, 2009, pp. 73-82.
[55] M. Castro and B. Liskov, "Practical Byzantine fault tolerance," Operating Systems Review, vol. 33, 1998, pp. 173-186.
[56] D. Lin and A. Squicciarini, "Data protection models for service provisioning in the cloud," Proc. 15th ACM symposium on Access control models and technologies, 2010, pp. 183-192.
[57] P. Rao, D. Lin, E. Bertino, N. Li, and J. Lobo, "An algebra for fine-grained integration of XACML policies," In Proc. ACM Symposium on Access Control Models and Technologies, 2009, pp. 63-72.
[58] A.R. Sadeghi, T. Schneider, and M. Winandy, "Token-Based Cloud Computing," Trust and Trustworthy Computing, 2010, pp. 417-429.
[59] R. Chow, P. Golle, M. Jakobsson, E. Shi, J. Staddon, R. Masuoka, and J. Molina, "Controlling data in the cloud: outsourcing computation without outsourcing control," Proc. 2009 ACM workshop on Cloud computing security, 2009, pp. 85-90.
[60] A. Aviram, S. Hu, B. Ford, and R. Gummadi, "Determinating timing channels in compute clouds," In Proc. 2010 ACM workshop on Cloud computing security workshop (CCSW '10), ACM, New York, NY, USA, pp. 103-108.
[61] A. Lenk, M. Klems, J. Nimis, S. Tai, and T. Sandholm, "What's inside the Cloud? An architectural map of the Cloud landscape," Software Engineering Challenges of Cloud Computing, 2009. CLOUD'09. ICSE Workshop on, 2009, pp. 23-31.
[62] D. L. G. Filho and P. S. L. M. Barreto, "Demonstrating data possession and uncheatable data transfer," IACR ePrint archive, 2006, Report 2006/150, http://eprint.iacr.org/2006/150.
[63] M. Van Dijk and A. Juels, "On the Impossibility of Cryptography Alone for Privacy-Preserving Cloud Computing," IACR ePrint 2010, vol. 305.
[64] C. Gentry, "Fully homomorphic encryption using ideal lattices," In STOC, 2009, pp. 169-178.
[65] Cloud computing infrastructure, available: http://en.wikipedia.org/wiki/Cloud_computing.
[66] Z. Xiao and Y. Xiao, "Accountable MapReduce in Cloud Computing," Proc. IEEE International Workshop on Security in Computers, Networking and Communications (SCNC 2011), in conjunction with IEEE INFOCOM 2011.
[67] M. Christodorescu, R. Sailer, D. Schales, D. Sgandurra, and D. Zamboni, "Cloud security is not (just) virtualization security: a short paper," In Proc. 2009 ACM workshop on Cloud computing security (CCSW '09), ACM, New York, NY, USA, pp. 97-102. DOI: 10.1145/1655008.1655022.
[68] Y. Deswarte, J.-J. Quisquater, and A. Saidane, "Remote integrity checking," in Proc. Conference on Integrity and Internal Control in Information Systems (IICIS'03), November 2003.
[69] R.K.L. Ko, B.S. Lee, and S. Pearson, "Towards Achieving Accountability, Auditability and Trust in Cloud Computing," Proc. International workshop on Cloud Computing: Architecture, Algorithms and Applications (CloudComp2011), Springer, 2011, p. 5.
[70] B. Grobauer, T. Walloschek, and E. Stocker, "Understanding cloud computing vulnerabilities," Security and Privacy, IEEE, vol. 9, no. 2, pp. 50-57, 2011.
[71] Y. Xu, M. Bailey, F. Jahanian, K. Joshi, M. Hiltunen, and R. Schlichting, "An exploration of L2 cache covert channels in virtualized environments," in Proc. 3rd ACM workshop on Cloud computing security workshop, New York, NY, USA, 2011, pp. 29-40.
858 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 15, NO. 2, SECOND QUARTER 2013
[72] K. Okamura and Y. Oyama, "Load-based covert channels between Xen virtual machines," in Proc. 2010 ACM Symposium on Applied Computing, New York, NY, USA, 2010, pp. 173-180.
[73] B. C. Vattikonda, S. Das, and H. Shacham, "Eliminating fine grained timers in Xen," in Proc. 3rd ACM workshop on Cloud computing security workshop, New York, NY, USA, 2011, pp. 41-46.
[74] B. Stone and A. Vance, "Companies slowly join cloud computing," New York Times, 18 April 2010.
[75] Z. Wang and R. B. Lee, "New cache designs for thwarting software cache-based side channel attacks," In 34th International Symposium on Computer Architecture, June 2007, pp. 494-505.
[76] Z. Wang and R. B. Lee, "A novel cache architecture with enhanced performance and security," In 41st IEEE/ACM International Symposium on Microarchitecture, November 2008, pp. 83-93.
[77] G. Keramidas, A. Antonopoulos, D. N. Serpanos, and S. Kaxiras, "Non deterministic caches: A simple and effective defense against side channel attacks," Design Automation for Embedded Systems, 12(3):221-230, 2008.
[78] J. Kong, O. Aciicmez, J.-P. Seifert, and H. Zhou, "Deconstructing new cache designs for thwarting software cache-based side channel attacks," In 2nd ACM Workshop on Computer Security Architectures, October 2008, pp. 25-34.
[79] R. Könighofer, "A fast and cache-timing resistant implementation of the AES," in Proc. 2008 The Cryptographers' Track at the RSA conference on Topics in cryptology, Berlin, Heidelberg, 2008, pp. 187-202.
[80] Y. Zhang, A. Juels, A. Oprea, and M. K. Reiter, "HomeAlone: Co-residency Detection in the Cloud via Side-Channel Analysis," in 2011 IEEE Symposium on Security and Privacy (SP), 2011, pp. 313-328.
[81] K. Suzaki, K. Iijima, T. Yagi, and C. Artho, "Memory deduplication as a threat to the guest OS," in Proc. Fourth European Workshop on System Security, New York, NY, USA, 2011, pp. 1:1-1:6.
[82] K. Suzaki, K. Iijima, T. Yagi, and C. Artho, "Software Side Channel Attack on Memory Deduplication," in 23rd ACM Symposium on Operating Systems Principles, poster, 2011.
[83] E. Keller, J. Szefer, J. Rexford, and R. B. Lee, "NoHype: virtualized cloud infrastructure without the virtualization," in Proc. 37th annual international symposium on Computer architecture, New York, NY, USA, 2010, pp. 350-361.
[84] J. Szefer, E. Keller, R. B. Lee, and J. Rexford, "Eliminating the hypervisor attack surface for a more secure cloud," in Proc. 18th ACM conference on Computer and communications security, New York, NY, USA, 2011, pp. 401-412.
[85] C. Wang, K. Ren, and J. Wang, "Secure and Practical Outsourcing of Linear Programming in Cloud Computing," IEEE INFOCOM 2011, April 10-15, 2011.
[86] A. Baliga, P. Kamat, and L. Iftode, "Lurking in the Shadows: Identifying Systemic Threats to Kernel Data (Short Paper)," in 2007 IEEE Symposium on Security and Privacy, May 2007.
[87] S. Setty, A. Blumberg, and M. Walfish, "Toward practical and unconditional verification of remote computations," in the 13th Workshop on Hot Topics in Operating Systems, Napa, CA, USA, 2011.
[88] K. Benson, R. Dowsley, and H. Shacham, "Do you know where your cloud files are?," in Proc. 3rd ACM workshop on Cloud computing security workshop, New York, NY, USA, 2011, pp. 73-82.
[89] F. Monrose, P. Wyckoff, and A. D. Rubin, "Distributed execution with remote audit," In NDSS, 1999.
[90] M. J. Atallah and K. B. Frikken, "Securely outsourcing linear algebra computations," In ASIACCS, 2010.
[91] R. Motwani and P. Raghavan, "Randomized Algorithms," Cambridge University Press, 2007.
[92] M. Blanton, Y. Zhang, and K. B. Frikken, "Secure and Verifiable Outsourcing of Large-Scale Biometric Computations," in Privacy, Security, Risk and Trust (PASSAT), IEEE Third International Conference on Social Computing (SocialCom), 2011, pp. 1185-1191.
[93] S. Goldwasser, S. Micali, and C. Rackoff, "The knowledge complexity of interactive proof systems," SIAM Journal on Comp., 18(1):186-208, 1989.
[94] S. Arora, C. Lund, R. Motwani, M. Sudan, and M. Szegedy, "Proof verification and the hardness of approximation problems," J. ACM, 45(3):501-555, May 1998.
[95] S. Setty, A. Blumberg, and M. Walfish, "Toward practical and unconditional verification of remote computations," in the 13th Workshop on Hot Topics in Operating Systems, Napa, CA, USA, 2011.
[96] J. Idziorek, M. Tannian, and D. Jacobson, "Detecting fraudulent use of cloud resources," in Proc. 3rd ACM workshop on Cloud computing security workshop, New York, NY, USA, 2011, pp. 61-72.
[97] J. Idziorek and M. Tannian, "Exploiting cloud utility models for profit and ruin," in Cloud Computing (CLOUD), 2011 IEEE International Conference on, 2011, pp. 33-40.
[98] M. Naehrig, K. Lauter, and V. Vaikuntanathan, "Can homomorphic encryption be practical?" in Proc. 3rd ACM workshop on Cloud computing security workshop, New York, NY, USA, 2011, pp. 113-124.
[99] T. Choi, H.B. Acharya, and M. G. Gouda, "Is that you? Authentication in a network without identities," International J. Security and Networks, Vol. 6, No. 4, 2011, pp. 181-190.
[100] Q. Chai and G. Gong, "On the (in)security of two Joint Encryption and Error Correction schemes," International J. Security and Networks, Vol. 6, No. 4, 2011, pp. 191-200.
[101] S. Tang and W. Li, "An epidemic model with adaptive virus spread control for Wireless Sensor Networks," International J. Security and Networks, Vol. 6, No. 4, 2011, pp. 201-210.
[102] G. Luo and K.P. Subbalakshmi, "KL-sense secure image steganography," International J. Security and Networks, Vol. 6, No. 4, 2011, pp. 211-225.
[103] W. Chang, J. Wu, and C. C. Tan, "Friendship-based location privacy in Mobile Social Networks," International J. Security and Networks, Vol. 6, No. 4, 2011, pp. 226-236.
[104] X. Zhao, L. Li, and G. Xue, "Authenticating strangers in Online Social Networks," International J. Security and Networks, Vol. 6, No. 4, 2011, pp. 237-248.
[105] D. Walker and S. Latifi, "Partial Iris Recognition as a Viable Biometric Scheme," International J. Security and Networks, Vol. 6, Nos. 2-3, 2011, pp. 147-152.
[106] A. Desoky, "Edustega: An Education-Centric Steganography Methodology," International J. Security and Networks, Vol. 6, Nos. 2-3, 2011, pp. 153-173.
[107] N. Ampah, C. Akujuobi, S. Alam, and M. Sadiku, "An intrusion detection technique based on continuous binary communication channels," International J. Security and Networks, Vol. 6, Nos. 2-3, 2011, pp. 174-180.
[108] H. Chen and B. Sun, "Editorial," International J. Security and Networks, Vol. 6, Nos. 2-3, 2011, pp. 65-66.
[109] M. Barua, X. Liang, R. Lu, and X. Shen, "ESPAC: Enabling Security and Patient-centric Access Control for eHealth in cloud computing," International J. Security and Networks, Vol. 6, Nos. 2-3, 2011, pp. 67-76.
[110] N. Jaggi, U. M. Reddy, and R. Bagai, "A Three Dimensional Sender Anonymity Metric," International J. Security and Networks, Vol. 6, Nos. 2-3, 2011, pp. 77-89.
[111] M. J. Sharma and V. C. M. Leung, "Improved IP Multimedia Subsystem Authentication Mechanism for 3G-WLAN Networks," International J. Security and Networks, Vol. 6, Nos. 2-3, 2011, pp. 90-100.
[112] N. Cheng, K. Govindan, and P. Mohapatra, "Rendezvous Based Trust Propagation to Enhance Distributed Network Security," International J. Security and Networks, Vol. 6, Nos. 2-3, 2011, pp. 101-111.
[113] A. Fathy, T. ElBatt, and M. Youssef, "A Source Authentication Scheme Using Network Coding," International J. Security and Networks, Vol. 6, Nos. 2-3, 2011, pp. 112-122.
[114] L. Liu, Y. Xiao, J. Zhang, A. Faulkner, and K. Weber, "Hidden Information in Microsoft Word," International J. Security and Networks, Vol. 6, Nos. 2-3, 2011, pp. 123-135.
[115] S. S.M. Chow and S. Yiu, "Exclusion-Intersection Encryption," International J. Security and Networks, Vol. 6, Nos. 2-3, 2011, pp. 136-146.
[116] M. Yang, J. C.L. Liu, and Y. Tseng, "Editorial," International J. Security and Networks, Vol. 5, No. 1, 2010, pp. 1-3.
[117] S. Malliga and A. Tamilarasi, "A backpressure technique for filtering spoofed traffic at upstream routers," International J. Security and Networks, Vol. 5, No. 1, 2010, pp. 3-14.
[118] S. Huang and S. Shieh, "Authentication and secret search mechanisms for RFID-aware wireless sensor networks," International J. Security and Networks, Vol. 5, No. 1, 2010, pp. 15-25.
[119] Y. Hsiao and R. Hwang, "An efficient secure data dissemination scheme for grid structure Wireless Sensor Networks," International J. Security and Networks, Vol. 5, No. 1, 2010, pp. 26-34.
[120] L. Xu, S. Chen, X. Huang, and Y. Mu, "Bloom filter based secure and anonymous DSR protocol in wireless ad hoc networks," International J. Security and Networks, Vol. 5, No. 1, 2010, pp. 35-44.
[121] K. Tsai, C. Hsu, and T. Wu, "Mutual anonymity protocol with integrity protection for mobile peer-to-peer networks," International J. Security and Networks, Vol. 5, No. 1, 2010, pp. 45-52.
[122] M. Yang, "Lightweight authentication protocol for mobile RFID networks," International J. Security and Networks, Vol. 5, No. 1, 2010, pp. 53-62.
[123] J. Wang and G.L. Smith, "A cross-layer authentication design for secure video transportation in wireless sensor network," International J. Security and Networks, Vol. 5, No. 1, 2010, pp. 63-76.
[124] Y. Xiao, "Accountability for Wireless LANs, Ad Hoc Networks, and Wireless Mesh Networks," IEEE Commun. Mag., special issue on Security in Mobile Ad Hoc and Sensor Networks, Vol. 46, No. 4, Apr. 2008, pp. 116-126.
[125] Y. Xiao, "Flow-Net Methodology for Accountability in Wireless Networks," IEEE Network, Vol. 23, No. 5, Sept./Oct. 2009, pp. 30-37.
[126] J. Liu and Y. Xiao, "Temporal Accountability and Anonymity in Medical Sensor Networks," ACM/Springer Mobile Networks and Applications (MONET), Vol. 16, No. 6, pp. 695-712, Dec. 2011.
[127] Y. Xiao, K. Meng, and D. Takahashi, "Accountability using Flow-net: Design, Implementation, and Performance Evaluation," (Wiley Journal of) Security and Communication Networks, Vol. 5, No. 1, pp. 29-49, Jan. 2012.
[128] Y. Xiao, S. Yue, B. Fu, and S. Ozdemir, "GlobalView: Building Global View with Log Files in a Distributed / Networked System for Accountability," (Wiley Journal of) Security and Communication Networks, DOI: 10.1002/sec.374.
[129] D. Takahashi, Y. Xiao, and K. Meng, "Virtual Flow-net for Accountability and Forensics of Computer and Network Systems," (Wiley Journal of) Security and Communication Networks, 2011, DOI: 10.1002/sec.407.
[130] Z. Xiao and Y. Xiao, "PeerReview Re-evaluation for Accountability in Distributed Systems or Networks," International J. Security and Networks (IJSN), 2010, Vol. 7, No. 1, in press.
[131] Z. Xiao, N. Kathiresshan, and Y. Xiao, "A Survey of Accountability in Computer Networks and Distributed Systems," (Wiley Journal of) Security and Communication Networks, accepted.
[132] J. Liu, Y. Xiao, S. Li, W. Liang, and C. L. P. Chen, "Cyber Security and Privacy Issues in Smart Grids," IEEE Commun. Surveys Tuts., DOI: 10.1109/SURV.2011.122111.00145, in press.
[133] Y. Xiao, X. Shen, B. Sun, and L. Cai, "Security and Privacy in RFID and Applications in Telemedicine," IEEE Commun. Mag., Vol. 44, No. 4, Apr. 2006, pp. 64-72.
[134] H. Chen, Y. Xiao, X. Hong, F. Hu, and J. Xie, "A Survey of Anonymity in Wireless Communication Systems," (Wiley Journal) Security and Communication Networks, Vol. 2, No. 5, Sept./Oct. 2009, pp. 427-444.

Zhifeng Xiao is a Ph.D. candidate in the Department of Computer Science at the University of Alabama. He received the bachelor's degree in computer science from Shandong University, China, in 2008. His research interests are in the design and analysis of security in distributed and Internet systems. Zhifeng Xiao is a student member of the IEEE.

Yang Xiao worked in industry as a MAC (Medium Access Control) architect involved in the IEEE 802.11 standard enhancement work before he joined the Department of Computer Science at The University of Memphis in 2002. Dr. Xiao is currently with the Department of Computer Science (with tenure) at The University of Alabama. He was a voting member of the IEEE 802.11 Working Group from 2001 to 2004. He is an IEEE Senior Member. He serves as a panelist for the US National Science Foundation (NSF), the Canada Foundation for Innovation (CFI)'s Telecommunications expert committee, and the American Institute of Biological Sciences (AIBS), as well as a referee/reviewer for many national and international funding agencies. His research areas are security and communications/networks. He has published more than 200 refereed journal papers (including 50 IEEE/ACM transactions papers) and over 200 refereed conference papers and book chapters related to these research areas. Dr. Xiao's research has been supported by the US National Science Foundation (NSF), U.S. Army Research, The Global Environment for Network Innovations (GENI), Fleet Industrial Supply Center-San Diego (FISCSD), FIATECH, and The University of Alabama's Research Grants Committee. He currently serves as Editor-in-Chief for International J. Security and Networks (IJSN) and International J. Sensor Networks (IJSNet). He was the founding Editor-in-Chief for International J. Telemedicine and Applications (IJTA) (2007-2009).