Java Abstract 2010 & 2009
1. End-to-End Inference of Router Packet Forwarding Priority (POPI)
ABSTRACT:-
Packet forwarding prioritization (PFP) in routers is one of the
mechanisms commonly available to network operators. PFP can have a
significant impact on the accuracy of network measurements, the
performance of applications and the effectiveness of network
troubleshooting procedures. Despite its potential impacts, no information on
PFP settings is readily available to end users. In this paper, we present an
end-to-end approach for PFP inference and its associated tool, POPI. This is
the first attempt to infer router packet forwarding priority through end-to-
end measurement. POPI enables users to discover such network policies
through measurements of packet losses of different packet types. We
evaluated our approach via statistical analysis, simulation and wide-area
experimentation on PlanetLab. We employed POPI to analyze 156 paths
among 162 PlanetLab sites. POPI flagged 15 paths with multiple priorities,
13 of which were further validated through hop-by-hop loss-rate
measurements. In addition, we surveyed all related network operators and
received responses from about half of them, all confirming our inferences.
We also compared POPI with inference mechanisms based on other
metrics, such as packet reordering [called out-of-order (OOO)]. OOO is
unable to find many priority paths, such as those implemented via traffic
policing. On the other hand, interestingly, we found that OOO can detect
the existence of mechanisms that induce delay differences among packet
types, such as slow processing paths in the router and port-based load
sharing.
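The core inference step can be illustrated with a minimal sketch: given measured loss rates for several packet types on a path, types whose loss rates are clearly separated are placed in different priority classes. The function name, the grouping rule, and the `gap` threshold are illustrative assumptions, not POPI's actual statistical test.

```python
def infer_priority_classes(loss_rates, gap=0.05):
    """loss_rates: dict mapping packet type -> measured loss fraction.
    Groups packet types into priority classes: types whose loss rates
    differ by more than `gap` from the previous type land in a new class.
    Low loss is taken to mean higher forwarding priority."""
    classes = []
    # Sort types from lowest to highest loss rate.
    for ptype, rate in sorted(loss_rates.items(), key=lambda kv: kv[1]):
        if classes and rate - classes[-1][-1][1] <= gap:
            classes[-1].append((ptype, rate))   # same priority class
        else:
            classes.append([(ptype, rate)])     # clearly separated: new class
    return [[p for p, _ in cls] for cls in classes]
```

A path would be flagged as having multiple priorities when more than one class is returned.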
2. QoS-Based Manycasting Over Optical Burst-Switched
Networks
ABSTRACT:-
Many distributed applications require a group of destinations to be
coordinated with a single source. Multicasting is a communication paradigm
to implement these distributed applications. However, in multicasting, if at
least one member of the group cannot satisfy the service requirement
of the application, the multicast request is said to be blocked. In
manycasting, by contrast, destinations can join or leave the group depending
on whether they satisfy the service requirement. This dynamic,
membership-based destination group decreases request blocking. We study
the behavior of manycasting over optical burst-switched networks (OBS)
based on multiple quality of service (QoS) constraints. These multiple
constraints can be in the form of physical-layer impairments, transmission
delay, and reliability of the link. Each application requires its own QoS
threshold attributes. Destinations qualify only if they satisfy the required
QoS constraints set up by the application. We have developed a
mathematical model based on lattice algebra for this multiconstraint
problem. Due to multiple constraints, burst blocking could be high. We
propose two algorithms to minimize request blocking for the
multiconstrained manycast (MCM) problem. Using extensive simulation
results, we have calculated the average request blocking for the proposed
algorithms. Our simulation results show that the MCM-shortest-path-tree
(MCM-SPT) algorithm performs better than MCM-dynamic-membership
(MCM-DM) for delay-constrained and real-time services, whereas
data services can be better provisioned using the MCM-DM algorithm.
3. On Wireless Scheduling Algorithms for Minimizing the
Queue-Overflow Probability
ABSTRACT:-
In this paper, we are interested in wireless scheduling algorithms for
the downlink of a single cell that can minimize the queue-overflow
probability. Specifically, in a large-deviation setting, we are interested in
algorithms that maximize the asymptotic decay rate of the queue-overflow
probability, as the queue-overflow threshold approaches infinity. We first
derive an upper bound on the decay rate of the queue-overflow probability
over all scheduling policies. We then focus on a class of scheduling
algorithms collectively referred to as the “α-algorithms.” For a
given α ≥ 1, the α-algorithm picks for service at each time the user
that has the largest product of the transmission rate multiplied by the
backlog raised to the power α. We show that when the overflow
metric is appropriately modified, the minimum cost to overflow under
the α-algorithm can be achieved by a simple linear path, and it can be
written as the solution of a vector-optimization problem. Using this
structural property, we then show that as α approaches infinity,
the α-algorithms asymptotically achieve the largest decay rate of the
queue-overflow probability. Finally, this result enables us to design
scheduling algorithms that are both close to optimal in terms of the
asymptotic decay rate of the overflow probability and empirically shown to
maintain small queue-overflow probabilities over queue-length ranges of
practical interest.
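The selection rule described above can be sketched directly: at each time, serve the user with the largest product of transmission rate and backlog raised to the power α. The function name and list-based interface are illustrative assumptions.

```python
def alpha_schedule(rates, backlogs, alpha):
    """The α-algorithm's per-slot choice: return the index of the user
    maximizing rate * backlog**alpha. `rates` and `backlogs` are
    equal-length lists of current transmission rates and queue backlogs."""
    return max(range(len(rates)), key=lambda i: rates[i] * backlogs[i] ** alpha)
```

Note how raising α shifts the choice toward the user with the larger backlog, which is the mechanism behind the asymptotic optimality result.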
4. A Distributed CSMA Algorithm for Throughput and Utility
Maximization in Wireless Networks
ABSTRACT:-
In multihop wireless networks, designing distributed scheduling
algorithms to achieve the maximal throughput is a challenging problem
because of the complex interference constraints among different links.
Traditional maximal-weight scheduling (MWS), although throughput-
optimal, is difficult to implement in distributed networks. On the other hand,
a distributed greedy protocol similar to IEEE 802.11 does not guarantee the
maximal throughput. In this paper, we introduce an adaptive carrier-sense
multiple access (CSMA) scheduling algorithm that can achieve the maximal
throughput in a distributed manner. Some of the major advantages of the
algorithm are
that it applies to a very general interference model and that it is simple,
distributed, and asynchronous. Furthermore, the algorithm is combined with
congestion control to achieve the optimal utility and fairness of competing
flows. Simulations verify the effectiveness of the algorithm. Also, the
adaptive CSMA scheduling is a modular MAC-layer algorithm that can be
combined with various protocols in the transport layer and network layer.
Finally, the paper explores some implementation issues in the setting of
802.11 networks.
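One common way to picture this kind of adaptive CSMA, sketched here under the idealized conflict-graph model rather than as the paper's exact algorithm, is: a link senses the channel, stays off if any conflicting link is active, and otherwise turns on with a probability that grows with its weight (which an adaptive scheme would tie to queue lengths). The function below is deterministic given a supplied random draw, purely for illustration.

```python
import math

def csma_update(link, active, conflicts, weights, coin):
    """One idealized CSMA update for `link`. If any conflicting link is
    active, the channel is sensed busy and the link stays off. Otherwise
    the link turns on with probability e^w / (1 + e^w), using `coin` in
    [0, 1) as the random draw. Returns the new set of active links."""
    active = set(active)
    active.discard(link)
    if conflicts[link] & active:
        return active  # carrier sense: a conflicting neighbor is transmitting
    p_on = math.exp(weights[link]) / (1.0 + math.exp(weights[link]))
    if coin < p_on:
        active.add(link)
    return active
```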
5. A Distributed Algorithm for Min-Max and Max-Min Cut
Problems in Communication Networks
ABSTRACT:-
We consider the problem of finding a multicast tree rooted at the
source node and including all the destination nodes such that the maximum
weight of the tree arcs is minimized. It is of paramount importance for many
optimization problems, e.g., the maximum-lifetime multicast problem in
multihop wireless networks, in the data networking community. We explore
some important properties of this problem from a graph theory perspective
and obtain a min-max-tree max-min-cut theorem, which provides a unified
explanation for some important but previously separate results in the recent
literature. We also apply the theorem to derive an algorithm that can
construct a global optimal min-max multicast tree in a distributed fashion. In
random networks with n nodes and m arcs, our theoretical analysis shows
that the expected communication complexity of our distributed algorithm is
on the order of O(m). Specifically, the average number of messages is at
most 2(n - 1 - γ) - 2 ln(n - 1) + m, where γ is the Euler constant. To the
best of our knowledge, this is the first contribution with the distributed and
scalable properties for the min-max multicast problem and is especially
desirable to the large-scale resource-limited multihop wireless networks, like
sensor networks.
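The min-max objective above (minimize the maximum arc weight of a tree connecting the source to all destinations) can be computed centrally with a simple Kruskal-style sweep; this sketch only illustrates the objective and is not the paper's distributed algorithm.

```python
def min_max_bottleneck(n, edges, source, dests):
    """Smallest achievable maximum arc weight over trees connecting
    `source` to every node in `dests`: add edges in increasing weight
    order (union-find); the weight of the merging edge that first makes
    all destinations reachable is the optimum. edges: list of (u, v, w)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    if all(d == source for d in dests):
        return 0  # trivially connected
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # redundant edge, skip
        parent[ru] = rv
        if all(find(d) == find(source) for d in dests):
            return w
    return None  # some destination is unreachable
```

Any spanning structure built from the edges added up to that point realizes the optimum, which is why the bottleneck value alone characterizes the min-max tree.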
6. SQUID: A Practical 100% Throughput Scheduler
ABSTRACT:-
Crosspoint buffered switches are emerging as the focus of research in
high-speed routers. They have simpler scheduling algorithms and achieve
better performance than bufferless crossbar switches. Crosspoint buffered
switches have a buffer at each crosspoint. A cell is first delivered to a
crosspoint buffer, and then transferred to the output port. With a speedup of
2, a crosspoint buffered switch has previously been proved to provide 100%
throughput. In this paper, we propose two 100% throughput scheduling
algorithms without speedup for crosspoint buffered switches, called
SQUISH and SQUID. We prove that both schemes can achieve 100%
throughput for any admissible Bernoulli traffic, with the minimum required
crosspoint buffer size being as small as a single cell buffer. Both schemes
have a low time complexity of O(logN), where N is the switch size.
Simulation results show a delay performance comparable to output-queued
switches. We also present a novel queuing model that models crosspoint
buffered switches under uniform traffic.
7. Network Coding-Aware Routing in Wireless Networks
ABSTRACT:-
A recent approach, COPE, presented by Katti et al. (Proc. ACM SIGCOMM
2006, pp. 243-254), for improving the throughput of unicast traffic in wireless
multihop networks exploits the broadcast nature of the wireless medium
through opportunistic network coding. In this paper, we analyze throughput
improvements obtained by COPE-type network coding in wireless networks
from a theoretical perspective. We make two key contributions. First, we
obtain a theoretical formulation for computing the throughput of network
coding on any wireless network topology and any pattern of concurrent
unicast traffic sessions. Second, we advocate that routing be made aware of
network coding opportunities rather than, as in COPE, being oblivious to
them.
More importantly, our model considers the tradeoff between routing flows
close to each other for utilizing coding opportunities and away from each
other for avoiding wireless interference. Our theoretical formulation
provides a method for computing source-destination routes and utilizing the
best coding opportunities from available ones so as to maximize the
throughput. We handle scheduling of broadcast transmissions subject to
wireless transmit/receive diversity and link interference in our optimization
framework. Using our formulations, we compare the performance of
traditional unicast routing and network coding with coding-oblivious and
coding-aware routing on a variety of mesh network topologies, including
some derived from contemporary mesh network test beds. Our evaluations
show that a route selection strategy that is aware of network coding
opportunities leads to higher end-to-end throughput when compared to
coding-oblivious routing strategies.
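The opportunistic coding operation that COPE-type schemes rely on is a simple XOR: a relay holding packets destined for two different neighbors broadcasts their XOR in one transmission, and each neighbor recovers the packet it is missing by XOR-ing with the packet it already holds. A minimal sketch:

```python
def xor_encode(pkt_a, pkt_b):
    """COPE-style XOR of two equal-length packets. The relay broadcasts
    xor_encode(a, b) once; a neighbor that already has one of the two
    packets applies the same XOR again to recover the other."""
    return bytes(x ^ y for x, y in zip(pkt_a, pkt_b))
```

This is why two unicast transmissions can be replaced by one broadcast, which is the throughput gain the routing formulation tries to expose.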
8. Efficient Multicast Algorithms for Multichannel Wireless Mesh Networks
This paper appears in: Parallel and Distributed Systems, IEEE Transactions
on
Issue Date: Jan. 2010
Volume: 21 Issue:1
On page(s): 86 - 99
ISSN: 1045-9219
INSPEC Accession Number: 11000082
Digital Object Identifier: 10.1109/TPDS.2009.46
Date of Publication: 21 March 2009
Date of Current Version: 01 December 2009
Sponsored by: IEEE Computer Society
ABSTRACT:-
The wireless mesh network is an emerging technology that provides
high-quality service to end users as the “last mile” of the Internet.
Furthermore, multicast communication is a key technology for wireless
mesh networks. Multicast provides efficient data distribution among a group
of nodes. However, unlike other wireless networks, such as sensor networks
and MANETs, where multicast algorithms are designed to be energy
efficient and to achieve optimal route discovery among mobile nodes,
wireless mesh networks need to maximize throughput. This paper proposes
two multicast algorithms: the level channel assignment (LCA) algorithm and
the multichannel multicast (MCM) to improve the throughput for
multichannel and multi-interface mesh networks. The algorithms build
efficient multicast trees by minimizing the number of relay nodes and total
hop count distances of the trees. The algorithms use dedicated channel
assignment strategies to reduce the interference to improve the network
capacity. We also demonstrate that using partially overlapping channels can
further diminish the interference. Furthermore, additional interfaces help to
increase the bandwidth, and multiple gateways can further shorten the total
hop count distance. Simulations show that those algorithms greatly
outperform the single-channel multicast algorithm. We also observe that
MCM achieves better throughput and shorter delay, while LCA can be
realized in a distributed manner.
9. Efficient Broadcasting Using Network Coding and Directional
Antennas in MANETs
This paper appears in: INFOCOM 2008. The 27th Conference on Computer
Communications. IEEE
Issue Date: 13-18 April 2008
On page(s): 1499 - 1507
Location: Phoenix, AZ
ISSN: 0743-166X
Print ISBN: 978-1-4244-2025-4
INSPEC Accession Number: 9962388
Digital Object Identifier: 10.1109/INFOCOM.2008.209
Date of Current Version: 02 May 2008
ABSTRACT:-
In this paper, we consider the issue of efficient broadcasting in mobile
ad hoc networks (MANETs) using network coding and directional antennas.
Network coding-based broadcasting focuses on reducing the number of
transmissions each forwarding node performs in the multiple source/multiple
message broadcast application, where each forwarding node combines some
of the received messages for transmission. With the help of network coding,
the total number of transmissions can be reduced compared to broadcasting
using the same forwarding nodes without coding. We exploit the usage of
directional antennas to network coding-based broadcasting to further reduce
energy consumption. A node equipped with directional antennas can divide
the omnidirectional transmission range into several sectors and turn some
of them on for transmission. In the proposed scheme using a directional
antenna, forwarding nodes selected locally only need to transmit broadcast
messages, original or coded, to restricted sectors. We also study two
extensions. The first extension applies network coding to both dynamic and
static forwarding node selection approaches. In the second extension, we
design two approaches for the single source/single message issue in the
network coding-based broadcast application. Performance analysis via
simulations on the proposed algorithms using a custom simulator is
presented.
10. A Multichannel Scheduler for High-Speed Wireless
Backhaul Links with Packet Concatenation
ABSTRACT:-
Capacity has been an important issue for many wireless backhaul
networks. Both the multihop nature and the large per-packet channel-access
overhead can lead to low channel efficiency. The problem may get even
worse when there are many applications transmitting packets with small data
payloads, e.g., Voice over Internet protocol (VoIP). Previously, the use of
multiple parallel channels and employing packet concatenation were treated
as separate solutions to these problems. However, there is no available work
on the integrated design and performance analysis of a complete scheduler
architecture combining these two schemes. In this paper, we propose a
scheduler that concatenates small packets into large frames and sends them
through multiple parallel channels with an intelligent channel selection
algorithm between neighboring nodes. Besides the expected capacity
improvements, we also derive delay bounds for this scheduler. Based on the
delay bound formula, call admission control (CAC) of a broad range of
scheduling algorithms can be obtained. We demonstrate the significant
capacity and resequencing delay improvements of this novel design with a
voice-data traffic mixing example, via both numerical and simulation results.
It is shown that the proposed packet concatenation and channel selection
algorithms greatly outperform the round-robin scheduler in a multihop
scenario.
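The two mechanisms being combined can be sketched together: small packets are greedily concatenated into frames bounded by an MTU, and each frame is then sent on the least-loaded of the parallel channels. The greedy packing and least-loaded selection below are illustrative stand-ins for the paper's scheduler and channel-selection algorithm.

```python
def schedule(packets, num_channels, mtu):
    """Concatenate packets (given as payload lengths in bytes) into
    frames no larger than `mtu`, then assign each frame to the channel
    with the fewest queued bytes. Returns per-channel frame queues."""
    frames, current, size = [], [], 0
    for pkt in packets:
        if current and size + pkt > mtu:
            frames.append(current)      # flush the full frame
            current, size = [], 0
        current.append(pkt)
        size += pkt
    if current:
        frames.append(current)
    channels = [[] for _ in range(num_channels)]
    loads = [0] * num_channels
    for frame in frames:
        c = loads.index(min(loads))     # least-loaded channel
        channels[c].append(frame)
        loads[c] += sum(frame)
    return channels
```

Concatenation amortizes the per-packet channel-access overhead, while channel selection keeps the parallel channels evenly loaded, the two effects the paper's delay analysis quantifies.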
11. Efficient Load-Aware Routing Scheme for Wireless Mesh
Networks
ABSTRACT:-
This paper proposes a load-aware routing scheme for wireless mesh
networks (WMNs). In a WMN, the traffic load tends to be unevenly
distributed over the network. In this situation, the load-aware routing scheme
can balance the load, and consequently, enhance the overall network
capacity. We design a routing scheme which maximizes the utility, i.e., the
degree of user satisfaction, by using the dual decomposition method. The
structure of this method makes it possible to implement the proposed routing
scheme in a fully distributed way. With the proposed scheme, a WMN is
divided into multiple clusters for load control. A cluster head estimates
traffic load in its cluster. As the estimated load gets higher, the cluster head
increases the routing metrics of the routes passing through the cluster. Based
on the routing metrics, user traffic takes a detour to avoid overloaded areas,
and as a result, the WMN achieves global load balancing. We present the
numerical results showing that the proposed scheme effectively balances the
traffic load and outperforms the routing algorithm using the expected
transmission time (ETT) as a routing metric.
12. DCAR: Distributed Coding-Aware Routing in Wireless
Networks
This paper appears in: Distributed Computing Systems, 2008. ICDCS '08.
The 28th International Conference on
Issue Date: 17-20 June 2008
On page(s): 462 - 469
Location: Beijing
ISSN: 1063-6927
Print ISBN: 978-0-7695-3172-4
INSPEC Accession Number: 10131664
Digital Object Identifier: 10.1109/ICDCS.2008.84
Date of Current Version: 12 August 2008
ABSTRACT:-
The practical network coding system proposed in (S. Katti et al.,
2006) has two fundamental limitations: 1) the coding opportunity is crucially
dependent on the established routes; 2) the coding structure is limited within
a two-hop region. To overcome these limitations, we propose DCAR, the
first distributed coding-aware routing mechanism which combines (a) the
discovery for available paths between a given source and destination, and (b)
the detection for potential network coding opportunities. DCAR has the
potential to find high throughput paths with coding opportunities while
conventional routing fails to do so. In addition, DCAR can detect coding
opportunities on the entire path, thus eliminating the "two-hop" coding
limitation in (S. Katti et al., 2006). We also propose a novel routing metric
called "CRM" (coding-aware routing metric) which facilitates the
comparison between coding-possible and coding-impossible paths. We
implement the DCAR system in NS-2 and conduct extensive evaluation,
which shows that DCAR achieves a 7% to 20% throughput gain over the
coding system of (S. Katti et al., 2006).
13. DGRAM: A Delay-Guaranteed Routing and MAC Protocol
for Wireless Sensor Networks
ABSTRACT:-
This paper presents an integrated MAC and routing protocol called
Delay Guaranteed Routing and MAC (DGRAM) for delay-sensitive wireless
sensor network (WSN) applications. DGRAM is a TDMA-based protocol
designed to provide deterministic delay guarantee in an energy-efficient
manner. The design is based on slot reuse to reduce latency of a node in
accessing the medium, while ensuring that the medium access is contention-
free. The transmission and reception slots of nodes are carefully computed
so that data is transported from the source toward the sink while the nodes
could sleep at the other times to conserve energy. Thus, routes of data
packets are integrated into DGRAM, i.e., there is no need for a separate
routing protocol in a DGRAM network. We provide detailed design of time
slot assignment and delay analysis of the protocol. We have simulated
DGRAM using the ns-2 simulator and compared the results with those of
FlexiTP, another TDMA protocol that claims to provide a delay
guarantee, and with those of a basic TDMA MAC. Simulation results show
that the delay experienced by data packets is always less than the analytical
delay bound for which the protocol is designed. Also, the TDMA frame size
with DGRAM is always smaller than that of FlexiTP, which makes the
maximum possible delay much lower than that of FlexiTP. The average
delay experienced by packets and the average total energy spent in the
network are much lower in a network using DGRAM than in one using
FlexiTP or the basic TDMA MAC.
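The slot-reuse idea can be illustrated with a toy assignment: nodes whose hop distances from the sink are congruent modulo a spatial reuse distance share a transmission slot, so the TDMA frame length is bounded by the reuse distance rather than by the number of nodes. This is an illustrative sketch, not DGRAM's actual slot computation.

```python
def assign_slots(depths, reuse_distance):
    """Toy TDMA slot assignment with spatial reuse. depths: dict mapping
    node -> hop count from the sink. Nodes whose depths are equal modulo
    `reuse_distance` share a slot; nodes that far apart are assumed not
    to interfere, so the frame needs only `reuse_distance` slots."""
    return {node: depth % reuse_distance for node, depth in depths.items()}
```

Bounding the frame length this way is what reduces the worst-case medium-access latency, the quantity DGRAM's delay analysis bounds.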
14. IRM: Integrated File Replication and Consistency
Maintenance in P2P Systems
This paper appears in: Parallel and Distributed Systems, IEEE Transactions
on
Issue Date: Jan. 2010
Volume: 21 Issue:1
On page(s): 100 - 113
ISSN: 1045-9219
INSPEC Accession Number: 11002164
Digital Object Identifier: 10.1109/TPDS.2009.43
Date of Publication: 12 March 2009
Date of Current Version: 01 December 2009
Sponsored by: IEEE Computer Society
ABSTRACT:-
In peer-to-peer file sharing systems, file replication and consistency
maintenance are widely used techniques for high system performance.
Despite significant interdependencies between them, these two issues are
typically addressed separately. Most file replication methods rigidly specify
replica nodes, leading to low replica utilization, unnecessary replicas and
hence extra consistency maintenance overhead. Most consistency
maintenance methods propagate update messages based on message
spreading or a structure without considering file replication dynamism,
leading to inefficient file update and hence high possibility of outdated file
response. This paper presents an Integrated file Replication and consistency
Maintenance mechanism (IRM) that integrates the two techniques in a
systematic and harmonized manner. It achieves high efficiency in file
replication and consistency maintenance at a significantly low cost. Instead
of passively accepting replicas and updates, each node determines file
replication and update polling by dynamically adapting to time-varying file
query and update rates, which avoids unnecessary file replications
and updates. Simulation results demonstrate the effectiveness of IRM in
comparison with other approaches. It dramatically reduces overhead and
yields significant improvements on the efficiency of both file replication
and consistency maintenance approaches.
15. A Dynamic Performance-Based Flow Control Method for
High-Speed Data Transfer
This paper appears in: Parallel and Distributed Systems, IEEE Transactions
on
Issue Date: Jan. 2010
Volume: 21 Issue:1
On page(s): 114 - 125
ISSN: 1045-9219
INSPEC Accession Number: 11000077
Digital Object Identifier: 10.1109/TPDS.2009.37
Date of Publication: 06 March 2009
Date of Current Version: 01 December 2009
Sponsored by: IEEE Computer Society
ABSTRACT:-
New types of specialized network applications are being created that
need to be able to transmit large amounts of data across dedicated network
links. TCP fails to be a suitable method of bulk data transfer in many of
these applications, giving rise to new classes of protocols designed to
circumvent TCP's shortcomings. It is typical in these high-
performance applications, however, that the system hardware is simply
incapable of saturating the bandwidths supported by the network
infrastructure. When the bottleneck for data transfer occurs in the system
itself and not in the network, it is critical that the protocol scales gracefully
to prevent buffer overflow and packet loss. It is therefore necessary to
build a high-speed protocol adaptive to the performance of each system by
including a dynamic performance-based flow control. This paper develops
such a protocol, performance adaptive UDP (henceforth PA-UDP), which
aims to dynamically and autonomously maximize performance under
different systems. A mathematical model and related algorithms are
proposed to describe the theoretical basis behind effective buffer and CPU
management. A novel delay-based rate-throttling model is also demonstrated
to be very accurate under diverse system latencies. Based on these models,
we implemented a prototype under Linux, and the experimental results
demonstrate that PA-UDP outperforms other existing high-speed protocols
on commodity hardware in terms of throughput, packet loss, and CPU
utilization. PA-UDP is efficient not only for high-speed research networks,
but also for reliable high-performance bulk data transfer over dedicated local
area networks where congestion and fairness are typically not a concern.
16. Maximizing Restorable Throughput in MPLS Networks
This paper appears in: INFOCOM 2008. The 27th Conference on Computer
Communications. IEEE
Issue Date: 13-18 April 2008
On page(s): 2324 - 2332
Location: Phoenix, AZ
ISSN: 0743-166X
Print ISBN: 978-1-4244-2025-4
INSPEC Accession Number: 9945310
Digital Object Identifier: 10.1109/INFOCOM.2008.301
Date of Current Version: 02 May 2008
ABSTRACT:-
MPLS recovery mechanisms are increasing in popularity because they
can guarantee fast restoration and high QoS assurance. Their main advantage
is that their backup paths are established in advance, before a failure event
takes place. Most research on the establishment of primary and backup paths
has focused on minimizing the added capacity required by the backup
paths in the network. However, this so-called spare capacity allocation
(SCA) metric is less practical for network operators who have a fixed
capacitated network and want to maximize their revenues. In this paper we
present a comprehensive study on restorable throughput maximization in
MPLS networks. We present the first polynomial-time algorithms for the
splittable version of the problem. For the unsplittable version, we provide a
lower bound on the approximation ratio. We present efficient heuristics
which are shown to have excellent performance. One of our most important
conclusions is that when one seeks to maximize revenue, local recovery
should be the recovery scheme of choice.
17. Elastic Routing Table with Provable Performance for
Congestion Control in DHT Networks
This paper appears in: Distributed Computing Systems, 2006. ICDCS 2006.
26th IEEE International Conference on
Issue Date: 2006
On page(s): 15 - 15
ISSN: 1063-6927
Print ISBN: 0-7695-2540-7
Digital Object Identifier: 10.1109/ICDCS.2006.35
Date of Current Version: 24 July 2006
ABSTRACT:-
Distributed hash table (DHT) networks based on consistent hashing
functions have an inherent load balancing problem. The problem becomes
more severe due to the heterogeneity of network nodes and the non-uniform
and time-varying file popularity. Existing DHT load-balancing algorithms
mainly focus on the issues caused by node heterogeneity. To
deal with skewed lookups, this paper presents an elastic routing table (ERT)
mechanism for query load balancing, based on the observation that high-
degree nodes tend to experience more traffic load. The mechanism allows
each node to have a routing table of variable size corresponding to its
capacity. The indegree and outdegree of the routing table can also be
adjusted dynamically in response to the change of file popularity and
network churn. Theoretical analysis proves the routing table degree is
bounded. The ERT mechanism facilitates locality-aware randomized query
forwarding to further improve lookup efficiency. By relating query
forwarding to a supermarket customer-service model, we prove that a 2-way
randomized query forwarding policy leads to an exponential
improvement in query processing time over random walking. Simulation
results demonstrate the effectiveness of the ERT mechanism and its related
query forwarding policy for congestion and query load
balancing. In comparison with the existing "virtual-server"-based load-
balancing algorithm and other routing table control approaches, the ERT-
based congestion control protocol yields significant improvements in query
lookup efficiency.
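The 2-way randomized forwarding policy follows the classic "power of two choices" rule from the supermarket model: sample two candidate next hops and forward the query to the one with the smaller queue. In this minimal sketch the sampling function is passed in so the behavior is deterministic for illustration.

```python
def forward_two_choices(neighbors, queue_len, pick_two):
    """Forward to the less-loaded of two sampled next hops.
    neighbors: list of candidate next-hop ids;
    queue_len: dict mapping neighbor -> current query-queue length;
    pick_two: callable returning two candidates from `neighbors`."""
    a, b = pick_two(neighbors)
    return a if queue_len[a] <= queue_len[b] else b
```

In deployment `pick_two` would sample uniformly at random; comparing just two queues is what yields the exponential improvement over a single random choice in the supermarket model.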
18. Improving Reliability for Application-Layer Multicast
Overlays
This paper appears in: Parallel and Distributed Systems, IEEE Transactions
on
Issue Date: Aug. 2010
Volume: 21 Issue:8
On page(s): 1103 - 1116
ISSN: 1045-9219
INSPEC Accession Number: 11388499
Digital Object Identifier: 10.1109/TPDS.2009.166
Date of Publication: 07 January 2010
Date of Current Version: 28 June 2010
Sponsored by: IEEE Computer Society
ABSTRACT:-
The unreliability of tree-like multicast overlays caused by nodes' abrupt
failures is considered one of the major problems for the
Internet application-layer media streaming service. In this paper, we address
this problem by designing a distributed and lightweight protocol named
the instantaneous reliability oriented protocol (IRP). Unlike most existing
empirical solutions, we first define the overlay reliability problem formally,
and propose a protocol containing a node joining algorithm (IRP-Join), a
node preemption algorithm (IRP-Preempt), and a node switching algorithm
(IRP-Switch) for reactively constructing and repairing the overlay, as well as
proactively maintaining the overlay. With the formal problem presentation,
we set up a paradigm for solving the overlay reliability problem by
theoretically proving the effectiveness of our algorithms. Moreover, by
comparing IRP with existing solutions via simulation-based experiments and
real-world deployment, we show that IRP achieves better reliability while
incurring fewer structural adjustments on the multicast overlay, thus
providing superior overall performance.
19. Towards an Effective XML Keyword Search
ABSTRACT:-
Inspired by the great success of information retrieval (IR)-
style keyword search on the web, keyword search on XML has emerged
recently. The differences between text databases and XML databases result
in three new challenges: 1) Identify the user's search intention, i.e., identify
the XML node types that the user wants to search for and search via. 2)
Resolve keyword ambiguity problems: a keyword can appear as both a tag
name and a text value of some node; a keyword can appear as the text values
of different XML node types and carry different meanings; a keyword can
appear as the tag name of different XML node types with different
meanings. 3) As the search results are subtrees of the XML document, a new
scoring function is needed to estimate their relevance to a given query.
However, existing methods cannot resolve these challenges and thus return
low result quality in terms of query relevance. In this paper, we propose an IR-
style approach which basically utilizes the statistics of underlying XML data
to address these challenges. We first propose specific guidelines that
a search engine should meet in both search intention identification and
relevance oriented ranking for search results. Then, based on these
guidelines, we design novel formulae to identify the search-for nodes
and search-via nodes of a query, and present a novel XML TF*IDF ranking
strategy to rank the individual matches of all possible search intentions. To
complement our result ranking framework, we also take the popularity into
consideration for the results that have comparable relevance scores. Lastly,
extensive experiments have been conducted to show the effectiveness of our
approach.
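The flavor of TF*IDF scoring adapted in the paper can be illustrated with the plain text-retrieval form, computed here over the text values of a single node type; the actual XML TF*IDF strategy additionally uses node-type statistics for search-for and search-via identification, which this sketch omits.

```python
import math

def tfidf_scores(keyword, node_values):
    """Plain TF*IDF over the text values of one XML node type.
    node_values: list of text strings, one per node. Returns one score
    per node: term frequency times log(N / number of nodes containing
    the keyword)."""
    n_containing = sum(1 for v in node_values if keyword in v.split())
    if n_containing == 0:
        return [0.0] * len(node_values)
    idf = math.log(len(node_values) / n_containing)
    return [v.split().count(keyword) * idf for v in node_values]
```

A rarer keyword gets a larger IDF, so matches on it dominate the ranking, the same intuition the XML variant applies per node type.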
20. Deriving Concept-Based User Profiles from Search Engine
Logs
ABSTRACT:-
User profiling is a fundamental component of any personalization
application. Most existing user profiling strategies are based on objects
that users are interested in (i.e., positive preferences), but not on objects that
users dislike (i.e., negative preferences). In this paper, we focus
on search engine personalization and develop several concept-based user
profiling methods that are based on both positive and negative preferences.
We evaluate the proposed methods against our previously proposed
personalized query clustering method. Experimental results show that
profiles which capture and utilize both the user's positive and negative
preferences perform best. An important result from the experiments is
that profiles with negative preferences can increase the separation between
similar and dissimilar queries. The separation provides a clear threshold for
an agglomerative clustering algorithm to terminate and improve the overall
quality of the resulting query clusters.
21. LIGHT: A Query-Efficient yet Low-Maintenance Indexing
Scheme over DHTs
ABSTRACT:-
DHT is a widely used building block for scalable P2P systems.
However, as uniform hashing employed in DHTsdestroys data locality, it is
not a trivial task to support complex queries (e.g., range queries and k-
nearest-neighbor queries) in DHT-based P2P systems. In order to
support efficient processing of such complex queries, a popular solution is to
build indexes on top of the DHT. Unfortunately, existing over-
DHT indexing schemes suffer from either query inefficiency or
high maintenance cost. In this paper, we propose LIGhtweight Hash Tree
(LIGHT), a query-efficient yet low-maintenance indexing scheme.
LIGHT employs a novel naming mechanism and a tree summarization
strategy for graceful distribution of its index structure. We show through
analysis that it can support various complex queries with near-optimal
performance. Extensive experimental results also demonstrate that,
compared with state-of-the-art over-DHT indexing schemes, LIGHT saves
50-75 percent of index maintenance cost and substantially improves query
performance in terms of both response time and bandwidth consumption. In
addition, LIGHT is designed over generic DHTs and hence can be easily
implemented and deployed in any DHT-based P2P system.
22. Learning with Positive and Unlabeled Examples Using
Topic-Sensitive PLSA
ABSTRACT:-
It is often difficult and time-consuming to provide a large amount
of positive and negative examples for training a classification system in
many applications such as information retrieval. Instead, users often find it
easier to indicate just a few positive examples of what they
like, and thus, these are the only labeled examples available to
the learning system. A large amount of unlabeled data is easier to obtain.
How to make use of the positive and unlabeled data for learning is a critical
problem in machine learning and information retrieval. Several approaches
for solving this problem have been proposed in the past, but most of these
methods do not work well when only a small amount of labeled positive data
are available. In this paper, we propose a novel algorithm called Topic-
Sensitive pLSA to solve this problem. This algorithm extends the original
probabilistic latent semantic analysis (pLSA), which is a purely
unsupervised framework, by injecting a small amount of supervision
information from the user. The supervision from users is in the form of
indicating which documents fit the users' interests. The supervision is
encoded into a set of constraints. By introducing the penalty terms for these
constraints, we propose an objective function that trades off the likelihood of
the observed data and the enforcement of the constraints. We develop an
iterative algorithm that can obtain the local optimum of the objective
function. Experimental evaluation on three data corpora shows that the
proposed method can improve performance, especially when only a small
amount of labeled positive data is available.
23. Incremental Evaluation of Visible Nearest Neighbor
Queries
ABSTRACT:-
In many applications involving spatial objects, we are only interested
in objects that are directly visible from query points. In this paper, we
formulate the visible k nearest neighbor (VkNN) query and present
incremental algorithms as a solution, with two variants differing in how to
prune objects during the search process. One variant applies visibility
pruning to only objects, whereas the other variant applies visibility pruning
to index nodes as well. Our experimental results show that the latter
outperforms the former. We further propose the aggregate VkNN query that
finds the visible k nearest objects to a set of query points based on an
aggregate distance function. We also propose two approaches to processing
the aggregate VkNN query. One accesses the database via multiple
VkNN queries, whereas the other issues an aggregate nearest neighbor
query to retrieve objects from the database and then re-ranks the results based
on the aggregate visible distance metric. With extensive experiments, we
show that the latter approach consistently outperforms the former one.
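As a minimal illustration of visibility pruning applied to objects (the paper's algorithms additionally prune index nodes, which this sketch omits), the brute-force version below keeps only objects whose line of sight from the query point crosses no obstacle segment, then ranks the survivors by distance. The geometry here is our own simplification.

```python
def _ccw(a, b, c):
    # Signed area orientation test for points in the plane.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _segments_cross(p, q, a, b):
    # Proper intersection via orientation signs (shared endpoints ignored).
    return (_ccw(p, q, a) * _ccw(p, q, b) < 0) and (_ccw(a, b, p) * _ccw(a, b, q) < 0)

def visible_knn(query, objects, obstacles, k):
    """Return the k nearest objects directly visible from `query`."""
    visible = [o for o in objects
               if not any(_segments_cross(query, o, a, b) for a, b in obstacles)]
    visible.sort(key=lambda o: (o[0] - query[0]) ** 2 + (o[1] - query[1]) ** 2)
    return visible[:k]
```

An obstacle between the query and an otherwise-nearest object removes it from the answer entirely, which is why ordinary kNN pruning cannot be reused unchanged.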
24. MABS: Multicast Authentication Based on Batch Signature
ABSTRACT:-
Conventional block-based multicast authentication schemes overlook
the heterogeneity of receivers by letting the sender choose the block size,
divide a multicast stream into blocks, associate each block with a signature,
and spread the effect of the signature across all the packets in the block
through hash graphs or coding algorithms. The correlation among packets
makes them vulnerable to packet loss, which is inherent in the Internet and
wireless networks. Moreover, the lack of Denial of Service (DoS) resilience
renders most of them vulnerable to packet injection in hostile environments.
In this paper, we propose a novel multicast authentication protocol,
namely MABS, including two schemes. The basic scheme (MABS-B)
eliminates the correlation among packets and thus provides the perfect
resilience to packet loss, and it is also efficient in terms of latency,
computation, and communication overhead due to an efficient cryptographic
primitive called batch signature, which supports the authentication of any
number of packets simultaneously. We also present an enhanced
scheme MABS-E, which combines the basic scheme with a packet filtering
mechanism to alleviate the DoS impact while preserving the perfect
resilience to packet loss.
25. Secure Data Collection in Wireless Sensor Networks Using
Randomized Dispersive Routes
ABSTRACT:-
Compromised node and denial of service are two key
attacks in wireless sensor networks (WSNs). In this paper, we
study data delivery mechanisms that can with high probability circumvent
black holes formed by these attacks. We argue that classic multipath routing
approaches are vulnerable to such attacks, mainly due to their deterministic
nature: once the adversary acquires the routing algorithm, it can compute
the same routes known to the source, making all information sent
over these routes vulnerable to its attacks. In this paper, we develop
mechanisms that generate randomized multipath routes. Under our designs,
the routes taken by the “shares” of different packets change over time.
So even if the routing algorithm becomes known to the adversary, the
adversary still cannot pinpoint the routes traversed by each packet. Besides
randomness, the generated routes are also highly dispersive and energy
efficient, making them quite capable of circumventing black holes. We
analytically investigate the security and energy performance of the proposed
schemes. We also formulate an optimization problem to minimize the end-
to-end energy consumption under given security constraints. Extensive
simulations are conducted to verify the validity of our mechanisms.
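The share mechanism can be sketched as follows (a hedged simplification: plain XOR splitting and a random-walk detour stand in for the paper's randomized propagation schemes). An adversary sitting on a black hole must capture every share of a packet, and since each share wanders independently, no fixed set of captured routes suffices.

```python
import os
import random

def split_shares(packet, n):
    """Split `packet` into n XOR shares; all n are needed to reconstruct."""
    shares = [os.urandom(len(packet)) for _ in range(n - 1)]
    last = bytearray(packet)
    for share in shares:
        for i, b in enumerate(share):
            last[i] ^= b
    return shares + [bytes(last)]

def combine_shares(shares):
    """XOR all shares together to recover the original packet."""
    out = bytearray(len(shares[0]))
    for share in shares:
        for i, b in enumerate(share):
            out[i] ^= b
    return bytes(out)

def random_route(src, dst, neighbors, hops, rng=random):
    """Random-walk detour for `hops` steps, after which the share heads to `dst`."""
    path, cur = [src], src
    for _ in range(hops):
        cur = rng.choice(neighbors[cur])
        path.append(cur)
    return path + [dst]
```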
26. Secure Data Objects Replication in Data Grid
ABSTRACT:-
Secret sharing and erasure coding-based approaches have been used in
distributed storage systems to ensure the confidentiality, integrity, and
availability of critical information. To achieve performance goals in data
accesses, these data fragmentation approaches can be combined with
dynamic replication. In this paper, we consider data partitioning (both secret
sharing and erasure coding) and dynamic replication in data grids, in which
security and data access performance are critical issues. More specifically,
we investigate the problem of optimal allocation of sensitive data objects
that are partitioned by using secret sharing scheme or erasure coding scheme
and/or replicated. The grid topology we consider consists of two layers. In
the upper layer, multiple clusters form a network topology that can be
represented by a general graph. The topology within each cluster is
represented by a tree graph. We decompose the share replica allocation
problem into two subproblems: the optimal intercluster resident set problem
(OIRSP) that determines which clusters need share replicas and the optimal
intracluster share allocation problem (OISAP) that determines the number of
share replicas needed in a cluster and their placements. We develop two
heuristic algorithms for the two subproblems. Experimental studies show
that the heuristic algorithms achieve good performance in reducing
communication cost and are close to optimal solutions.
27. Detecting Application Denial-of-Service Attacks: A Group-
Testing-Based Approach
This paper appears in: Parallel and Distributed Systems, IEEE Transactions
on
Issue Date : Aug. 2010
Volume : 21 , Issue:8
On page(s): 1203 - 1216
ISSN : 1045-9219
INSPEC Accession Number: 11388500
Digital Object Identifier : 10.1109/TPDS.2009.147
Date of Publication : 03 September 2009
Date of Current Version : 28 June 2010
Sponsored by : IEEE Computer Society
ABSTRACT:-
Application DoS attack, which aims at disrupting application service
rather than depleting the network resource, has emerged as a larger threat to
network services than the classic DoS attack. Owing to its high
similarity to legitimate traffic and much lower launching overhead than the
classic DoS attack, this new assault type cannot be efficiently detected or
prevented by existing detection solutions. To identify application DoS attacks,
we propose a novel group testing (GT)-based approach deployed on back-
end servers, which not only offers a theoretical method to obtain short
detection delay and low false positive/negative rate, but also provides an
underlying framework against general network attacks. More specifically,
we first extend the classic GT model with size constraints for practical purposes,
then redistribute the client service requests to multiple virtual servers
embedded within each back-end server machine, according to
specific testing matrices. Based on this framework, we propose a two-mode
detection mechanism using some dynamic thresholds to efficiently identify
the attackers. The focus of this work lies in the detection algorithms
proposed and the corresponding theoretical complexity analysis. We also
provide preliminary simulation results regarding the efficiency and
practicability of this new scheme. Further discussions of implementation
issues and performance enhancements are also appended to show its great
potential.
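To make the group-testing idea concrete, here is a minimal noiseless decoder for a 0/1 testing matrix (our illustration, not the paper's size-constrained construction or two-mode mechanism): any virtual server that shows no attack symptoms clears every client assigned to it, and whoever is never cleared remains in the candidate attacker set.

```python
def decode_attackers(matrix, outcomes):
    """matrix[t][j] == 1 if client j was mapped to virtual server t;
    outcomes[t] is True when that server shows attack symptoms.
    Returns the candidate attacker set under noiseless outcomes."""
    n = len(matrix[0])
    suspects = set(range(n))
    for row, positive in zip(matrix, outcomes):
        if not positive:
            # A symptom-free pool clears every client assigned to it.
            suspects -= {j for j in range(n) if row[j]}
    return sorted(suspects)
```

With a well-chosen (disjunct) matrix the candidate set shrinks to exactly the attackers; the detection delay then depends on how many pools must report before enough clients are cleared.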
28. An Application-Level Data Transparent Authentication
Scheme without Communication Overhead
ABSTRACT:-
With abundant aggregate network bandwidth, continuous data streams
are commonly used in scientific and commercial applications.
Correspondingly, there is an increasing demand of authenticating
these data streams. Existing strategies explore data stream authentication by
using message authentication codes (MACs) on a certain number
of data packets (a data block) to generate a message digest, then either
embedding the digest into the original data, or sending the digest out-of-
band to the receiver. Embedding approaches inevitably change the
original data, which is not acceptable under some circumstances (e.g., when
sensitive information is included in the data). Sending the digest out-of-band
incurs additional communication overhead, which consumes more critical
resources (e.g., power in wireless devices for receiving information) besides
network bandwidth. In this paper, we propose a novel strategy, DaTA, which
effectively authenticates data streams by selectively adjusting some
interspaced delay. This authentication scheme requires no change to the
original data and no additional communication overhead. Modeling-based
analysis and experiments conducted on an implemented prototype system
in a LAN and over the Internet show that our proposed scheme is efficient
and practical.
29. Using Web-Referral Architectures to Mitigate Denial-of-
Service Threats
ABSTRACT:-
The web is a complicated graph, with millions of websites
interlinked together. In this paper, we propose to use this website graph
structure to mitigate flooding attacks on a website, using a new web referral
architecture for privileged service (“WRAPS”). WRAPS allows a legitimate
client to obtain a privilege URL through a simple click on a referral
hyperlink, from a website trusted by the target website. Using that URL, the
client can get privileged access to the target website in a manner that is far
less vulnerable to a distributed denial-of-service (DDoS) flooding attack
than normal access would be. WRAPS does not require changes to
web client software and is extremely lightweight for referrer websites, which
makes its deployment easy. The massive scale of the website graph could
deter attempts to isolate a website through blocking all referrers. We present
the design of WRAPS, and the implementation of a prototype system
used to evaluate our proposal. Our empirical study demonstrates that
WRAPS enables legitimate clients to connect to a website smoothly in
spite of a very intensive flooding attack, at the cost of small overheads on
the website's ISP's edge routers. We discuss the security
properties of WRAPS and a simple approach to encourage many small
websites to help protect an important site during DoS attacks.
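One plausible way to realize a privilege URL, sketched under our own assumptions (the token format, HMAC construction, and shared key are ours, not the paper's): the referrer binds a short MAC to the client address and an expiry, so edge routers can verify privileged requests statelessly and drop unprivileged flood traffic early.

```python
import hashlib
import hmac

EDGE_KEY = b"edge-router-shared-key"  # assumption: provisioned out of band

def make_privilege_url(base, client_ip, expires):
    """Referrer side: mint a privilege URL bound to the client and an expiry."""
    msg = f"{client_ip}|{expires}".encode()
    token = hmac.new(EDGE_KEY, msg, hashlib.sha256).hexdigest()[:16]
    return f"{base}?exp={expires}&tok={token}"

def verify_privilege(client_ip, expires, token, now):
    """Edge-router side: stateless check of the token and expiry."""
    msg = f"{client_ip}|{expires}".encode()
    expected = hmac.new(EDGE_KEY, msg, hashlib.sha256).hexdigest()[:16]
    return now < expires and hmac.compare_digest(token, expected)
```

Because verification needs only the shared key, the check fits in an edge router's fast path, matching the paper's claim of small overhead at the website's ISP edge.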
30. Record Matching over Query Results from Multiple Web
Databases
ABSTRACT:-
Record matching, which identifies the records that represent the
same real-world entity, is an important step for data integration. Most state-
of-the-art record matching methods are supervised, which requires the user
to provide training data. These methods are not applicable to
the Web database scenario, where the records to match are query results
dynamically generated on-the-fly. Such records are query-dependent, and a
prelearned method using training examples from previous query results may
fail on the results of a new query. To address the problem
of record matching in the Web database scenario, we present an
unsupervised, online record matching method, UDD, which, for a
given query, can effectively identify duplicates from the query
result records of multiple Web databases. After removal of the same-source
duplicates, the “presumed” nonduplicate records from the same source
can be used as training examples, alleviating the burden of users having to
manually label training examples. Starting from the nonduplicate set, we use
two cooperating classifiers, a weighted component similarity summing
classifier and an SVM classifier, to iteratively identify duplicates in
the query results from multiple Web databases. Experimental results show
that UDD works well for the Web database scenario where existing
supervised methods do not apply.
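The weighted component similarity summing idea can be sketched as below (the field similarity, weights, and threshold are illustrative constants of ours; UDD actually adjusts the weights iteratively as the duplicate and nonduplicate sets evolve, alongside the SVM classifier).

```python
def field_sim(a, b):
    """Jaccard similarity over the word tokens of one record field."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def wcss_score(rec1, rec2, weights):
    """Weighted sum of per-field similarities (the 'summing' classifier)."""
    return sum(w * field_sim(f1, f2) for w, f1, f2 in zip(weights, rec1, rec2))

def is_duplicate(rec1, rec2, weights, threshold=0.75):
    return wcss_score(rec1, rec2, weights) >= threshold
```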
31. Vulnerability Discovery with Attack Injection
ABSTRACT:-
The increasing reliance put on networked computer systems
demands higher levels of dependability. This is even more relevant as new
threats and forms of attack are constantly being revealed, compromising the
security of systems. This paper addresses this problem by presenting
an attack injection methodology for the automatic discovery of
vulnerabilities in software components. The proposed methodology,
implemented in AJECT, follows an approach similar to hackers and security
analysts to discover vulnerabilities in network-connected servers. AJECT
uses a specification of the server's communication protocol and predefined
test case generation algorithms to automatically create a large number
of attacks. Then, while it injects these attacks through the network, it
monitors the execution of the server in the target system and the responses
returned to the clients. The observation of an unexpected behavior suggests
the presence of a vulnerability that was triggered by some
particular attack (or group of attacks). This attack can then be used to
reproduce the anomaly and to assist the removal of the error. To assess the
usefulness of this approach, several attack injection campaigns were
performed with 16 publicly available POP and IMAP servers. The results
show that AJECT could effectively be used to locate vulnerabilities, even on
well-known servers tested throughout the years.
32. Capturing Router Congestion and Delay
ABSTRACT:-
Using a unique monitoring experiment, we capture all packets
crossing a (lightly utilized) operational access router from a Tier-1
provider, and use them to provide a detailed examination
of router congestion and packet delays. The complete capture enables not
just statistics as seen from outside the router, but also an accurate
physical router model to be identified. This enables a comprehensive
examination of congestion and delay from three points of view: the
understanding of origins, measurement, and reporting. Our study defines
new methodologies and metrics. In particular, the traffic reporting enables a
rich description of the diversity of micro congestion behavior, without
model assumptions, and at achievable computational cost.
33. Continuous Monitoring of Spatial Queries in Wireless
Broadcast Environments
ABSTRACT:-
Wireless data broadcast is a promising technique for
information dissemination that leverages the computational
capabilities of the mobile devices in order to enhance the scalability of the
system. Under this environment, the data are continuously broadcast by the
server, interleaved with some indexing information for query processing.
Clients may then tune in the broadcast channel and process
their queries locally without contacting the server. Previous work
on spatial query processing for wireless broadcast systems has only
considered snapshot queries over static data. In this paper, we propose an air
indexing framework that 1) outperforms the existing (i.e., snapshot)
techniques in terms of energy consumption while achieving low access
latency and 2) constitutes the first method supporting efficient
processing of continuous spatial queries over moving objects.
34. Energy Maps for Mobile Wireless Networks: Coherence
Time Versus Spreading Period
ABSTRACT:-
We show that even though mobile networks are highly
unpredictable when viewed at the individual node scale, the end-to-end
quality-of-service (QoS) metrics can be stationary when
the mobile network is viewed in the aggregate. We define
the coherence time as the maximum duration for which the end-to-end QoS
metric remains roughly constant, and the spreading period as the minimum
duration required to spread QoS information to all the nodes. We show that
if the coherence time is greater than the spreading period, the end-to-end
QoS metric can be tracked. We focus on the energy consumption as the end-
to-end QoS metric, and describe a novel method by which
an energy map can be constructed and refined in the joint memory of
the mobile nodes. Finally, we show how energy maps can be utilized by an
application that aims to minimize a node's total energy consumption over its
near-future trajectory.
36. Evaluating the Vulnerability of Network Traffic Using Joint
Security and Routing Analysis
ABSTRACT:-
Joint analysis of security and routing protocols in wireless
networks reveals vulnerabilities of secure network traffic that remain
undetected when security and routing protocols are analyzed independently.
We formulate a class of continuous metrics to evaluate the vulnerability
of network traffic as a function of security and routing protocols used in
wireless networks. We develop two complementary vulnerability
definitions using set theoretic and circuit theoretic interpretations
of the security of network traffic, allowing a network analyst or an
adversary to determine weaknesses in the secure network. We formalize
node capture attacks using the vulnerability metric as a nonlinear integer
programming minimization problem and propose the GNAVE algorithm, a
Greedy Node capture Approximation using Vulnerability Evaluation. We
discuss the availability of security parameters to the adversary and show that
unknown parameters can be estimated using probabilistic analysis. We
demonstrate vulnerability evaluation using the proposed metrics and node
capture attacks using the GNAVE algorithm through detailed examples and
simulation.
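A simplified greedy loop in the spirit of GNAVE (our stand-in vulnerability measure simply counts newly compromised links, each protected by a set of keys; the paper's continuous metrics are more general): at every step the adversary captures the node whose keys compromise the most additional links.

```python
def greedy_capture(node_keys, link_keys, budget):
    """node_keys: node -> set of keys it holds; link_keys: one key set per
    traffic link (a link is compromised once all of its keys are captured)."""
    remaining = dict(node_keys)
    captured, have = [], set()

    def newly_compromised(keys):
        gained = have | keys
        return sum(1 for lk in link_keys if lk <= gained and not lk <= have)

    for _ in range(min(budget, len(remaining))):
        # Greedy choice: node whose keys expose the most new links.
        best = max(remaining, key=lambda v: newly_compromised(remaining[v]))
        captured.append(best)
        have |= remaining.pop(best)
    return captured, have
```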
37. Large Connectivity for Dynamic Random Geometric Graphs
ABSTRACT:-
We provide the first rigorous analytical results
for the connectivity of dynamic random geometric graphs - a model for
mobile wireless networks in which vertices move in random directions in
the unit torus. The model presented here follows a previously described one. We
provide precise asymptotic results for the expected length of
the connectivity and disconnectivity periods of the network. We believe that
the formal tools developed in this work could be extended to be used in
more concrete settings and in more realistic models, in the same manner as
the development of the connectivity threshold for static random
geometric graphs has affected a lot of research done on ad hoc networks.
38. Measuring Capacity Bandwidth of Targeted Path Segments
ABSTRACT:-
Accurate measurement of network bandwidth is important for
network management applications as well as flexible Internet applications
and protocols which actively manage and dynamically adapt to changing
utilization of network resources. Extensive work has focused on two
approaches to measuring bandwidth: measuring it hop-by-hop, and
measuring it end-to-end along a path. Unfortunately, best-practice
techniques for the former are inefficient and techniques for the latter are
only able to observe bottlenecks visible at end-to-end scope. In this paper,
we develop end-to-end probing methods which can measure
bottleneck capacity bandwidth along arbitrary, targeted subpaths of a path in
the network, including subpaths shared by a set of flows. We evaluate our
technique through ns simulations, then provide a comparative Internet
performance evaluation against hop-by-hop and end-to-end techniques. We
also describe a number of applications which we foresee as standing to
benefit from solutions to this problem, ranging from network
troubleshooting and capacity provisioning to optimizing the
layout of application-level overlay networks, to optimized replica placement.
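End-to-end probing in this line of work builds on packet-pair dispersion: two back-to-back probes leave a bottleneck link separated by its transmission time, so capacity ≈ packet size / dispersion. Below is a minimal sketch with a median filter over repeated probes (our simplification; the paper's subpath-targeted technique is considerably more involved).

```python
def capacity_from_dispersion(packet_size_bytes, dispersion_seconds):
    """Bottleneck capacity in bits/s implied by one packet-pair dispersion."""
    return packet_size_bytes * 8 / dispersion_seconds

def estimate_capacity(packet_size_bytes, dispersions):
    """Median over many probe pairs to filter cross-traffic distortion."""
    ds = sorted(dispersions)
    mid = len(ds) // 2
    median = ds[mid] if len(ds) % 2 else (ds[mid - 1] + ds[mid]) / 2
    return capacity_from_dispersion(packet_size_bytes, median)
```

For example, 1500-byte probes arriving 1.2 ms apart imply a 10 Mb/s bottleneck; cross traffic inflates individual dispersions, which is why a robust summary over many pairs is used.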
39. Mitigation of Control Channel Jamming Under Node Capture
Attacks
ABSTRACT:-
Availability of service in many wireless networks depends on the
ability for network users to establish and maintain communication channels
using control messages from base stations and other users. An adversary
with knowledge of the underlying communication protocol can mount an
efficient denial-of-service attack by jamming the communication channels
used to exchange control messages. The use of spread spectrum techniques
can deter an external adversary from such control channel jamming attacks.
However, malicious colluding insiders or an adversary who captures or
compromises system users is not deterred by spread spectrum, as they know
the required spreading sequences. For the case of internal adversaries, we
propose a framework for control channel access schemes using the random
assignment of cryptographic keys to hide the location of control channels.
We propose and evaluate metrics to quantify the probabilistic availability
of service under control channel jamming by malicious or compromised
users and show that the availability of service degrades gracefully as the
number of colluding insiders or compromised users increases. We propose an
algorithm called GUIDE for the identification of compromised users in the
system based on the set of control channels that are jammed. We evaluate
the estimation error using the GUIDE algorithm in terms of the false alarm
and miss rates in the identification problem. We discuss various design
trade-offs between robustness to control channel jamming and resource
expenditure.
40. Mobility Management Approaches for Mobile IP Networks
ABSTRACT:-
In wireless networks, efficient management of mobility is a crucial
issue to support mobile users. The mobile Internet protocol (MIP) has been
proposed to support global mobility in IP networks. Several mobility
management strategies have been proposed that aim at reducing the
signaling traffic related to the Mobile Terminals (MTs) registration with the
Home Agents (HAs) whenever their Care-of-Addresses (CoAs) change.
They use different foreign agents (FAs) and Gateway FAs (GFAs)
hierarchies to concentrate the registration processes. For high-mobility MTs,
the Hierarchical MIP (HMIP) and Dynamic HMIP (DHMIP) strategies
localize the registration in FAs and GFAs, at the cost of high
mobility-signaling load. The Multicast HMIP strategy limits the registration
processes to the GFAs. For high-mobility MTs, it provides the
lowest mobility-signaling delay compared to the HMIP and
DHMIP approaches. However, it is a resource-consuming strategy
except under frequent MT mobility. Hence, we propose an analytic model to
evaluate the mean signaling delay and the mean bandwidth per call
according to the type of MT mobility. In our analysis, the MHMIP
outperforms the DHMIP and MIP strategies in almost all the studied cases.
The main contribution of this paper is the analytic model that allows
performance evaluation of the mobility management approaches.
41. Multiple Routing Configurations for Fast IP Network Recovery
ABSTRACT:-
As the Internet takes an increasingly central role in our
communications infrastructure, the slow convergence of routing protocols
after a network failure becomes a growing problem. To assure
fast recovery from link and node failures in IP networks, we present a
new recovery scheme called Multiple Routing Configurations (MRC). Our
proposed scheme guarantees recovery in all single failure scenarios, using a
single mechanism to handle both link and node failures, and without
knowing the root cause of the failure. MRC is strictly connectionless, and
assumes only destination based hop-by-hop forwarding. MRC is based on
keeping additional routing information in the routers, and allows packet
forwarding to continue on an alternative output link immediately after the
detection of a failure. It can be implemented with only minor changes to
existing solutions. In this paper we present MRC, and analyze its
performance with respect to scalability, backup path lengths, and load
distribution after a failure. We also show how an estimate of the traffic
demands in the network can be used to improve the distribution of the
recovered traffic, and thus reduce the chances of congestion when MRC is
used.
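The recovery idea can be sketched as a lookup over precomputed configurations (a toy illustration of ours, not the paper's backup-configuration construction): on a failure, a router immediately forwards using the first configuration whose next-hop link avoids the failed component, with no reconvergence.

```python
def next_hop(configs, src, dst, failed_link=None):
    """configs: ordered list of routing tables {(node, dst): next_hop}.
    Returns a next hop whose outgoing link avoids `failed_link`, if any."""
    bad = tuple(sorted(failed_link)) if failed_link else None
    for table in configs:
        hop = table.get((src, dst))
        if hop is not None and tuple(sorted((src, hop))) != bad:
            return hop
    return None  # no precomputed configuration survives this failure
```

The guarantee in the abstract corresponds to constructing the configurations so that, for every single link or node failure, at least one configuration routes around it.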
42. Residual-Based Estimation of Peer and Link Lifetimes in P2P
Networks
ABSTRACT:-
Existing methods of measuring lifetimes in P2P systems usually
rely on the so-called Create-Based Method (CBM), which divides a given
observation window into two halves and samples users
“created” in the first half every Δ time units until they die or the
observation period ends. Despite its frequent use, this approach has no
rigorous accuracy or overhead analysis in the literature. To shed more light
on its performance, we first derive a model for CBM and show that a small
window size or a large Δ may lead to highly inaccurate lifetime
distributions. We then show that create-based sampling exhibits an inherent
tradeoff between overhead and accuracy, which does not allow any
fundamental improvement to the method. Instead, we propose a completely
different approach for sampling user dynamics that keeps
track of only residual lifetimes of peers and uses a simple renewal-process
model to recover the actual lifetimes from the observed residuals. Our
analysis indicates that for reasonably large systems, the proposed method
can reduce bandwidth consumption by several orders of magnitude
compared to prior approaches while simultaneously achieving higher
accuracy. We finish the paper by implementing a two-tier
Gnutella network crawler equipped with the proposed sampling
method and obtain the distribution of ultrapeer lifetimes in a network of 6.4
million users and 60 million links. Our experimental results show that
ultrapeer lifetimes are Pareto with shape α ≈ 1.1;
however, link lifetimes exhibit much lighter tails with α ≈ 1.8.
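The renewal-process inversion can be sketched in discrete time (our simplification, not the paper's estimator): at equilibrium, a sampled residual satisfies P(R = r) = P(L ≥ r)/E[L], and since P(L ≥ 1) = 1, dividing the residual pmf by P(R = 1) recovers the lifetime tail, from which the lifetime pmf follows by differencing.

```python
def lifetimes_from_residuals(residual_pmf):
    """residual_pmf: {r: P(R = r)} for r = 1, 2, ... observed at equilibrium.
    Returns the recovered lifetime pmf {r: P(L = r)}."""
    rmax = max(residual_pmf)
    # P(L >= r) = P(R = r) / P(R = 1), since P(L >= 1) = 1.
    tail = {r: residual_pmf.get(r, 0.0) / residual_pmf[1]
            for r in range(1, rmax + 2)}
    return {r: tail[r] - tail[r + 1] for r in range(1, rmax + 1)}
```

For a lifetime that is 1 or 2 with equal probability (E[L] = 1.5), the equilibrium residual pmf is (2/3, 1/3), and the inversion returns the original (0.5, 0.5).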
43. SIMPS: Using Sociology for Personal Mobility
ABSTRACT:-
Assessing mobility in a thorough fashion is a crucial step toward
more efficient mobile network design. Recent research on mobility has
focused on two main points: analyzing models and studying their impact on
data transport. These works investigate the consequences of mobility. In this
paper, instead, we focus on the causes of mobility. Starting from established
research in sociology, we propose SIMPS, a mobility model of human
crowds with pedestrian motion. This model defines a process called
sociostation, rendered by two complementary behaviors, namely socialize
and isolate, that regulate an individual with regard to her/his own sociability
level. SIMPS leads to results that agree with scaling laws observed both in
small-scale and large-scale human motion. Although our model defines only
two simple individual behaviors, we observe many emerging collective
behaviors (group formation/splitting, path formation, and evolution).
44. Spatio-Temporal Network Anomaly Detection by Assessing
Deviations of Empirical Measures
ABSTRACT:-
We introduce an Internet traffic anomaly detection mechanism
based on large deviations results for empirical measures. Using past traffic
traces we characterize network traffic during various time-of-day intervals,
assuming that it is anomaly-free. We present two different approaches to
characterize traffic: (i) a model-free approach based on the method of types
and Sanov's theorem, and (ii) a model-based approach modeling traffic using
a Markov modulated process. Using these characterizations as a reference
we continuously monitor traffic and employ large deviations and decision
theory results to “compare” the empirical measure of the monitored
traffic with the corresponding reference characterization, thus, identifying
traffic anomalies in real time. Our experimental results show that, by
applying our methodology, even short-lived anomalies are identified within a
small number of observations. Throughout, we compare the two approaches,
presenting their advantages and disadvantages to identify and
classify temporal network anomalies. We also demonstrate how our
framework can be used to monitor traffic from multiple network elements in
order to identify both spatial and temporal anomalies. We validate our
techniques by analyzing real traffic traces with time-stamped anomalies.
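The model-free comparison of empirical measures can be sketched with a KL-divergence test (our simplification: the threshold below is an arbitrary constant, whereas the paper derives it from large-deviations results): windows whose empirical distribution diverges too far from the anomaly-free reference are flagged.

```python
from collections import Counter
from math import log

def empirical_measure(samples):
    """Empirical distribution (the 'type') of a monitoring window."""
    n = len(samples)
    return {k: v / n for k, v in Counter(samples).items()}

def kl_divergence(p, q):
    """D(p || q); q must put positive mass on p's support."""
    return sum(pv * log(pv / q[k]) for k, pv in p.items())

def is_anomalous(window, reference, threshold=0.1):
    return kl_divergence(empirical_measure(window), reference) > threshold
```

By Sanov's theorem, the probability that anomaly-free traffic produces an empirical measure at KL distance d from the reference decays like e^(-n·d), which is what justifies thresholding the divergence.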
45. On the Integration of Unicast and Multicast Cell Scheduling in
Buffered Crossbar Switches
This paper appears in: Parallel and Distributed Systems, IEEE Transactions
on
Issue Date : June 2009
Volume : 20 , Issue:6
On page(s): 818 - 830
ISSN : 1045-9219
INSPEC Accession Number: 10601748
Digital Object Identifier : 10.1109/TPDS.2008.262
Date of Publication : 31 December 2008
Date of Current Version : 28 April 2009
Sponsored by : IEEE Computer Society
ABSTRACT:-
Internet traffic is a mixture of unicast and multicast flows.
Integrated schedulers capable of dealing with both traffic types have been
designed mainly for Input Queued (IQ) buffer-less crossbar switches.
Combined Input and crossbar queued (CICQ) switches, on the other hand,
are known to have better performance than their buffer-less predecessors due
to their potential in simplifying the scheduling and improving the switching
performance. The design of integrated schedulers in CICQ switches has thus
far been neglected. In this paper, we propose a novel CICQ architecture that
supports both unicast and multicast traffic along with its appropriate
scheduling. In particular, we propose an integrated round-robin-based
scheduler that efficiently services both unicast and multicast traffic
simultaneously. Our scheme, named multicast and unicast round
robin scheduling (MURS), has been shown to outperform all existing
schemes under various traffic patterns. Simulation results suggested that we
can trade the size of the internal buffers for the number of input
multicast queues. We further propose a hardware implementation of our
algorithm for a 16 × 16 buffered crossbar switch. The implementation
results suggest that MURS can run at 20 Gbps line rate and a clock cycle
time of 2.8 ns, reaching an aggregate switching bandwidth of 320 Gbps.
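A bare-bones integrated round-robin sketch (our drastic simplification of MURS, ignoring crossbar buffers, per-output queueing, and multicast fanout splitting): each scheduling decision alternates between the unicast and multicast queues so that neither traffic type starves the other.

```python
from collections import deque

def integrated_round_robin(unicast, multicast, cells):
    """Serve up to `cells` cells, alternating fairly between the two queues
    and falling back to the non-empty one when the other runs dry."""
    uq, mq = deque(unicast), deque(multicast)
    served, turn = [], 0
    for _ in range(cells):
        order = (uq, mq) if turn == 0 else (mq, uq)
        queue = next((q for q in order if q), None)
        if queue is None:
            break  # both queues empty
        served.append(queue.popleft())
        turn ^= 1
    return served
```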
46. BSMR: Byzantine-Resilient Secure Multicast Routing in Multihop
Wireless Networks
ABSTRACT:-
Multihop wireless networks rely on node cooperation to
provide multicast services. The multihop communication offers increased
coverage for such services but also makes them more vulnerable to insider
(or Byzantine) attacks coming from compromised nodes that behave
arbitrarily to disrupt the network. In this work, we identify vulnerabilities of
on-demand multicast routing protocols for multihop wireless networks and
discuss the challenges encountered in designing mechanisms to defend
against them. We propose BSMR, a novel securemulticast routing protocol
designed to withstand insider attacks from colluding adversaries. Our
protocol is a software-based solution and does not require additional or
specialized hardware. We present simulation results that demonstrate
that BSMR effectively mitigates the identified attacks.
47.Energy-Efficient SINR-Based Routing for Multihop Wireless
Networks
ABSTRACT:-
In this paper, we develop an energy-efficient routing scheme that
takes into account the interference created by existing flows in the network.
The routing scheme chooses a route such that the network expends the
minimum energy while satisfying the minimum-rate constraints of the flows. Unlike
previous works, we explicitly study the impact of routing a new flow on
the energy consumption of the network. Under certain assumptions on how
links are scheduled, we can show that our proposed algorithm is
asymptotically (in time) optimal in terms of minimizing the
average energy consumption. We also develop a distributed version of the
algorithm. Our algorithm automatically detours around a congested area in
the network, which helps mitigate network congestion and improve
overall network performance. Using simulations, we show that
the routes chosen by our algorithm (centralized and distributed) are
more energy efficient than the state of the art.
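Under the simplifying assumption that each link carries a fixed energy cost (with the interference penalty folded into that cost), choosing the minimum-energy route reduces to an ordinary shortest-path computation. The Java sketch below uses a hypothetical four-node topology and invented costs; it is not the authors' algorithm, only the underlying idea:

```java
// Minimum-total-energy path selection over a link-cost matrix.
// Costs are hypothetical per-hop energy values; INF means no link.
public class MinEnergyRoute {
    static final double INF = Double.POSITIVE_INFINITY;

    // O(n^2) Dijkstra over energy costs; returns min energy from src to dst.
    public static double minEnergy(double[][] cost, int src, int dst) {
        int n = cost.length;
        double[] dist = new double[n];
        boolean[] done = new boolean[n];
        java.util.Arrays.fill(dist, INF);
        dist[src] = 0;
        for (int it = 0; it < n; it++) {
            int u = -1;
            for (int v = 0; v < n; v++)
                if (!done[v] && (u == -1 || dist[v] < dist[u])) u = v;
            if (dist[u] == INF) break;
            done[u] = true;
            for (int v = 0; v < n; v++)
                if (cost[u][v] < INF && dist[u] + cost[u][v] < dist[v])
                    dist[v] = dist[u] + cost[u][v];
        }
        return dist[dst];
    }

    public static void main(String[] args) {
        double I = INF;
        // Direct 0->3 costs 10 energy units, but the detour 0->1->3
        // costs only 2 + 3 = 5 -- the algorithm "detours" as described.
        double[][] cost = {
            { 0, 2, I, 10 },
            { 2, 0, 4, 3 },
            { I, 4, 0, 1 },
            { 10, 3, 1, 0 },
        };
        System.out.println(minEnergy(cost, 0, 3)); // 5.0
    }
}
```

In the paper's setting the cost of a link would additionally depend on the flows already scheduled around it, which is what makes the route detour around congested areas.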
48.BRA: A Bidirectional Routing Abstraction for Asymmetric Mobile
Ad Hoc Networks
ABSTRACT:-
Wireless links are often asymmetric due to heterogeneity in the
transmission power of devices, non-uniform environmental noise, and other
signal propagation phenomena. Unfortunately, routing protocols for mobile
ad hoc networks typically work well only in bidirectional networks. This
paper first presents a simulation study quantifying the impact
of asymmetric links on network connectivity and routing performance. It
then presents a framework called BRA that provides a bidirectional
abstraction of the asymmetric network to routing protocols. BRA works by
maintaining multi-hop reverse routes for unidirectional links and provides
three new abilities: improved connectivity by taking advantage of the
unidirectional links, reverse route forwarding of control packets to enable
off-the-shelf routing protocols, and detection of packet loss on unidirectional
links. Extensive simulations of AODV layered on BRA show that packet
delivery increases substantially (two-fold in some instances) in asymmetric
networks compared to regular AODV, which only routes
on bidirectional links.
49. Capacity Improvement and Analysis for Voice/Data Traffic
over WLANs
ABSTRACT:-
Voice over wireless local area network (VoWLAN) is an
emerging application taking advantage of the promising voice over Internet
Protocol (VoIP) technology and the wide deployment of WLANs all over
the world. The real-time nature of voice traffic determines that controlled
access rather than random access should be adopted. Further, to fully exploit
the capacity of the WLAN supporting voice traffic, it is essential to explore
statistical multiplexing and to suppress the large overhead. In this paper, we
propose mechanisms to enhance the WLAN with voice quality of service
(QoS) provisioning capability when supporting hybrid voice/data traffic.
Voice multiplexing is achieved by a polling mechanism in the contention-
free period and deterministic priority access for voice traffic in the
contention period. Header overhead for voice traffic is also reduced
significantly. Delay-tolerant data traffic is guaranteed an average portion of
service time in the long run. A session admission control algorithm is
presented to admit voice traffic into the system with QoS guarantee.
Analytical and simulation results demonstrate the effectiveness
and efficiency of our proposed solutions.
50. DCMP: A Distributed Cycle Minimization Protocol for
Peer-to-Peer Networks
This paper appears in: Parallel and Distributed Systems, IEEE Transactions
on
Issue Date : March 2008
Volume : 19 , Issue:3
On page(s): 363 - 377
ISSN : 1045-9219
INSPEC Accession Number: 9836274
Digital Object Identifier : 10.1109/TPDS.2007.70732
Date of Current Version : 31 January 2008
Sponsored by : IEEE Computer Society
ABSTRACT:-
Broadcast-based peer-to-peer (P2P) networks, including flat
(for example, Gnutella) and two-layer super peer implementations
(for example, Kazaa), are extremely popular nowadays due to their
simplicity, ease of deployment, and versatility. The
unstructured network topology, however, contains many cyclic paths, which
introduce numerous duplicate messages in the system. Although such
messages can be identified and ignored, they still consume a large
proportion of the bandwidth and other resources, causing bottlenecks in the
entire network. In this paper, we describe the distributed cycle
minimization protocol (DCMP), a dynamic fully decentralized protocol that
significantly reduces the duplicate messages by eliminating
unnecessary cycles. As queries are transmitted through the
peers, DCMP identifies the problematic paths and attempts to break the
cycles while maintaining the connectivity of the network. In
order to preserve the fault resilience and load balancing properties of
unstructured P2P systems, DCMP avoids creating a hierarchical
organization. Instead, it applies cycle elimination symmetrically around
some powerful peers to keep the average path length small. The overall
structure is constructed quickly with very low overhead. With the information
collected during this process, distributed maintenance is performed
efficiently even if peers quit the system without notification. The
experimental results from our simulator and the prototype implementation
on PlanetLab confirm that DCMP significantly improves the scalability of
unstructured P2P systems without sacrificing their desirable properties.
Moreover, due to its simplicity, DCMP can be easily implemented in various
existing P2P systems and is orthogonal to the search algorithms.
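The duplicate-message problem that motivates DCMP is easy to reproduce: in a cyclic overlay, a flooded query reaches some peers along two paths, and the second delivery is pure waste. The toy Java simulation below demonstrates this on a hypothetical four-peer ring; it illustrates the problem only, not DCMP's cycle-elimination protocol:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Flood a query from peer 0; each peer forwards on first reception to all
// neighbors except the sender. Counts the duplicate deliveries a cycle causes.
public class FloodDuplicates {
    public static int duplicates(Map<Integer, List<Integer>> adj) {
        Set<Integer> seen = new HashSet<>();
        Deque<int[]> frontier = new ArrayDeque<>(); // {sender, receiver} pairs
        int dup = 0;
        seen.add(0);
        for (int n : adj.get(0)) frontier.add(new int[]{0, n});
        while (!frontier.isEmpty()) {
            int[] hop = frontier.poll();
            if (!seen.add(hop[1])) { dup++; continue; } // second delivery: duplicate
            for (int n : adj.get(hop[1]))
                if (n != hop[0]) frontier.add(new int[]{hop[1], n});
        }
        return dup;
    }

    public static void main(String[] args) {
        // Ring 0-1-2-3-0: the cycle produces 2 duplicate deliveries.
        Map<Integer, List<Integer>> ring = Map.of(
            0, List.of(1, 3), 1, List.of(0, 2),
            2, List.of(1, 3), 3, List.of(2, 0));
        System.out.println(duplicates(ring)); // 2
        // Cutting the 2-3 edge (the kind of cycle break DCMP performs)
        // removes every duplicate while keeping all peers reachable.
        Map<Integer, List<Integer>> tree = Map.of(
            0, List.of(1, 3), 1, List.of(0, 2),
            2, List.of(1), 3, List.of(0));
        System.out.println(duplicates(tree)); // 0
    }
}
```

DCMP's contribution is deciding, in a fully decentralized way, which edges to cut so that duplicates disappear without hurting fault resilience.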
51. HRing: A Structured Peer-to-Peer Overlay Based on Harmonic Series
This paper appears in: Parallel and Distributed Systems, IEEE Transactions
on
Issue Date : Feb. 2008
Volume : 19 , Issue:2
On page(s): 145 - 158
ISSN : 1045-9219
INSPEC Accession Number: 9767645
Digital Object Identifier : 10.1109/TPDS.2007.70725
Date of Current Version : 04 January 2008
Sponsored by : IEEE Computer Society
ABSTRACT:-
This paper presents Harmonic Ring (HRing), a structured peer-to-
peer (P2P) overlay where long links are built along the ring with decreasing
probabilities coinciding with the Harmonic Series. HRing constructs routing
tables based on the distance between node positions instead of node IDs in
order to eliminate the effect of node ID distribution on the long link
distribution and load balance. It supports leave-and-rejoin load balance
without incurring uneven long link distribution. In addition, node IDs can be
any form, such as a number, string, address, or date, without the prerequisite of
uniform distribution, so they can preserve the semantics and range locality
of data objects. HRing supports multidimensional range queries. Each node
is expected to have O(ln(n)) long links. The construction of O(ln(n)) long
links for a node costs O(ln(n)) messages. Routing queries achieve O(ln(n))
hops. Analyses and simulations demonstrate the efficiency of query routing
and the effectiveness of the long link construction method.
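The harmonic link distribution at the core of HRing can be sampled by inverse-CDF lookup: the probability of building a long link at ring distance d is proportional to 1/d, normalized by the harmonic number. A small Java sketch of that sampling step (the parameter values are illustrative, not from the paper):

```java
// Sample a long-link distance d in {1..maxD} with Pr[d] proportional to 1/d,
// the harmonic-series distribution that HRing's long links follow.
public class HarmonicLink {
    // Inverse-CDF sampling: u is a uniform draw in [0,1).
    public static int sampleDistance(double u, int maxD) {
        double h = 0;
        for (int d = 1; d <= maxD; d++) h += 1.0 / d; // harmonic normalizer H_maxD
        double acc = 0;
        for (int d = 1; d <= maxD; d++) {
            acc += (1.0 / d) / h;
            if (u < acc) return d;
        }
        return maxD;
    }

    public static void main(String[] args) {
        // With maxD = 4, H_4 = 1 + 1/2 + 1/3 + 1/4 ~= 2.083, so
        // Pr[d=1] ~= 0.48: small u maps to short links, large u to long ones.
        System.out.println(sampleDistance(0.10, 4)); // 1
        System.out.println(sampleDistance(0.95, 4)); // 4
    }
}
```

Because the expected number of links drawn this way up to distance n/2 is a harmonic sum, each node ends up with the O(ln(n)) long links the abstract states.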
52. Message Complexity Analysis of Mobile Ad Hoc Network
Address Autoconfiguration Protocols
ABSTRACT:-
This paper proposes a novel method to perform a
quantitative analysis of message complexity and applies this method in
comparing the message complexity among the mobile ad hoc network
(MANET) address autoconfiguration protocols (AAPs). The original
publications on the AAPs had incomplete parts, making them insufficient to
use on practical MANETs. Therefore, the first objective of the research was
to complete the AAPs by filling in the missing gaps to make them
operational. The missing procedures were filled in with the choices most
consistent with the original protocol publications. The research in this
paper finds applications in
wireless networks that apply reduced addresses to achieve less memory
usage, smaller overhead, and higher throughput (for example, IPv6 over low-
power wireless personal area networks (6LoWPAN)), but, as a result,
possess a high address duplication probability. This research consists of two
cases, where the first case deals with the message complexity analysis of the
single-node joining case (SJC) and the second case deals with
the complexity analysis of the MANET group merging case (GMC).
53. OCGRR: A New Scheduling Algorithm for Differentiated
Services Networks
This paper appears in: Parallel and Distributed Systems, IEEE Transactions
on
Issue Date : May 2007
Volume : 18 , Issue:5
On page(s): 697 - 710
ISSN : 1045-9219
Digital Object Identifier : 10.1109/TPDS.2007.351711
Date of Current Version : 23 April 2007
Sponsored by : IEEE Computer Society
ABSTRACT:-
We propose a new fair scheduling technique, called OCGRR
(Output Controlled Grant-based Round Robin), for the support of DiffServ
traffic in a core router. We define a stream to be the same-class packets
from a given immediate upstream router destined to an output port of the
core router. At each output port, streams may be isolated in separate buffers
before being scheduled in a frame. The sequence of traffic transmission
in a frame starts from higher-priority traffic and goes down to lower-priority
traffic. A frame may have a number of small rounds for each class. Each
stream within a class can transmit a number of packets in the frame based on
its available grant, but only one packet per small round, thus reducing the
intertransmission time from the same stream and achieving a smaller jitter
and startup latency. The grant can be adjusted in a way to prevent the
starvation of lower priority classes. We also verify and demonstrate the good
performance of our scheduler by simulation and comparison with
other algorithms in terms of queuing delay, jitter, and start-up latency.
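The "one packet per small round, up to the grant" rule is what spreads a stream's packets across the frame. The Java sketch below assembles one frame for a single class under that rule; the stream names, queue depths, and grants are invented, and real OCGRR additionally iterates over priority classes:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of OCGRR-style frame assembly within one class: each stream may
// send at most one packet per small round until its grant is exhausted.
public class GrantRoundRobin {
    // queued[i] = packets waiting for stream i; grants[i] = frame budget
    public static List<String> frame(String[] streams, int[] queued, int[] grants) {
        List<String> order = new ArrayList<>();
        int[] left = grants.clone();
        boolean sent = true;
        while (sent) {               // one iteration = one small round
            sent = false;
            for (int i = 0; i < streams.length; i++) {
                if (left[i] > 0 && queued[i] > 0) {
                    order.add(streams[i]); // one packet from stream i this round
                    left[i]--; queued[i]--;
                    sent = true;
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // Stream A: 3 packets/grant 3, B: 2/2, C: 1/1. The small rounds
        // yield A B C | A B | A, interleaving rather than bursting.
        System.out.println(frame(new String[]{"A", "B", "C"},
                                 new int[]{3, 2, 1}, new int[]{3, 2, 1}));
    }
}
```

The interleaving is exactly why the scheme achieves the smaller jitter and startup latency claimed in the abstract: back-to-back packets of the same stream are separated by packets of its peers.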
54. On Localized Application-Driven Topology Control for
Energy Efficient Wireless Peer-to-Peer File Sharing
ABSTRACT:-
Wireless Peer-to-Peer (P2P) file sharing is widely envisioned as one
of the major applications of ad hoc networks in the near future. This trend is
largely motivated by the recent advances in high speed wireless
communication technologies and high traffic demand for P2P file sharing
applications. To achieve the ambitious goal of realizing a
practical wireless P2P network, we need a scalable topology control protocol
to solve the neighbor discovery problem and network organization problem.
Indeed, we believe that the topology control mechanism should be
application driven in that we should try to achieve an efficient connectivity
among mobile devices in order to better serve the file sharing application.
We propose a new protocol which consists of two components, namely
Adjacency Set Construction (ASC) and Community-Based Asynchronous
Wakeup (CAW). Our proposed protocol is shown to be able to enhance the
fairness and provide incentive mechanism in wireless P2P file sharing
applications. It is also capable of increasing the energy efficiency.
55. The Server Reassignment Problem for Load Balancing in
Structured P2P Systems
This paper appears in: Parallel and Distributed Systems, IEEE Transactions
on
Issue Date : Feb. 2008
Volume : 19 , Issue:2
On page(s): 234 - 246
ISSN : 1045-9219
INSPEC Accession Number: 9767648
Digital Object Identifier : 10.1109/TPDS.2007.70735
Date of Current Version : 04 January 2008
Sponsored by : IEEE Computer Society
ABSTRACT:-
Application-layer peer-to-peer (P2P) networks are considered to
be the most important development for next-generation Internet
infrastructure. For these systems to be effective, load balancing
among the peers is critical. Most structured P2P systems rely on ID-space
partitioning schemes to solve the load imbalance problem and have been
known to result in an imbalance factor of Θ(log N) in the zone sizes.
This paper makes two contributions. First, we propose
addressing the virtual-server-based load balancing problem systematically
using an optimization-based approach and derive an effective algorithm to
rearrange loads among the peers. We demonstrate the superior performance
of our proposal in general and its advantages over previous strategies in
particular. We also explore other important issues vital to the performance
in the virtual server framework, such as the effect of the number of
directories employed in the system and the performance ramification of user
registration strategies. Second, and perhaps more significantly, we
systematically characterize the effect of heterogeneity on load balancing
algorithm performance and the conditions in which heterogeneity may be
easy or hard to deal with based on an extensive study of a wide spectrum
of load and capacity scenarios.
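The virtual-server framework lets a peer shed load by handing whole virtual servers to another peer. The paper formulates reassignment as an optimization problem; the Java sketch below shows only a naive greedy baseline (move the lightest virtual server from the heaviest peer to the lightest, while that helps), with invented loads:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy virtual-server reassignment between peers. Each inner list holds the
// loads of the virtual servers hosted by one peer.
public class VirtualServerBalance {
    static int sum(List<Integer> l) { int s = 0; for (int x : l) s += x; return s; }

    // Greedy: move the lightest virtual server off the most-loaded peer onto
    // the least-loaded one while the move strictly lowers the pair's maximum.
    public static void balance(List<List<Integer>> loads) {
        while (true) {
            int hi = 0, lo = 0;
            for (int p = 1; p < loads.size(); p++) {
                if (sum(loads.get(p)) > sum(loads.get(hi))) hi = p;
                if (sum(loads.get(p)) < sum(loads.get(lo))) lo = p;
            }
            List<Integer> from = loads.get(hi), to = loads.get(lo);
            if (from.isEmpty()) break;
            int vs = Collections.min(from);
            if (sum(to) + vs >= sum(from)) break; // move would not help
            from.remove(Integer.valueOf(vs));
            to.add(vs);
        }
    }

    public static void main(String[] args) {
        List<List<Integer>> loads = new ArrayList<>();
        loads.add(new ArrayList<>(List.of(5, 3, 2))); // peer 0: total load 10
        loads.add(new ArrayList<>(List.of(1)));       // peer 1: total load 1
        balance(loads);
        System.out.println(sum(loads.get(0)) + " " + sum(loads.get(1))); // 5 6
    }
}
```

A greedy pass like this ignores peer capacities and reassignment cost, which is precisely where the paper's optimization-based formulation improves on it.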
56. Toward Broadcast Reliability in Mobile Ad Hoc Networks
with Double Coverage
ABSTRACT:-
The broadcast operation, as a fundamental service in mobile
ad hoc networks (MANETs), is prone to the broadcast storm problem if
forwarding nodes are not carefully designated. The objective of reducing
broadcast redundancy while still providing high delivery ratio under high
transmission error rate is a major challenge in MANETs. In this paper, we
propose a simple broadcast algorithm, called double-covered broadcast
(DCB), which takes advantage of broadcast redundancy to improve the
delivery ratio in an environment that has rather high transmission error rate.
Among the 1-hop neighbors of the sender, only selected forwarding nodes
retransmit the broadcast message. Forwarding nodes are selected in such a
way that 1) the sender's 2-hop neighbors are covered and 2) the sender's 1-
hop neighbors are either forwarding nodes or non-forwarding nodes covered
by at least two forwarding neighbors. The retransmissions of the forwarding
nodes are received by the sender as the confirmation of their reception of the
packet. The non-forwarding 1-hop neighbors of the sender do not
acknowledge the reception of the broadcast. If the sender does not detect all
its forwarding nodes' retransmissions, it resends the packet until the
maximum number of retries is reached. Simulation results show that the
proposed broadcast algorithm provides good performance under a high
transmission error rate environment.
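Picking forwarding nodes so that every 2-hop neighbor is covered is a set-cover-style step. The Java sketch below shows a greedy version on a hypothetical neighborhood; it omits DCB's second condition (double coverage of non-forwarding 1-hop neighbors), so it is a simplified stand-in, not DCB itself:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

// Greedy forwarding-node selection: choose 1-hop neighbors until every
// 2-hop neighbor is covered (covers maps each 1-hop neighbor to the set
// of 2-hop nodes it can reach).
public class ForwardSelect {
    public static Set<String> select(Map<String, Set<String>> covers, Set<String> twoHop) {
        Set<String> uncovered = new HashSet<>(twoHop);
        Set<String> chosen = new TreeSet<>();
        while (!uncovered.isEmpty()) {
            String best = null;
            int gain = 0;
            // TreeMap iteration makes tie-breaking deterministic.
            for (Map.Entry<String, Set<String>> e : new TreeMap<>(covers).entrySet()) {
                Set<String> s = new HashSet<>(e.getValue());
                s.retainAll(uncovered);
                if (s.size() > gain) { gain = s.size(); best = e.getKey(); }
            }
            if (best == null) break; // remaining 2-hop nodes unreachable
            uncovered.removeAll(covers.get(best));
            chosen.add(best);
        }
        return chosen;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> covers = new TreeMap<>();
        covers.put("n1", Set.of("x", "y"));
        covers.put("n2", Set.of("y", "z"));
        covers.put("n3", Set.of("z"));
        // n1 and n2 together cover {x, y, z}; n3 is redundant.
        System.out.println(select(covers, Set.of("x", "y", "z"))); // [n1, n2]
    }
}
```

DCB then uses the chosen nodes' retransmissions as implicit acknowledgments, which is how it gains reliability without per-neighbor ACK traffic.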
57. Route Reservation in Ad Hoc Wireless Networks
ABSTRACT:-
This paper investigates whether and when route reservation-based
(RB) communication can yield better delay performance than non-
reservation-based (NRB) communication in ad hoc wireless networks.
In addition to posing this fundamental question, the requirements (in terms
of route discovery, medium access control (MAC) protocol, and pipelining,
etc.) for making RB switching superior to NRB switching are also identified.
A novel analytical framework is developed and the network performance
under both RB and NRB schemes is quantified. It is shown that if the
aforementioned requirements are met, then RB schemes can indeed yield
better delay performance than NRB schemes. This advantage, however,
comes at the expense of lower throughput and goodput compared to NRB
schemes.
58. Face Recognition Using Laplacianfaces
This paper appears in: Pattern Analysis and Machine Intelligence, IEEE
Transactions on
Issue Date: March 2005
Volume: 27 Issue:3
On page(s): 328 - 340
ISSN: 0162-8828
INSPEC Accession Number: 8327483
Digital Object Identifier: 10.1109/TPAMI.2005.55
Date of Current Version: 31 January 2005
PubMed ID: 15747789
Sponsored by: IEEE Computer Society
ABSTRACT:-
We propose an appearance-based face recognition method called the
Laplacianface approach. By using locality preserving projections (LPP), the
face images are mapped into a face subspace for analysis. Different from
principal component analysis (PCA) and linear discriminant analysis (LDA),
which effectively see only the Euclidean structure of face space, LPP finds
an embedding that preserves local information, and obtains a face subspace
that best detects the essential face manifold structure. The Laplacianfaces are
the optimal linear approximations to the eigenfunctions of the
Laplace-Beltrami operator on the face manifold. In this way, the unwanted variations
resulting from changes in lighting, facial expression, and pose may be
eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP
can be obtained from different graph models. We compare the proposed
Laplacianface approach with Eigenface and Fisherface methods on three
different face data sets. Experimental results suggest that the proposed
Laplacianface approach provides a better representation and achieves lower
error rates in face recognition.
59. Online Handwritten Script Recognition
This paper appears in: Pattern Analysis and Machine Intelligence, IEEE
Transactions on
Issue Date : Jan. 2004
Volume : 26 , Issue:1
On page(s): 124 - 130
ISSN : 0162-8828
INSPEC Accession Number: 7879989
Digital Object Identifier : 10.1109/TPAMI.2004.1261096
Date of Current Version : 14 June 2004
Sponsored by : IEEE Computer Society
ABSTRACT:-
Automatic identification of handwritten script facilitates many
important applications such as automatic transcription of multilingual
documents and search for documents on the Web containing a
particular script. The increase in usage of handheld devices which
accept handwritten input has created a growing demand for algorithms that
can efficiently analyze and retrieve handwritten data. This paper proposes a
method to classify words and lines in an online handwritten document into
one of the six major scripts: Arabic, Cyrillic, Devnagari, Han, Hebrew, or
Roman. The classification is based on 11 different spatial and temporal
features extracted from the strokes of the words. The proposed system
attains an overall classification accuracy of 87.1 percent at the word level
with 5-fold cross validation on a data set containing 13,379 words. The
classification accuracy improves to 95 percent as the number of words in the
test sample is increased to five, and to 95.5 percent for complete text lines
consisting of an average of seven words.
60. Wireless Intrusion detection system and a new attack
model
This paper appears in: Computers and Communications, 2006. ISCC '06.
Proceedings. 11th IEEE Symposium on
Issue Date : 26-29 June 2006
On page(s): 35 - 40
ISSN : 1530-1346
Print ISBN: 0-7695-2588-1
Digital Object Identifier : 10.1109/ISCC.2006.22
Date of Current Version : 11 September 2006
ABSTRACT:-
Denial-of-Service attacks, and jamming in particular, are a threat
to wireless networks because they are at the same time easy to
mount and difficult to detect and stop. We propose a distributed intrusion
detection system in which each node monitors the traffic flow on the
network and collects relevant statistics about it. By combining each node’s
view we are able to tell if (and which type of) an attack happened or if the
channel is just saturated. However, this system opens the possibility for
misuse. We discuss the impact of the misuse on the system and the best
strategies for each actor.