
iPath: PATH INFERENCE IN WIRELESS SENSOR NETWORKS
ABSTRACT:

Recent wireless sensor networks (WSNs) are becoming increasingly complex with the
growing network scale and the dynamic nature of wireless communications. Many
measurement and diagnostic approaches depend on per-packet routing paths for
accurate and fine-grained analysis of the complex network behaviors. In this project,
we propose iPath, a novel path inference approach to reconstructing the per-packet
routing paths in dynamic and large-scale networks. The basic idea of iPath is to
exploit high path similarity to iteratively infer long paths from short ones. iPath starts
with an initial known set of paths and performs path inference iteratively. iPath
includes a novel design of a lightweight hash function for verification of the inferred
paths. In order to further improve the inference capability as well as the execution
efficiency, iPath includes a fast bootstrapping algorithm to reconstruct the initial set
of paths. We also implement iPath and evaluate its performance using traces from
large-scale WSN deployments as well as extensive simulations. Results show that
iPath achieves much higher reconstruction ratios under different network settings
compared to other state-of-the-art approaches.
TABLE OF CONTENTS

NAME OF THE CONTENTS

CHAPTER-1: INTRODUCTION
1.1 Introduction
1.2 System Analysis
1.3 System Study
1.4 System Requirements and Specification

CHAPTER-2: LITERATURE SURVEY

CHAPTER-3: SYSTEM DESIGN
3.1 Input Design and Output Design
3.2 Data Flow Diagram
3.3 UML Diagrams
3.3.1 Use Case Diagram
3.3.2 Class Diagram
3.3.3 Sequence Diagram

CHAPTER-4: IMPLEMENTATION
4.1 Modules
4.2 Modules Description
4.3 Software Environment

CHAPTER-5: TESTING
5.1 Testing Methodologies

CHAPTER-6: RESULTS

CONCLUSION

REFERENCES
CHAPTER-1

INTRODUCTION
WIRELESS sensor networks (WSNs) can be applied in many
application scenarios, e.g., structural protection [1], ecosystem
management [2], and urban CO2 monitoring [3]. In a typical
WSN, a number of self-organized sensor nodes report their
sensing data periodically to a central sink via multihop
wireless communication. Recent years have witnessed a rapid
growth of sensor network scale; some sensor networks include
hundreds or even thousands of sensor nodes [2], [3]. These
networks often employ dynamic routing protocols [4]–[6] to
achieve fast adaptation to the dynamic wireless channel
conditions. The growing network scale and the dynamic nature
of the wireless channel make WSNs increasingly complex and
hard to manage. Reconstructing the routing path of each
received packet at the sink side is an effective way to
understand the network's complex internal behaviors [7], [8].
With the routing
path of each packet, many measurement and diagnostic
approaches [9]–[13] are able to conduct effective management
and protocol optimizations for deployed WSNs consisting of a
large number of unattended sensor nodes. For example, PAD
[10] depends on the routing path information to build a
Bayesian network for inferring the root causes of abnormal
phenomena. Path information is also important for a network
manager to effectively manage a sensor network. For example,
given the per-packet path information, a network manager can
easily find out the nodes with a lot of packets forwarded by
them, i.e., network hot spots. Then, the manager can take
actions to deal with that problem, such as deploying more
nodes to that area and modifying the routing layer protocols.
Furthermore, per-packet path information is essential to
monitor the fine-grained per-link metrics. For example, most
existing delay and loss measurement approaches [9], [14]
assume that the routing topology is given a priori. The time-
varying routing topology can be effectively obtained by per-
packet routing paths, significantly improving the value of
existing WSN delay and loss tomography approaches. A
straightforward approach is to attach the entire routing path in
each packet. The problem of this approach is that its message
overhead can be large for packets with long routing paths.
Considering the limited communication resources of WSNs,
this approach is usually not desirable in practice. In this
project, we propose iPath, a novel path inference approach to
reconstruct routing paths at the sink side. Based on a real-
world complex urban sensing network with all nodes generating
local packets, we find a key observation: It is highly probable
that a packet from a node and one of the packets from that
node's parent will follow the same path starting from the
parent toward the sink. We refer to this observation as high
path similarity. Fig. 1 shows a simple example where S is the
sink node, p_A denotes a packet from A, and p_B denotes a
packet from B (A's parent). High path similarity states that it
is highly probable that p_A will follow the same path (i.e.,
the subpath obtained by removing node A from p_A's path) as one
of B's packets, say p_B. The basic idea
of iPath is to exploit high path similarity to iteratively infer
long paths from short ones. iPath starts with a known set of
paths (e.g., the one-hop paths are already known) and performs
path inference iteratively. During each iteration, it tries to infer
paths one hop longer until no paths can be inferred. In order to
ensure correct inference, iPath needs to verify whether a short
path can be used for inferring a long path. For this purpose,
iPath includes a novel design of a lightweight hash function.
Each data packet attaches a hash value that is updated hop by
hop. This recorded hash value is compared against the
calculated hash value of an inferred path. If these two values
match, the path is correctly inferred with a very high
probability. In order to further improve the inference capability
as well as its execution efficiency, iPath includes a fast
bootstrapping algorithm to reconstruct a known set of paths.
iPath achieves a much higher reconstruction ratio in networks
with relatively low packet delivery ratio and high routing
dynamics.

The contributions of this work are the following.


• We observe high path similarity in a real-world sensor
network. Based on this observation, we propose an iterative
boosting algorithm for efficient path inference. We propose a
lightweight hash function for efficient verification within
iPath. We further propose a fast bootstrapping algorithm to
improve the inference capability as well as its execution
efficiency.
• We propose an analytical model to calculate the successful
reconstruction probability in various network conditions such
as network scale, routing dynamics, packet losses, and node
density.
• We implement iPath and evaluate its performance using
traces from large-scale WSN deployments as well as extensive
simulations. iPath achieves a higher reconstruction ratio under
different network settings compared to state-of-the-art approaches.

SYSTEM ANALYSIS

EXISTING SYSTEM:

 With the routing path of each packet, many measurement and diagnostic
approaches are able to conduct effective management and protocol
optimizations for deployed WSNs consisting of a large number of
unattended sensor nodes. For example, PAD depends on the routing path
information to build a Bayesian network for inferring the root causes of
abnormal phenomena.
 Path information is also important for a network manager to effectively
manage a sensor network. For example, given the per-packet path
information, a network manager can easily find out the nodes with a lot of
packets forwarded by them, i.e., network hot spots. Then, the manager can
take actions to deal with that problem, such as deploying more nodes to
that area and modifying the routing layer protocols.
 Furthermore, per-packet path information is essential to monitor the fine-
grained per-link metrics. For example, most existing delay and loss
measurement approaches assume that the routing topology is given a
priori.
 The time-varying routing topology can be effectively obtained by per-
packet routing paths, significantly improving the value of existing WSN
delay and loss tomography approaches.

DISADVANTAGES OF EXISTING SYSTEM:

 The growing network scale and the dynamic nature of the wireless channel
make WSNs increasingly complex and hard to manage.
 The problem with the existing approach is that its message overhead can be
large for packets with long routing paths.
 Considering the limited communication resources of WSNs, this approach
is usually not desirable in practice.

PROPOSED SYSTEM:

 In this project, we propose iPath, a novel path inference approach to


reconstruct routing paths at the sink side. Based on a real-world complex
urban sensing network with all nodes generating local packets, we find a
key observation: It is highly probable that a packet from a node and one
of the packets from that node's parent will follow the same path starting
from the parent toward the sink. We refer to this observation as high path similarity.
 The basic idea of iPath is to exploit high path similarity to iteratively infer
long paths from short ones. iPath starts with a known set of paths (e.g., the
one-hop paths are already known) and performs path inference iteratively.
During each iteration, it tries to infer paths one hop longer until no paths
can be inferred.
 In order to ensure correct inference, iPath needs to verify whether a short
path can be used for inferring a long path. For this purpose, iPath includes
a novel design of a lightweight hash function. Each data packet attaches a
hash value that is updated hop by hop. This recorded hash value is
compared against the calculated hash value of an inferred path. If these
two values match, the path is correctly inferred with a very high
probability; a simplified sketch of this idea appears after this list.
 In order to further improve the inference capability as well as its execution
efficiency, iPath includes a fast bootstrapping algorithm to reconstruct a
known set of paths.
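
To make the idea concrete, the following is a minimal Java sketch of hop-by-hop hash recording and sink-side verification (the additive hash function, node IDs, and paths below are illustrative assumptions, not the actual iPath hash design):

import java.util.*;

public class PathVerifySketch {
    // Illustrative lightweight per-hop hash update (NOT the paper's actual
    // hash function): mix the forwarder's node ID into a 16-bit value.
    static int updateHash(int hash, int nodeId) {
        return (hash * 31 + nodeId) & 0xFFFF;
    }

    // The hash a packet would carry after traversing the given path hop by hop.
    static int recordedHash(List<Integer> path) {
        int h = 0;
        for (int nodeId : path) {
            h = updateHash(h, nodeId);
        }
        return h;
    }

    // Sink side: accept an inferred path only if its calculated hash
    // matches the hash recorded in the packet.
    static boolean verify(List<Integer> inferredPath, int packetHash) {
        return recordedHash(inferredPath) == packetHash;
    }

    public static void main(String[] args) {
        // True path A(7) -> B(4) -> C(2) -> sink(1); the packet records its hash.
        List<Integer> truePath = Arrays.asList(7, 4, 2, 1);
        int packetHash = recordedHash(truePath);

        // Iterative inference: prepend A to a known path of its parent B,
        // then check the candidate against the recorded hash.
        List<Integer> candidate = new ArrayList<>();
        candidate.add(7);
        candidate.addAll(Arrays.asList(4, 2, 1));

        System.out.println(verify(candidate, packetHash));                 // true
        System.out.println(verify(Arrays.asList(7, 5, 2, 1), packetHash)); // false
    }
}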

ADVANTAGES OF PROPOSED SYSTEM:

 We observe high path similarity in a real-world sensor network.


 It uses an iterative boosting algorithm for efficient path inference.
 It uses a lightweight hash function for efficient verification within iPath.
 The proposed system further proposes a fast bootstrapping algorithm to
improve the inference capability as well as its execution efficiency.
 iPath achieves a higher reconstruction ratio under different network
settings compared to state-of-the-art approaches.
SYSTEM STUDY

FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business
proposal is put forth with a very general plan for the project and some cost
estimates. During system analysis the feasibility study of the proposed
system is to be carried out. This is to ensure that the proposed system is
not a burden to the company. For feasibility analysis, some understanding
of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are

 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system
will have on the organization. The amount of fund that the company can pour
into the research and development of the system is limited. The expenditures
must be justified. Thus the developed system is well within the budget, and
this was achieved because most of the technologies used are freely available.
Only the customized products had to be purchased.

TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is,
the technical requirements of the system. Any system developed must not place
a high demand on the available technical resources, as this would lead to high
demands being placed on the client. The developed system must have a modest
requirement, as only minimal or null changes are required for implementing
this system.

SOCIAL FEASIBILITY

This aspect of the study is to check the level of acceptance of the system by


the user. This includes the process of training the user to use the system
efficiently. The user must not feel threatened by the system, instead must
accept it as a necessity. The level of acceptance by the users solely depends on
the methods that are employed to educate the user about the system and to
make him familiar with it. His level of confidence must be raised so that he is
also able to make some constructive criticism, which is welcomed, as he is the
final user of the system.
SYSTEM REQUIREMENTS AND SPECIFICATIONS

HARDWARE REQUIREMENTS:

 System : Intel Core i5.


 Hard Disk : 500 GB.
 RAM : 2 GB.

Software Requirements:

 Database Server : MySQL

 Database Client : SQLYOG

 Server : Apache Tomcat

 Platform : Java

 Server side Technologies : JEE(Servlets,JSP)

 Client Side Technologies : HTML,CSS,JavaScript,AJAX

 IDE : Eclipse/STS
CHAPTER- 2

LITERATURE SURVEY

CHAPTER- 3

SYSTEM DESIGN:

INPUT DESIGN AND OUTPUT DESIGN

Input Design

Input design plays a vital role in the life cycle of software
development; it requires very careful attention from developers. The goal of
input design is to feed data to the application as accurately as possible, so
inputs should be designed effectively so that errors occurring during data
entry are minimized. As per software engineering concepts, the input forms or
screens are designed to provide validation control over the limit, range, and
other related checks.

This system has input screens in almost all the modules. Error
messages are generated to alert the user whenever he makes a mistake and to
guide him in the correct manner, so that invalid entries are not made. This is
discussed in more depth under module design.

Input design is the process of converting user-originated input into a
computer-based format. The goal of input design is to make data entry logical
and free from errors; errors in the data are controlled by the input design.
The application has been developed in a user-friendly manner: the forms have
been designed so that, during processing, the cursor is placed in the position
where data must be entered. In certain cases the user is also provided with an
option to select an appropriate input from various choices related to the
field.

Validations are required for every item of data entered. Whenever a
user enters erroneous data, an error message is displayed, and the user can
move on to the subsequent pages only after completing all the entries on the
current page.

Output Design

The output from the computer is required primarily to create an efficient
method of communication within the organization, essentially between the
project leader and his team members, in other words, between the administrator
and the clients. The output of the system is the framework that permits the
project leader to manage his clients in terms of creating new clients and
assigning new tasks to them, maintaining a record of project validity, and
giving folder-level access to each client on the user side depending on the
tasks allotted to him. After completion of a task, another task may be
assigned to the client. User authentication procedures are maintained from the
initial stages. A new user may be created by the administrator himself, or a
user may register himself as a new user, but the task of assigning projects
and authorizing a new user rests with the administrator only.

The application starts running when it is executed for the first time. The
server has to be started, and then a web browser is used as the client. The
project runs on the local area network, so the server machine will act as the
administrator while the other connected systems act as the clients. The
developed system is highly user friendly and can be easily understood by
anyone using it, even for the first time.

SYSTEM DESIGN

DATA FLOW DIAGRAM:

1. The DFD is also called a bubble chart. It is a simple graphical
formalism that can be used to represent a system in terms of the input
data to the system, the various processing carried out on this data, and
the output data generated by the system.
2. The data flow diagram (DFD) is one of the most important modeling
tools. It is used to model the system components. These components are
the system processes, the data used by the processes, the external
entities that interact with the system, and the information flows in the
system.
3. The DFD shows how information moves through the system and how it is
modified by a series of transformations. It is a graphical technique that
depicts information flow and the transformations that are applied as data
moves from input to output.
4. A DFD may be used to represent a system at any level of abstraction
and may be partitioned into levels that represent increasing information
flow and functional detail.
UML DIAGRAMS

UML stands for Unified Modeling Language. UML is a standardized
general-purpose modeling language in the field of object-oriented software
engineering. The standard is managed, and was created by, the Object
Management Group.
The goal is for UML to become a common language for creating
models of object-oriented computer software. In its current form, UML
comprises two major components: a meta-model and a notation. In the
future, some form of method or process may also be added to, or associated
with, UML.
The Unified Modeling Language is a standard language for specifying,
visualizing, constructing, and documenting the artifacts of a software
system, as well as for business modeling and other non-software systems.
The UML represents a collection of best engineering practices that
have proven successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented
software and the software development process. The UML uses mostly
graphical notations to express the design of software projects.

GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling Language so
that they can develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core
concepts.
3. Be independent of particular programming languages and development
process.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations,
frameworks, patterns and components.
7. Integrate best practices.
USE CASE DIAGRAM:

A use case diagram in the Unified Modeling Language (UML) is a type


of behavioral diagram defined by and created from a Use-case analysis. Its
purpose is to present a graphical overview of the functionality provided by a
system in terms of actors, their goals (represented as use cases), and any
dependencies between those use cases. The main purpose of a use case
diagram is to show what system functions are performed for which actor.
Roles of the actors in the system can be depicted.
CLASS DIAGRAM:

In software engineering, a class diagram in the Unified Modeling Language


(UML) is a type of static structure diagram that describes the structure of a
system by showing the system's classes, their attributes, operations (or
methods), and the relationships among the classes. It explains which class
contains information.

Class Diagram (summary of the figure):

Source – Methods: Browse(), Split(), Connect(), Upload(), Reset();
Members: File Name, Sender Name, Rooter Name, Destination Name.

iPath Router – Methods: Receive(), Switch(), Flow Slice(), Alter(),
Forward(), Assign BW(), New BW(); Members: BW, Switched nodes, status,
flow slice, traffic, Monitor.

Receiver – Methods: Merge(), Receive(), Store(); Members: Destination,
fname, store.
SEQUENCE DIAGRAM:

A sequence diagram in Unified Modeling Language (UML) is a kind of


interaction diagram that shows how processes operate with one another and in
what order. It is a construct of a Message Sequence Chart. Sequence diagrams
are sometimes called event diagrams, event scenarios, and timing diagrams.

Sequence Diagram (summary of the figure): interactions among Source,
iPath Router, and Destination:

1. The source browses the split file and uploads it; the router assigns
bandwidth to each and every node.
2. The router views the bandwidth details and finds congestion.
3. The router applies the switching technique and selects an alternative
path.
4. The router applies the flow slice technique and forwards the encrypted
data.
5. The destination receives the file, merges it, and stores it.
6. File receiving, file sending, and store confirmations are sent back.
CHAPTER- 4
IMPLEMENTATION:
MODULES:

• Source
• iPath Router
• Receiver

MODULES DESCRIPTION:

 SOURCE
In this module, the service provider browses the file, enters the
file name, and sends it to the iPath router. The service provider
encrypts the data before sending it to the router.

iPath ROUTER

 In this module, the router receives the file packets from the source.
If the packet size is greater than the node bandwidth, congestion occurs,
and path inference takes place in order to find an alternative path: the
traffic takes another node to reach the destination, and load balancing
takes place. When congestion occurs, the node bandwidth can also be
increased. A simplified sketch of this check follows.
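
A highly simplified Java sketch of this congestion check (the node names, bandwidth values, and alternative-path choice are illustrative assumptions, not the project's actual code):

import java.util.*;

public class RouterSketch {
    // Bandwidth available at each node, in arbitrary units (illustrative values).
    static Map<String, Integer> nodeBandwidth =
            new HashMap<>(Map.of("n1", 100, "n2", 60, "n3", 120));

    // Route a packet through the default node, or infer an alternative path
    // when the packet size exceeds that node's bandwidth (congestion).
    static String routePacket(int packetSize, String defaultNode) {
        if (packetSize <= nodeBandwidth.get(defaultNode)) {
            return defaultNode; // no congestion: keep the default node
        }
        // Congestion: pick any other node with enough bandwidth (load balancing).
        for (Map.Entry<String, Integer> e : nodeBandwidth.entrySet()) {
            if (packetSize <= e.getValue()) {
                return e.getKey();
            }
        }
        // No node can carry the packet: increase the default node's bandwidth.
        nodeBandwidth.put(defaultNode, packetSize);
        return defaultNode;
    }

    public static void main(String[] args) {
        System.out.println(routePacket(80, "n2")); // n2 is congested, another node is chosen
    }
}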
RECEIVER

 In this module, the receiver receives the file, calculates the time
delay for the file to travel from source to destination, and stores the
data details.

SOFTWARE ENVIRONMENT

Java Technology

Java technology is both a programming language and a platform.

The Java Programming Language


The Java programming language is a high-level language that can be
characterized by all of the following buzzwords:

 Simple
 Architecture neutral
 Object oriented
 Portable
 Distributed
 High performance
 Interpreted
 Multithreaded
 Robust
 Dynamic
 Secure

With most programming languages, you either compile or interpret a


program so that you can run it on your computer. The Java programming
language is unusual in that a program is both compiled and interpreted. With
the compiler, first you translate a program into an intermediate language called
Java byte codes —the platform-independent codes interpreted by the
interpreter on the Java platform. The interpreter parses and runs each Java byte
code instruction on the computer. Compilation happens just once;
interpretation occurs each time the program is executed. The following figure
illustrates how this works.

You can think of Java byte codes as the machine code instructions for
the Java Virtual Machine (Java VM). Every Java interpreter, whether it’s a
development tool or a Web browser that can run applets, is an implementation
of the Java VM. Java byte codes help make “write once, run anywhere”
possible. You can compile your program into byte codes on any platform that
has a Java compiler. The byte codes can then be run on any implementation of
the Java VM. That means that as long as a computer has a Java VM, the same
program written in the Java programming language can run on Windows 2000,
a Solaris workstation, or on an iMac.
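
For example (a minimal illustration; the file name is a placeholder):

javac HelloWorld.java   # compile the source into HelloWorld.class (byte codes)
java HelloWorld         # the Java VM interprets and runs the byte codes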
The Java Platform
A platform is the hardware or software environment in which a
program runs. We’ve already mentioned some of the most popular
platforms like Windows 2000, Linux, Solaris, and MacOS. Most
platforms can be described as a combination of the operating system
and hardware. The Java platform differs from most other platforms in
that it’s a software-only platform that runs on top of other hardware-
based platforms.
The Java platform has two components:
 The Java Virtual Machine (Java VM)
 The Java Application Programming Interface (Java API)
You’ve already been introduced to the Java VM. It’s the base for
the Java platform and is ported onto various hardware-based platforms.
The Java API is a large collection of ready-made software
components that provide many useful capabilities, such as graphical
user interface (GUI) widgets. The Java API is grouped into libraries of
related classes and interfaces; these libraries are known as packages.
The next section, What Can Java Technology Do?, highlights what
functionality some of the packages in the Java API provide.
The following figure depicts a program that’s running on the Java
platform. As the figure shows, the Java API and the virtual machine
insulate the program from the hardware.

Native code is code that, once compiled, runs on a specific
hardware platform. As a platform-independent
environment, the Java platform can be a bit slower than native code.
However, smart compilers, well-tuned interpreters, and just-in-time
byte code compilers can bring performance close to that of native code
without threatening portability.
What Can Java Technology Do?
The most common types of programs written in the Java programming
language are applets and applications. If you’ve surfed the Web,
you’re probably already familiar with applets. An applet is a program
that adheres to certain conventions that allow it to run within a Java-
enabled browser.
However, the Java programming language is not just for writing cute,
entertaining applets for the Web. The general-purpose, high-level Java
programming language is also a powerful software platform. Using the
generous API, you can write many types of programs.
An application is a standalone program that runs directly on the Java
platform. A special kind of application known as a server serves and
supports clients on a network. Examples of servers are Web servers,
proxy servers, mail servers, and print servers. Another specialized
program is a servlet. A servlet can almost be thought of as an applet
that runs on the server side. Java Servlets are a popular choice for
building interactive web applications, replacing the use of CGI scripts.
Servlets are similar to applets in that they are runtime extensions of
applications. Instead of working in browsers, though, servlets run
within Java Web servers, configuring or tailoring the server.
How does the API support all these kinds of programs? It does so with
packages of software components that provide a wide range of
functionality. Every full implementation of the Java platform gives you
the following features:
 The essentials: Objects, strings, threads, numbers, input and
output, data structures, system properties, date and time, and so
on.
 Applets: The set of conventions used by applets.
 Networking: URLs, TCP (Transmission Control Protocol),
UDP (User Datagram Protocol) sockets, and IP (Internet
Protocol) addresses.
 Internationalization: Help for writing programs that can be
localized for users worldwide. Programs can automatically
adapt to specific locales and be displayed in the appropriate
language.
 Security: Both low level and high level, including electronic
signatures, public and private key management, access control,
and certificates.
 Software components: Known as JavaBeansTM, these can plug into
existing component architectures.
 Object serialization: Allows lightweight persistence and
communication via Remote Method Invocation (RMI).
 Java Database Connectivity (JDBCTM): Provides uniform
access to a wide range of relational databases.
The Java platform also has APIs for 2D and 3D graphics, accessibility,
servers, collaboration, telephony, speech, animation, and more. The
following figure depicts what is included in the Java 2 SDK.

How Will Java Technology Change My Life?

We can’t promise you fame, fortune, or even a job if you learn the
Java programming language. Still, it is likely to make your programs
better and to require less effort than other languages. We believe that
Java technology will help you do the following:

 Get started quickly: Although the Java programming language


is a powerful object-oriented language, it’s easy to learn,
especially for programmers already familiar with C or C++.
 Write less code: Comparisons of program metrics (class
counts, method counts, and so on) suggest that a program
written in the Java programming language can be four times
smaller than the same program in C++.
 Write better code: The Java programming language
encourages good coding practices, and its garbage collection
helps you avoid memory leaks. Its object orientation, its
JavaBeans component architecture, and its wide-ranging, easily
extendible API let you reuse other people’s tested code and
introduce fewer bugs.
 Develop programs more quickly: Your development time
may be as much as twice as fast versus writing the same
program in C++. Why? You write fewer lines of code and it is
a simpler programming language than C++.
 Avoid platform dependencies with 100% Pure Java: You
can keep your program portable by avoiding the use of libraries
written in other languages. The 100% Pure Java TM Product
Certification Program has a repository of historical process
manuals, white papers, brochures, and similar materials online.
 Write once, run anywhere: Because 100% Pure Java
programs are compiled into machine-independent byte codes,
they run consistently on any Java platform.
 Distribute software more easily: You can upgrade applets
easily from a central server. Applets take advantage of the
feature of allowing new classes to be loaded “on the fly,”
without recompiling the entire program.
ODBC
Microsoft Open Database Connectivity (ODBC) is a standard
programming interface for application developers and database systems
providers. Before ODBC became a de facto standard for Windows programs
to interface with database systems, programmers had to use proprietary
languages for each database they wanted to connect to. Now, ODBC has made
the choice of the database system almost irrelevant from a coding perspective,
which is as it should be. Application developers have much more important
things to worry about than the syntax that is needed to port their program from
one database to another when business needs suddenly change.
Through the ODBC Administrator in Control Panel, you can specify
the particular database that is associated with a data source that an ODBC
application program is written to use. Think of an ODBC data source as a door
with a name on it. Each door will lead you to a particular database. For
example, the data source named Sales Figures might be a SQL Server
database, whereas the Accounts Payable data source could refer to an Access
database. The physical database referred to by a data source can reside
anywhere on the LAN.
The ODBC system files are not installed on your system by Windows
95. Rather, they are installed when you setup a separate database application,
such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is
installed in Control Panel, it uses a file called ODBCINST.DLL. It is also
possible to administer your ODBC data sources through a stand-alone program
called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this
program and each maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the


application can be written to use the same set of function calls to interface
with any data source, regardless of the database vendor. The source code of
the application doesn’t change whether it talks to Oracle or SQL Server. We
only mention these two as an example. There are ODBC drivers available for
several dozen popular database systems. Even Excel spreadsheets and plain
text files can be turned into data sources. The operating system uses the
Registry information written by ODBC Administrator to determine which
low-level ODBC drivers are needed to talk to the data source (such as the
interface to Oracle or SQL Server). The loading of the ODBC drivers is
transparent to the ODBC application program. In a client/server environment,
the ODBC API even handles many of the network issues for the application
programmer.
The advantages of this scheme are so numerous that you are probably
thinking there must be some catch. The only disadvantage of ODBC is that it
isn’t as efficient as talking directly to the native database interface. ODBC has
had many detractors make the charge that it is too slow. Microsoft has always
claimed that the critical factor in performance is the quality of the driver
software that is used. In our humble opinion, this is true. The availability of
good ODBC drivers has improved a great deal recently. And anyway, the
criticism about performance is somewhat analogous to those who said that
compilers would never match the speed of pure assembly language. Maybe
not, but the compiler (or ODBC) gives you the opportunity to write cleaner
programs, which means you finish sooner. Meanwhile, computers get faster
every year.

JDBC
In an effort to set an independent database standard API for Java, Sun
Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a
generic SQL database access mechanism that provides a consistent interface to
a variety of RDBMSs. This consistent interface is achieved through the use of
“plug-in” database connectivity modules, or drivers. If a database vendor
wishes to have JDBC support, he or she must provide the driver for each
platform that the database and Java run on.
To gain a wider acceptance of JDBC, Sun based JDBC’s framework on
ODBC. As you discovered earlier in this chapter, ODBC has widespread
support on a variety of platforms. Basing JDBC on ODBC will allow vendors
to bring JDBC drivers to market much faster than developing a completely
new connectivity solution.
JDBC was announced in March of 1996. It was released for a 90 day
public review that ended June 8, 1996. Because of user input, the final JDBC
v1.0 specification was released soon after.
The remainder of this section will cover enough information about JDBC for
you to know what it is about and how to use it effectively. This is by no means
a complete overview of JDBC. That would fill an entire book.
JDBC Goals
Few software packages are designed without goals in mind, and JDBC is
no exception: its many goals drove the development of the API. These
goals, in conjunction with early reviewer feedback, have finalized the JDBC
class library into a solid framework for building database applications in Java.
The goals that were set for JDBC are important. They will give you some
insight as to why certain classes and functionalities behave the way they do.
The eight design goals for JDBC are as follows:

1. SQL Level API


The designers felt that their main goal was to define a SQL interface
for Java. Although not the lowest database interface level possible, it is at
a low enough level for higher-level tools and APIs to be created.
Conversely, it is at a high enough level for application programmers to use
it confidently. Attaining this goal allows for future tool vendors to
“generate” JDBC code and to hide many of JDBC’s complexities from the
end user.
2. SQL Conformance
SQL syntax varies as you move from database vendor to database
vendor. In an effort to support a wide variety of vendors, JDBC will allow
any query statement to be passed through it to the underlying database
driver. This allows the connectivity module to handle non-standard
functionality in a manner that is suitable for its users.
3. JDBC must be implementable on top of common database interfaces
The JDBC SQL API must “sit” on top of other common SQL level
APIs. This goal allows JDBC to use existing ODBC level drivers by
the use of a software interface. This interface would translate JDBC
calls to ODBC and vice versa.
4. Provide a Java interface that is consistent with the rest of the Java
system
Because of Java’s acceptance in the user community thus far, the
designers feel that they should not stray from the current design of the core
Java system.
5. Keep it simple
This goal probably appears in all software design goal listings. JDBC
is no exception. Sun felt that the design of JDBC should be very simple,
allowing for only one method of completing a task per mechanism.
Allowing duplicate functionality only serves to confuse the users of the
API.
6. Use strong, static typing wherever possible
Strong typing allows for more error checking to be done at compile
time; also, fewer errors appear at runtime.
7. Keep the common cases simple
Because more often than not, the usual SQL calls used by the
programmer are simple SELECT’s, INSERT’s, DELETE’s and
UPDATE’s, these queries should be simple to perform with JDBC.
However, more complex SQL statements should also be possible.
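
To ground these goals, a typical JDBC interaction looks like the following (a minimal sketch; the connection URL, credentials, and table name are illustrative assumptions for a MySQL database):

import java.sql.*;

public class JdbcDemo {
    public static void main(String[] args) throws SQLException {
        // Hypothetical MySQL database; the driver JAR must be on the classpath.
        String url = "jdbc:mysql://localhost:3306/testdb";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM employees WHERE id = ?")) {
            ps.setInt(1, 42); // strongly typed parameter binding
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}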

Networking

TCP/IP stack
The TCP/IP stack is shorter than the OSI one.

TCP is a connection-oriented protocol; UDP (User Datagram


Protocol) is a connectionless protocol.
IP datagrams
The IP layer provides a connectionless and unreliable delivery
system. It considers each datagram independently of the others. Any
association between datagrams must be supplied by the higher layers.
The IP layer supplies a checksum that includes its own header. The
header includes the source and destination addresses. The IP layer
handles routing through an Internet. It is also responsible for
breaking up large datagrams into smaller ones for transmission and
reassembling them at the other end.
UDP
UDP is also connectionless and unreliable. What it adds to IP is a
checksum for the contents of the datagram and port numbers. These
are used to give a client/server model - see later.

TCP
TCP supplies logic to give a reliable connection-oriented
protocol above IP. It provides a virtual circuit that two processes can
use to communicate.
Internet addresses
In order to use a service, you must be able to find it. The Internet
uses an address scheme for machines so that they can be located. The
address is a 32 bit integer which gives the IP address. This encodes a
network ID and more addressing. The network ID falls into various
classes according to the size of the network address.
Network address
Class A uses 8 bits for the network address with 24 bits left over
for other addressing. Class B uses 16 bit network addressing. Class C
uses 24 bit network addressing and class D uses all 32.
Subnet address
Internally, the UNIX network is divided into sub networks.
Building 11 is currently on one sub network and uses 10-bit
addressing, allowing 1024 different hosts.
Host address
8 bits are finally used for host addresses within our subnet. This
places a limit of 256 machines that can be on the subnet.

Total address

The 32 bit address is usually written as 4 integers separated by


dots.

Port addresses
A service exists on a host, and is identified by its port. This is a
16 bit number. To send a message to a server, you send it to the port
for that service of the host that it is running on. This is not location
transparency! Certain of these ports are "well known".

Sockets
A socket is a data structure maintained by the system to handle
network connections. A socket is created using the call socket. It
returns an integer that is like a file descriptor. In fact, under
Windows, this handle can be used with the ReadFile and WriteFile
functions.
#include <sys/types.h>
#include <sys/socket.h>

/* Creates an unbound socket of the given family, type, and protocol,
   returning a descriptor on success or -1 on error. */
int socket(int family, int type, int protocol);
Here "family" will be AF_INET for IP communications, protocol
will be zero, and type will depend on whether TCP or UDP is used.
Two processes wishing to communicate over a network create a
socket each. These are similar to two ends of a pipe - but the actual
pipe does not yet exist.
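
As an illustration in this project's platform language (a minimal sketch; the port number is an arbitrary choice), the server end of a TCP connection can be created in Java as follows:

import java.io.*;
import java.net.*;

public class EchoServer {
    public static void main(String[] args) throws IOException {
        // Bind a listening socket to port 9090 (arbitrary, unprivileged port).
        try (ServerSocket server = new ServerSocket(9090)) {
            // Block until a client connects, then echo one line back.
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                out.println("echo: " + in.readLine());
            }
        }
    }
}

A client would connect with new Socket("localhost", 9090) and read the reply from the socket's input stream.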
JFree Chart
JFreeChart is a free 100% Java chart library that makes it easy
for developers to display professional quality charts in their
applications. JFreeChart's extensive feature set includes:
A consistent and well-documented API, supporting a wide
range of chart types;
A flexible design that is easy to extend, and targets both server-
side and client-side applications;
Support for many output types, including Swing components,
image files (including PNG and JPEG), and vector graphics file
formats (including PDF, EPS and SVG);
JFreeChart is "open source" or, more specifically, free
software. It is distributed under the terms of the GNU Lesser General
Public Licence (LGPL), which permits use in proprietary applications.
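
As a small illustration (a minimal sketch assuming the JFreeChart library is on the classpath; the dataset values are placeholders):

import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartFrame;
import org.jfree.chart.JFreeChart;
import org.jfree.data.general.DefaultPieDataset;

public class PieDemo {
    public static void main(String[] args) {
        // Build a simple dataset of category/value pairs.
        DefaultPieDataset dataset = new DefaultPieDataset();
        dataset.setValue("Delivered", 75);
        dataset.setValue("Lost", 25);
        // Create the chart and display it in a Swing frame.
        JFreeChart chart = ChartFactory.createPieChart(
                "Packet Delivery", dataset, true, true, false);
        ChartFrame frame = new ChartFrame("JFreeChart Demo", chart);
        frame.pack();
        frame.setVisible(true);
    }
}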
1. Map Visualizations
Charts showing values that relate to geographical areas. Some
examples include: (a) population density in each state of the United
States, (b) income per capita for each country in Europe, (c) life
expectancy in each country of the world. The tasks in this project
include:
Sourcing freely redistributable vector outlines for the countries
of the world, states/provinces in particular countries (USA in
particular, but also other areas);
Creating an appropriate dataset interface (plus default
implementation), a renderer, and integrating this with the existing
XYPlot class in JFreeChart;
Testing, documenting, testing some more, documenting some
more.

2. Time Series Chart Interactivity


Implement a new (to JFreeChart) feature for interactive time series charts
--- to display a separate control that shows a small version of ALL the time
series data, with a sliding "view" rectangle that allows you to select the
subset of the time series data to display in the main chart.
3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible


dashboard mechanism that supports a subset of JFreeChart chart types
(dials, pies, thermometers, bars, and lines/time series) that can be delivered
easily via both Java Web Start and an applet.

4. Property Editors

The property editor mechanism in JFreeChart only handles a small


subset of the properties that can be set for charts. Extend (or
reimplement) this mechanism to provide greater end-user control over
the appearance of the charts.

What is a Java Web Application?


A Java web application generates interactive web pages containing various
types of markup language (HTML, XML, and so on) and dynamic content. It
is typically comprised of web components such as JavaServer Pages (JSP),
servlets and JavaBeans to modify and temporarily store data, interact with
databases and web services, and render content in response to client requests.
Because many of the tasks involved in web application development can be
repetitive or require a surplus of boilerplate code, web frameworks can be
applied to alleviate the overhead associated with common activities. For
example, many frameworks, such as JavaServer Faces, provide libraries for
templating pages and session management, and often promote code reuse.

What is Java EE?


Java EE (Enterprise Edition) is a widely used platform containing a set of
coordinated technologies that significantly reduce the cost and complexity of
developing, deploying, and managing multi-tier, server-centric applications.
Java EE builds upon the Java SE platform and provides a set of APIs
(application programming interfaces) for developing and running portable,
robust, scalable, reliable and secure server-side applications.
Some of the fundamental components of Java EE include:
 Enterprise JavaBeans (EJB): a managed, server-side component
architecture used to encapsulate the business logic of an application.
EJB technology enables rapid and simplified development of
distributed, transactional, secure and portable applications based on
Java technology.
 Java Persistence API (JPA): a framework that allows developers to
manage data using object-relational mapping (ORM) in applications
built on the Java Platform.

JavaScript and Ajax Development


JavaScript is an object-oriented scripting language primarily used in client-
side interfaces for web applications. Ajax (Asynchronous JavaScript and
XML) is a Web 2.0 technique that allows changes to occur in a web page
without the need to perform a page refresh. JavaScript toolkits can be
leveraged to implement Ajax-enabled components and functionality in web
pages.

Web Server and Client


A web server is software that can process a client request and send the
response back to the client. For example, Apache is one of the most widely
used web servers. A web server runs on some physical machine and listens for
client requests on a specific port.
A web client is software that helps in communicating with the server. Some
of the most widely used web clients are Firefox, Google Chrome, Safari, etc.
When we request something from a server (through a URL), the web client takes
care of creating the request, sending it to the server, and then parsing the
server response and presenting it to the user.

HTML and HTTP


The web server and web client are two separate pieces of software, so there
should be some common language for communication. HTML, which stands for
HyperText Markup Language, is the common language between server and client.
The web server and client also need a common communication protocol; HTTP
(HyperText Transfer Protocol) is the communication protocol between server
and client. HTTP runs on top of the TCP/IP communication protocol.
Some of the important parts of HTTP Request are:
 HTTP Method – action to be performed, usually GET, POST, PUT
etc.
 URL – Page to access
 Form Parameters – similar to arguments in a Java method, for
example the user and password details from a login page.
Sample HTTP Request:
GET /FirstServletProject/jsps/hello.jsp HTTP/1.1
Host: localhost:8080
Cache-Control: no-cache
Some of the important parts of HTTP Response are:
 Status Code – an integer indicating whether the request was successful
or not. Some of the well-known status codes are 200 for success, 404
for Not Found, and 403 for Access Forbidden.
 Content Type – text, html, image, pdf etc. Also known as MIME type
 Content – actual data that is rendered by client and shown to user.
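
Sample HTTP response, in the same style as the request above (illustrative values only):

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 40

<html><body><h1>Hello</h1></body></html>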

MIME Type or Content Type: If you see above sample HTTP response
header, it contains tag “Content-Type”. It’s also called MIME type and server
sends it to client to let them know the kind of data it’s sending. It helps client
in rendering the data for user. Some of the mostly used mime types are
text/html, text/xml, application/xml etc.

Understanding URL
URL is an acronym for Uniform Resource Locator, and it is used to locate the
server and resource. Every resource on the web has its own unique address.
Let's see the parts of a URL with an example.
http://localhost:8080/FirstServletProject/jsps/hello.jsp
http:// – This is the first part of URL and provides the communication
protocol to be used in server-client communication.

localhost – The unique address of the server; most of the time it is the
hostname of the server, which maps to a unique IP address. Sometimes multiple
hostnames point to the same IP address, and the web server's virtual host
configuration takes care of sending the request to the particular server instance.

8080 – This is the port on which the server is listening; it is optional, and
if we don't provide it in the URL then the request goes to the default port of
the protocol. Port numbers 0 to 1023 are reserved for well-known services, for
example 80 for HTTP, 443 for HTTPS, 21 for FTP, etc.

FirstServletProject/jsps/hello.jsp – Resource requested from server. It can be


static html, pdf, JSP, servlets, PHP etc.

Why we need Servlet and JSPs?


Web servers are good at serving static content such as HTML pages, but they
don't know how to generate dynamic content or how to save data into databases,
so we need another tool that we can use to generate dynamic content. There are
several programming technologies for dynamic content, such as PHP, Python,
Ruby on Rails, Java Servlets, and JSPs.
Java Servlets and JSPs are server-side technologies that extend the capability
of web servers by providing support for dynamic responses and data persistence.
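
For illustration, a minimal servlet might look like the following (a sketch assuming the Servlet API is on the classpath; the class name and the "name" parameter are placeholders):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

// Responds to GET requests with a small dynamically generated HTML page.
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/html");
        // Read an optional form parameter, with a default if absent.
        String name = req.getParameter("name");
        if (name == null) name = "world";
        resp.getWriter().println("<html><body>Hello, " + name + "</body></html>");
    }
}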

Web Container
Tomcat is a web container. When a request is made from a client to the web
server, the server passes the request to the web container, and it is the web
container's job to find the correct resource to handle the request (a servlet
or JSP), then use the response from that resource to generate the response and
provide it to the web server. The web server then sends the response back to
the client.
When the web container gets a request for a servlet, the container creates two
objects: HttpServletRequest and HttpServletResponse. Then it finds the correct
servlet based on the URL and creates a thread for the request. It then invokes
the servlet's service() method, and based on the HTTP method, service()
invokes the doGet() or doPost() method. The servlet methods generate the
dynamic page and write it to the response. Once the servlet thread is
complete, the container converts the response to an HTTP response and sends it
back to the client.
Some of the important work done by web container are:
 Communication Support – Container provides easy way of
communication between web server and the servlets and JSPs. Because
of container, we don’t need to build a server socket to listen for any
request from web server, parse the request and generate response. All
these important and complex tasks are done by container and all we
need to focus is on our business logic for our applications.
 Lifecycle and Resource Management – Container takes care of
managing the life cycle of servlet. Container takes care of loading the
servlets into memory, initializing servlets, invoking servlet methods
and destroying them. Container also provides utility like JNDI for
resource pooling and management.
 Multithreading Support – Container creates new thread for every
request to the servlet and when it’s processed the thread dies. So
servlets are not initialized for each request and saves time and memory.
 JSP Support – JSPs doesn’t look like normal java classes and web
container provides support for JSP. Every JSP in the application is
compiled by container and converted to Servlet and then container
manages them like other servlets.
 Miscellaneous Task – Web container manages the resource pool, does
memory optimizations, run garbage collector, provides security
configurations, support for multiple applications, hot deployment and
several other tasks behind the scene that makes our life easier.

Web Application Directory Structure


Java web applications are packaged as a Web Archive (WAR), which has a
defined structure. You can export a dynamic web project as a WAR file and
unzip it to check the hierarchy.
Deployment Descriptor
The web.xml file is the deployment descriptor of the web application; it
contains mappings for servlets (prior to Servlet 3.0), welcome pages,
security configurations, session timeout settings, and so on.
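
For example, a minimal descriptor mapping the hypothetical HelloServlet shown earlier might look like this (a sketch, not the project's actual web.xml):

<web-app>
  <!-- Declare the servlet class under a logical name. -->
  <servlet>
    <servlet-name>hello</servlet-name>
    <servlet-class>HelloServlet</servlet-class>
  </servlet>
  <!-- Map the logical name to a URL pattern. -->
  <servlet-mapping>
    <servlet-name>hello</servlet-name>
    <url-pattern>/hello</url-pattern>
  </servlet-mapping>
</web-app>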
That's all for the Java web application startup tutorial; we will explore
servlets and JSPs more in future posts.

MySQL:

MySQL, the most popular Open Source SQL database management system, is
developed, distributed, and supported by Oracle Corporation.
The MySQL Web site (http://www.mysql.com/) provides the latest
information about MySQL software.

 MySQL is a database management system.


A database is a structured collection of data. It may be anything from a
simple shopping list to a picture gallery or the vast amounts of
information in a corporate network. To add, access, and process data
stored in a computer database, you need a database management
system such as MySQL Server. Since computers are very good at
handling large amounts of data, database management systems play a
central role in computing, as standalone utilities, or as parts of other
applications.
 MySQL databases are relational.

A relational database stores data in separate tables rather than putting


all the data in one big storeroom. The database structures are organized
into physical files optimized for speed. The logical model, with objects
such as databases, tables, views, rows, and columns, offers a flexible
programming environment. You set up rules governing the
relationships between different data fields, such as one-to-one, one-to-
many, unique, required or optional, and “pointers” between different
tables. The database enforces these rules, so that with a well-designed
database, your application never sees inconsistent, duplicate, orphan,
out-of-date, or missing data.
The SQL part of “MySQL” stands for “Structured Query Language”.
SQL is the most common standardized language used to access
databases. Depending on your programming environment, you might
enter SQL directly (for example, to generate reports), embed SQL
statements into code written in another language, or use a language-
specific API that hides the SQL syntax.
SQL is defined by the ANSI/ISO SQL Standard. The SQL standard
has been evolving since 1986 and several versions exist. In this
manual, “SQL-92” refers to the standard released in 1992, “SQL:1999”
refers to the standard released in 1999, and “SQL:2003” refers to the
current version of the standard. We use the phrase “the SQL standard”
to mean the current version of the SQL Standard at any time.

 MySQL software is Open Source.


Open Source means that it is possible for anyone to use and modify the
software. Anybody can download the MySQL software from the
Internet and use it without paying anything. If you wish, you may
study the source code and change it to suit your needs. The MySQL
software uses the GPL (GNU General Public License),
http://www.fsf.org/licenses/, to define what you may and may not do
with the software in different situations. If you feel uncomfortable with
the GPL or need to embed MySQL code into a commercial application,
you can buy a commercially licensed version from us. See the MySQL
Licensing Overview for more information
(http://www.mysql.com/company/legal/licensing/).

 The MySQL Database Server is very fast, reliable, scalable, and easy to use.

If that is what you are looking for, you should give it a try. MySQL Server can run comfortably on a desktop or laptop, alongside your other applications, web servers, and so on, requiring little or no attention. If you dedicate an entire machine to MySQL, you can adjust the settings to take advantage of all the memory, CPU power, and I/O capacity available. MySQL can also scale up to clusters of machines, networked together. A performance comparison of MySQL Server with other database managers is available on the MySQL benchmark page.

MySQL Server was originally developed to handle large databases much faster than existing solutions and has been successfully used in highly demanding production environments for several years. Although under constant development, MySQL Server today offers a rich and useful set of functions. Its connectivity, speed, and security make MySQL Server highly suited for accessing databases on the Internet.

 MySQL Server works in client/server or embedded systems.

The MySQL Database Software is a client/server system that consists of a multi-threaded SQL server that supports different backends, several different client programs and libraries, administrative tools, and a wide range of application programming interfaces (APIs). MySQL Server is also provided as an embedded multi-threaded library that you can link into your application to get a smaller, faster, easier-to-manage standalone product.
 A large amount of contributed MySQL software is available.

MySQL Server has a practical set of features developed in close cooperation with its users. It is very likely that your favorite application or language supports the MySQL Database Server.

The official way to pronounce “MySQL” is “My Ess Que Ell” (not “my
sequel”), but we do not mind if you pronounce it as “my sequel” or in some
other localized way.

SAMPLE CODE
CHAPTER-5

TESTING

SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each test type addresses a specific testing requirement.

TYPES OF TESTS

Unit testing

Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and it is done after the completion of an individual unit before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and exercise a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results; a hedged example is sketched below.
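As a minimal sketch of a unit test, the following JUnit 4 test exercises a hypothetical PathUtils.isValidNodeId() helper; both the class and the method are assumptions for illustration, not part of the actual project code.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Hypothetical unit under test: a small validation helper.
class PathUtils {
    static boolean isValidNodeId(String id) {
        // A node id is assumed to be a non-empty string of digits.
        return id != null && !id.isEmpty() && id.chars().allMatch(Character::isDigit);
    }
}

public class PathUtilsTest {
    @Test
    public void acceptsNumericId() {
        assertTrue(PathUtils.isValidNodeId("42"));
    }

    @Test
    public void rejectsEmptyAndNonNumericIds() {
        assertFalse(PathUtils.isValidNodeId(""));
        assertFalse(PathUtils.isValidNodeId("4a"));
        assertFalse(PathUtils.isValidNodeId(null));
    }
}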

Integration testing

Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event-driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional test

Functional tests provide systematic demonstrations that the functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input: identified classes of valid input must be accepted.
Invalid Input: identified classes of invalid input must be rejected.
Functions: identified functions must be exercised.
Output: identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of the current tests is determined.

System Test

System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.
White Box Testing

White Box Testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

Black Box Testing

Black Box Testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.

5.1 Unit Testing

Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.

Test strategy and approach

Field testing will be performed manually, and functional tests will be written in detail.

Test objectives
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages, and responses must not be delayed.

Features to be tested
 Verify that the entries are of the correct format (see the sketch after this list).
 No duplicate entries should be allowed.
 All links should take the user to the correct page.
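To make the first two features concrete, here is a minimal, hedged JUnit 4 sketch; EntryRegistry and its date format are illustrative assumptions, not actual project classes.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;
import java.util.HashSet;
import java.util.Set;

// Hypothetical helper that enforces an entry format and rejects duplicates.
class EntryRegistry {
    private final Set<String> entries = new HashSet<>();

    // Entries are assumed to follow a simple YYYY-MM-DD date format.
    boolean add(String entry) {
        if (entry == null || !entry.matches("\\d{4}-\\d{2}-\\d{2}")) {
            return false; // wrong format
        }
        return entries.add(entry); // false if the entry is a duplicate
    }
}

public class EntryRegistryTest {
    @Test
    public void acceptsWellFormedEntry() {
        assertTrue(new EntryRegistry().add("2015-03-01"));
    }

    @Test
    public void rejectsMalformedAndDuplicateEntries() {
        EntryRegistry registry = new EntryRegistry();
        assertFalse(registry.add("03/01/2015"));   // wrong format
        assertTrue(registry.add("2015-03-01"));
        assertFalse(registry.add("2015-03-01"));   // duplicate
    }
}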

5.2 Integration Testing

Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.

The task of the integration test is to check that components or software applications, e.g., components in a software system or, one step up, software applications at the company level, interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects were encountered.

5.3 Acceptance Testing

User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects were encountered.

CHAPTER-6

RESULTS
SCREEN SHOTS:
CONCLUSION

In this project, we propose iPath, a novel path inference approach to reconstructing the routing path for each received packet. iPath exploits path similarity and uses the iterative boosting algorithm to reconstruct routing paths effectively. Furthermore, the fast bootstrapping algorithm provides an initial set of paths for the iterative algorithm. We formally analyze the reconstruction performance of iPath as well as two related approaches. The analysis shows that iPath achieves a higher reconstruction ratio as the network setting varies. We also implement iPath and evaluate its performance through a trace-driven study and extensive simulations. Compared to the state of the art, iPath achieves a much higher reconstruction ratio under different network settings.