
1.1 Motivation
The motivation for undertaking this project was primarily an interest in taking on a challenging project in an interesting area of research, Information Security. The opportunity to learn about a new area of computing not covered in lectures was also appealing.

1.2 Problem definition

Group key management for secure peer-cluster communication protocols collaboratively generates a key for peer-to-peer group communication using the Queue-batch algorithm. This mainly avoids the need for a centralized server. The dynamic nature of the scheme allows existing members to leave while new members join, without re-keying after every individual event. The group key is used for future communication among the members of the group.

1.3 Objective of Project

The main objective of the project is to collaboratively generate a common key for peer-to-peer group communication, to dynamically perform the re-keying operation after a batch of joins or leaves using the Queue-batch algorithm, and to share resources using the generated group key.

1.4 Limitations of Project

• The system does not implement fault tolerance, the property that enables a system (often computer-based) to continue operating properly in the event of the failure of (or one or more faults within) some of its components.

1.5 Organization of Document:

The organization of the document provides a basic report of the stages encountered during project development, in the form of chapters.

Chapter 1:

Chapter 1 provides the introduction to the project. The basic idea of the project is presented and the need for the project is clearly defined, including the algorithms required for developing the product. The drawbacks of the project are also mentioned in order to give prior information to the user, since the manual provides detailed information to the user. Group key management allows different users to access an account with a public key and, for security purposes, each user also has a private key in order to access the account individually. Group key management mainly generates private and public keys for the users, which protects the information from attackers.

Chapter 2:

Chapter 2 deals with the literature survey, which describes the need for developing the project by comparing it with the existing system, and outlines the steps taken by the proposed system to overcome the existing system. For this project, it mainly describes the concept of re-keying in the proposed system. Re-keying is done using the Queue-batch algorithm: the key is generated after a batch of join and leave operations, which reduces resource utilization to some extent.

Chapter 3:

Chapter 3 is the real start of the project, the Analysis phase; the previous two chapters are theoretical. Analysis mainly involves estimating the time needed to complete the project over defined periods, and also specifies the software and hardware requirements that are used for developing the project in the later phases.

Chapter 4:

Chapter 4 describes the design phase of the project. The whole process is represented in pictorial form using Data Flow Diagrams, Entity-Relationship diagrams and the Unified Modeling Language. These diagrams give the user an easy overview of the project, which is why every project manual provides a pictorial presentation of the whole project.

Chapter 5:

Chapter 5 presents the Implementation and Results. Implementation refers to the development of the code for the project with the selected software and hardware requirements. The code is produced by dividing the project into the required modules, distributing the modules within the group, and generating the code within a certain time (extending the schedule if necessary). The results specify the output of the generated code. The manual does not show the code itself, but provides the output screens and their functionalities.

Chapter 6:

Chapter 6 covers Testing and Validation. Testing refers to the testing methodologies used to make the code efficient and free from errors and bugs. Validation refers to assessing how well the project will hold up in the new era of upcoming products and projects; in simple terms, planning the lifespan of the project.

Chapter 7:

Chapter 7 is the conclusion, which provides an overview of the project's functionalities, features and validation.

Chapter 8:

Chapter 8 is the last chapter, the Bibliography, which consists of additional reference sites and books.
2.1 Introduction

The Group Key Management Protocol collaboratively generates common keys for peer-to-peer communication using the Queue-batch algorithm.

2.2 Existing System

The existing system involves either a centralized key server, or individual re-keying for each join or leave operation in the case of distributed key generation algorithms. With individual re-keying, each member re-keys after every join or leave operation, so more resources are used because re-keying is done for every such event. With a centralized server, the risk of a single point of failure is higher.
2.3 Disadvantages of Existing System

• Key information depends on a centralized key server.
• Computational and communication cost is high.
• Individual re-keying is done whenever a member joins or leaves in the case of distributed key generation algorithms.
• More resources are used for re-keying because it is done for each join or leave operation.
2.4 Proposed System

The proposed system involves collaborative key agreement in which all nodes take part in forming the secure group key. Moreover, re-keying is done after a batch of join or leave operations. The protocol remains efficient even when join/leave events occur very frequently. Key information does not depend on a centralized key server, so the system is free from the problem of a single point of failure. Computational and communication costs are lower, and the resources used for re-keying are minimized because it is done for a batch of join/leave operations.
2.5 Conclusion

It can be concluded that the proposed system is an upgraded version of the existing system: it is capable of re-keying after a batch of joins and leaves and also collaboratively generates common keys for peer-to-peer communication within a group.

3.1 Introduction

System Analysis is the first stage of the System Development Life Cycle model. It is a process that starts with the analyst. Analysis is a detailed study of the various operations performed by a system and their relationships within and outside the system. One aspect of analysis is defining the boundaries of the system and determining whether or not a candidate system should consider other related systems. During analysis, data is collected from the available files, decision points, and transactions handled by the present system.

3.2 System Requirements Specification

Software Requirement Specification

Definition of SRS:

The Software Requirement Specification is the starting point of the software development activity. As systems grew more complex, it became evident that the goals of the entire system could not be easily comprehended; hence the need for a requirements phase arose. A software project is initiated by the client's needs. The SRS is the means of translating the ideas in the minds of the clients into a formal document.

Requirement Analysis

This stage aims to obtain a clear picture of the needs and requirements of the end user and of the organization. Analysis involves interaction between the clients and the analysts. Analysts usually research a problem by asking questions and reading existing documents. The analysts have to uncover the real needs of the user even if the users themselves do not know them clearly. During analysis it is essential that a complete and consistent set of specifications emerges for the system.
Each requirement analysis method has a unique point of view. However, all analysis methods are related by a set of operational principles:
• The information domain of the problem must be represented and understood.
• The functions that the software is to perform must be defined.
• The behavior of the software as a consequence of external events must be defined.
• The models that depict information, function and behavior must be partitioned in a hierarchical or layered fashion.
• The analysis process must move from essential information to implementation detail.

Requirement Specification:
Specification Principles:

The Software Requirements Specification plays an important role in creating quality software solutions. Specification is basically a representation process: requirements are represented in a manner that ultimately leads to successful software implementation.
Requirements may be specified in a variety of ways. However, there are some guidelines worth following:
• Representation format and content should be relevant to the
problem
• Information contained within the specification should be nested
• Diagrams and other notational forms should be restricted in
number and consistent in use.
• Representations should be revisable.

Software Requirement Specification:

The software requirements specification is produced at the culmination of the analysis task. The function and performance allocated to the software as a part of system engineering are refined by establishing a complete information description, a detailed functional and behavioral description, an indication of performance requirements and design constraints, appropriate validation criteria, and other data pertinent to requirements.

External Interface Requirements:

User Interface:

This includes GUI standards, error messages for invalid inputs by users,
standard buttons and functions that will appear on the screen.
Hardware Interface:

We use TCP/IP protocol for establishing connection and transmitting data over
the network. We use Ethernet for LAN.
Software Interface:

We use Oracle for storing the database of clients who connect to the server through JDBC and ODBC.
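As an illustration of this interface, the following minimal Java sketch shows how a client component might open a JDBC connection and run a simple query. The connection URL, credentials and table name are assumptions made only for illustration, not the project's actual configuration, and the appropriate JDBC driver must be available on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ClientDbSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection URL and credentials; replace them with the values
        // for the database actually configured for the project.
        String url = "jdbc:sqlserver://localhost:1433;databaseName=groupkeydb";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT username FROM members")) {
            while (rs.next()) {
                System.out.println("Registered member: " + rs.getString("username"));
            }
        }
    }
}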

Security Requirements:

We provide authentication and authorization by passwords for each level of access. We implement the IDEA algorithm for secure data transmission.

Software Quality Attributes:

The product is adaptable to changes; for example, it can be modified to transfer not only text but also image, audio and video files. The product is reliable due to file encryption and authentication, which means that data is not lost and does not fall into the wrong hands. The product is portable, i.e. it can run between just two connected systems or across a large network of computers. The product is maintainable, i.e. its properties can be changed in future to meet new requirements.

3.2.1 User Requirements


The user needs to have basic knowledge of Java and of the database connectivity process. It is easy to interact with the application as it is user friendly.
3.2.2 Software Requirements

LANGUAGE : JAVA
FRONT END TOOL : SWING
BACK END TOOL : SQL SERVER
OPERATING SYSTEM : WINDOWS 98.

3.2.3 Hardware Requirements

PROCESSOR : PENTIUM III 866 MHz
RAM : 256 MB DD RAM
MONITOR : 15” COLOR
HARD DISK : 20 GB
FLOPPY DRIVE : 1.44 MB
CD DRIVE : LG 52X
KEYBOARD : STANDARD 102 KEYS
MOUSE : 3 BUTTONS

3.3 Content Diagram of Project

A Data Flow Diagram is a graphical tool used to describe and analyze the movement of data through a system, manual or automated, including the processes, stores of data, and delays in the system. Data Flow Diagrams are the central tool and the basis from which other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system. The DFD is also known as a data flow graph or a bubble chart.

Context Diagram:

The top-level diagram is often called a “context diagram”. It contains a single process, but it plays a very important role in studying the current system. The context diagram defines the system that will be studied in the sense that it determines the boundaries. Anything that is not inside the process identified in the context diagram will not be part of the system study. It represents the entire software element as a single bubble with input and output data indicated by incoming and outgoing arrows respectively.
Types of Data Flow Diagrams:

DFDs are of two types:

1. Physical DFD

Structured analysis states that the current system should first be understood correctly. The physical DFD is the model of the current system and is used to ensure that the current system has been clearly understood. Physical DFDs show the actual devices, departments, people, etc., involved in the current system.

2. Logical DFD

Logical DFDs model the proposed system. They should clearly show the requirements on which the new system is to be built. Later, during the design activity, this is taken as the basis for drawing the system's structure charts.

DFD Symbols:

Dataflow:

Data moves in a specific direction from an origin to a destination.

Process:

People, procedures, or devices that use or produce (transform) data. The physical component is not identified.

Source:

An external source or destination of data, which may be people, programs, organizations or other entities.

Data Store:

Here data is stored or referenced by a process in the system.

3.4 Feasibility Analysis

A feasibility study is an important phase in the software development process. It enables the developer to have an assessment of the product being developed. It covers the feasibility of the product in terms of its outcomes, its operational use and the technical support required for implementing it.
Feasibility study should be performed on the basis of various criteria
and parameters. The various feasibility studies are:
• Economic feasibility
• Operational feasibility
• Technical feasibility

Economic Feasibility:

This procedure is to determine the benefits and savings that are expected from
a candidate system and compare them with costs. If benefits outweigh costs, then the
decision is made to design and implement the system. Otherwise, further justification
or alterations in proposed system will have to be made if it is to have a chance of
being approved. This is an ongoing effort that improves in accuracy at each phase of
the system life cycle.
Operational Feasibility:

People are inherently resistant to change, and computers have been known to
facilitate change. It is understandable that the introduction of a candidate system
requires special effort to educate, sell, and train the staff on new ways of conducting
business.
3.5 Conclusion

It can be concluded that the analysis stage provides the software and hardware requirements and also the content diagrams.

4.1 Introduction

The whole project has been represented in pictorial form, which makes the project easier to understand. The representations include data flow, entity-relationship and UML diagrams.

4.2 Data Flow Diagrams
Fig 4.2.0 Data Flow Diagram

4.3 UML Diagrams:

4.3.1 Introduction:

UML is the result of the evolution of object-oriented modeling languages. It was developed by the Rational Software Company by unifying some of the leading object-oriented modeling methods:

• Booch, by Grady Booch,

• OMT (Object Modeling Technique), by Jim Rumbaugh, and

• OOSE (Object-Oriented Software Engineering), by Ivar Jacobson.

The authors of these languages are sometimes called the three amigos of software engineering. They took part in the roughly twenty-person group that was formed in 1994 and submitted UML 1.0 to the Object Management Group (OMG) in 1997. The current version of UML is 1.4 (published in September 2001) and there is ongoing work within the OMG on a new major version 2.0, planned to be released during late 2003 or early 2004.

UML is used for modeling software systems; such modeling includes analysis and design. In analysis, the system is first described by a set of requirements and then by identifying the system parts at a high level. The design phase is tightly connected to the analysis phase: it starts from the identified system parts and continues with a detailed specification of these parts and their interaction. For the early phases of software projects, UML provides support for identifying and specifying requirements as use cases. Class diagrams or component diagrams can be used for identifying system parts at a high level. During the design phase, class diagrams, interaction diagrams, component diagrams and state chart diagrams can be used for comprehensive descriptions of the different parts of the system.

4.3.2 Use case Diagram for Total Flow of Actions

[Use case diagram: the actor “user” is associated with the use cases login, view groups, view files and send files.]
Fig 4.3.2.0 Use case Diagram for Total Flow of Actions

4.3.3 Sequence Diagram

4.3.3.0 Total Flow of Actions

[Sequence diagram: the user logs in with username and password, the login is validated, and the user then views groups, views files and sends files.]
Fig 4.3.3.0 Sequence Diagram For Total Flow Of Actions

4.3.3.1 Send File

[Sequence diagram: the user logs in with username and password, the login is validated, and the user sends files.]

Fig 4.3.3.1 Sequence Diagram For Send Files

4.3.3.2 View Group

[Sequence diagram: the user logs in with username and password, the login is validated, and the user views groups.]

Fig 4.3.3.2 Sequence Diagram For View Group

4.3.3.3 View & Delete File

[Sequence diagram: the user logs in with username and password, the login is validated, and the user views files, opens a file and deletes a file.]

Fig 4.3.3.3 Sequence Diagram For View File & Delete Files

4.3.4 Class Diagram For Total Flow Of Actions:

[Class diagram: classes for sign up, sign in, user design, design jframe, member and group key, with attributes such as username, password, members and key, and operations such as sign in(), sign up(), set(), reset(), view groups, view files, send files and ok().]

Fig 4.3.4.0 Class Diagram For Total Flow of Actions

5.1 Introduction

This chapter shows the results of the whole project, where the user can interact with the model.
5.2 Methods of Implementation

Java is a high-level programming language with the following features:

• Simple
• Object-oriented
• Distributed
• Interpreted
• Robust
• Secure
• Architecture-neutral
• Portable
• High-performance
• Multithreaded
• Dynamic

Each Java program is both compiled and interpreted. With a compiler, you
translate a Java program into an intermediate language called Java bytecodes--the
platform-independent codes interpreted by the Java interpreter. With an interpreter, each
Java bytecode instruction is parsed and run on the computer. Compilation happens just
once; interpretation occurs each time the program is executed. This figure illustrates how
this works.

Java Compilation

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it is a Java development tool or a Web browser that can run Java applets, is an implementation of the Java VM. The Java VM can also be implemented in hardware.

Java byte codes help make "write once, run anywhere" possible. You can compile
your Java program into byte codes on any platform that has a Java compiler. The byte
codes can then be run on any implementation of the Java VM. For example, the same
Java program can run on Windows NT, Solaris, and Macintosh.
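For instance, a minimal program such as the one below is compiled once to bytecode with javac, and the resulting class file can then be run by any Java VM; the class name and output are only an illustration.

// Save as Hello.java, compile with "javac Hello.java" (this produces Hello.class,
// which contains platform-independent bytecode), then run it with "java Hello".
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello from the Java VM");
    }
}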

The Java Platform

A platform is the hardware or software environment in which a program runs. The Java
platform differs from most other platforms in that it's a software-only platform that runs
on top of other, hardware-based platforms. Most other platforms are described as a
combination of hardware and operating system.

The Java platform has two components:

• The Java Virtual Machine (Java VM)
• The Java Application Programming Interface (Java API)

SWING

The Swing toolkit includes a rich set of components for building GUIs and
adding interactivity to Java applications. Swing includes all the components you
would expect from a modern toolkit: table controls, list controls, tree controls,
buttons, and labels. Swing is far from a simple component toolkit, however. It
includes rich undo support, a highly customizable text package, integrated
internationalization and accessibility support. To truly leverage the cross-platform
capabilities of the Java platform, Swing supports numerous look and feels, including
the ability to create your own look and feel. The ability to create a custom look and
feel is made easier with Synth, a look and feel specifically designed to be customized.
Swing wouldn't be a component toolkit without the basic user interface primitives
such as drag and drop, event handling, customizable painting, and window
management.

Swing is part of the Java Foundation Classes (JFC). The JFC also include
other features important to a GUI program, such as the ability to add rich graphics
functionality and the ability to create a program that can work in different languages
and by users with different input devices.

The Swing toolkit includes a rich array of components: from basic components, such
as buttons and check boxes, to rich and complex components, such as tables and text.
Even deceptively simple components, such as text fields, offer sophisticated
functionality, such as formatted text input or password field behavior. There are file
browsers and dialogs to suit most needs, and if not, customization is possible. If none
of Swing's provided components are exactly what you need, you can leverage the
basic Swing component functionality to create your own.
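As a brief illustration of how a front end like this project's can be built with Swing, the following minimal sketch creates a login-style window with a label, a text field and a button. The window title, component names and layout are assumptions for illustration and do not reproduce the project's actual screens.

import java.awt.FlowLayout;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JTextField;
import javax.swing.SwingUtilities;

public class LoginWindowSketch {
    public static void main(String[] args) {
        // Swing components should be created and updated on the event dispatch thread.
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Group Key Management - Login");
            frame.setLayout(new FlowLayout());
            frame.add(new JLabel("Username:"));
            JTextField nameField = new JTextField(15);
            frame.add(nameField);
            JButton signIn = new JButton("Sign In");
            // A real handler would validate the credentials against the database.
            signIn.addActionListener(e ->
                    System.out.println("Sign in requested for " + nameField.getText()));
            frame.add(signIn);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.pack();
            frame.setVisible(true);
        });
    }
}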

THE SQL SERVER

Microsoft SQL Server is a relational database management system produced by Microsoft. It supports a superset of Structured Query Language (SQL), the most common database language. It is commonly used by businesses for small to medium sized databases, but the past five years have seen greater adoption of the product for larger enterprise databases.

Microsoft SQL Server uses a variant of SQL called T-SQL, or Transact-SQL, an implementation of SQL-92 (the ISO standard for SQL, certified in 1992) with some extensions. T-SQL mainly adds additional syntax for use in stored procedures, and affects the syntax of transaction support. (Note that SQL standards require Atomic, Consistent, Isolated, Durable or "ACID" transactions.) Microsoft SQL Server and Sybase/ASE both communicate over networks using an application-level protocol called Tabular Data Stream (TDS). The TDS protocol has also been implemented by the FreeTDS project [2] in order to allow more kinds of client applications to communicate with Microsoft SQL Server and Sybase databases. Microsoft SQL Server also supports Open Database Connectivity (ODBC).

Scalability and reliability features:


• Partition workload among multiple servers with distributed partitioned views,
a new data tier "scale-out" feature that distributes data among a group of
servers and coordinates query processing. As your business grows, you can
add servers for even greater scalability.
• Achieve scalability for mission-critical, line-of-business applications with
support for large symmetric multiprocessing (SMP) systems with up to 32
processors and 64 gigabytes (GB) of RAM. Improve overall system
performance with built-in support for a system area network (SAN)
• Grow data warehouses and reporting solutions with indexed (materialized)
views, distributed partitioned cubes, and support for dimensions with hundreds
of millions of members. Indexed views improve performance by storing result
sets so they do not have to be regenerated for future access. Distributed
partitioned cubes improve performance by allowing access to cubes stored on
multiple systems.

• Take full advantage of your hardware resources by running multiple, isolated
applications on a single computer using SQL Server 2000 multi-instance
support

Microsoft SQL SERVER 2000 Datasheet

Conclusion

Using Java we have created the front end of the project, which presents the view of the project to the user and through which the user can write data into, or retrieve data from, the application. Using the database management system we have created the back end of the project for storing the relevant records and files.

5.3 EXPLANATIONS OF KEY FUNCTIONS

5.3.1 Module Design & Organization
List of Modules

1. Group Key Generation within the workgroup


2. Rekeying of group key
3. Sharing the resources within the group

MODULE 1:

GROUP KEY GENERATION WITHIN THE WORKGROUP

In this module we implement the Diffie-Hellman tree-based protocol to generate the group key. The tree described here is a binary tree in which each parent node gives rise to two child nodes. The private key of each leaf node is decided by the corresponding group member. The member requests the public key of the other child node, and once it has it, with the knowledge of the public key of one child node and the private key of the other, the private key of their parent node can be obtained using the Diffie-Hellman algorithm. Proceeding similarly up the tree, we obtain the private key of the root node, which is the group key of the workgroup. From then on, every message sent by a member to all others in the peer group is encrypted using this group key. This module ends with the generation of the group key.

TREE-BASED GROUP DIFFIE–HELLMAN PROTOCOL

To efficiently maintain the group key in a dynamic peer group with more than two members, we use the tree-based group Diffie–Hellman (TGDH) protocol. Each member maintains a set of keys, which are arranged in a hierarchical binary tree. We assign a node ID v to every tree node. For a given node v, we associate a secret (or private) key Kv and a blinded (or public) key BKv. All arithmetic operations are performed in a cyclic group of prime order p with generator α. Therefore, the blinded key of node v can be generated by

BKv = α^Kv mod p

Each leaf node in the tree corresponds to the individual secret and blinded keys of a group member Mi. Every member holds all the secret keys along its key path, starting from its associated leaf node up to the root node. Therefore, the secret key held by the root node is shared by all the members and is regarded as the group key. The figure below illustrates a possible key tree with six members M1 to M6. For example, member M1 holds the keys at nodes 7, 3, 1, and 0. The secret key at node 0 is the group key of this peer group.

The node ID of the root node is set to 0. Each non-leaf node v has two child nodes whose node IDs are given by 2v+1 and 2v+2. Based on the Diffie-Hellman protocol, the secret key of a non-leaf node v can be generated from the secret key of one child node of v and the blinded key of the other child node of v. Mathematically, we have

Kv = (BK2v+1)^K2v+2 mod p = (BK2v+2)^K2v+1 mod p = α^(K2v+1 K2v+2) mod p

Unlike the keys at non-leaf nodes, the secret key at a leaf node is selected by its corresponding group member through a secure pseudo-random number generator. Since the blinded keys are publicly known, every member can compute the keys along its key path to the root node based on its individual secret key.

To illustrate, consider the key tree in Fig. 1. Every member Mi generates its own secret key and all the secret keys along the path to the root node. For example, member M1 generates the secret key K7 and it can request the blinded key BK8 from M2, BK4 from M3, and BK2 from either M4, M5, or M6. Given M1's secret key K7 and the blinded key BK8, M1 can generate the secret key K3 according to the formula above. Given the blinded key BK4 and the newly generated secret key K3, M1 can generate the secret key K1. Given the secret key K1 and the blinded key BK2, M1 can generate the secret key K0 at the root. From that point on, any communication in the group can be encrypted with the secret key (or group key) K0.
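The following minimal Java sketch illustrates this computation for member M1 using the formulas above. The small prime, generator and the peers' blinded keys are hard-coded assumptions purely for illustration; the real system would use a large prime and would receive the blinded keys from the other members over the network.

import java.math.BigInteger;

public class TgdhPathSketch {
    // Small illustrative parameters; a real deployment would use a large prime p
    // and a generator (primitive root) alpha agreed upon by the whole group.
    static final BigInteger P = BigInteger.valueOf(23);
    static final BigInteger ALPHA = BigInteger.valueOf(5);

    // Blinded key BKv = alpha^Kv mod p.
    static BigInteger blind(BigInteger secret) {
        return ALPHA.modPow(secret, P);
    }

    // Secret key of a parent node: K_parent = (BK_sibling)^K_own mod p.
    static BigInteger parentSecret(BigInteger ownSecret, BigInteger siblingBlinded) {
        return siblingBlinded.modPow(ownSecret, P);
    }

    public static void main(String[] args) {
        BigInteger k7 = BigInteger.valueOf(6);           // M1's own secret key at node 7
        BigInteger bk8 = blind(BigInteger.valueOf(9));    // blinded key received from M2 (assumed)
        BigInteger bk4 = blind(BigInteger.valueOf(11));   // blinded key received from M3 (assumed)
        BigInteger bk2 = blind(BigInteger.valueOf(13));   // blinded key for node 2 (assumed)

        BigInteger k3 = parentSecret(k7, bk8);  // secret key at node 3
        BigInteger k1 = parentSecret(k3, bk4);  // secret key at node 1
        BigInteger k0 = parentSecret(k1, bk2);  // secret key at node 0 = group key

        System.out.println("Group key K0 = " + k0);
    }
}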

To provide both backward confidentiality (i.e., newly joined members cannot access previous communication data) and forward confidentiality (i.e., departed members cannot access future communication data), re-keying, which means renewing the keys associated with the nodes of the key tree, is performed whenever there is any group membership change, including any new member joining or any existing member leaving the group.

FLOW CHART FOR MODULE 1

1. Find the primitive root.
2. Assign private keys to the leaf nodes using RSA.
3. Obtain public keys for the leaf nodes using the Diffie-Hellman algorithm and update the database.
4. Find the child nodes of each non-leaf node.
5. Compute the private key of the parent using the Diffie-Hellman algorithm.
6. Repeat the above steps up to the root node and find its private key.
7. The private key of the root node is the group key.
8. Output the group key.

MODULE 2:

REKEYING OF GROUP KEY


The Queue-batch algorithm, an interval-based algorithm, is used for re-keying at equal intervals. Queue-batch performs the best among the interval-based algorithms. The algorithm reduces the latency and the workload created by the re-keying operation that is performed at the beginning of each re-keying interval. In the Queue-batch algorithm, as members join they are stored in a temporary tree, and at the beginning of a re-keying interval this tree is attached to the tree of existing members. It is attached at the highest departed position, so that the height of the tree does not increase much.
The Queue-batch algorithm is illustrated in the figure, where members M8, M9 and M10 wish to join the communication group, while M2 and M7 wish to leave. In the Queue-subtree phase, the three new members M8, M9 and M10 form a tree. In the Queue-merge phase, this tree is added at the highest departed position, which is at node 6. The group key is then computed for the new group structure and the computed group key is broadcast to all the members.
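A minimal sketch of the Queue-batch idea is given below. The key tree is represented only abstractly and the merge step is simplified to attaching the queued members in a single operation, so the class and method names are illustrative assumptions rather than the project's actual implementation.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Illustrative sketch of interval-based re-keying following the Queue-batch idea.
public class QueueBatchSketch {
    private final Queue<String> pendingJoins = new ArrayDeque<>(); // Queue-subtree phase
    private final List<String> departed = new ArrayList<>();
    private final List<String> members = new ArrayList<>();

    // Members joining during the interval are only queued; no re-keying happens yet.
    public void requestJoin(String member) {
        pendingJoins.add(member);
    }

    // Members leaving during the interval are only recorded; no re-keying happens yet.
    public void requestLeave(String member) {
        departed.add(member);
    }

    // Queue-merge phase: run once at the beginning of each re-keying interval.
    public void rekeyInterval() {
        members.removeAll(departed);    // remove the departed members
        members.addAll(pendingJoins);   // attach the temporary subtree of new members
        departed.clear();
        pendingJoins.clear();
        // In the real protocol the queued members form a subtree that is attached at the
        // highest departed position of the key tree, after which the group key is
        // recomputed (for example with the TGDH computation sketched earlier) and
        // broadcast to all members.
        System.out.println("Re-keyed group: " + members);
    }
}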

FLOW CHART FOR MODULE 2

MODULE 3: SHARING THE RESOURCES WITHIN THE GROUP

The new group key is generated after the batch of joins and leaves using the Queue-batch algorithm in the second module. From then on, this new group key is used to encrypt all data shared among the members of the peer group. In this module we show all the communication and data sharing among the members of our workgroup.
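As an illustration of how shared data might be protected with the group key, the sketch below derives a symmetric key from the group key and encrypts a message using AES from the standard Java Cryptography Extension. AES is used here only because it ships with the JDK (the report mentions the IDEA algorithm for the actual system), and the group key value and message are placeholder assumptions.

import java.math.BigInteger;
import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class GroupEncryptSketch {
    public static void main(String[] args) throws Exception {
        // Group key K0 as produced by the key agreement (placeholder value).
        BigInteger groupKey = BigInteger.valueOf(123456789L);

        // Derive a 128-bit AES key from the group key by hashing it.
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(groupKey.toByteArray());
        SecretKeySpec aesKey = new SecretKeySpec(digest, 0, 16, "AES");

        // Encrypt a message before sharing it within the group.
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, aesKey);
        byte[] cipherText = cipher.doFinal("shared file contents".getBytes("UTF-8"));

        // Any member holding the same group key can derive the same AES key and decrypt.
        cipher.init(Cipher.DECRYPT_MODE, aesKey);
        System.out.println(new String(cipher.doFinal(cipherText), "UTF-8"));
    }
}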

FLOW CHART FOR MODULE 3

5.3.2 Output Screens:
5.3.2.0 Login Window:

Screen 5.3.2.0: Login

5.3.2.1 Sign Up:

Screen 5.3.2.1 Sign Up

5.3.2.2 Group key Display

Screen 5.3.2.2 Group Key Display

5.3.2.3 Sign In Window

Screen 5.3.2.3 Sign In

5.3.2.4 SQL Server Window

Screen 5.3.2.4 SQL Server

5.3.2.5 View Group Window:

Screen 5.3.2.5 View Group

5.3.2.6 View Files Window:

Screen 5.3.2.6 View Files

5.3.2.7 Send Files Window:

Screen 5.3.2.7 Send File

5.3.2.8 After Deletion:

Screen 5.3.2.8 After Deletion

6.1 Introduction
Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. Testing presents an interesting anomaly for the software engineer.

6.2 Design & Test Cases


• Testing is a process of executing a program with the intent of finding an error.
• A good test has a high probability of finding an as yet undiscovered error.
• A successful test is one that uncovers an as yet undiscovered error.
• The objective is to design tests that systematically uncover different classes of
errors and do so with a minimum amount of time and effort. Testing cannot show the
absence of defects, it can only show that software defects are present.
Test approach:

Testing can be done in two ways:


• Bottom up approach
• Top down approach
Bottom up Approach:

Testing can be performed starting from the smallest and lowest-level modules and proceeding one at a time. For each module in bottom-up testing, a short program executes the module and provides the needed data, so that the module is asked to perform the way it will when embedded within the larger system. When the bottom-level modules are tested, attention turns to those on the next level that use the lower-level ones; once they are tested individually, they are then linked with the previously examined lower-level modules.
Top-down Approach:

This type of testing starts from the upper-level modules. Since the detailed activities usually performed in the lower-level routines are not provided, stubs are written. A stub is a module shell called by an upper-level module which, when reached properly, returns a message to the calling module indicating that proper interaction occurred. No attempt is made to verify the correctness of the lower-level module.

Testing Strategies:

A strategy for software testing integrates software test cases into a series of well-planned steps that result in the successful construction of software. Software testing is part of the broader topic referred to as Verification and Validation. Verification refers to the set of activities that ensure that the software correctly implements a specific function. Validation refers to the set of activities that ensure that the software that has been built is traceable to the customer's requirements.
6.2.1 Unit Testing:

• Number of input parameters should be equal to number of arguments


• Parameter and argument attributes must match
• Parameters passed should be in correct order
• Global variable definitions consistent across modules.
6.2.2 Black Box Testing:

In this strategy, test cases are generated as input conditions that fully exercise all functional requirements of the program. This testing has been used to find errors in the following categories:
a) Incorrect or missing functions
b) Interface errors
c) Errors in data structure or external database access
d) Performance errors
e) Initialization and termination errors.
6.2.3 White Box Testing:

In this strategy, test cases are generated from the logic of each module by drawing flow graphs of that module, and logical decisions are tested for all cases. It has been used to generate test cases for the following:
• Guarantee that all independent paths have been executed.
• Execute all logical decision on their true and false sides.
• Execute all loops at their boundaries and within their operational bounds.
• Execute internal data structures to ensure their validity.

6.2.4 Integration Testing:

Top Down Integration:

Modules are integrated by moving down the program design hierarchy; either depth-first or breadth-first top-down integration can be used. This verifies major control and decision points early in the design process, and the top-level structure is tested the most. A depth-first approach allows a complete function to be implemented, tested and demonstrated, so critical functions can be implemented and verified early. Top-down integration is forced (to some extent) by some development tools in programs with graphical user interfaces.
Bottom-up Integration:

This method, as the name suggests, begins construction and testing with atomic modules, i.e. modules at the lowest level. Because the modules are integrated in a bottom-up manner, the processing required for the modules subordinate to a given level is always available and the need for stubs is eliminated.

6.2.5 Validation Testing:

Validation testing aims to demonstrate that the software functions in a manner that can be reasonably expected by the customer. It tests the conformance of the software to the Software Requirements Specification.

6.2.6 System Testing:

Software is only one component of a system. It will be incorporated with other system components, and system integration and validation tests are then performed.

6.2.7 Security Testing:

Attempts to verify the protection mechanisms built into the system.

6.2.8 Performance Testing:

This method is designed to test the runtime performance of software within the context of an integrated system.
6.3 Validation
The system has been tested and implemented successfully, ensuring that all the requirements listed in the software requirements specification are completely fulfilled. In case of erroneous input, corresponding error messages are displayed.

CONCLUSION
We consider a key agreement setting in which there is no centralized key server to maintain or distribute the group key. We show that one can use the TGDH protocol to achieve such distributed and collaborative key agreement. To reduce the re-keying complexity, we propose an interval-based approach to carry out re-keying for multiple join and leave requests at the same time.

FUTURE ENHANCEMENT

In future, fault tolerance can be implemented in the system. Furthermore, efficient authentication protocols can be implemented in the system.

BIBLIOGRAPHY
Websites:

• http://en.wikipedia.org/wiki/Diffie-Hellman

• http://pajhome.org.uk/crypt/rsa/rsa.html

• http://java.sun.com/j2se/1.4.2/docs/guide/security/jce/JCERefGuide.html

Books:

• Java 2: The Complete Reference, Fifth Edition, by Herbert Schildt.
