
1. INTRODUCTION

1.1. Introduction to Project:

Network and computing technology enables many people to easily share their data with others using online external storage. People can share their lives with friends by uploading private photos or messages to online social networks, or upload highly sensitive personal health records (PHRs) to online data servers such as Microsoft HealthVault and Google Health for ease of sharing with their primary doctors or for cost saving. As people enjoy the advantages of these new technologies and services, their concerns about data security and access control also grow. Improper use of the data by the storage server, or unauthorized access by outside users, are potential threats to their data. People would like to make their sensitive or private data accessible only to the authorized people holding the credentials they specify.

Attribute-based encryption (ABE) is a promising cryptographic approach that achieves fine-grained data access control. It provides a way of defining access policies based on different attributes of the requester, the environment, or the data object. In particular, ciphertext-policy attribute-based encryption (CP-ABE) enables an encryptor to define, over a universe of attributes, the attribute set that a decryptor needs to possess in order to decrypt the cipher-text, and to enforce that policy on the contents. Thus, each user with a different set of attributes is allowed to decrypt different pieces of data according to the security policy. This effectively eliminates the need to rely on the data storage server to prevent unauthorized data access, which is the traditional access control approach, such as the reference monitor.

Nevertheless, applying CP-ABE in a data sharing system raises several challenges. In CP-ABE, the key generation center (KGC) generates private keys of users by applying the KGC's master secret keys to users' associated sets of attributes. The major benefit of this approach is that it largely reduces the need for processing and storing public key certificates under a traditional public key infrastructure (PKI). However, this advantage comes with a major drawback known as the key escrow problem: the KGC can decrypt every cipher-text addressed to specific users by generating their attribute keys. Another challenge is key revocation. Since some users may change their associated attributes at some point, or some private keys might be compromised, key revocation or update for each attribute is necessary in order to keep the system secure. This issue is even more difficult in ABE, since each attribute is conceivably shared by multiple users (henceforth, we refer to such a set of users as an attribute group). This implies that revocation of any attribute, or of any single user in an attribute group, affects all users in the group. It may result in a bottleneck during the rekeying procedure, or in security degradation due to the windows of vulnerability.
1.2. Purpose of the Project

Recent developments in network and computing technology enable many people to easily share their data with others using online external storage. People can share their lives with friends by uploading private photos or messages to online social networks such as Facebook and MySpace, or upload highly sensitive personal health records (PHRs) to online data servers such as Microsoft HealthVault and Google Health for ease of sharing with their primary doctors or for cost saving.

1.3. Proposed System with Features

First, the key escrow problem is resolved by a key issuing protocol that exploits the characteristics of the data sharing system architecture. The key issuing protocol generates and issues user secret keys by performing a secure two-party computation (2PC) protocol between the KGC and the data-storing center, each holding its own master secret. The 2PC protocol deters either party from obtaining any master secret information of the other, so that neither of them could generate the whole set of user keys alone. Thus, users are not required to fully trust the KGC and the data-storing center in order to protect the data they share. Second, immediate user revocation is achieved through a proxy re-encryption mechanism combined with the CP-ABE algorithm: attribute group keys are selectively distributed to the valid users in each attribute group, and these keys are then used to re-encrypt the cipher-text encrypted under the CP-ABE algorithm. Immediate user revocation enhances the backward/forward secrecy of the data on any membership change. In addition, since user revocation can be done at the level of each attribute rather than at the system level, more fine-grained user access control is possible. Even if a user is revoked from some attribute groups, he can still decrypt the shared data as long as the other attributes he holds satisfy the access policy of the cipher-text. Data owners need not define any access policy for individual users; they only need to define the access policy over attributes, as in the previous ABE schemes. The proposed scheme delegates most of the laborious tasks of membership management and user revocation to the data-storing center, while the KGC is responsible for attribute key management as in the previous CP-ABE schemes, without leaking any confidential information to the other parties. Therefore, the proposed scheme is best suited to data sharing scenarios where users encrypt the data only once, upload it to the data-storing center, and leave the remaining tasks, such as re-encryption and revocation, to the data-storing center.
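To make the key-splitting idea concrete, the following is a minimal C# sketch. It is illustrative only: the actual protocol performs a secure two-party computation over pairing-based CP-ABE keys, whereas here a user key is simply modeled as the XOR of two independently derived shares, so that neither authority's share alone reveals the key. All class and method names are hypothetical.

using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical illustration: each authority holds its own master secret.
// A user key is derived from BOTH shares, so neither the KGC nor the
// data-storing center can reconstruct it alone.
class KeyAuthority
{
    private readonly byte[] masterSecret = new byte[32];

    public KeyAuthority()
    {
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(masterSecret);          // kept private by each party
    }

    // Each party contributes a per-user share derived from its own secret.
    public byte[] DeriveUserShare(string userId)
    {
        using (var hmac = new HMACSHA256(masterSecret))
            return hmac.ComputeHash(Encoding.UTF8.GetBytes(userId));
    }
}

static class KeyIssuing
{
    // In the real 2PC protocol the combination happens inside the protocol,
    // so neither party ever sees the other's share in the clear.
    public static byte[] CombineShares(byte[] kgcShare, byte[] centerShare)
    {
        var userKey = new byte[kgcShare.Length];
        for (int i = 0; i < userKey.Length; i++)
            userKey[i] = (byte)(kgcShare[i] ^ centerShare[i]);
        return userKey;
    }

    static void Main()
    {
        var kgc = new KeyAuthority();
        var dataStoringCenter = new KeyAuthority();
        byte[] key = CombineShares(kgc.DeriveUserShare("alice"),
                                   dataStoringCenter.DeriveUserShare("alice"));
        Console.WriteLine(BitConverter.ToString(key));
    }
}

The point of the sketch is only the trust split: compromising one authority yields one share, which is useless without the other.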

2. LITERATURE SURVEY
2.1. Fuzzy Identity-Based Encryption:
• Identity-Based Encryption (IBE) allows a sender to encrypt a message to an identity without access to a public key certificate. The ability to do public key encryption without certificates has many practical applications. For example, a user can send an encrypted mail to a recipient, [email protected], without requiring either the existence of a Public-Key Infrastructure or that the recipient be on-line at the time of creation.
• One common feature of all previous Identity-Based Encryption systems is that they view identities as a string of characters. This paper proposes a new type of Identity-Based Encryption, called Fuzzy Identity-Based Encryption, in which identities are viewed as a set of descriptive attributes. In a Fuzzy Identity-Based Encryption scheme, a user with the secret key for the identity ω is able to decrypt a cipher-text encrypted with the public key ω′ if and only if ω and ω′ are within a certain distance of each other as judged by some metric. Therefore, the system allows for a certain amount of error tolerance in the identities.
• Fuzzy IBE gives rise to two interesting new applications. The first is an Identity-Based Encryption system that uses biometric identities. That is, a user's biometric, for example an iris scan, can be viewed as that user's identity described by several attributes, and messages can then be encrypted to the user using their biometric identity. Since biometric measurements are noisy, existing IBE systems cannot be used. However, the error-tolerance property of Fuzzy IBE allows a private key (derived from one measurement of a biometric) to decrypt a cipher-text encrypted with a slightly different measurement of the same biometric. Secondly, Fuzzy IBE can be used for an application called "attribute-based encryption". In this application a party wishes to encrypt a document to all users that have a certain set of attributes. For example, in a computer science department, the chairperson might want to encrypt a document to all of its systems faculty on a hiring committee. In this case the document would be encrypted to the identity {"hiring-committee", "faculty", "systems"}.

2.2. Attribute-Based Encryption for Fine-Grained Access Control of Encrypted Data:

There is a trend for sensitive user data to be stored by third parties on the Internet. For example, personal email, data, and personal preferences are stored on web portal sites such as Google and Yahoo. The attack correlation center, dshield.org, presents aggregated views of attacks on the Internet, but stores intrusion reports individually submitted by users. Given the variety, amount, and importance of information stored at these sites, there is cause for concern that personal data will be compromised. This worry is escalated by the surge in recent attacks and the legal pressure faced by such services. One method for alleviating some of these problems is to store data in encrypted form; thus, if the storage is compromised, the amount of information lost will be limited. One disadvantage of encrypting data is that it severely limits the ability of users to selectively share their encrypted data at a fine-grained level. Suppose a particular user wants to grant decryption access to a party to all of its Internet traffic logs for all entries in a particular range of dates that had a source IP address from a particular subnet.

The user either needs to act as an intermediary and decrypt all relevant entries for the party, or must give the party its private decryption key, and thus let it have access to all entries. Neither one of these options is particularly appealing. An important setting where these issues give rise to serious problems is audit logs.

In this scheme, cipher-texts are labeled with sets of descriptive attributes, and a particular key can decrypt a particular cipher-text only if there is a match between the attributes of the cipher-text and the user's key. The cryptosystem of Sahai and Waters allowed for decryption when at least k attributes overlapped between a cipher-text and a private key. While this primitive was shown to be useful for error-tolerant encryption with biometrics, the lack of expressibility seems to limit its applicability to larger systems.
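To make the threshold rule concrete, here is a minimal C# sketch of the k-attribute overlap check described above. It models only the matching condition, not the underlying cryptography; the type and variable names are illustrative assumptions.

using System;
using System.Collections.Generic;
using System.Linq;

static class ThresholdMatch
{
    // Decryption succeeds only if at least k attributes overlap between
    // the cipher-text's attribute set and the private key's attribute set.
    public static bool CanDecrypt(ISet<string> ciphertextAttrs,
                                  ISet<string> keyAttrs, int k)
    {
        return ciphertextAttrs.Intersect(keyAttrs).Count() >= k;
    }

    static void Main()
    {
        var ct = new HashSet<string> { "iris-a", "iris-b", "iris-c", "iris-d" };
        var sk = new HashSet<string> { "iris-a", "iris-b", "iris-x", "iris-d" };
        // With k = 3, a slightly different biometric reading still decrypts.
        Console.WriteLine(CanDecrypt(ct, sk, 3));   // True: 3 attributes match
    }
}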

2.3. Computer Security Planning Study:

The major problems of the USAF stem from the fact that there is a growing
requirement to provide shared use of computer systems containing information of
different classification levels and need-to-know requirements in a user population not
uniformly cleared or access-approved. This problem takes an extreme form in those
several systems currently under development or projected for the near future where
part, or the majority of the user population has no clearance requirement and where
only a very small fraction of the information being processed and stored on the systems
is classified. In a few of the systems examined (see Section II below) the kinds of
actions the user population is able to take are limited by the nature of the application in
such a way as to avoid or reduce the security problem. However, in other systems,
particularly in general use systems such as those found in the USAF Data Services
Center in the Pentagon, the users are permitted and encouraged to directly program the
system for their applications. It is in this latter kind of use of computers that the
weakness of the technical foundation of current systems is most acutely felt.

Another major problem is the fact that there are growing pressures to interlink separate but related computer systems into increasingly complex networks. Other problem areas, in addition to those noted above, generally fall into the category of techniques and technology available but not implemented in a form suitable for application to present and projected Air Force computer systems. The technology for producing such terminals is both easily available and well understood, but it has not heretofore been clearly developed as an integrated requirement for the Air Force.

2.4. Existing System:

Nevertheless, applying CP-ABE in a data sharing system has several challenges. In CP-ABE, the key generation center (KGC) generates private keys of users by applying the KGC's master secret keys to users' associated sets of attributes. Thus, the major benefit of this approach is that it largely reduces the need for processing and storing public key certificates under a traditional public key infrastructure (PKI). However, the advantage of CP-ABE comes with a major drawback, known as the key escrow problem.

The KGC can decrypt every cipher-text addressed to specific users by generating their attribute keys. This could be a potential threat to the data confidentiality or privacy in the data sharing systems. Another challenge is key revocation. Since some users may change their associated attributes at some point, or some private keys might be compromised, key revocation or update for each attribute is necessary in order to keep the system secure. This issue is even more difficult in ABE, since each attribute is conceivably shared by multiple users (henceforth, we refer to such a set of users as an attribute group). This implies that revocation of any attribute, or of any single user in an attribute group, affects all users in the group. It may result in a bottleneck during the rekeying procedure, or in security degradation due to the windows of vulnerability.

3. SOFTWARE REQUIREMENT ANALYSIS

3.1. SDLC:
The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in systems engineering, information systems, and software engineering, is the process of creating or altering systems, together with the models and methodologies used to develop these systems.

Figure 3.1(a): Software Development Life Cycle

Requirement Analysis and Design:

Analysis gathers the requirements for the system. This stage includes a detailed study of the business needs of the organization; options for changing the business process may be considered. Design focuses on high-level design (what programs are needed and how they are going to interact), low-level design (how the individual programs are going to work), interface design (what the interfaces are going to look like), and data design (what data will be required). During these phases, the software's overall structure is defined. Analysis and design are crucial in the whole development cycle: any glitch in the design phase could be very expensive to solve in a later stage of the software development, so much care is taken during this phase. The logical system of the product is developed in this phase.

Implementation:

In this phase the designs are translated into code. Computer programs are written using a conventional programming language or an application generator. Programming tools such as compilers, interpreters, and debuggers are used to generate the code. Different high-level programming languages such as C, C++, Pascal, Java, and .NET are used for coding. The right programming language is chosen with respect to the type of application.

Testing:

In this phase the system is tested. Normally programs are written as a series of individual modules, and these are subjected to separate and detailed tests. The system is then tested as a whole: the separate modules are brought together and tested as a complete system. The system is tested to ensure that interfaces between modules work (integration testing), that the system works on the intended platform and with the expected volume of data (volume testing), and that the system does what the user requires (acceptance/beta testing).

Maintenance:

Inevitably the system will need maintenance. Software will definitely undergo
change once it is delivered to the customer. There are many reasons for the change.
Change could happen because of some unexpected input values into the system. In
addition, the changes in the system could directly affect the software operations. The
software should be developed to accommodate changes that could happen during the
post implementation period.

SDLC METHODOLOGIES:

This document plays a vital role in the software development life cycle (SDLC), as it describes the complete requirements of the system. It is meant for use by the developers and will be the basis during the testing phase. Any changes made to the requirements in the future will have to go through a formal change approval process.

SPIRAL MODEL:

The spiral model was defined by Barry Boehm in his 1988 article, "A Spiral Model of Software Development and Enhancement". This model was not the first to discuss iterative development, but it was the first to explain why the iteration matters.

As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts with a design goal and ends with the client reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project.

The following diagram shows how the spiral model works:

Figure 3.1(b): Spiral Model

The steps for the Spiral Model can be generalized as follows:

• The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.

• A preliminary design is created for the new system.

• A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.

• A second prototype is evolved by a fourfold procedure:
1. Evaluating the first prototype in terms of its strengths, weaknesses, and risks.
2. Defining the requirements of the second prototype.
3. Planning and designing the second prototype.
4. Constructing and testing the second prototype.

• At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.

• The existing prototype is evaluated in the same manner as was the previous prototype, and if necessary, another prototype is developed from it according to the fourfold procedure outlined above.

• The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.

• The final system is constructed, based on the refined prototype.

• The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.

3.2. System Study:

The interface has been developed keeping flexibility of use and graphical concepts in mind, and is accessed through a browser interface. The GUIs at the top level have been categorized as follows:
1. Administrative user interface design.
2. Operational and generic user interface design.
The administrative user interface concentrates on the consistent information that is practically part of the organizational activities and which needs proper authentication for data collection. This interface helps the administration with all the transactional states, such as data insertion, data deletion, and data updating, along with executive data search capabilities. The operational and generic user interface helps the users of the system in transactions through the existing data and required services; it also helps ordinary users manage their own information in a customized manner, as per the assisted flexibilities.
3.3. Modules and their Functionalities:

1. Key Management: Key management describes the data sharing architecture and defines the security model. The architecture consists of the key generation center, the data-storing center, the data owner, and the user.

The key generation center (KGC) is a key authority that generates public and secret parameters for CP-ABE. It is in charge of issuing, revoking, and updating attribute keys for users, and it grants differential access rights to individual users based on their attributes. It is assumed to be honest-but-curious. The data-storing center is an entity that provides a data sharing service. It is in charge of controlling access from outside users to the stored data and providing the corresponding content services. The data owner is a client who owns data and wishes to upload it to the external data-storing center for ease of sharing or for cost saving. A data owner is responsible for defining an (attribute-based) access policy and enforcing it on its own data by encrypting the data under the policy before distributing it.

The user is an entity who wants to access the data. If a user possesses a set of attributes satisfying the access policy of the encrypted data, and is not revoked in any of the valid attribute groups, then he will be able to decrypt the cipher-text and obtain the data.
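Purely for illustration, the division of responsibilities among the four entities can be summarized with the following C# interfaces. These are hypothetical type names sketching the architecture, not part of any actual CP-ABE library.

// Hypothetical role interfaces mirroring the data sharing architecture.
public interface IKeyGenerationCenter
{
    // Issues, revokes, and updates attribute keys for users.
    byte[] IssueAttributeKey(string userId, string attribute);
    void RevokeAttributeKey(string userId, string attribute);
}

public interface IDataStoringCenter
{
    // Controls access from outside users to the stored data and
    // re-encrypts cipher-texts on membership changes.
    void Store(string objectId, byte[] ciphertext);
    byte[] Fetch(string objectId, string userId);
}

public interface IDataOwner
{
    // Encrypts data under an attribute-based access policy before upload.
    byte[] EncryptUnderPolicy(byte[] plaintext, string accessPolicy);
}

public interface IUser
{
    // Succeeds only if the user's attributes satisfy the policy and the
    // user is not revoked in any required attribute group.
    byte[] Decrypt(byte[] ciphertext);
}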

2. Security Requirements: Data confidentiality: unauthorized users who do not have enough attributes to satisfy the access policy should be prevented from accessing the plaintext of the data. Additionally, the KGC is no longer fully trusted in the data sharing system; thus, unauthorized access to the plaintext of the encrypted data from the KGC, as well as from the data-storing center, should be prevented. Collusion resistance is one of the most important security properties required in ABE systems: if multiple users collude, they may be able to decrypt a cipher-text by combining their attributes, even if none of the users can decrypt the cipher-text alone. Since we assume the KGC and data-storing center are honest, we do not consider any active attacks from them by colluding with revoked users. Backward and forward secrecy: in the context of attribute-based encryption, backward secrecy means that any user who comes to hold an attribute (that satisfies the access policy) should be prevented from accessing the plaintext of the previous data distributed before he held the attribute.

3. CP-ABE Scheme: In this section, we develop a variation of the CP-ABE algorithm partially based on (but not limited to) Bethencourt et al.'s construction, in order to enhance the expressiveness of the access control policy instead of building a new CP-ABE scheme from scratch. Its key generation procedure is modified for our purpose of removing escrow. The proposed scheme is then built on this new CP-ABE variation by further integrating it into the proxy re-encryption protocol for user revocation. To handle fine-grained user revocation, the data-storing center must obtain the user access (or revocation) list for each attribute group, since otherwise revocation cannot take effect. This setting, in which the data-storing center knows the revocation list, does not violate the security requirements, because the center is only allowed to re-encrypt the cipher-texts and can by no means obtain any information about the attribute keys of users.
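The re-encryption step can be pictured with the following hedged C# sketch: the data-storing center wraps the CP-ABE cipher-text in an additional symmetric layer under the current attribute group key, so only non-revoked members holding that group key can remove the outer layer. This is an illustrative stand-in using AES, not the scheme's actual pairing-based construction.

using System.IO;
using System.Security.Cryptography;

static class GroupReEncryption
{
    // Outer layer: AES-CBC under the current attribute group key (must be
    // 16, 24, or 32 bytes). Revocation means distributing a fresh group key
    // to the remaining members and re-encrypting; revoked users can no
    // longer strip the outer layer.
    public static byte[] ReEncrypt(byte[] cpAbeCiphertext, byte[] groupKey)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = groupKey;
            aes.GenerateIV();
            using (var ms = new MemoryStream())
            {
                ms.Write(aes.IV, 0, aes.IV.Length);   // prepend the IV
                using (var cs = new CryptoStream(ms, aes.CreateEncryptor(),
                                                 CryptoStreamMode.Write))
                {
                    cs.Write(cpAbeCiphertext, 0, cpAbeCiphertext.Length);
                }
                return ms.ToArray();
            }
        }
    }
}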

4. User: The user is an entity who wants to access the data. If a user possesses a set of attributes satisfying the access policy of the encrypted data, and is not revoked in any of the valid attribute groups, then he will be able to decrypt the cipher-text and obtain the data.
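A minimal C# sketch of this eligibility condition is given below, using a toy AND-policy in which every listed attribute must be held and unrevoked. The names are hypothetical; in the real scheme the condition is enforced by the cryptography itself rather than by explicit code.

using System;
using System.Collections.Generic;
using System.Linq;

static class DecryptionEligibility
{
    // Toy AND-policy: every required attribute must be held by the user,
    // and the user must not be revoked from that attribute's group.
    public static bool CanDecrypt(ISet<string> userAttributes,
                                  IEnumerable<string> requiredAttributes,
                                  IDictionary<string, ISet<string>> revokedPerGroup,
                                  string userId)
    {
        return requiredAttributes.All(attr =>
            userAttributes.Contains(attr) &&
            (!revokedPerGroup.ContainsKey(attr) ||
             !revokedPerGroup[attr].Contains(userId)));
    }

    static void Main()
    {
        var revoked = new Dictionary<string, ISet<string>>
        {
            ["doctor"] = new HashSet<string> { "mallory" }
        };
        var attrs = new HashSet<string> { "doctor", "cardiology" };
        var policy = new[] { "doctor", "cardiology" };
        Console.WriteLine(CanDecrypt(attrs, policy, revoked, "alice"));   // True
        Console.WriteLine(CanDecrypt(attrs, policy, revoked, "mallory")); // False: revoked
    }
}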

5. Data Owner: The administrator can enter the website with his credentials and is allowed to upload data to the external data-storing center for ease of sharing or for cost saving. A data owner is responsible for defining an (attribute-based) access policy and enforcing it on its own data by encrypting the data under the policy before distributing it.

3.4. PRESENT WORK AND PROCESS MODEL USED WITH JUSTIFICATION:

PRESENT WORK:

First, the key escrow problem is resolved by a key issuing protocol that exploits the characteristics of the data sharing system architecture. The key issuing protocol generates and issues user secret keys by performing a secure two-party computation (2PC) protocol between the KGC and the data-storing center, each holding its own master secret. The 2PC protocol deters either party from obtaining any master secret information of the other, so that neither of them could generate the whole set of user keys alone. Thus, users are not required to fully trust the KGC and the data-storing center in order to protect the data they share. The data confidentiality and privacy can be cryptographically enforced against any curious KGC or data-storing center in the proposed scheme. Second, immediate user revocation can be done via the proxy re-encryption mechanism together with the CP-ABE algorithm. Attribute group keys are selectively distributed to the valid users in each attribute group, and these are then used to re-encrypt the cipher-text encrypted under the CP-ABE algorithm. Immediate user revocation enhances the backward/forward secrecy of the data on any membership change. In addition, since user revocation can be done at each attribute level rather than at the system level, more fine-grained user access control is possible. Even if a user is revoked from some attribute groups, he can still decrypt the shared data as long as the other attributes he holds satisfy the access policy of the cipher-text. The proposed scheme delegates most of the laborious tasks of membership management and user revocation to the data-storing center, while the KGC is responsible for attribute key management as in the previous CP-ABE schemes, without leaking any confidential information to the other parties.

PROCESS MODEL USED WITH JUSTIFICATION:

The following commands specify access control identifiers, and they are typically used to authorize and authenticate the user (command codes are shown in parentheses).

(i) USER NAME (USER):

The user identification is that which is required by the server for access to its file system. This command will normally be the first command transmitted by the user after the control connections are made (some servers may require this).

(ii) PASSWORD (PASS):

This command must be immediately preceded by the user name command and, for some sites, completes the user's identification for access control. Since password information is quite sensitive, it is desirable in general to "mask" it or suppress its typeout.

4. Feasibility Study
4.1 Feasibility Report:

Preliminary investigation examines project feasibility, that is, the likelihood that the system will be useful to the organization. The main objective of the feasibility study is to test the technical, operational, and economical feasibility of adding new modules and debugging the old running system. There are three aspects in the feasibility study portion of the preliminary investigation:

• Technical Feasibility
• Economical Feasibility
• Operational Feasibility

4.2 Technical Feasibility:

In the feasibility study, the first step is for the organization or company to decide what technologies are suitable for development, considering the existing system. This application uses technologies such as Visual Studio 2012 and SQL Server 2012, which can be downloaded from the web.

Visual Studio 2012 is the development tool and technology used.

4.3 Operational Feasibility:


Not only must an application make economic and technical sense, it must also
make operational sense.

Issues to consider when determining the operational feasibility of a project include the following.

Operations issues:
• What tools are needed to support operations?
• What skills will operators need to be trained in?
• What processes need to be created and/or updated?
• What documentation does operations need?

Support issues:
• What documentation will users be given?
• What training will users be given?
• How will change requests be managed?

Table 4.3(a): Operational Feasibility

Very often you will need to improve the existing operations, maintenance, and support
infrastructure to support the operation of the new application that you intend to
develop. To determine what the impact will be you will need to understand both the
current operations and support infrastructure of your organization and the operations
and support characteristics of your new application.

To operate this application end to end, users do not need any technical knowledge of the technologies used to develop this project (ASP.NET and C#.NET). The application provides a rich user interface, so users can perform their operations in a flexible manner.

4.4. Economical Feasibility:

The economic feasibility of an implementation is assessed by performing a cost/benefit analysis, which, as its name suggests, compares the full/real costs of the application to its full/real financial benefits. The alternatives should be evaluated on the basis of their contribution to net cash flow, the amount by which the benefits exceed the costs, because the primary objective of all investments is to improve overall organizational performance.

Quantitative potential costs:
• Hardware/software upgrades
• Fully-burdened cost of labor (salary + benefits)
• Support costs for the application
• Expected operational costs
• Training costs for users to learn the application
• Training costs to train developers in new/updated technologies

Quantitative potential benefits:
• Reduced operating costs
• Reduced personnel costs from a reduction in staff
• Increased revenue from additional sales of your organization's products/services

Qualitative potential costs:
• Increased employee dissatisfaction from fear of change

Qualitative potential benefits:
• Improved decisions as the result of access to accurate and timely information
• Raising of an existing barrier to entry, or introduction of a new one, within your industry to keep competition out of your market
• Positive public perception that your organization is an innovator

Table 4.4(a): Economical Feasibility

The table includes both qualitative factors (costs or benefits that are subjective in nature) and quantitative factors (costs or benefits for which monetary values can easily be identified). Both kinds of factors need to be taken into account when performing a cost/benefit analysis.

5. SYSTEM REQUIREMENTS SPECIFICATION

5.1. Requirement Specification:

A requirement specification for a software system is a complete description of the behavior of the system to be developed. It includes a set of use cases that describe all the interactions the users will have with the software. In addition to use cases, the SRS also contains non-functional requirements, which impose constraints on the design or implementation, such as performance engineering requirements and quality standards.

System requirement specification is a structured collection of information that


embodies the requirements of a system. A business analyst, sometimes titled system
analyst, is responsible for analysing the business needs of their clients and stakeholders
to help identify the business problems and propose solutions. Within the system
development life cycle domain, the business analyst typically performs a liaison
function between the business side of an enterprise and the information technology
department or external service providers.

5.2. Hardware Requirements:

RAM            : 1 GB
Processor      : Intel Pentium IV (3.0 GHz)
Hard Disk Size : 80 GB and above

5.3. Software Requirements:

Operating System : Windows 7/8/10
Database Server  : Microsoft SQL Server 2012
Web Server       : IIS 7.0/8.0 and above
Web Technologies : HTML, CSS, JavaScript, ASP.NET with C#
Client           : Microsoft Internet Explorer 6.0 / Chrome browser
IDE & Tools      : Microsoft Visual Studio .NET 2012

5.4 SELECTED SOFTWARE:


1. INTRODUCTION TO .NET FRAMEWORK:

The Microsoft .NET Framework is a software technology that is available with several Microsoft Windows operating systems. It includes a large library of pre-coded solutions to common programming problems and a virtual machine that manages the execution of programs written specifically for the framework. The .NET Framework is a key Microsoft offering and is intended to be used by most new applications created for the Windows platform. The pre-coded solutions that form the framework's Base Class Library cover a large range of programming needs in a number of areas, including user interface, data access, database connectivity, cryptography, web application development, numeric algorithms, and network communications. The class library is used by programmers, who combine it with their own code to produce applications.
Programs written for the .NET Framework execute in a software environment
that manages the program's runtime requirements. Also part of the .NET Framework,
this runtime environment is known as the Common Language Runtime (CLR). The
CLR provides the appearance of an application virtual machine so that programmers
need not consider the capabilities of the specific CPU that will execute the program.
The CLR also provides other important services such as security, memory
management, and exception handling. The class library and the CLR together compose
the .NET Framework.

2. Principal design features:

Interoperability:

Because interaction between new and older applications is commonly required,


the .NET Framework provides means to access functionality that is implemented in
programs that execute outside the .NET environment. Access to COM components is
provided in the System.Runtime.InteropServices and System.EnterpriseServices
namespaces of the framework; access to other functionality is provided using the
P/Invoke feature.

Common Runtime Engine:

The Common Language Runtime (CLR) is the virtual machine component of


the .NET framework. All .NET programs execute under the supervision of the CLR,
guaranteeing certain properties and behaviors in the areas of memory management,
security, and exception handling.

Base Class Library:

The Base Class Library (BCL), part of the Framework Class Library (FCL), is a
library of functionality available to all languages using the .NET Framework. The BCL
provides classes which encapsulate a number of common functions, including file
reading and writing, graphic rendering, database interaction and XML document
manipulation.

Simplified Deployment:

Installation of computer software must be carefully managed to ensure that it


does not interfere with previously installed software, and that it conforms to security
requirements. The .NET framework includes design features and tools that help address
these requirements.

Security:

The design is meant to address some of the vulnerabilities, such as buffer


overflows, that have been exploited by malicious software. Additionally, .NET
provides a common security model for all applications.

Portability:

The design of the .NET Framework allows it to theoretically be platform


agnostic, and thus cross-platform compatible. That is, a program written to use the
framework should run without change on any type of system for which the framework
is implemented. Microsoft's commercial implementations of the framework cover
Windows, Windows CE, and the Xbox 360. In addition, Microsoft submits the
specifications for the Common Language Infrastructure (which includes the core class
libraries, Common Type System, and the Common Intermediate Language), the C#
language, and the C++/CLI language to both ECMA and the ISO, making them
available as open standards. This makes it possible for third parties to create compatible
implementations of the framework and its languages on other platforms.

Metadata:

All CLI code is self-describing through .NET metadata. The CLR checks the metadata to ensure that the correct method is called. Metadata is usually generated by language compilers, but developers can create their own metadata through custom attributes. Metadata contains information about the assembly, and is also used to implement the reflective programming capabilities of the .NET Framework.
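For example, a developer can attach custom metadata to a class through an attribute and read it back via reflection. This is standard .NET, shown here as a brief sketch with made-up names.

using System;

// A custom attribute: its data is stored in the assembly's metadata.
[AttributeUsage(AttributeTargets.Class)]
class AuthorAttribute : Attribute
{
    public string Name { get; private set; }
    public AuthorAttribute(string name) { Name = name; }
}

[Author("Project Team")]
class PatientRecordService { }

static class MetadataDemo
{
    static void Main()
    {
        // Reflection reads the metadata back at run time.
        var attr = (AuthorAttribute)Attribute.GetCustomAttribute(
            typeof(PatientRecordService), typeof(AuthorAttribute));
        Console.WriteLine(attr.Name);   // Project Team
    }
}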

Security:

.NET has its own security mechanism with two general features: Code Access Security (CAS), and validation and verification. Code Access Security is based on evidence that is associated with a specific assembly. Typically the evidence is the source of the assembly (whether it is installed on the local machine or has been downloaded from the intranet or Internet). Other code can demand that calling code be granted a specified permission. The demand causes the CLR to perform a call stack walk: every assembly of each method in the call stack is checked for the required permission; if any assembly is not granted the permission, a security exception is thrown.
When an assembly is loaded the CLR performs various tests. Two such tests are
validation and verification. The verification mechanism checks to see if the code does
anything that is 'unsafe'. The algorithm used is quite conservative; hence occasionally
code that is 'safe' does not pass. Unsafe code will only be executed if the assembly has
the 'skip verification' permission, which generally means code that is installed on the
local machine.
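On the .NET Framework, a permission demand that triggers this stack walk looks roughly like the sketch below (CAS is a legacy mechanism of the classic .NET Framework, removed from .NET Core and later; the path used here is arbitrary).

using System;
using System.Security;
using System.Security.Permissions;

static class CasDemo
{
    static void Main()
    {
        // Demand read access to a path; the CLR walks the call stack and
        // throws SecurityException if any caller lacks the permission.
        var permission = new FileIOPermission(
            FileIOPermissionAccess.Read, @"C:\data\records.txt");
        try
        {
            permission.Demand();
            Console.WriteLine("All callers are granted read access.");
        }
        catch (SecurityException)
        {
            Console.WriteLine("A caller in the stack lacks the permission.");
        }
    }
}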

.NET Framework uses appdomains as a mechanism for isolating code running


in a process. Appdomains can be created and code loaded into or unloaded from them
independent of other appdomains. Appdomains can also be configured independently
with different security privileges. This can help increase the security of the application
by isolating potentially unsafe code. The developer, however, has to split the
application into sub domains; it is not done by the CLR.
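A brief sketch of creating and unloading an application domain on the .NET Framework (app domains are not supported in .NET Core and later):

using System;

static class AppDomainDemo
{
    static void Main()
    {
        // Create an isolated application domain.
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox");
        Console.WriteLine("Created domain: " + sandbox.FriendlyName);

        // Code could now be loaded into the sandbox (for example via
        // sandbox.ExecuteAssembly(...)) under its own configuration.

        // Unloading the domain unloads all code that ran inside it.
        AppDomain.Unload(sandbox);
        Console.WriteLine("Domain unloaded.");
    }
}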

Class library:

Namespaces in the BCL:

System
System.CodeDom
System.Collections
System.Diagnostics
System.Globalization
System.IO
System.Resources
System.Text
System.Text.RegularExpressions

Table 5.4(a): Namespaces in the BCL

Microsoft .NET Framework includes a set of standard class libraries. The class
library is organized in a hierarchy of namespaces. Most of the built in APIs are part of
either System.* or Microsoft.* namespaces. It encapsulates a large number of common
functions, such as file reading and writing, graphic rendering, database interaction, and
XML document manipulation, among others. The .NET class libraries are available to
all .NET languages. The .NET Framework class library is divided into two parts: the
Base Class Library and the Framework Class Library.

The Base Class Library (BCL) includes a small subset of the entire class library
and is the core set of classes that serve as the basic API of the Common Language
Runtime. The classes in mscorlib.dll and some of the classes in System.dll and System.Core.dll are considered to be a part of the BCL. The BCL classes are available in
both .NET Framework as well as its alternative implementations including .NET
Compact Framework, Microsoft Silverlight and Mono.

The Framework Class Library (FCL) is a superset of the BCL classes and refers
to the entire class library that ships with .NET Framework. It includes an expanded set
of libraries, including WinForms, ADO.NET, ASP.NET, Language Integrated Query,
Windows Presentation Foundation, Windows Communication Foundation among
others. The FCL is much larger in scope than standard libraries for languages like C++,
and comparable in scope to the standard libraries of Java.

Memory management:

The .NET Framework CLR frees the developer from the burden of managing
memory (allocating and freeing up when done); instead it does the memory
management itself. To this end, the memory allocated to instantiations of .NET types
(objects) is done contiguously from the managed heap, a pool of memory managed by
the CLR. As long as there exists a reference to an object, which might be either a direct
reference to an object or via a graph of objects, the object is considered to be in use by
the CLR. When there is no reference to an object, and it cannot be reached or used, it
becomes garbage. However, it still holds on to the memory allocated to it. .NET
Framework includes a garbage collector which runs periodically, on a separate thread
from the application's thread, that enumerates all the unusable objects and reclaims the
memory allocated to them.

The .NET Garbage Collector (GC) is a non-deterministic, compacting, mark-and-sweep garbage collector. Since it is not guaranteed when the conditions to reclaim memory are reached, the GC runs are non-deterministic. Each .NET application has a set of roots, which are pointers to objects on the managed heap (managed objects). These include references to static objects, objects defined as local variables or method parameters currently in scope, and objects referred to by CPU registers. When the GC runs, it pauses the application, and for each object referred to in the roots, it recursively enumerates all the objects reachable from the root objects and marks them as reachable. It uses .NET metadata and reflection to discover the objects encapsulated by an object, and then recursively walks them. It then enumerates all the objects on the heap (which were initially allocated contiguously) using reflection. The objects are then compacted together by copying them over to the free space to make them contiguous again. Any reference to an object invalidated by moving the object is updated by the GC to reflect the new location. The application is resumed after the garbage collection is over. The GC used by the .NET Framework is actually generational: objects are assigned a generation, and newly created objects belong to generation 0. The objects that survive a garbage collection are tagged as generation 1, and the generation 1 objects that survive another collection are generation 2 objects. The .NET Framework uses up to generation 2 objects. Higher generation objects are garbage collected less frequently than lower generation objects. This helps increase the efficiency of garbage collection: since older objects tend to have a longer lifetime than newer objects, removing older (and thus more likely to survive a collection) objects from the scope of a collection run means fewer objects need to be checked and compacted.
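The generational behavior can be observed directly from code, as in this small sketch (the printed generations are typical for the desktop CLR):

using System;

static class GcDemo
{
    static void Main()
    {
        object obj = new object();
        Console.WriteLine(GC.GetGeneration(obj)); // newly created: generation 0

        GC.Collect();                             // survivors are promoted
        Console.WriteLine(GC.GetGeneration(obj)); // typically 1 now

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj)); // typically 2

        Console.WriteLine(GC.MaxGeneration);      // 2: the highest generation
    }
}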

Versions:

Microsoft started development on the .NET Framework in the late 1990s, originally under the name Next Generation Windows Services (NGWS). By late 2000 the first beta versions of .NET 1.0 were released.

Version   Version Number     Release Date
1.0       1.0.3705.0         2002-01-05
1.1       1.1.4322.573       2003-04-01
2.0       2.0.50727.42       2005-11-07
3.0       3.0.4506.30        2006-11-06
3.5       3.5.21022.8        2007-11-09
4.0       4.0.30319.1        2010-04-12
4.5       4.5.50709.17929    2012-08-15
4.5.1     4.5.50938.18408    2013-10-17

Table 5.4(b): .NET Versions

Figure 5.4(b): The .NET Framework stack

Client Application Development:

Client applications are the closest to a traditional style of application in


Windows-based programming. These are the types of applications that display
windows or forms on the desktop, enabling a user to perform a task. Client applications
include applications such as word processors and spreadsheets, as well as custom
business applications such as data-entry tools, reporting tools, and so on. Client
applications usually employ windows, menus, buttons, and other GUI elements, and
they likely access local resources such as the file system and peripherals such as
printers. Another kind of client application is the traditional ActiveX control (now
replaced by the managed Windows Forms control) deployed over the Internet as a Web
page. This application is much like other client applications: it is executed natively, has
access to local resources, and includes graphical elements.

In the past, developers created such applications using C/C++ in conjunction


with the Microsoft Foundation Classes (MFC) or with a rapid application development
(RAD) environment such as Microsoft® Visual Basic®. The .NET Framework
incorporates aspects of these existing products into a single, consistent development
environment that drastically simplifies the development of client applications.

The Windows Forms classes contained in the .NET Framework are designed to
be used for GUI development. You can easily create command windows, buttons,
menus, toolbars, and other screen elements with the flexibility necessary to
accommodate shifting business needs.

For example, the .NET Framework provides simple properties to adjust visual attributes
associated with forms. In some cases the underlying operating system does not support
changing these attributes directly, and in these cases the .NET Framework
automatically recreates the forms. This is one of many ways in which the .NET
Framework integrates the developer interface, making coding simpler and more
consistent.
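As a minimal illustration of the Windows Forms classes mentioned above, the following sketch builds a window with a single button (the form title and button caption are arbitrary):

using System;
using System.Windows.Forms;

static class ClientAppDemo
{
    [STAThread]
    static void Main()
    {
        // A window and a button, built entirely from .NET base classes.
        var form = new Form { Text = "Data Sharing Client",
                              Width = 320, Height = 200 };
        var button = new Button { Text = "Upload", Left = 100, Top = 60 };
        button.Click += (sender, e) => MessageBox.Show("Upload clicked");
        form.Controls.Add(button);
        Application.Run(form);   // start the message loop
    }
}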

Server Application Development:

Server-side applications in the managed world are implemented through


runtime hosts. Unmanaged applications host the common language runtime, which
allows your custom managed code to control the behavior of the server.
This model provides you with all the features of the common language runtime
and class library while gaining the performance and scalability of the host server.

Figure 5.4(c): Server-side managed code

ASP.NET is the hosting environment that enables developers to use the .NET
Framework to target Web-based applications. However, ASP.NET is more than just a
runtime host; it is a complete architecture for developing Web sites and Internet-
distributed objects using managed code. Both Web Forms and XML Web services use
IIS and ASP.NET as the publishing mechanism for applications, and both have a
collection of supporting classes in the .NET.

5.5. C#.NET:

The Relationship of C# to .NET:
C# is a new programming language, and is significant in two respects:
• It is specifically designed and targeted for use with Microsoft's .NET Framework (a feature-rich platform for the development, deployment, and execution of distributed applications).
• It is a language based upon the modern object-oriented design methodology, and when designing it Microsoft was able to learn from the experience of all the other similar languages that have been around in the 20 years or so since object-oriented principles came to prominence.
 One important thing to make clear is that C# is a language in its own right.
Although it is designed to generate code that targets the .NET environment, it
is not itself part of .NET. There are some features that are supported by .NET
but not by C#, and you might be surprised to learn that there are actually
features of the C# language that are not supported by .NET like Operator
Overloading.
 However, since the C# language is intended for use with .NET, it is important
for us to have an understanding of this Framework if we wish to develop
applications in C# effectively.

The Common Language Runtime:

Central to the .NET framework is its run-time execution environment, known as


the Common Language Runtime (CLR) or the .NET runtime. Code running under the
control of the CLR is often termed managed code.
However, before it can be executed by the CLR, any source code that we develop (in
C# or some other language) needs to be compiled. Compilation occurs in two steps
in .NET:
1. Compilation of source code to Microsoft Intermediate Language (MS-IL).
2. Compilation of IL to platform-specific code by the CLR.

At first sight this might seem a rather long-winded compilation process. Actually,
this two-stage compilation process is very important, because the existence of the
Microsoft Intermediate Language (managed code) is the key to providing many of the
benefits of .NET. Let's see why.

Advantages of Managed Code:

Microsoft Intermediate Language (often shortened to "Intermediate Language", or "IL") shares with Java byte code the idea that it is a low-level language with a simple syntax (based on numeric codes rather than text), which can be very quickly translated into native machine code. Having this well-defined universal syntax for code has significant advantages.

Platform Independence:

First, it means that the same file containing byte code instructions can be placed
on any platform; at runtime the final stage of compilation can then be easily
accomplished so that the code will run on that particular platform. In other words, by
compiling to Intermediate Language we obtain platform independence for .NET, in
much the same way as compiling to Java byte code gives Java platform independence.
You should note that the platform independence of .NET is only theoretical at present
because, at the time of writing, .NET is only available for Windows. However,
porting .NET to other platforms is being explored (see for example the Mono project, an effort to create an open source implementation of .NET, at http://www.go-mono.com/).

Performance Improvement:

Although we previously made comparisons with Java, IL is actually a bit more ambitious than Java byte code. Significantly, one of the disadvantages of Java was that, on execution, the process of translating from Java byte code to native executable resulted in a loss of performance (except in more recent cases, where Java is JIT-compiled on certain platforms). In .NET, the translation from IL to native code is likewise performed by a just-in-time (JIT) compiler, but with a different emphasis. Instead of compiling the entire application in one go (which could lead to a slow start-up time), the JIT compiler simply compiles each portion of code as it is called (just-in-time). When code has been compiled once, the resultant native executable is stored until the application exits, so that it does not need to be recompiled the next time that portion of code is run. Microsoft argues that this process is more efficient than compiling the entire application code at the start, because of the likelihood that large portions of any application code will not actually be executed in any given run. Using the JIT compiler, such code will never get compiled.
This explains why we can expect that execution of managed IL code will be
almost as fast as executing native machine code. What it doesn't explain is why
Microsoft expects that we will get a performance improvement. The reason given for
this is that, since the final stage of compilation takes place at run time, the JIT compiler
will know exactly what processor type the program will run on. This means that it can
optimize the final executable code to take advantage of any features or particular
machine code instructions offered by that particular processor.
Traditional compilers will optimize the code, but they can only perform optimizations that are independent of the particular processor that the code will run on. This is because traditional compilers compile to native executable before the software is shipped. This means that the compiler doesn't know what type of processor the code will run on beyond basic generalities, such as that it will be an x86-compatible processor or an Alpha processor. Visual Studio 6, for example, optimizes for a generic Pentium machine, so the code that it generates cannot take advantage of hardware features of Pentium III processors. On the other hand, the JIT compiler can do all the optimizations that Visual Studio 6 can, and in addition it will optimize for the particular processor the code is running on.

Language Interoperability:

We have seen how the use of IL enables platform independence, and how JIT compilation should improve performance. However, IL also facilitates language interoperability. Simply put, you can compile to IL from one language, and this compiled code should then be interoperable with code that has been compiled to IL from another language.

Intermediate Language:

From what we learned in the previous section, Intermediate Language


obviously plays a fundamental role in the .NET Framework. As C# developers, we now
understand that our C# code will be compiled into Intermediate Language before it is
executed (indeed, the C# compiler only compiles to managed code). It makes sense,
then, that we should now take a closer look at the main characteristics of IL, since any
language that targets .NET would logically need to support the main characteristics of
IL too.

Here are the important features of the Intermediate Language:


 Object-orientation and use of interfaces
 Strong distinction between value and reference types
 Strong data typing
 Error handling through the use of exceptions
 Use of attributes

Support of Object Orientation and Interfaces:

The language independence of .NET does have some practical limits. In


particular, IL, however it is designed, is inevitably going to implement some particular
programming methodology, which means that languages targeting it are going to have
to be compatible with that methodology. The particular route that Microsoft has chosen
to follow for IL is that of classic object-oriented programming, with single
implementation inheritance of classes. Besides classic object-oriented programming,
Intermediate Language also brings in the idea of interfaces, which saw their first
implementation under Windows with COM. .NET interfaces are not the same as COM
interfaces; they do not need to support any of the COM infrastructure (for example,
they are not derived from IUnknown, and they do not have associated GUIDs). However, they do share with COM interfaces the idea that they provide a contract, and
classes that implement a given interface must provide implementations of the methods
and properties specified by that interface.

Object Orientation and Language Interoperability:

Working with .NET means compiling to the Intermediate Language, and that in
turn means that you will need to be programming using traditional object-oriented
methodologies. That alone is not, however, sufficient to give us language
interoperability. After all, C++ and Java both use the same object-oriented paradigms,
but they are still not regarded as interoperable. We need to look a little more closely at
the concept of language interoperability.
An associated problem was that, when debugging, you would still have to
independently debug components written in different languages. It was not possible to
step between languages in the debugger. So what we really mean by language
interoperability is that classes written in one language should be able to talk directly to
classes written in another language. In particular:
• A class written in one language can inherit from a class written in another language.
• A class can contain an instance of another class, no matter what the languages of the two classes are.
• An object can directly call methods of another object written in another language.
• Objects (or references to objects) can be passed around between methods.

Strong Data Typing:

One very important aspect of IL is that it is based on exceptionally strong data


typing. What we mean by that is that all variables are clearly marked as being of a
particular, specific data type (there is no room in IL, for example, for the Variant data
type recognized by Visual Basic and scripting languages). In particular, IL does not
normally permit any operations that result in ambiguous data types.
For instance, VB developers will be used to being able to pass variables around without
worrying too much about their types, because VB automatically performs type
conversion. C++ developers will be used to routinely casting pointers between different
types. Being able to perform this kind of operation can be great for performance, but it
breaks type safety. Hence, it is permitted only in very specific circumstances in some of
the languages that compile to managed code. Indeed, pointers (as opposed to
references) are only permitted in marked blocks of code in C#, and not at all in VB
(although they are allowed as normal in managed C++). Using pointers in your code
will immediately cause it to fail the memory type safety checks performed by the CLR.
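A short example of how this strong typing surfaces in C#: conversions that could lose type information must be explicit, and invalid casts fail with an exception at run time instead of silently corrupting data.

using System;

static class TypeSafetyDemo
{
    static void Main()
    {
        object boxed = 42;

        // int n = boxed;         // compile-time error: no implicit conversion
        int n = (int)boxed;       // explicit unboxing is required
        Console.WriteLine(n);

        try
        {
            string s = (string)boxed;   // invalid cast, checked at run time
            Console.WriteLine(s);
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("The CLR rejected an unsafe conversion.");
        }
    }
}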

You should note that some languages compatible with .NET, such as VB.NET,
still allow some laxity in typing, but that is only possible because the compilers behind
the scenes ensure the type safety is enforced in the emitted IL.
Although enforcing type safety might initially appear to hurt performance, in many
cases this is far outweighed by the benefits gained from the services provided by .NET
that rely on type safety. Such services include:
 Language Interoperability
 Garbage Collection
 Security
 Application Domains

Common Type System (CTS):

This data type problem is solved in .NET through the use of the Common Type
System (CTS). The CTS defines the predefined data types that are available in IL, so
that all languages that target the .NET framework will produce compiled code that is
ultimately based on these types.The CTS doesn't merely specify primitive data types,
but a rich hierarchy of types, which includes well-defined points in the hierarchy at
which code is permitted to define its own types. The hierarchical structure of the
Common Type System reflects the single-inheritance object-oriented methodology of
IL, and looks like this:

Figure 5.4(d): Common Language Specification

The Common Language Specification works with the Common Type System to
ensure language interoperability. The CLS is a set of minimum standards that all
compilers targeting .NET must support. Since IL is a very rich language, writers of
most compilers will prefer to restrict the capabilities of a given compiler to only
support a subset of the facilities offered by IL and the CTS. That is fine, as long
as the compiler supports everything that is defined in the CLS.

Garbage Collection:
The garbage collector is .NET's answer to memory management and in particular to
the question of what to do about reclaiming memory that running applications ask for.
Up until now there have been two techniques used on the Windows platform for
deallocating memory that processes have dynamically requested from the system:
 Make the application code do it all manually
 Make objects maintain reference counts

The .NET runtime relies on the garbage collector instead. This is a program whose
purpose is to clean up memory. The idea is that all dynamically requested memory is
allocated on the heap (that is true for all languages, although in the case of .NET, the
CLR maintains its own managed heap for .NET applications to use). Every so often,
when .NET detects that the managed heap for a given process is becoming full and
therefore needs tidying up, it calls the garbage collector. The garbage collector runs
through variables currently in scope in your code, examining references to objects
stored on the heap to identify which ones are accessible from your code – that is to say
which objects have references that refer to them. Any objects that are not referred to are
deemed to be no longer accessible from your code and can therefore be removed. Java
uses a similar system of garbage collection to this.
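As a minimal sketch (ours, for illustration only), the following C# fragment shows the life cycle the collector manages: an object is allocated on the managed heap, the last reference to it is dropped, and the object becomes eligible for collection the next time the garbage collector runs.

using System;

class Customer
{
    public string Name;
}

class GcDemo
{
    static void Main()
    {
        Customer c = new Customer();   // allocated on the managed heap
        c.Name = "Example";
        c = null;                      // no reference remains; the object is now
                                       // eligible for collection
        GC.Collect();                  // normally unnecessary: the CLR decides
                                       // when the managed heap needs tidying up
    }
}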

Security:

.NET can really excel in terms of complementing the security mechanisms
provided by Windows, because it can offer code-based security, whereas Windows only
really offers role-based security.

Role-based security is based on the identity of the account under which the
process is running, in other words, who owns and is running the process. Code-based
security on the other hand is based on what the code actually does and on how much
the code is trusted. Thanks to the strong type safety of IL, the CLR is able to inspect
code before running it in order to determine required security permissions. .NET also
offers a mechanism by which code can indicate in advance what security permissions it
will require to run.
The importance of code-based security is that it reduces the risks associated with
running code of dubious origin. For example, even if code is running under the
administrator account, it is possible to use code-based security to indicate that that code
should still not be permitted to perform certain types of operation that the administrator
account would normally be allowed to do, such as read or write to environment
variables, read or write to the registry, or to access the .NET reflection features.
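As a hedged sketch of how this looked in the classic .NET Framework's code access security (the registry path, key, and value names below are hypothetical, chosen only for illustration), a method can declaratively demand a permission in advance, and the CLR refuses to run it if the code is not trusted accordingly:

using Microsoft.Win32;
using System.Security.Permissions;

public static class SecureReader
{
    // Declaratively state, in advance, the permission this method requires;
    // the CLR checks the demand before the method is allowed to run.
    [RegistryPermission(SecurityAction.Demand, Read = @"HKEY_LOCAL_MACHINE\SOFTWARE")]
    public static object ReadValue()
    {
        // "Example" and "Version" are hypothetical names for illustration.
        return Registry.GetValue(@"HKEY_LOCAL_MACHINE\SOFTWARE\Example", "Version", null);
    }
}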

.Net Framework Classes:

The .NET base classes are a massive collection of managed code classes that
have been written by Microsoft, and which allow you to do almost any of the tasks that
were previously available through the Windows API. This means that you can either
instantiate objects of whichever .NET base class is appropriate, or you can derive your
own classes from them.

Name Spaces:

Namespaces are the way that .NET avoids name clashes between classes. They
are designed, for example, to avoid the situation in which you define a class to
represent a customer, name your class Customer, and then someone else does the same
thing (quite a likely scenario – the proportion of businesses that have customers seems
to be quite high).

A namespace is no more than a grouping of data types, but it has the effect that
the names of all data types within a namespace automatically get prefixed with the
name of the namespace. The .NET base classes are in a namespace called System. The base
class Array is in this namespace, so its full name is System.Array. If no namespace is
explicitly supplied, then the type will be added to a nameless global namespace.
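For illustration (the namespace and class names below are made up), two classes can share the name Customer without clashing, because each is prefixed by its namespace:

// Full name: MyCompany.Sales.Customer
namespace MyCompany.Sales
{
    public class Customer { }
}

// No clash: full name is OtherVendor.Crm.Customer
namespace OtherVendor.Crm
{
    public class Customer { }
}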

Creating .Net Application using C#:

C# can be used to create console applications: text-only applications that run in
a DOS window. You'll probably use console applications when unit testing class
libraries, and for creating Unix/Linux daemon processes. However, more often you'll
use C# to create applications that use many of the technologies associated with .NET.
In this section, we'll give you an overview of the different types of application that you
can write in C#.
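The simplest such application is a console program; a minimal example (ours, for illustration) is:

using System;

class Program
{
    static void Main()
    {
        Console.WriteLine("Hello from a console application.");
    }
}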

Creating Windows Forms:

Although C# and .NET are particularly suited to web development, they still
offer splendid support for so-called "fat client" apps, applications that have to be
installed on the end-user's machine where most of the processing takes place. This
support is from Windows Forms.

Windows Control:

Although Web Forms and Windows Forms are developed in much the same
way, you use different kinds of controls to populate them. Web Forms use Web
Controls, and Windows Forms use Windows Controls.

Windows Services:

A Windows Service (originally called an NT Service) is a program that is
designed to run in the background in Windows NT/2000/XP (but not Windows 9x).
Services are useful where you want a program to be running continuously and ready to
respond to events without having been explicitly started by the user. A good example
would be the World Wide Web Service on web servers, which listens out for web
requests from clients. There are .NET Framework base classes available in the
System.ServiceProcess namespace.

The Role of C# in .Net Enterprise Architecture:

C# requires the presence of the .NET runtime, and it will probably be a few
years before most clients – particularly most home machines – have .NET installed. In
the meantime, installing a C# application is likely to mean also installing the .NET
redistributable components. Because of that, it is likely that the first place we will see
many C# applications is in the enterprise environment. Indeed, C# arguably presents an
outstanding opportunity for organizations that are interested in building robust, n-tiered
client-server applications.

Figure 5.4(e): C# in .Net Enterprise Architecture


6. SOFTWARE DESIGN
System Architecture:
The nodes involved are the admin and the clients, which stand as the UI for the system. The
deployment is performed as per the hardware and software requirements specified in the
requirements phase.

Figure 6.1(a) shows the architecture of the system.

Key generation center is a key authority that generates public and secret parameters for
CP-ABE. It is in charge of issuing, revoking, and updating attribute keys for users. It grants
differential access rights to individual users based on their attributes. Data storing center is an
entity that provides a data sharing service. It is in charge of controlling the accesses from
outside users to the storing data and providing corresponding contents services. The data
storing center is another key authority that generates personalized user key with the KGC, and
issues and revokes attribute group keys to valid users per each attribute, which are used to
enforce a fine-grained user access control. A data owner is a client who owns data, and wishes to upload it
into the external data storing center for ease of sharing or for cost saving. A data owner is
responsible for defining (attribute based) access policy, and enforcing it on its own data by
encrypting the data under the policy before distributing it. User is an entity who wants to access
the data. If a user possesses a set of attributes satisfying the access policy of the encrypted data,
and is not revoked in any of the valid attribute groups, then he will be able to decrypt the cipher
text and obtain the data.

6.1.1 List Of Tables:

Table 6.1.1(a): Registration

Table 6.1.1(b): User Login

Table 6.1.1(c): Upload File

Table 6.1.1(d): User Download Files

Table 6.1.1(e): Login History

6.1.2 Data Flow Diagrams (DFD):


DFD SYMBOLS:
In the DFD, there are four symbols

1. A square defines a source (originator) or destination of system data


2. An arrow identifies data flow. It is the pipeline through which the information
flows
3. A circle or a bubble represents a process that transforms incoming data flow into
outgoing data flows.
4. An open rectangle is a data store, data at rest or a temporary repository of data.

Process that transforms data flow.

Source or Destination of data

Data flow

Data Store

CONSTRUCTING A DFD:

Several rules of thumb are used in drawing DFDs:


1. Processes should be named and numbered for easy reference. Each name should
   be representative of the process.
2. The direction of flow is from top to bottom and from left to right. Data traditionally
   flow from the source to the destination, although they may flow back to the source.
   One way to indicate this is to draw a long flow line back to the source. An alternative
   way is to repeat the source symbol as a destination. Since it is used more than once
   in the DFD, it is marked with a short diagonal.
3. When a process is exploded into lower-level details, they are numbered.
4. The names of data stores and destinations are written in capital letters. Process and
   data flow names have the first letter of each word capitalized.
A DFD typically shows the minimum contents of data store. Each data store
should contain all the data elements that flow in and out.

Questionnaires should contain all the data elements that flow in and out.
Missing interfaces, redundancies, and the like are then accounted for, often through
interviews.

SALIENT FEATURES OF DFDs:

1. The DFD shows the flow of data, not of control; loops and decisions are control
   considerations and do not appear on a DFD.
2. The DFD does not indicate the time factor involved in any process, whether the
   data flow takes place daily, weekly, monthly, or yearly.
3. The sequence of events is not brought out on the DFD.

TYPES OF DATA FLOW DIAGRAMS:


1. Current Physical
2. Current Logical
3. New Logical
4. New Physical

CURRENT PHYSICAL:
In the current physical DFD, process labels include the names of people or their
positions, or the names of the computer systems, that might provide some of the overall
system processing; the label includes an identification of the technology used to process the
data. Similarly, data flows and data stores are often labeled with the names of the actual
physical media on which data are stored, such as file folders, computer files, business
forms, or computer tapes.

CURRENT LOGICAL:

The physical aspects of the system are removed as much as possible, so that the current
system is reduced to its essence: the data and the processes that transform them,
regardless of actual physical form.

NEW LOGICAL:

This is exactly like the current logical model if the user were completely happy with the
functionality of the current system but had problems with how it was implemented.
Typically, the new logical model will differ from the current logical model by having
additional functions, obsolete functions removed, and inefficient flows reorganized.

NEW PHYSICAL:

The new physical represents only the physical implementation of the new
system.

RULES GOVERNING THE DFD’S:

PROCESS:
1) No process can have only outputs.
2) No process can have only inputs. If an object has only inputs, then it must be a sink.
3) A process has a verb phrase label.

DATA STORE:

1) Data cannot move directly from one data store to another data store, a process must
move data.
2) Data cannot move directly from an outside source to a data store; a process, which
receives the data, must move it from the source and place it into the data store.
3) A data store has a noun phrase label.

SOURCE OR SINK:

The origin and / or destination of data.

1) Data cannot move directly from a source to a sink; it must be moved by a process.
2) A source and/or sink has a noun phrase label.

DATA FLOW:

1) A data flow has only one direction of flow between symbols. It may flow in both
   directions between a process and a data store to show a read before an update. The
   latter is usually indicated, however, by two separate arrows, since these happen at
   different times.
2) A join in a DFD means that exactly the same data comes from any of two or more
   different processes, data stores, or sinks to a common location.
3) A data flow cannot go directly back to the same process it leaves. There must be
   at least one other process that handles the data flow, produces some other data flow,
   and returns the original data into the beginning process.
4) A data flow to a data store means update (delete or change).
5) A data flow from a data store means retrieve or use.
A data flow has a noun phrase label. More than one data flow noun phrase can appear on
a single arrow, as long as all of the flows on the same arrow move together as one
package.

DFD Diagrams:
Context Level Diagram (0 Level)

Figure 6.1.2(a): Overview DFD

Login DFD :

Figure 6.1.2(b): Login DFD

Admin DFD:

Figure 6.1.2(c): Admin DFD

User Activity DFD:

Figure 6.1.2(d): User Activity DFD

6.2. UML Diagrams:


UML is a standard language for specifying, visualizing, constructing, and
documenting the artifacts of software systems.

UML was created by Object Management Group (OMG) and UML 1.0 specification
draft was proposed to the OMG in January 1997.

OMG is continuously putting in effort to make it a truly industry standard.

 UML stands for Unified Modeling Language.


 UML is a pictorial language used to make software blue prints.

UML Modeling Types:

It is very important to distinguish between the different kinds of UML model. Different
diagrams are used for different types of UML modeling. The important types of UML
modeling are described in the sections that follow:

6.2.1 Structural Things:
Structural things are classified into seven types, which are as follows:

Class diagram:
Class diagrams are the most common diagrams used in UML. Class diagram
consists of classes, interfaces, associations and collaboration. Class diagrams basically
represent the object oriented view of a system which is static in nature. Active class is
used in a class diagram to represent the concurrency of the system.

Class diagram represents the object orientation of a system, so it is generally
used for development purposes. This is the most widely used diagram at the time of
system construction.

The purpose of the class diagram is to model the static view of an application.
The class diagrams are the only diagrams which can be directly mapped with object
oriented languages and thus widely used at the time of construction.

Figure 6.2.1 (a) : Class Diagram

Collaboration Diagrams:

A collaboration diagram, also called a communication diagram or
interaction diagram, is an illustration of the relationships and interactions
among software objects in the Unified Modeling Language (UML).

A collaboration diagram resembles a flowchart that portrays the
roles, functionality, and behavior of individual objects as well as the overall
operation of the system in real time. The relationships between the objects
are shown as lines connecting the rectangles. The messages between
objects are shown as arrows connecting the relevant rectangles along with
labels that define the message sequencing.

Login Collaboration:

[The diagram shows: the User opens Default.aspx (1: Open Form), enters the user name
(2: Enter Uname); Default.aspx calls BAL:LoginClass (3: CheckUser), which calls
DAL:SqlHelper (4: ExecuteDataSet) against the database; the results are returned
(5: Results, 6: Return Result) and shown to the user (7: Show Result).]

Figure 6.2.1(b): Login Collaboration Diagram

Upload File:

[The diagram shows: the Admin opens Updatefilefrm.aspx (1: Open Form), enters the file
details (2: Enter Details); the page calls BAL:UploadClass (3: UploadFile), which calls
DAL:SqlHelper (4: ExecuteNonQuery) against the database; the results are returned
(5: Results, 6: Return Result) and shown (7: Show Result).]

Figure 6.2.1(c): Upload File Collaboration Diagram

User Search:

[The diagram shows: the User opens Search.aspx (1: Open Form), enters the search details
(2: Enter Details); the page calls BAL:UserClass (3: SearchFile), which calls
DAL:SqlHelper (4: ExecuteDataSet) against the database; the results are returned
(5: Results, 6: Return Result) and shown (7: Show Result).]

Figure 6.2.1(d): User Search Collaboration Diagram

User Registration:

[The diagram shows: the User opens Registrationfrm.aspx (1: Open Form), enters the
registration details (2: Enter Details); the page calls BAL:UserClass (3: SendData),
which calls DAL:SqlHelper (4: ExecutenonQuery) against the database; the results are
returned (5: Results, 6: Return Result) and shown (7: Show Result).]

Figure 6.2.1(e): User Registration Collaboration Diagram

Use Case Diagram:


Use case diagrams are considered for high-level requirement analysis of a
system. So when the requirements of a system are analyzed, the functionalities are
captured in use cases. So we can say that use cases are nothing but the system
functionalities written in an organized manner. The second thing relevant to the
use cases is the actors. Actors can be defined as something that interacts with the
system.

The actors can be human users, some internal applications, or some external
applications. So, in brief, when we are planning to draw a use case diagram we should
have the following items identified:

 Functionalities to be represented as a use case

 Actors

 Relationships among the use cases and actors.

Use case diagrams are drawn to capture the functional requirements of a system. So
after identifying the above items, we have to follow the guidelines below to draw an
efficient use case diagram.

 The name of a use case is very important. The name should be chosen in such
a way that it identifies the functionalities performed.

 Give a suitable name for actors.

 Show relationships and dependencies clearly in the diagram.

 Do not try to include all types of relationships, because the main purpose of the
diagram is to identify requirements.

 Use notes whenever required to clarify some important points.

Use Case:

[The diagram shows the actors User and Admin interacting with the system's use cases:
Registration, Login, Upload file, User login histories, User file download history,
Search file, Download, and Logout.]

Figure 6.2.1(f): Use Case Diagram

6.2.2 Behavioral Things

Behavioral things are considered the verbs of a model. These are the 'dynamic'
parts, which describe how the model carries out its functionality with respect to time and
space. Behavioral things are classified into two types:

From the term Interaction, it is clear that the diagram is used to describe some
type of interactions among the different elements in the model. This interaction is a
part of dynamic behavior of the system.

Purpose of Interaction Diagrams

The purpose of interaction diagrams is to visualize the interactive behavior of the
system. Visualizing the interaction is a difficult task. Hence, the solution is to use
different types of models to capture the different aspects of the interaction.

Sequence and collaboration diagrams are used to capture the dynamic nature but from
a different angle.

The purpose of an interaction diagram is −

 To capture the dynamic behaviour of a system.

 To describe the message flow in the system.

 To describe the structural organization of the objects.

 To describe the interaction among objects.

How to Draw an Interaction Diagram?


As we have already discussed, the purpose of interaction diagrams is to capture
the dynamic aspect of a system. So to capture the dynamic aspect, we need to
understand what a dynamic aspect is and how it is visualized. Dynamic aspect can be
defined as the snapshot of the running system at a particular moment.

We have two types of interaction diagrams in UML. One is the sequence
diagram and the other is the collaboration diagram. The sequence diagram captures the
time sequence of the message flow from one object to another and the collaboration
diagram describes the organization of objects in a system taking part in the message
flow.

The following things are to be identified clearly before drawing the interaction diagram:

 Objects taking part in the interaction.

 Message flows among the objects.

 The sequence in which the messages are flowing.

 Object organization.

Following are two interaction diagrams modeling the order management system. The
first diagram is a sequence diagram and the second is a collaboration diagram.

The Sequence Diagram


The sequence diagram has four objects (Customer, Order, SpecialOrder and
NormalOrder).
The following diagram shows the message sequence for SpecialOrder object
and the same can be used in case of NormalOrder object. It is important to understand
the time sequence of message flows.

Figure 6.2.2(a): Login sequence diagram

1. The Collaboration Diagram


The second interaction diagram is the collaboration diagram. It shows the
object organization as seen in the following diagram. In the collaboration diagram, the
method call sequence is indicated by some numbering technique. The number
indicates how the methods are called one after another. We have taken the same order
management system to describe the collaboration diagram.

Method calls are similar to those in a sequence diagram. However, the difference
is that the sequence diagram does not describe the object organization, whereas the
collaboration diagram shows the object organization.

To choose between these two diagrams, emphasis is placed on the type of
requirement. If the time sequence is important, then the sequence diagram is used. If
organization is required, then the collaboration diagram is used.

Figure 6.2.2(b): Upload File Sequence Diagram


Where to Use Interaction Diagrams?
We have already discussed that interaction diagrams are used to describe the
dynamic nature of a system. Now, we will look into the practical scenarios where
these diagrams are used. To understand the practical application, we need to
understand the basic nature of sequence and collaboration diagram.

The main purposes of both diagrams are similar, as they are used to capture
the dynamic behavior of a system. However, the specific purpose is more important to
clarify and understand.

Sequence diagrams are used to capture the order of messages flowing from one
object to another. Collaboration diagrams are used to describe the structural
organization of the objects taking part in the interaction. A single diagram is not
sufficient to describe the dynamic aspect of an entire system, so a set of diagrams is
used to capture it as a whole.

Interaction diagrams are used when we want to understand the message flow
and the structural organization. Message flow means the sequence of control flow
from one object to another. Structural organization means the visual organization of
the elements in a system.

Interaction diagrams can be used −

 To model the flow of control by time sequence.

 To model the flow of control by structural organizations.

 For forward engineering.

 For reverse engineering.

2. State chart diagram

The name of the diagram itself clarifies the purpose of the diagram and other
details. It describes different states of a component in a system. The states are specific
to a component/object of a system.

A Statechart diagram describes a state machine. A state machine can be defined
as a machine which defines different states of an object, and these states are controlled
by external or internal events.

The activity diagram, explained in the next chapter, is a special kind of Statechart
diagram. As the Statechart diagram defines states, it is used to model the lifetime of an
object.

Purpose of Statechart Diagrams


Statechart diagram is one of the five UML diagrams used to model the dynamic
nature of a system. They define different states of an object during its lifetime and
these states are changed by events. Statechart diagrams are useful to model the
reactive systems. Reactive systems can be defined as a system that responds to
external or internal events.

Statechart diagram describes the flow of control from one state to another state.
States are defined as a condition in which an object exists and it changes when some
event is triggered. The most important purpose of Statechart diagram is to model
lifetime of an object from creation to termination.

Statechart diagrams are also used for forward and reverse engineering of a
system. However, the main purpose is to model the reactive system.

Following are the main purposes of using Statechart diagrams −

 To model the dynamic aspect of a system.

 To model the life time of a reactive system.

 To describe different states of an object during its life time.

 Define a state machine to model the states of an object.

6.2.3 Grouping Things:

Grouping things can be defined as a mechanism to group elements of a UML
model together. There is only one grouping thing available −

Package − Package is the only one grouping thing available for gathering structural
and behavioral things.

Figure 6.2.3(a): Package Diagram

6.2.4 Annotational Things:

Annotational things can be defined as a mechanism to capture remarks,
descriptions, and comments of UML model elements. A note is the only annotational
thing available; it is used to render comments, constraints, etc. of a UML element.

6.2.5 Relationships In The Uml:

In UML modeling, a relationship is a connection between two or more UML
model elements that adds semantic information to a model.
In the product, you can use several UML relationships to define the structure between
model elements. Examples of relationships include associations, dependencies,
generalizations, realizations, and transitions.

Relationship Description
Abstraction An abstraction relationship is a dependency between
model elements that represent the same concept at
different levels of abstraction or from different
viewpoints. You can add abstraction relationships to a
model in several diagrams, including use-case, class,
and component diagrams.
Aggregation An aggregation relationship depicts a classifier as a part
of, or as subordinate to, another classifier.
Association An association relationship is a structural relationship
between two model elements that shows that objects of
one classifier (actor, use case, class, interface, node, or
component) connect and can navigate to objects of
another classifier. Even in bidirectional relationships, an
association connects two classifiers, the primary
(supplier) and secondary (client).
Binding A binding relationship is a dependency relationship that
assigns values to template parameters and generates a
new model element from the template.
Communication path A communication path is a type of association between
nodes in a deployment diagram that shows how the
nodes exchange messages and signals.
Composition A composition relationship represents a whole–part
relationship and is a type of aggregation. A composition
relationship specifies that the lifetime of the part
classifier is dependent on the lifetime of the whole
classifier.
Control flow A control flow is a type of activity edge that models the
movement of control from one activity node to another.
Dependency A dependency relationship indicates that changes to one
model element (the supplier or independent model
element) can cause changes in another model element
(the client or dependent model element). The supplier
model element is independent because a change in the
client does not affect it. The client model element
depends on the supplier because a change to the
supplier affects the client.
Deploy A deploy relationship shows the specific component
that an instance of a single node uses. In a UML model,
a deploy relationship typically appears in deployment
diagrams.
Directed association A directed association relationship is an association that
is navigable in only one direction and in which the
control flows from one classifier to another (for
example, from an actor to a use case). Only one of the
association ends specifies navigability.
Extend An extend relationship between use cases indicates that
one use case, the extended use case, can extend another
use case, the base use case. The base use case has
the option of using the extended use case.
Generalization A generalization relationship indicates that a specialized
(child) model element is based on a general (parent)
model element. Although the parent model element can
have one or more children, and any child model element
can have one or more parents, typically a single parent
has multiple children. In UML 2.0, several classes can
constitute a generalization set of another class.
Generalization relationships appear in class, component,
and use-case diagrams.
Interface realization An interface realization relationship is a specialized
type of implementation relationship between a classifier
and a provided interface. The interface realization
relationship specifies that the realizing classifier must
conform to the contract that the provided interface
specifies.
Include An include relationship between use cases specifies that
an including (or base) use case requires the behavior
from another use case (the included use case). In an
include relationship, a use case must use the included
use case.
Manifestation A manifestation relationship shows which model
elements, such as components or classes, are manifested
in an artifact. The artifact manifests, or includes, a
specific implementation for, the features of one or
several physical software components.

Note attachment A note attachment relationship connects a note or text
box to a connector or shape. A note attachment
indicates that the note or text box contains information
that is relevant to the attached connector or shape.
Object flow An object flow is a type of activity edge that models the
flow of objects and data from one activity node to
another.
Realization A realization relationship exists between two model
elements when one of them must realize, or implement,
the behavior that the other specifies. The model element
that specifies the behavior is the supplier, and the model
element that implements the behavior is the client. In
UML 2.0, this relationship is normally used to specify
those elements that realize or implement the behavior of
a component.
Usage A usage relationship is a dependency relationship in
which one model element requires the presence of
another model element (or set of model elements) for its
full implementation or operation. The model element
that requires the presence of another model element is
the client, and the model element whose presence is
required is the supplier. Although a usage relationship
indicates an ongoing requirement, it also indicates that
the connection between the two model elements is not
always meaningful or present.

Table 6.2.5(a) Relationships In the Uml

6.2.6 Diagrams In Uml:

Structural UML diagrams

 Class diagram
 Package diagram
 Object diagram
 Component diagram
 Composite structure diagram
 Deployment diagram

The key to making a UML diagram is connecting shapes that represent an object or
class with other shapes to illustrate relationships and the flow of information and data.

Types of UML Diagrams
The current UML standards call for 13 different types of diagrams: class, activity,
object, use case, sequence, package, state, component, communication, composite
structure, interaction overview, timing, and deployment.

These diagrams are organized into two distinct groups: structural diagrams and
behavioral or interaction diagrams.

Behavioral UML diagrams


 Activity diagram
 Sequence diagram
 Use case diagram
 State diagram
 Communication diagram
 Interaction overview diagram
 Timing diagram

Class Diagram:

Class diagrams are the backbone of almost every object-oriented method,
including UML. They describe the static structure of a system.

Object Diagram:

Object diagrams describe the static structure of a system at a particular time.
They can be used to test class diagrams for accuracy.

Figure 6.2.6(b): Object Diagram

Use case Diagram:

Use case diagrams model the functionality of a system using actors and use cases.

Figure 6.2.6(c): Use case Diagram

Activity Diagram:

Activity diagrams illustrate the dynamic nature of a system by modeling the
flow of control from activity to activity. An activity represents an operation on some
class in the system that results in a change in the state of the system. Typically, activity
diagrams are used to model workflow or business processes and internal operations.

Figure 6.2.6(d): Website Activity Diagram

Sequence diagrams:

Sequence diagrams describe interactions among classes in terms of an exchange
of messages over time.

Figure 6.2.6(e): Sequence Diagram

State Diagram:

Statechart diagrams, now known as state machine diagrams or state diagrams,
describe the dynamic behavior of a system in response to external stimuli. State
diagrams are especially useful in modeling reactive objects whose states are triggered
by specific events.

Figure 6.2.6(f): State chart Diagram

Component Diagram:

Component diagrams describe the organization of physical software
components, including source code, run-time (binary) code, and executables.

Figure 6.2.6(g): Component Diagram

6.3 RSA Algorithm:

The RSA algorithm is an asymmetric cryptography algorithm. Asymmetric means
that it works on two different keys, i.e., a Public Key and a Private Key. As the
name suggests, the Public Key is given to everyone and the Private Key is kept private.
An example of asymmetric cryptography:
1. A client (for example, a browser) sends its public key to the server and requests
some data.
2. The server encrypts the data using the client's public key and sends the encrypted
data.
3. The client receives this data and decrypts it with its private key.
Since this is asymmetric, nobody except the browser can decrypt the data, even if a
third party has the browser's public key.

The Idea:
The idea of RSA is based on the fact that it is difficult to factorize a large
integer. The public key consists of two numbers, where one number is the product of
two large prime numbers, and the private key is also derived from the same two prime
numbers. So if somebody can factorize the large number, the private key is
compromised. The encryption strength therefore lies entirely in the key size, and if we
double or triple the key size, the strength of encryption increases exponentially. RSA
keys are typically 1024 or 2048 bits long. Experts believe that 1024-bit keys could be
broken in the near future, but so far this remains an infeasible task.

Key generation
Step 1: Select two large prime numbers p and q, where p != q.

Calculate n, where n = p*q.

n is taken as the modulus for encryption and decryption.

Step 2: Calculate Euler's totient Φ(n) = (p-1)(q-1).

Step 3: Select an integer e that is relatively prime to Φ(n).

Step 4: Calculate d such that (d*e) mod Φ(n) = 1. The pair (n, e) is the public key
and (n, d) is the private key.

Example
An example of generating RSA Key pair is given below. (For ease of understanding,
the primes p & q taken here are small values. Practically, these values are very high).

 Let two primes be p = 7 and q = 13. Thus, modulus n = pq = 7 × 13 = 91.

 Select e = 5, which is a valid choice since there is no number that is a
common factor of 5 and (p − 1)(q − 1) = 6 × 12 = 72, except for 1.

 The pair of numbers (n, e) = (91, 5) forms the public key and can be
made available to anyone whom we wish to be able to send us
encrypted messages.

 Input p = 7, q = 13, and e = 5 to the Extended Euclidean Algorithm.
The output will be d = 29.

 Check that the d calculated is correct by computing
de = 29 × 5 = 145 = 1 mod 72.

 Hence, the public key is (91, 5) and the private key is (91, 29).
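The arithmetic above can be checked with a short C# sketch (ours, for illustration only; real RSA uses keys of 1024 bits or more together with padding, never textbook numbers like these):

using System;
using System.Numerics;

class RsaDemo
{
    static void Main()
    {
        // Toy key from the worked example: n = 91, public e = 5, private d = 29
        BigInteger n = 91, e = 5, d = 29;

        BigInteger message = 10;                               // must be < n
        BigInteger cipher = BigInteger.ModPow(message, e, n);  // c = m^e mod n
        BigInteger plain  = BigInteger.ModPow(cipher, d, n);   // m = c^d mod n

        Console.WriteLine("cipher = " + cipher + ", plain = " + plain); // plain == 10
    }
}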

7. CODING AND IMPLEMENTATION

7.1. Sample code:

7.1.1 MasterPage.master:

<%@ Master Language="C#" AutoEventWireup="true" CodeFile="MasterPage.master.cs" Inherits="MasterPage" %>

<%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="asp" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title>Enhancing Privacy and Reliability in Attribute-Based Data
Sharing</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<link rel="stylesheet" href="style.css" type="text/css" media="screen" />
<link href="Js/jquery-ui-1.10.4.custom%20orange%20theme.css"
rel="stylesheet" />
</head>
<body>
<form id="frm" runat="server">
<asp:ScriptManager ID="ScriptManager1"
runat="server"></asp:ScriptManager>
<div id="wrapper">
<div id="header">
<div id="logo">
<center style="width: 921px">
<strong style="font-size: x-large; font-family: Verdana,
Geneva, Tahoma, sans-serif; font-weight: bold; font-style: normal; font-variant:
small-caps; text-transform: capitalize; color: #800080;">Enhancing Privacy and
Reliability in Attribute-Based<br />
&nbsp;Data Sharing </strong></center>
</div>
<div>
&nbsp;</div>
</div>
<div id="Div1">
<center>
<asp:Menu ID="UserMenu" runat="server"
Orientation="Horizontal" Font-Bold="True"
ForeColor="#800080" Height="16px" Width="691px"
RenderingMode="Table">

<DynamicMenuItemStyle BackColor="#B9A3B7" Font-Bold="True" ForeColor="White" />
<Items>
<asp:MenuItem Text="|||" Value="|||"
Selectable="False"></asp:MenuItem>
<asp:MenuItem NavigateUrl="~/Home.aspx" Text="Home"
Value="Home"></asp:MenuItem>
<asp:MenuItem Text="||" Value="||"
Selectable="False"></asp:MenuItem>
<asp:MenuItem Text="AboutUs" Value="AboutUs"
NavigateUrl="~/AboutUs.aspx"></asp:MenuItem>
<asp:MenuItem Text="||" Value="||"
Selectable="False"></asp:MenuItem>
<asp:MenuItem Text="Registration"
Value="Registration" NavigateUrl="~/frmRegistration.aspx"></asp:MenuItem>
<asp:MenuItem Text="||" Value="||"
Selectable="False"></asp:MenuItem>
<asp:MenuItem Text="Request" Value="Request
Password" Selectable="False">
<asp:MenuItem Text="Password" Value="Password"
NavigateUrl="~/frmRequestPassword.aspx"></asp:MenuItem>
<asp:MenuItem Text="Decrypt" Value="Decrypt"
NavigateUrl="~/frmDecryptPassword.aspx"></asp:MenuItem>
</asp:MenuItem>
<asp:MenuItem Text="||" Value="||"
Selectable="False"></asp:MenuItem>
<asp:MenuItem Text="Login" Value="Login"
Selectable="False">
<asp:MenuItem
NavigateUrl="~/Admin/frmAdminHome.aspx" Text="Admin"
Value="Admin"></asp:MenuItem>
<asp:MenuItem
NavigateUrl="~/User/frmUserHomePage.aspx" Text="User"
Value="User"></asp:MenuItem>
<asp:MenuItem
NavigateUrl="~/frmForgotPassword.aspx" Text="forgot pwd" Value="forgot
pwd"></asp:MenuItem>
</asp:MenuItem>
<asp:MenuItem Text="|||" Value="|||"></asp:MenuItem>
</Items>
</asp:Menu>
</center>
</div>
<div id="content">
<center>
<asp:ContentPlaceHolder ID="ContentPlaceHolder1"
runat="server"></asp:ContentPlaceHolder></center>
</div>
<div id="footer">
<center>Copyright 2017-2018 All Rights Reserved</center>
</div>
</div>
<div align="center"></div>
</form>
</body></html>
7.1.2. Home.aspx:

<%@ Page Title="" Language="C#" MasterPageFile="~/MasterPage.master" AutoEventWireup="true" CodeFile="Home.aspx.cs" Inherits="Home" %>
<asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
    <p style="font-size: large">
        Welcome</p>
    <marquee direction="left" behavior="scroll" scrollamount="5" loop="2" bgcolor="#fff">
        <img src="Images/hos1-2.png" width="600px" height="400px" />
        <img src="Images/host2.jpg" width="600px" height="400px" />
        <img src="Images/744277741-liposuction-plastic-surgeon-theatre-nurse-surgery-team.jpg" width="600px" height="400px" />
        <img src="Images/Banner_2000x800_1.jpg" width="600px" height="400px" />
        <img src="images/1.jpg" width="600px" height="400px" />
        <img src="Images/Elekta_VersaHD_29_RKP.jpg" width="600px" height="400px" />
    </marquee>
<p>
&nbsp;</p>
<p>
&nbsp;</p>
</asp:Content>

7.1.3. FrmDecryptPassword.aspx:

<%@ Page Title="" Language="C#" MasterPageFile="~/MasterPage.master" AutoEventWireup="true" CodeFile="frmDecryptPassword.aspx.cs" Inherits="frmDecryptPassword" %>

<%@ Register assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" tagprefix="cc1" %>

<asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
<center></center>
<br />
<br />
<br />
<br /> <br />
<br />
<asp:Panel ID="pnl" runat="server" Width="383px" Height="185px">

<center style="font-size: medium"><strong>&nbsp;Decrypt Password<br />
<br />
</strong></center>
<table>
<tr>
<td>
<table cellpadding="0" cellspacing="0"
style="border:#627AAD;background-color:#E4E8F1;">
<tr>
<td align="right">&nbsp;</td>
<td align="right">&nbsp;</td>
<td>&nbsp;</td>
<td style="width: 44px">&nbsp;</td>
</tr>
<tr>
<td align="right">&nbsp;</td>
<td align="right"><strong>Enter Id&nbsp;</strong></td>
<td>
<asp:TextBox ID="txtUserName"
runat="server"></asp:TextBox>
</td>
<td style="width: 44px">
<asp:RequiredFieldValidator ID="RFVUserName"
runat="server" ControlToValidate="txtUserName" ErrorMessage="*" ForeColor="Red"
style="font-weight: bold"></asp:RequiredFieldValidator>
</td>
</tr>
<tr>
<td align="right" style="height: 40px">&nbsp;</td>
<td align="right" style="height: 40px"><strong>Enter
Password&nbsp;</strong></td>
<td style="height: 40px">
<asp:TextBox ID="txtPassword" runat="server"
TextMode="Password"></asp:TextBox>
</td>
<td style="width: 44px; height: 40px;">
<asp:RequiredFieldValidator ID="RequiredFieldValidator1"
runat="server" ControlToValidate="txtPassword" ErrorMessage="*" ForeColor="Red"
style="font-weight: bold"></asp:RequiredFieldValidator>
</td>
</tr>
<tr>

<td>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</td>
<td colspan="3">
<asp:ImageButton ID="btnSubmit" runat="server"
Height="21px" ImageUrl="~/Images/submit.jpg" OnClick="btnSubmit_Click1"
Width="101px" />
&nbsp;&nbsp;&nbsp;

<asp:ImageButton ID="btnReset" runat="server"
CausesValidation="False" Height="23px" ImageUrl="~/Images/clear.jpg"
OnClick="btnReset_Click" Width="101px" />
</td>
</tr>
<tr>
<td>&nbsp;</td>
<td colspan="3">
<asp:Label ID="lblMsg" runat="server" style="font-
family: Verdana; font-weight: 700;" Visible="false"></asp:Label>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td>&nbsp;</td>
</tr>
</table>
<center>&nbsp;</center>
</asp:Panel>
<cc1:RoundedCornersExtender ID="RoundedCornersExtender1" runat="server"
BorderColor="Black" Radius="15" Corners="All" TargetControlID="pnl">
</cc1:RoundedCornersExtender>
<br />
<br />
</asp:Content>

7.1.4. Login.aspx:

<%@ Page Title="" Language="C#" MasterPageFile="~/MasterPage.master" AutoEventWireup="true" CodeFile="login.aspx.cs" Inherits="login" %>

<%@ Register assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" tagprefix="cc1" %>

<asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
<center>&nbsp;</center>
<center>&nbsp;</center>
<center>&nbsp;</center>
<center>&nbsp;</center>
<center>
&nbsp;</center>
<center>&nbsp;</center>
<center>&nbsp;<asp:Panel ID="pnl" runat="server" Width="383px" Height="185px">
<center>
<center style="font-size: medium">
<strong><font face="verdana,arial" size="2"><b>

<asp:Label ID="lblLogin" runat="server" CssClass="style5" Font-
Size="Medium" ForeColor="Black" style="color: #000000" Text="lblLogin"
Visible="False"></asp:Label>
</b></font>
<br />
</strong>
</center>
<table align="center" border="0" cellpadding="0" cellspacing="0">
<tr>
<%--#ADB9CD--%>
<td bgcolor="#E4E8F1"><font face="verdana,arial" size="2">
<br>
<table border="0" cellpadding="0" cellspacing="0">
<table id="Table2" border="0" cellpadding="2"
cellspacing="0" height="0" style="width: 104%">
<tr>
<td align="right" style="width: 35%; height:
27px;"><b><font face="Verdana"
size="2">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; User
Id</font></b></td>
<td align="left" style="height: 27px"
width="60%"><font face="Verdana" size="2"><b>
<asp:TextBox ID="txtLoginId"
runat="server"></asp:TextBox>
<asp:RequiredFieldValidator
ID="RequiredFieldValidator1" runat="server" ControlToValidate="txtLoginId"
ErrorMessage="*"></asp:RequiredFieldValidator>
</b></font></td>
<tr>
<td align="right" style="width: 35%"><b><font
face="Verdana" size="2">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Password</font></b></td>
<td align="left" width="60%"><b><font
face="Verdana" size="2">
<asp:TextBox ID="txtPassWord" runat="server"
TextMode="Password"></asp:TextBox>
</font></b></td>
</tr>
<tr>
<td style="width: 35%">&nbsp;</td>
<td>&nbsp;</td>
</tr>
<tr>
<td align="center" colspan="2" width="40%">
<asp:ImageButton ID="ImageButton1"
runat="server" Height="28px" ImageUrl="~/Images/login.jpg"
OnClick="ImageButton1_Click" />
<b><font face="Verdana"
size="1">&nbsp;&nbsp;<font face="verdana,arial" size="2"><asp:ImageButton
ID="ImageButton2" runat="server" CausesValidation="False" Height="29px"
ImageUrl="~/Images/Reset.jpg" OnClick="ImageButton2_Click" Width="93px" />
</font>&nbsp;</font>&nbsp;</b></td>
<tr>
<td colspan="2">
<center>
<asp:LinkButton ID="LinkButton1"
runat="server" Font-Bold="True" OnClick="LinkButton1_Click"
CausesValidation="False">Forgot Password?</asp:LinkButton>
</center>
</td>
</tr>
</tr>
</tr>
</table>
</table>
<asp:Label ID="lblMsg" runat="server" ForeColor="Red"
Text="lblMsg" Visible="false"></asp:Label>
<br />
<br />
<br />
</font></td>
</tr>
</table>
</center>
</asp:Panel>
<cc1:RoundedCornersExtender ID="RoundedCornersExtender1" runat="server"
BorderColor="Black" Radius="15" Corners="All" TargetControlID="pnl">
</cc1:RoundedCornersExtender>
</center>
<center>&nbsp;</center>
<%--#B9A3B7
#ADB9CD--%>
<center></center>

</asp:Content>
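The report does not reproduce the code-behind for this page. A minimal sketch of what the ImageButton1_Click handler might look like is given below; the connection string name "dbConn", the Users table, and its columns are hypothetical, and a real system would compare hashed or encrypted passwords rather than plain text.

using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Web.UI;

public partial class login : System.Web.UI.Page
{
    protected void ImageButton1_Click(object sender, ImageClickEventArgs e)
    {
        string connStr =
            ConfigurationManager.ConnectionStrings["dbConn"].ConnectionString;

        using (SqlConnection con = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Users WHERE LoginId = @id AND Password = @pwd", con))
        {
            // Parameterized queries avoid SQL injection
            cmd.Parameters.AddWithValue("@id", txtLoginId.Text.Trim());
            cmd.Parameters.AddWithValue("@pwd", txtPassWord.Text);
            con.Open();

            if ((int)cmd.ExecuteScalar() > 0)
            {
                Response.Redirect("User/frmUserHomePage.aspx");
            }
            else
            {
                lblMsg.Text = "Invalid user id or password";
                lblMsg.Visible = true;
            }
        }
    }
}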

7.1.5. FrmFileUpload.aspx:

<%@ Page Title="" Language="C#" MasterPageFile="~/User/UserMasterPage.master" AutoEventWireup="true" CodeFile="frmFileUpload.aspx.cs" Inherits="User_frmFileUpload" %>

<%@ Register assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" tagprefix="cc1" %>

<asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" runat="Server">
<br />

<br />
<br />
<br />
<br />
<center>
<table align="center">
<tr>
<td
style="font-family: 'Times New Roman', Times, serif; font-
size: x-large; font-weight: bold; color: #293955; height: 30px;"><asp:Panel
ID="pnl" runat="server" Height="241px" Width="383px">
<center style="font-size: medium">
Upload a File</center>
<center>
<center>
<table align="right">
<tr>
<td colspan="2" style="height: 5px"></td>
</tr>
<tr>
<td style="width: 157px; font-size: small;"><span
style="font-weight: normal">Attribute</span></td>
<td align="left" style="width: 239px">
<asp:DropDownList ID="ddlAttribute"
runat="server" Height="23px" Width="124px">
</asp:DropDownList>
<asp:CompareValidator ID="CompareValidator1"
runat="server" ControlToValidate="ddlAttribute" ErrorMessage="Select"
ForeColor="Red" Operator="NotEqual" ValueToCompare="Select Atribute"
style="font-size: small"></asp:CompareValidator>
</td>
</tr>
<tr>
<td style="width: 157px; font-size: small;"><span
style="font-weight: normal">File Name</span></td>
<td align="left" style="width: 239px">
<asp:TextBox ID="txtFName" runat="server"
Height="23px"></asp:TextBox>
</td>
</tr>
<tr>
<td style="width: 157px; font-size: small;"><span
style="font-weight: normal">File Description</span></td>
<td align="left" style="width: 239px">
<asp:TextBox ID="txtDesc" runat="server"
Height="23px"></asp:TextBox>
<%--<cc1:CalendarExtender ID="CalendarExtender2"
Format="dd/MM/yy"
runat="server" TargetControlID="txtDOB" />--%></td>
</tr>
<tr>

<td style="width: 157px; font-size: small;"><span
style="font-weight: normal">Upload File</span></td>
<td align="left" style="width: 239px">
<asp:FileUpload ID="FileUpload1" runat="server"
Height="23px" />
</td>
</tr>
<tr>
<td style="width: 157px; font-size: small;"><span
style="font-weight: normal">E-mail</span></td>
<td align="left" style="width: 239px">
<asp:TextBox ID="txtEmail" runat="server"
Height="23px"></asp:TextBox>
</td>
</tr>
<tr>
<td style="width: 157px; font-size:
small;">&nbsp;</td>
<td align="left" style="width: 239px">&nbsp;</td>
</tr>
<tr>
<td colspan="2" style="height: 35px">
<asp:ImageButton ID="ImageButton1"
runat="server" Height="25px" ImageUrl="~/Images/submit.jpg"
OnClick="ImageButton1_Click" />
&nbsp;<asp:ImageButton ID="ImageButton2"
runat="server" CausesValidation="False" Height="25px"
ImageUrl="~/Images/clear.jpg" OnClick="ImageButton2_Click" />
<%--<input type="submit" id="btn"
value="Register" onclick="return Button1_onclick()" />--%>
</td>
</tr>
<tr>
<td colspan="2" style="height: 35px">
<asp:Label ID="lblMsg" runat="server"
Text="Label" Visible="false" style="font-size: medium"></asp:Label>
</td>
</tr>
</table>
</center>
</center>
</asp:Panel>
<cc1:RoundedCornersExtender ID="RoundedCornersExtender1" runat="server"
BorderColor="Black" Corners="All" Radius="15" TargetControlID="pnl">
</cc1:RoundedCornersExtender>
<br />
</td>
</tr>
</table>
</center>

</asp:Content>
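Again, the code-behind is not included in the report; a hedged sketch of the upload handler, assuming a hypothetical "~/Files/" folder for storing uploads, could look like this:

using System;
using System.Web.UI;

public partial class User_frmFileUpload : System.Web.UI.Page
{
    protected void ImageButton1_Click(object sender, ImageClickEventArgs e)
    {
        if (FileUpload1.HasFile)
        {
            // Save the uploaded file; "~/Files/" is illustrative only
            string path = Server.MapPath("~/Files/") + FileUpload1.FileName;
            FileUpload1.SaveAs(path);

            lblMsg.Text = "File uploaded successfully";
            lblMsg.Visible = true;
        }
    }
}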
8. SYSTEM TESTING
8.1 Testing Strategies:

Software testing is a critical element of software quality assurance and
represents the ultimate review of specification, design, and coding. In fact, testing is the
one step in the software engineering process that could be viewed as destructive rather
than constructive. A strategy for software testing integrates software test case design
methods into a well-planned series of steps that result in the successful construction of
software.

8.2 STRATEGIC APPROACH TO SOFTWARE TESTING:

The software engineering process can be viewed as a spiral. Initially, system
engineering defines the role of software and leads to software requirement analysis,
where the information domain, functions, behavior, performance, constraints, and
validation criteria for software are established. Testing progresses by moving outward
along the spiral to integration testing, where the focus is on the design and the
construction of the software architecture.

[The figure shows the testing levels: Unit testing → Module testing → Sub-system
testing (component testing) → System testing (integration testing) → Acceptance
testing (user testing).]

Figure 8.2(a): Testing life cycle
Unit testing focuses verification effort on the smallest unit of software design,
the module. The unit testing we have is white box oriented, and for some modules the
steps are conducted in parallel.

WHITE BOX TESTING:


This type of testing ensures that

 All independent paths have been exercised at least once


 All logical decisions have been exercised on their true and false sides
 All loops are executed at their boundaries and within their operational bounds
 All internal data structures have been exercised to assure their validity.
To follow the concept of white box testing, we have tested each form we have created
independently, to verify that data flow is correct, all conditions are exercised to check
their validity, and all loops are executed on their boundaries.

BASIC PATH TESTING:

The established technique of the flow graph with Cyclomatic complexity was used to
derive test cases for all the functions. The main steps in deriving test cases were:

Use the design of the code and draw correspondent flow graph.

Determine the Cyclomatic complexity of resultant flow graph, using formula:

V(G) = E - N + 2, or

V(G) = P + 1, or

V(G) = Number of Regions

Where V (G) is Cyclomatic complexity,

E is the number of edges,

N is the number of flow graph nodes,

P is the number of predicate nodes.

Determine the basis set of linearly independent paths.
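As a small worked example (ours): for a function containing a single if/else statement, the flow graph has N = 4 nodes, E = 4 edges, and P = 1 predicate node, so V(G) = E - N + 2 = 4 - 4 + 2 = 2 = P + 1. Two linearly independent paths (the true branch and the false branch) must therefore be exercised.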

CONDITIONAL TESTING:

In this part of the testing, each of the conditions was tested for both its true and
false aspects, and all the resulting paths were tested, so that each path that may be
generated by a particular condition is traced to uncover any possible errors.

DATA FLOW TESTING:

This type of testing selects the path of the program according to the location of the
definition and use of variables. This kind of testing was used only where local
variables were declared. The definition-use chain method was used in this type of
testing. It was particularly useful in nested statements.

LOOP TESTING:

In this type of testing all the loops are tested to all the limits possible. The following
exercise was adopted for all loops:
 All the loops were tested at their limits, just above them and just below them.
 All the loops were skipped at least once.
 For nested loops test the inner most loop first and then work outwards.
 For concatenated loops the values of dependent loops were set with the help of
connected loop.
 Unstructured loops were resolved into nested loops or concatenated loops and
tested as above.

9 EXPERIMENTAL RESULTS
9.1 EFFICIENCY COMPARISION RESULTS:
In this section, we analyze and compare the efficiency of the proposed scheme with the
previous CP-ABE schemes (that is, Bethencourt et al.'s scheme (BSW), Attrapadung's
scheme (BCP-ABE2), and Yu et al.'s scheme (YWRL)) in theoretical and practical
aspects. Then, the efficiency of the proposed scheme is demonstrated in the network
simulation in terms of the communication cost. We also discuss its efficiency when
implemented with specific parameters and compare these results with those obtained by
the other schemes.

Efficiency Comparison

Table 9.1 (a): Efficiency Comparison

The number of users in an attribute group:

Figure 9.1(a): Communication cost of system (user side)

Communication cost in the system:

Figure 9.1(b): Communication cost of system (system side)

9.2 Screenshots:

Figure 9.2(a): Home Page


This home page links to several pages, such as the About Us, Registration, Request,
and Login pages.

Figure 9.2(b): About us Page

This About Us page gives information regarding the functioning and enhancement of the
website.
Figure 9.2(c): Registration Form

This is the registration form page, where the user has to fill in his details. Here, user
means doctor.

Figure 9.2(d): Registration Form

On this page the user enters all his details, as shown in the above figure.

Figure 9.2(e): Registration Form submitted successfully

On this page you can see that the registration form has been successfully submitted, and
a unique id number is then generated for the user.

Figure 9.2(f): Request Password

On this request password page, the user has to enter his unique id number in order to get
his password through the registered mail id/database.

Figure 9.2(g): Decrypt Password

On this decrypt password page, the user has to enter his unique id and generated
password and then click Submit.

Figure 9.2(h): Decrypted Password Display

On this page you can see that a password is generated for the user (doctor).

User:

Figure 9.2(i): User Login Page

On this page the user has to log in with his unique id and generated password.

Figure 9.2(j): User Login page Display

This is the User (doctor) page. It offers features such as file upload (patient),
secret key search, file download (patient), manage password (change password), and
logout.

Figure 9.2(k): Change Password of user

This is the user (doctor) page, where the user can change his password.

Admin:

Figure 9.2(l): Admin Login Page

This is the admin login page. The admin must enter his unique id and password to log in.

Figure 9.2(m): Admin login Display

This is the Admin page. It offers features such as file upload (patient),
user login and download history, and logout.

Figure 9.2(n): Admin Upload File

This is the file upload page. The admin uploads patient files to doctors.

USER:

Figure 9.2(o): User Search files Page

This is the user page. The user (doctor) searches for files by patient name and unique
mail id and then clicks Search.

Figure 9.2(p): Generated key for user search file Page

When the user's entered credentials match, a search file key is generated, which is
required to download the files.

Figure 9.2(q): User File download Page

This is the user file download page. The user downloads a file by patient name and the
generated search file key.

Figure 9.2(r): User file Already Downloaded

On the user file download page, if you try to download the file once again, it will show
a notification that the file has already been downloaded.

Figure 9.2(s): User Upload File

This is the upload file page. The user (doctor) studies the patient's file and then
uploads a file to the admin.

Figure 9.2(t): Upload file successfully

This page shows that the patient's file was uploaded successfully.

Admin:

Figure 9.2(u): User Download history

This is an admin page. It shows whether the user has downloaded a file or not. If the user
(doctor) downloads a file, it will show in the user's file download history.

Figure 9.2(v): User Login History in Admin

This is an admin page. It shows whether the user has logged in or not. If the user (doctor)
logs in, it will show in the user's login history.

10. FUTURE ENHANCEMENTS

First, the key escrow problem is resolved by a key issuing protocol that exploits the
characteristic of the data sharing system architecture. The key issuing protocol
generates and issues user secret keys by performing a secure two-party computation
(2PC) protocol between the KGC and the data-storing center with their own master
secrets. The 2PC protocol deters them from obtaining any master secret information of
each other such that none of them could generate the whole set of user keys alone.

Thus, users are not required to fully trust the KGC and the data-storing center in
order to protect the data to be shared. The data confidentiality and privacy can be
cryptographically enforced against any curious KGC or data-storing center in the
proposed scheme. Second, immediate user revocation can be done via the proxy
encryption mechanism together with the CP-ABE algorithm. Attribute group keys are
selectively distributed to the valid users in each attribute group, which then are used to
re-encrypt the ciphertext encrypted under the CP-ABE algorithm.

11. CONCLUSION

The enforcement of access policies and the support of policy updates are
important challenging issues in the data sharing systems. In this study, we proposed an
attribute based data sharing scheme to enforce a fine-grained data access control by
exploiting the characteristic of the data sharing system. The proposed scheme features a
key issuing mechanism that removes key escrow during the key generation. The user
secret keys are generated through a secure two-party computation such that any curious
key generation center or data-storing center cannot derive the private keys individually.
Thus, the proposed scheme enhances data privacy and confidentiality in the data
sharing system against any system managers as well as adversarial outsiders without
corresponding credentials. The proposed scheme can perform an immediate user revocation
on each attribute set while taking full advantage of the scalable access control provided
by the cipher-text policy attribute-based encryption. Therefore, the proposed scheme
achieves more secure and fine-grained data access control in the data sharing system.
We demonstrated that the proposed scheme is efficient and scalable to securely manage
user data in the data sharing system.

12. BIBLIOGRAPHY AND REFERENCES


[1] L. Ibraimi, M. Petkovic, S. Nikova, P. Hartel, and W. Jonker, "Mediated Ciphertext-Policy Attribute-Based Encryption and Its Application," Proc. Int'l Workshop Information Security Applications (WISA '09), pp. 309-323, 2009.
[2] A. Lewko, A. Sahai, and B. Waters, "Revocation Systems with Very Small Private Keys," Proc. IEEE Symp. Security and Privacy, pp. 273-285, 2010.
[3] N. Attrapadung and H. Imai, "Conjunctive Broadcast and Attribute-Based Encryption," Proc. Int'l Conf. on Pairing-Based Cryptography (Pairing), pp. 248-265, 2009.
[4] S. Yu, C. Wang, K. Ren, and W. Lou, "Attribute Based Data Sharing with Attribute Revocation," Proc. ACM Symp. Information, Computer and Comm. Security (ASIACCS '10), 2010.
[5] X. Liang, Z. Cao, H. Lin, and D. Xing, "Provably Secure and Efficient Bounded Ciphertext Policy Attribute Based Encryption," Proc. Int'l Symp. Information, Computer, and Comm. Security (ASIACCS), pp. 343-352, 2009.
[6] The Pairing-Based Cryptography Library, http://crypto.stanford.edu/pbc/, 2012.
[7] M. Chase and S.S.M. Chow, "Improving Privacy and Security in Multi-Authority Attribute-Based Encryption."
