Sample Documentation Content
INTRODUCTION
1.1. Introduction to Project:
Network and computing technology enables many people to easily share their data
with others using online external storage. People can share their lives with friends
by uploading their private photos or messages into the online social networks; or
upload highly sensitive personal health records (PHRs) into online data servers such as
Microsoft Health Vault, Google Health for ease of sharing with their primary doctors or
for cost saving. As people enjoy the advantages of these new technologies and services,
their concerns about data security and access control also arise. Improper use of the
data by the storage server or unauthorized access by outside users could be potential
threats to their data. People would like to make their sensitive or private data only
accessible to the authorized people with credentials they specified. Attribute-based
encryption (ABE) is a promising cryptographic approach that achieves a fine-grained
data access control. It provides a way of defining access policies based on different
attributes of the requester, environment, or the data object. In particular, ciphertext-policy attribute-based encryption (CP-ABE) enables an encryptor to define the attribute set, over a universe of attributes, that a decryptor needs to possess in order to decrypt the ciphertext, and to enforce this policy on the contents. Thus, each user with a different set of
attributes is allowed to decrypt different pieces of data per the security policy. This
effectively eliminates the need to rely on the data storage server for preventing
unauthorized data access, which is the traditional access control approach such as the
reference monitor. Nevertheless, applying CP-ABE in the data sharing system has
several challenges. In CP-ABE, the key generation center (KGC) generates private keys
of users by applying the KGC’s master secret keys to users’ associated set of attributes.
Thus, the major benefit of this approach is to largely reduce the need for processing and
storing public key certificates under traditional public key infrastructure (PKI).
However, the advantage of the CP-ABE comes with a major drawback which is known
as a key escrow problem. The KGC can decrypt every cipher text addressed to specific
users by generating their attribute keys. Another challenge is key revocation.
Since some users may change their associate attributes at some time, or some private
keys might be compromised, key revocation or update for each attribute is necessary in
order to make systems secure. This issue is even more difficult especially in ABE,
since each attribute is conceivably shared by multiple users (henceforth, we refer to
such a set of users as an attribute group). This implies that revocation of any attribute or any single user in an attribute group would affect all the other users in the group. It may result in a bottleneck during the rekeying procedure, or in security degradation due to the window of vulnerability.
1.2. Purpose of the Project
First, the key escrow problem is resolved by a key issuing protocol that exploits
the characteristic of the data sharing system architecture. The key issuing protocol
generates and issues user secret keys by performing a secure two-party computation
(2PC) protocol between the KGC and the data-storing center with their own master
secrets. The 2PC protocol deters them from obtaining any master secret information of
each other such that none of them could generate the whole set of user keys alone.
Thus, users are not required to fully trust the KGC and the data-storing center in order
to protect their data to be shared. Second, attribute group keys are selectively distributed to the valid users in each attribute group, which are then used to re-encrypt the ciphertext encrypted under the CP-ABE algorithm. Immediate user revocation enhances the backward/forward secrecy of the data on any membership change. In addition, as user revocation can be done at each attribute level rather than at the system level, more fine-grained user access control is possible. Even if a user is revoked from some
attribute groups, he would still be able to decrypt the shared data as long as the other
attributes that he holds satisfy the access policy of the cipher-text. Data owners need
not be concerned about defining any access policy for users, but just need to define
only the access policy for attributes as in the previous ABE schemes. The proposed
scheme delegates most laborious tasks of membership management and user revocation
to the data-storing center while the KGC is responsible for the attribute key
management as in the previous CP-ABE schemes without leaking any confidential
information to the other parties. Therefore, the proposed scheme is the most suitable for
the data sharing scenarios where users encrypt the data only once and upload it to the
data-storing centers, and leave the rest of the tasks to the data-storing centers such as
re-encryption and revocation.
2. LITERATURE SURVEY
2.1. Fuzzy Identity-Based Encryption:
Identity-Based Encryption (IBE) allows for a sender to encrypt a message to an
identity without access to a public key certificate. The ability to do public key
encryption without certificates has many practical applications. For example, a
user can send an encrypted mail to a recipient, [email protected],
without requiring either the existence of a Public-Key Infrastructure or that
the recipient be on-line at the time of creation.
One common feature of all previous Identity-Based Encryption systems is that
they view identities as a string of characters. In this paper we propose a new
type of Identity-Based Encryption that we call Fuzzy Identity-Based Encryption
in which we view identities as a set of descriptive attributes. In a Fuzzy
Identity-Based Encryption scheme, a user with the secret key for the identity ω is able to decrypt a cipher-text encrypted with the public key ω′ if and only if ω and ω′ are within a certain distance of each other as judged by some metric.
Therefore, our system allows for a certain amount of error-tolerance in the
identities.
Fuzzy-IBE gives rise to two interesting new applications. The first is an
Identity-Based Encryption system that uses biometric identities. That is we can
view a user’s biometric, for example an iris scan, as that user’s identity
described by several attributes and then encrypt to the user using their biometric
identity. Since biometric measurements are noisy, we cannot use existing IBE
systems. However, the error-tolerance property of Fuzzy-IBE allows for a
private key (derived from a measurement of a biometric) to decrypt a cipher-
text encrypted with a slightly different measurement of the same biometric.
Secondly, Fuzzy IBE can be used for an application that we call “attribute-
based encryption”. In this application a party will wish to encrypt a document to
all users that have a certain set of attributes. For example, in a computer
science department, the chairperson might want to encrypt a document to all of
its systems faculty on a hiring committee. In this case it would encrypt to the
identity {“hiring- committee”,“faculty”,“systems”}.
There is a trend for sensitive user data to be stored by third parties on the
Internet. For example, personal email, data, and personal preferences are stored on web
portal sites such as Google and Yahoo. The attack correlation center, dshield.org,
presents aggregated views of attacks on the Internet, but stores intrusion reports
individually submitted by users. Given the variety, amount, and importance of
information stored at these sites, there is cause for concern that personal data will be
compromised. This worry is escalated by the surge in recent attacks and legal pressure
faced by such services. One method for alleviating some of these problems is to store
data in encrypted form. Thus, if the storage is compromised the amount of information
loss will be limited. One disadvantage of encrypting data is that it severely limits the
ability of users to selectively share their encrypted data at a fine-grained level. Suppose
a particular user wants to grant decryption access to a party to all of its Internet traffic
logs for all entries on a particular range of dates that had a source IP address from a
particular subnet.
The user either needs to act as an intermediary and decrypt all relevant entries for
the party or must give the party its private decryption key, and thus let it have access
to all entries. Neither one of these options is particularly appealing. An important
setting where these issues give rise to serious problems is audit logs.
Sets of descriptive attributes and a particular key can decrypt a particular cipher-
text only if there is a match between the attributes of the cipher-text and the user's key.
The cryptosystem of Sahai and Waters allowed for decryption when at least k attributes
overlapped between a cipher-text and a private key. While this primitive was shown to
be useful for error-tolerant encryption with biometrics, the lack of expressibility seems
to limit its applicability to larger systems.
The major problems of the USAF stem from the fact that there is a growing
requirement to provide shared use of computer systems containing information of
different classification levels and need-to-know requirements in a user population not
uniformly cleared or access-approved. This problem takes an extreme form in those
several systems currently under development or projected for the near future where
part, or the majority of the user population has no clearance requirement and where
only a very small fraction of the information being processed and stored on the systems
is classified. In a few of the systems examined (see Section II below) the kinds of
actions the user population is able to take are limited by the nature of the application in
such a way as to avoid or reduce the security problem. However, in other systems,
particularly in general use systems such as those found in the USAF Data Services
Center in the Pentagon, the users are permitted and encouraged to directly program the
system for their applications. It is in this latter kind of use of computers that the
weakness of the technical foundation of current systems is most acutely felt.
Another major problem is the fact that there are growing pressures to interlink
separate but related computer systems into increasingly complex networks. Other
problem areas in addition to those noted above generally fall into the category of
techniques and technology available but not implemented in a form suitable for the
application to present and projected Air Force computer systems. The technology for
producing such terminals is both easily available and well understood but has not been
clearly developed heretofore as an integrated requirement for the Air Force.
3.1. SDLC:
The Systems Development Life Cycle (SDLC) or Software Development Life
Cycle in systems engineering, information systems and software engineering, is the
process of creating or altering systems, and the models and methodologies used to
develop these systems.
Analysis gathers the requirements for the system. This stage includes a detailed
study of the business needs of the organization. Options for changing the business
process may be considered. Design focuses on high-level design (what programs are needed and how they are going to interact), low-level design (how the individual programs are going to work), interface design (what the interfaces are going to look like) and data design (what data will be required). During these phases, the software's
overall structure is defined. Analysis and Design are very crucial in the whole
development cycle. Any glitch in the design phase could be very expensive to solve in
the later stage of the software development. Much care is taken during this phase. The
logical system of the product is developed in this phase.
Implementation:
In this phase the designs are translated into code. Computer programs are
written using a conventional programming language or an application generator.
Programming tools like Compilers, Interpreters, Debuggers are used to generate the
code. Different high level programming languages like C, C++, Pascal, Java, .Net are
used for coding. With respect to the type of application, the right programming
language is chosen.
Testing:
In this phase the system is tested. Normally programs are written as a series of individual modules, and these are subjected to separate and detailed tests. The system is then tested as a whole. The separate modules are brought together and tested as a complete system.
The system is tested to ensure that interfaces between modules work (integration
testing), the system works on the intended platform and with the expected volume of
data (volume testing) and that the system does what the user requires (acceptance/beta
testing).
Maintenance:
Inevitably the system will need maintenance. Software will definitely undergo
change once it is delivered to the customer. There are many reasons for the change.
Change could happen because of some unexpected input values into the system. In
addition, the changes in the system could directly affect the software operations. The
software should be developed to accommodate changes that could happen during the
post implementation period.
SDLC METHODOLOGIES:
This document plays a vital role in the software development life cycle (SDLC) as it describes the complete requirements of the system. It is meant for use by developers and will be the basis during the testing phase. Any changes made to the requirements in the future will have to go through a formal change approval process.
SPIRAL MODEL:
It was defined by Barry Boehm in his 1988 article, "A Spiral Model of Software Development and Enhancement". This model was not the first model to discuss iterative development, but it was the first model to explain why the iteration matters.
As originally envisioned, the iterations were typically 6 months to 2 years long.
Each phase starts with a design goal and ends with a client reviewing the progress thus
far. Analysis and engineering efforts are applied at each phase of the project, with an
eye toward the end goal of the project.
At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
The existing prototype is evaluated in the same manner as was the previous
prototype, and if necessary, another prototype is developed from it
according to the fourfold procedure outlined above.
The preceding steps are iterated until the customer is satisfied that the
refined prototype represents the final product desired.
3.2. System Study:
For flexibility of use, the interface has been developed with graphics concepts in mind and is accessed through a browser interface. The GUIs at the top level have been categorized as follows:
1. Administrative User Interface Design.
2. The Operational and Generic User Interface Design.
The administrative user interface concentrates on the consistent information that is practically part of the organizational activities and which needs proper authentication for data collection. This interface helps the administration with all the transactional states like data insertion, data deletion, and data updating, along with executive data search capabilities. The operational and generic user interface helps the users of the system in transactions through the existing data and required services. The operational user interface also helps ordinary users in managing their own information in a customized manner as per the assisted flexibilities.
3.3. Modules and their Functionalities:
4. User: It is an entity who wants to access the data. If a user possesses a set of
attributes satisfying the access policy of the encrypted data, and is not revoked
in any of the valid attribute groups, then he will be able to decrypt the cipher-
text and obtain the data.
5. Data Owner: The administrator can enter the website with his credentials and is allowed to upload files into the external data-storing center for ease of sharing or for cost saving. A data owner is responsible for defining the (attribute-based) access policy, and enforcing it on its own data by encrypting the data under the policy before distributing it.
PRESENT WORK:
First, the key escrow problem is resolved by a key issuing protocol that exploits
the characteristic of the data sharing system architecture. The key issuing protocol
generates and issues user secret keys by performing a secure two-party computation
(2PC) protocol between the KGC and the data-storing center with their own master
secrets. The 2PC protocol deters them from obtaining any master secret information of
each other such that none of them could generate the whole set of user keys alone.
Thus, users are not required to fully trust the KGC and the data-storing center in order
to protect their data to be shared. The data confidentiality and privacy can be
cryptographically enforced against any curious KGC or data-storing center in the
proposed scheme. Second, the immediate user revocation can be done via the proxy
encryption mechanism together with the CP-ABE algorithm. Attribute group keys are
selectively distributed to the valid users in each attribute group, which then are used to
re-encrypt the cipher-text encrypted under the CP-ABE algorithm. The immediate user
revocation enhances the backward/forward secrecy of the data on any membership
changes. In addition, as the user revocation can be done on each attribute level rather
than on system level, more fine-grained user access control can be possible. Even if a
user is revoked from some attribute groups, he would still be able to decrypt the shared
data as long as the other attributes that he holds satisfy the access policy of the cipher-
text. The proposed scheme delegates most laborious tasks of membership management
and user revocation to the data-storing center while the KGC is responsible for the
attribute key management as in the previous CP-ABE schemes without leaking any
confidential information to the other parties.
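The full CP-ABE construction and the 2PC key issuing protocol are beyond the scope of a short listing; the following C# fragment is only a simplified sketch of the re-encryption idea described above, with AES standing in for CP-ABE: the data key is wrapped under an attribute group key held by the data-storing center, and revocation is modelled by rotating that group key and re-wrapping. All class and method names here are illustrative assumptions, not the actual scheme.

using System;
using System.Security.Cryptography;

// Hypothetical sketch: AES stands in for CP-ABE; an "attribute group key"
// wraps the data key, and revocation is modelled as group-key rotation.
public static class AttributeGroupSketch
{
    // Encrypt the data key under the current attribute group key (the "re-encryption" layer).
    public static byte[] WrapDataKey(byte[] dataKey, byte[] groupKey)
    {
        using (Aes aes = Aes.Create())
        {
            aes.Key = groupKey;          // assumed to be a 32-byte key
            aes.GenerateIV();
            using (ICryptoTransform enc = aes.CreateEncryptor())
            {
                byte[] wrapped = enc.TransformFinalBlock(dataKey, 0, dataKey.Length);
                byte[] result = new byte[aes.IV.Length + wrapped.Length];
                Buffer.BlockCopy(aes.IV, 0, result, 0, aes.IV.Length);
                Buffer.BlockCopy(wrapped, 0, result, aes.IV.Length, wrapped.Length);
                return result;
            }
        }
    }

    // On revocation, the data-storing center picks a fresh group key and re-wraps
    // the data key; revoked users holding only the old group key can no longer
    // recover the data key, which gives backward/forward secrecy for new ciphertexts.
    public static byte[] RotateGroupKeyAndRewrap(byte[] dataKey, out byte[] newGroupKey)
    {
        newGroupKey = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(newGroupKey);
        return WrapDataKey(dataKey, newGroupKey);
    }
}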
The following commands specify access control identifiers and they are typically used
to authorize and authenticate the user (command codes are shown in parentheses).
(i) USER NAME:
The user identification is that which is required by the server for access
to its file system. This command will normally be the first command transmitted
by the user after the control connections are made (some servers may require
this).
(ii) PASSWORD:
4. Feasibility Study
4.1 Feasibility Report:
Technical Feasibility
Economical Feasibility
Operational Feasibility
Operations questions: What tools are needed to support operations? What skills will operators need to be trained in? What processes need to be created and/or updated? What documentation does an operation need?
User questions: What documentation will users be given? What training will users be given? How will change requests be managed?
Very often you will need to improve the existing operations, maintenance, and support
infrastructure to support the operation of the new application that you intend to
develop. To determine what the impact will be you will need to understand both the
current operations and support infrastructure of your organization and the operations
and support characteristics of your new application.
To operate this application, end users do not need any technical knowledge of the technologies used to develop it (ASP.NET and C#.NET); the application provides a rich user interface so that users can carry out operations in a flexible manner.
4.4. Economical Feasibility:
Qualitative benefits identified include the raising of an existing barrier to entry, or the introduction of a new one, within your industry to keep competition out of your market, and a positive public perception that your organization is an innovator.
The table includes both qualitative factors, costs or benefits that are subjective in
nature, and quantitative factors, costs or benefits for which monetary values can easily
be identified. I will discuss the need to take both kinds of factors into account when
performing a cost/benefit analysis.
5. SYSTEM REQUIREMENTS SPECIFICATION
5.1. Requirements Specification:
constraints on the design or implementation such as performance engineering
requirements, quality standards.
RAM: 1 GB
Interoperability:
The Base Class Library (BCL), part of the Framework Class Library (FCL), is a
library of functionality available to all languages using the .NET Framework. The BCL
provides classes which encapsulate a number of common functions, including file
reading and writing, graphic rendering, database interaction and XML document
manipulation.
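As a small illustration of the BCL functions mentioned above (a sketch only; the file name and XML content are arbitrary), the same library calls are available from any .NET language:

using System.IO;
using System.Xml;

class BclDemo
{
    static void Main()
    {
        // File reading and writing via System.IO
        File.WriteAllText("note.txt", "Hello from the BCL");
        string text = File.ReadAllText("note.txt");

        // XML document manipulation via System.Xml
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<user id='1'><name>" + text + "</name></user>");
        System.Console.WriteLine(doc.SelectSingleNode("/user/name").InnerText);
    }
}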
Simplified Deployment:
Security:
Portability:
Metadata:
All CIL code is self-describing through .NET metadata. The CLR checks the
metadata to ensure that the correct method is called. Metadata is usually generated by
language compilers but developers can create their own metadata through custom
attributes. Metadata contains information about the assembly, and is also used to
implement the reflective programming capabilities of .NET Framework.
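For instance, a developer can add custom metadata through an attribute and read it back via reflection; the sketch below is illustrative only, and the attribute and class names are assumptions:

using System;

// A custom attribute: its data is stored as metadata in the assembly.
[AttributeUsage(AttributeTargets.Class)]
public class AuthorAttribute : Attribute
{
    public string Name { get; private set; }
    public AuthorAttribute(string name) { Name = name; }
}

[Author("Project Team")]
public class ReportGenerator
{
    static void Main()
    {
        // Reflection reads the metadata back at run time.
        object[] attrs = typeof(ReportGenerator).GetCustomAttributes(typeof(AuthorAttribute), false);
        Console.WriteLine(((AuthorAttribute)attrs[0]).Name);
    }
}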
Security:
.NET has its own security mechanism with two general features: Code Access
Security (CAS), and validation and verification. Code Access Security is based on
evidence that is associated with a specific assembly. Typically the evidence is the
source of the assembly (whether it is installed on the local machine or has been
downloaded from the intranet or Internet). Other code can demand that calling code is
granted a specified permission. The demand causes the CLR to perform a call stack
walk: every assembly of each method in the call stack is checked for the required
permission; if any assembly is not granted the permission a security exception is
thrown.
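In classic .NET Framework code such a demand can be written explicitly; a minimal sketch (the folder path is an arbitrary assumption) is:

using System.Security;
using System.Security.Permissions;

class CasDemo
{
    static void ReadSensitiveFolder()
    {
        // Demand read access; the CLR walks the call stack and throws a
        // SecurityException if any caller lacks this permission.
        var permission = new FileIOPermission(FileIOPermissionAccess.Read, @"C:\SensitiveData");
        try
        {
            permission.Demand();
            // ... read files here ...
        }
        catch (SecurityException)
        {
            // A caller in the stack was not granted the permission.
        }
    }
}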
When an assembly is loaded the CLR performs various tests. Two such tests are
validation and verification. The verification mechanism checks to see if the code does
anything that is 'unsafe'. The algorithm used is quite conservative; hence occasionally
code that is 'safe' does not pass. Unsafe code will only be executed if the assembly has
the 'skip verification' permission, which generally means code that is installed on the
local machine.
Class library:
Microsoft .NET Framework includes a set of standard class libraries. The class
library is organized in a hierarchy of namespaces. Most of the built in APIs are part of
either System.* or Microsoft.* namespaces. It encapsulates a large number of common
functions, such as file reading and writing, graphic rendering, database interaction, and
XML document manipulation, among others. The .NET class libraries are available to
all .NET languages. The .NET Framework class library is divided into two parts: the
Base Class Library and the Framework Class Library.
The Base Class Library (BCL) includes a small subset of the entire class library
and is the core set of classes that serve as the basic API of the Common Language
Runtime. The classes in mscorlib.dll and some of the classes in System.dll and
System.Core.dll are considered to be a part of the BCL. The BCL classes are available in
both .NET Framework as well as its alternative implementations including .NET
Compact Framework, Microsoft Silverlight and Mono.
The Framework Class Library (FCL) is a superset of the BCL classes and refers
to the entire class library that ships with .NET Framework. It includes an expanded set
of libraries, including WinForms, ADO.NET, ASP.NET, Language Integrated Query,
Windows Presentation Foundation, Windows Communication Foundation among
others. The FCL is much larger in scope than standard libraries for languages like C++,
and comparable in scope to the standard libraries of Java.
Memory management:
The .NET Framework CLR frees the developer from the burden of managing
memory (allocating and freeing up when done); instead it does the memory
management itself. To this end, the memory allocated to instantiations of .NET types
(objects) is done contiguously from the managed heap, a pool of memory managed by
the CLR. As long as there exists a reference to an object, which might be either a direct
reference to an object or via a graph of objects, the object is considered to be in use by
the CLR. When there is no reference to an object, and it cannot be reached or used, it
becomes garbage. However, it still holds on to the memory allocated to it. .NET
Framework includes a garbage collector which runs periodically, on a separate thread
from the application's thread, that enumerates all the unusable objects and reclaims the
memory allocated to them.
Versions:
Framework stack
Table 5.4(b): .NET Versions
The Windows Forms classes contained in the .NET Framework are designed to
be used for GUI development. You can easily create command windows, buttons,
menus, toolbars, and other screen elements with the flexibility necessary to
accommodate shifting business needs.
For example, the .NET Framework provides simple properties to adjust visual attributes
associated with forms. In some cases the underlying operating system does not support
changing these attributes directly, and in these cases the .NET Framework
automatically recreates the forms. This is one of many ways in which the .NET
Framework integrates the developer interface, making coding simpler and more
consistent.
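A minimal sketch of a Windows Forms screen built from these classes (the form and button text are arbitrary):

using System;
using System.Windows.Forms;

static class FormsDemo
{
    [STAThread]
    static void Main()
    {
        var form = new Form { Text = "Data Sharing Client", Width = 320, Height = 200 };
        var button = new Button { Text = "Upload", Left = 20, Top = 20 };

        // Wire a click handler; visual attributes are simple properties.
        button.Click += (sender, e) => MessageBox.Show("Upload clicked");
        form.Controls.Add(button);

        Application.Run(form);   // start the message loop for this form
    }
}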
ASP.NET is the hosting environment that enables developers to use the .NET
Framework to target Web-based applications. However, ASP.NET is more than just a
runtime host; it is a complete architecture for developing Web sites and Internet-
distributed objects using managed code. Both Web Forms and XML Web services use
IIS and ASP.NET as the publishing mechanism for applications, and both have a
collection of supporting classes in the .NET Framework.
5.4 C#.NET:
The Relationship of C# to .NET:
C# is a new programming language, and is significant in two respects.
It is specifically designed and targeted for use with Microsoft's .NET
Framework (a feature rich platform for the development, deployment, and
execution of distributed applications).
It is a language based upon the modern object-oriented design methodology,
and when designing it Microsoft has been able to learn from the experience of
all the other similar languages that have been around over the 20 years or so
since object-oriented principles came to prominence.
One important thing to make clear is that C# is a language in its own right.
Although it is designed to generate code that targets the .NET environment, it
is not itself part of .NET. There are some features that are supported by .NET
but not by C#, and you might be surprised to learn that there are actually
features of the C# language that are not supported by .NET like Operator
Overloading.
However, since the C# language is intended for use with .NET, it is important
for us to have an understanding of this Framework if we wish to develop
applications in C# effectively.
At first sight this might seem a rather long-winded compilation process. Actually,
this two-stage compilation process is very important, because the existence of the
Microsoft Intermediate Language (managed code) is the key to providing many of the
benefits of .NET. Let's see why.
Platform Independence:
First, it means that the same file containing byte code instructions can be placed
on any platform; at runtime the final stage of compilation can then be easily
accomplished so that the code will run on that particular platform. In other words, by
compiling to Intermediate Language we obtain platform independence for .NET, in
much the same way as compiling to Java byte code gives Java platform independence.
You should note that the platform independence of .NET is only theoretical at present
because, at the time of writing, .NET is only available for Windows. However,
porting .NET to other platforms is being explored (see for example the Mono project,
an effort to create an open source implementation of .NET, at http://www.go-
mono.com/).
Performance Improvement:
Language Interoperability:
We have seen how the use of IL enables platform independence, and how JIT compilation
should improve performance. However, IL also facilitates language interoperability.
Simply put, you can compile to IL from one language, and this compiled code should
then be interoperable with code that has been compiled to IL from another language.
Intermediate Language:
Working with .NET means compiling to the Intermediate Language, and that in
turn means that you will need to be programming using traditional object-oriented
methodologies. That alone is not, however, sufficient to give us language
interoperability. After all, C++ and Java both use the same object-oriented paradigms,
but they are still not regarded as interoperable. We need to look a little more closely at
the concept of language interoperability.
An associated problem was that, when debugging, you would still have to
independently debug components written in different languages. It was not possible to
step between languages in the debugger. So what we really mean by language
interoperability is that classes written in one language should be able to talk directly to
classes written in another language. In particular:
A class written in one language can inherit from a class written in another
language.
The class can contain an instance of another class, no matter what the languages
of the two classes are.
An object can directly call methods against another object written in another
language.
Objects (or references to objects) can be passed around between methods.
You should note that some languages compatible with .NET, such as VB.NET,
still allow some laxity in typing, but that is only possible because the compilers behind
the scenes ensure the type safety is enforced in the emitted IL.
Although enforcing type safety might initially appear to hurt performance, in many
cases this is far outweighed by the benefits gained from the services provided by .NET
that rely on type safety. Such services include:
Language Interoperability
Garbage Collection
Security
Application Domains
This data type problem is solved in .NET through the use of the Common Type
System (CTS). The CTS defines the predefined data types that are available in IL, so
that all languages that target the .NET framework will produce compiled code that is
ultimately based on these types. The CTS doesn't merely specify primitive data types, but a rich hierarchy of types, which includes well-defined points in the hierarchy at which code is permitted to define its own types. The hierarchical structure of the Common Type System reflects the single-inheritance object-oriented methodology of IL.
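A small illustration of the CTS in C# (a sketch): the language keyword int is simply an alias for the CTS type System.Int32, and every type ultimately derives from System.Object.

using System;

class CtsDemo
{
    static void Main()
    {
        // The C# keyword 'int' and the CTS type System.Int32 are the same type.
        Console.WriteLine(typeof(int) == typeof(Int32));   // True

        // Every CTS type, including value types, derives from System.Object.
        object boxed = 42;                                  // boxing an Int32
        Console.WriteLine(boxed.GetType().BaseType);        // System.ValueType
    }
}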
The Common Language Specification works with the Common Type System to
ensure language interoperability. The CLS is a set of minimum standards that all
compilers targeting .NET must support. Since IL is a very rich language, writers of most compilers will prefer to restrict the capabilities of a given compiler to only support a subset of the facilities offered by IL and the CTS. That is fine, as long as the compiler supports everything that is defined in the CLS.
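For example, marking an assembly as CLS-compliant makes the C# compiler warn about constructs, such as publicly exposed unsigned types, that other .NET languages might not support (a minimal sketch):

using System;

[assembly: CLSCompliant(true)]

public class Calculator
{
    // CLS-compliant: long is available to all CLS languages.
    public long Add(int a, int b) { return (long)a + b; }

    // Not CLS-compliant: uint in a public signature would trigger compiler warning CS3001.
    // public uint Count(uint items) { return items; }
}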
Collection:
The garbage collector is .NET's answer to memory management and in particular to
the question of what to do about reclaiming memory that running applications ask for.
Up until now there have been two techniques used on the Windows platform for deallocating memory that processes have dynamically requested from the system:
Make the application code do it all manually
Make objects maintain reference counts
The .NET runtime relies on the garbage collector instead. This is a program whose
purpose is to clean up memory. The idea is that all dynamically requested memory is
allocated on the heap (that is true for all languages, although in the case of .NET, the
CLR maintains its own managed heap for .NET applications to use). Every so often,
when .NET detects that the managed heap for a given process is becoming full and
therefore needs tidying up, it calls the garbage collector. The garbage collector runs
through variables currently in scope in your code, examining references to objects
stored on the heap to identify which ones are accessible from your code – that is to say
which objects have references that refer to them. Any objects that are not referred to are
deemed to be no longer accessible from your code and can therefore be removed. Java
uses a similar system of garbage collection to this.
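A short C# sketch of this behaviour follows (GC.Collect is called here only to make the effect observable; production code normally leaves collection to the runtime):

using System;

class GcDemo
{
    static void Main()
    {
        MakeGarbage();                      // objects allocated here become unreachable on return
        long before = GC.GetTotalMemory(false);

        GC.Collect();                       // force a collection for demonstration only
        GC.WaitForPendingFinalizers();

        long after = GC.GetTotalMemory(true);
        Console.WriteLine("Heap before: {0}, after: {1}", before, after);
    }

    static void MakeGarbage()
    {
        for (int i = 0; i < 1000; i++)
        {
            var buffer = new byte[10000];   // allocated on the managed heap
        }                                   // no references survive, so the buffers are garbage
    }
}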
Security:
Role-based security is based on the identity of the account under which the
process is running, in other words, who owns and is running the process. Code-based
security on the other hand is based on what the code actually does and on how much
the code is trusted. Thanks to the strong type safety of IL, the CLR is able to inspect
code before running it in order to determine required security permissions. .NET also
offers a mechanism by which code can indicate in advance what security permissions it
will require to run.
The importance of code-based security is that it reduces the risks associated with
running code of dubious origin. For example, even if code is running under the
administrator account, it is possible to use code-based security to indicate that that code
should still not be permitted to perform certain types of operation that the administrator
account would normally be allowed to do, such as read or write to environment
variables, read or write to the registry, or to access the .NET reflection features.
The .NET base classes are a massive collection of managed code classes that
have been written by Microsoft, and which allow you to do almost any of the tasks that
were previously available through the Windows API. This means that you can either
instantiate objects of whichever .NET base class is appropriate, or you can derive your
own classes from them.
Name Spaces:
Namespaces are the way that .NET avoids name clashes between classes. They
are designed, for example, to avoid the situation in which you define a class to
represent a customer, name your class Customer, and then someone else does the same
thing (quite a likely scenario – the proportion of businesses that have customers seems
to be quite high).
A namespace is no more than a grouping of data types, but it has the effect that
the names of all data types within a namespace automatically get prefixed with the
name of the namespace. The .NET base classes are in a namespace called System. The base class Array is in this namespace, so its full name is System.Array. If a namespace is not explicitly supplied, the type will be added to a nameless global namespace.
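A short sketch: two classes named Customer can coexist because their namespaces differ, and fully qualified names remove the ambiguity (the namespace and member names are arbitrary).

using System;

namespace Billing
{
    public class Customer { public string Name; }
}

namespace Crm
{
    public class Customer { public int Id; }
}

class NamespaceDemo
{
    static void Main()
    {
        // Fully qualified names avoid the clash between the two Customer classes.
        var billed = new Billing.Customer { Name = "Alice" };
        var tracked = new Crm.Customer { Id = 7 };
        Console.WriteLine(billed.Name + " / " + tracked.Id);
    }
}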
Although C# and .NET are particularly suited to web development, they still
offer splendid support for so-called "fat client" apps, applications that have to be
installed on the end-user's machine where most of the processing takes place. This
support is from Windows Forms.
Windows Control:
Although Web Forms and Windows Forms are developed in much the same
way, you use different kinds of controls to populate them. Web Forms use Web
Controls, and Windows Forms use Windows Controls.
Windows Services:
C# requires the presence of the .NET runtime, and it will probably be a few
years before most clients – particularly most home machines – have .NET installed. In
the meantime, installing a C# application is likely to mean also installing the .NET
redistributable components. Because of that, it is likely that the first place we will see
many C# applications is in the enterprise environment. Indeed, C# arguably presents an
outstanding opportunity for organizations that are interested in building robust, n-tiered
client-server application
Figure 6.1(a) shows the architecture of the data sharing system.
Key generation center is a key authority that generates public and secret parameters for
CPABE. It is in charge of issuing, revoking, and updating attribute keys for users. It grants
differential access rights to individual users based on their attributes. Data storing center is an
entity that provides a data sharing service. It is in charge of controlling the accesses from
outside users to the storing data and providing corresponding contents services. The data
storing center is another key authority that generates personalized user key with the KGC, and
issues and revokes attribute group keys to valid users per each attribute, which are used to
enforce a fine-grained user access control. The data owner is a client who owns data, and wishes to upload it into the external data storing center for ease of sharing or for cost saving. A data owner is
responsible for defining (attribute based) access policy, and enforcing it on its own data by
encrypting the data under the policy before distributing it. User is an entity who wants to access
the data. If a user possesses a set of attributes satisfying the access policy of the encrypted data,
and is not revoked in any of the valid attribute groups, then he will be able to decrypt the cipher
text and obtain the data.
6.1.1 List Of Tables:
Table 6.1.1(e): Login History
Data flow
Data Store
CONSTRUCTING A DFD:
Questionnaires should contain all the data elements that flow in and out.
Missing interfaces, redundancies and the like are then accounted for, often through interviews.
1. The DFD shows the flow of data, not of control; loops and decisions are control considerations and do not appear on a DFD.
2. The DFD does not indicate the time factor involved in any process, whether the data flows take place daily, weekly, monthly or yearly.
3. The sequence of events is not brought out on the DFD.
CURRENT PHYSICAL:
In the current physical DFD, process labels include the names of people or their positions, or the names of computer systems, that might provide some of the overall system processing; the label includes an identification of the technology used to process the data. Similarly, data flows and data stores are often labelled with the names of the actual physical media on which data are stored, such as file folders, computer files, business forms or computer tapes.
CURRENT LOGICAL:
The physical aspects of the system are removed as much as possible so that the current system is reduced to its essence: the data and the processes that transform them, regardless of actual physical form.
NEW LOGICAL:
This is exactly like the current logical model if the user is completely happy with the functionality of the current system but has problems with how it was implemented. Typically, though, the new logical model will differ from the current logical model by having additional functions, obsolete functions removed, and inefficient flows recognized.
NEW PHYSICAL:
The new physical represents only the physical implementation of the new
system.
PROCESS:
1) No process can have only outputs.
2) No process can have only inputs. If an object has only inputs then it must be a sink.
3) A process has a verb phrase label.
DATA STORE:
1) Data cannot move directly from one data store to another data store; a process must move the data.
2) Data cannot move directly from an outside source to a data store; a process, which receives the data, must move it from the source and place it into the data store.
3) A data store has a noun phrase label.
SOURCE OR SINK:
1) Data cannot move directly from a source to a sink; it must be moved by a process.
2) A source and/or sink has a noun phrase label.
DATA FLOW:
1) A Data Flow has only one direction of flow between symbols. It may flow in both
directions between a process and a data store to show a read before an update. The latter is usually indicated, however, by two separate arrows, since these happen at different times.
2) A join in a DFD means that exactly the same data comes from any of two or more different processes, data stores or sinks to a common location.
3) A data flow cannot go directly back to the same process it leaves. There must be at least one other process that handles the data flow, produces some other data flow, and returns the original data into the beginning process.
4) A Data flow to a data store means update (delete or change).
5) A data Flow from a data store means retrieve or use.
6) A data flow has a noun phrase label; more than one data flow noun phrase can appear on a single arrow as long as all of the flows on the same arrow move together as one package.
DFD Diagrams:
Context Level Diagram (0 Level)
Login DFD :
Admin DFD:
User Activity DFD:
UML was created by Object Management Group (OMG) and UML 1.0 specification
draft was proposed to the OMG in January 1997.
It is very important to distinguish between the UML model and its diagrams. Different diagrams are used for different types of UML modeling. There are three important types of UML modeling:
6.2.1 Structural Things:
Structural things are classified into seven types, which are as follows:
Class diagram:
Class diagrams are the most common diagrams used in UML. Class diagram
consists of classes, interfaces, associations and collaboration. Class diagrams basically
represent the object oriented view of a system which is static in nature. Active class is
used in a class diagram to represent the concurrency of the system.
The purpose of the class diagram is to model the static view of an application.
The class diagrams are the only diagrams which can be directly mapped with object
oriented languages and thus widely used at the time of construction.
Figure 6.2.1(a): Class Diagram
Collaboration Diagrams:
Login Collaboration: The user opens Default.aspx (1: Open Form) and enters the user name (2: Enter Uname); BAL:LoginClass checks the user (3: CheckUser), DAL:SqlHelper queries the database (4: ExecuteDataSet) and gets the results (5: Results); the result is returned (6: Return Result) and shown to the user (7: Show Result).
Upload File Collaboration: The admin opens Updatefilefrm.aspx (1: Open Form) and enters the details (2: Enter Details); BAL:UploadClass uploads the file (3: UploadFile), DAL:SqlHelper updates the database (4: ExecuteNonQuery) and gets the results (5: Results); the result is returned (6: Return Result) and shown to the admin (7: Show Result).
User Search Collaboration: The user opens Search.aspx (1: Open Form) and enters the details (2: Enter Details); BAL:UserClass searches for the file (3: SearchFile), DAL:SqlHelper queries the database (4: ExecuteDataSet) and gets the results (5: Results); the result is returned (6: Return Result) and shown to the user (7: Show Result).
User Registration Collaboration: The user opens Registrationfrm.aspx (1: Open Form) and enters the details (2: Enter Details); BAL:UserClass sends the data (3: SendData), DAL:SqlHelper inserts it into the database (4: ExecuteNonQuery) and gets the results (5: Results); the result is returned (6: Return Result) and shown to the user (7: Show Result).
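The project's actual code-behind, BAL and DAL classes appear in the coding section; the fragment below is only a hypothetical C# sketch of the layered login call shown above, using ADO.NET directly in place of the project's DAL:SqlHelper wrapper (the connection string, table and column names are assumptions).

using System.Data;
using System.Data.SqlClient;

// BAL layer: validates a user by delegating data access to the DAL.
public class LoginClass
{
    public bool CheckUser(string userId, string password)
    {
        DataSet ds = SqlHelperSketch.ExecuteDataSet(
            "SELECT UserId FROM Users WHERE UserId = @id AND Password = @pwd",
            new SqlParameter("@id", userId),
            new SqlParameter("@pwd", password));
        return ds.Tables[0].Rows.Count > 0;   // result returned to the page
    }
}

// DAL layer: a minimal stand-in for the project's SqlHelper.ExecuteDataSet.
public static class SqlHelperSketch
{
    private const string ConnectionString =
        "Data Source=.;Initial Catalog=DataSharing;Integrated Security=True"; // assumed

    public static DataSet ExecuteDataSet(string sql, params SqlParameter[] parameters)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(sql, connection))
        using (var adapter = new SqlDataAdapter(command))
        {
            command.Parameters.AddRange(parameters);
            var ds = new DataSet();
            adapter.Fill(ds);                 // queries the database and fills the DataSet
            return ds;
        }
    }
}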
The actors can be human users, some internal applications, or some external applications. So, in brief, when we are planning to draw a use case diagram we should have the following items identified:
Relationships among the use cases and actors.
Use case diagrams are drawn to capture the functional requirements of a system. So
after identifying the above items we have to follow the following guidelines to draw an
efficient use case diagram.
The name of a use case is very important. The name should be chosen in such a way that it can identify the functionalities performed.
Give a suitable name for actors.
Do not try to include all types of relationships, because the main purpose of the diagram is to identify requirements.
Use Case:
The system boundary contains the use cases Registration, Login, Upload file, User login, Histories, Search file, Download and Logout; the actors are User and Admin.
Figure 6.2.1(f): Use Case Diagram
Behavioural things are considered the verbs of a model. These are the 'dynamic' parts which describe how the model carries out its functionality with respect to time and space. Behavioural things are classified into two types:
From the term Interaction, it is clear that the diagram is used to describe some
type of interactions among the different elements in the model. This interaction is a
part of dynamic behavior of the system.
Sequence and collaboration diagrams are used to capture the dynamic nature but from
a different angle.
Following things are to be identified clearly before drawing the interaction diagram
Message flows among the objects.
Object organization.
Following are two interaction diagrams modeling the order management system. The
first diagram is a sequence diagram and the second is a collaboration diagram.
Method calls are similar to those of a sequence diagram. However, the difference is that the sequence diagram does not describe the object organization, whereas the collaboration diagram shows the object organization.
The main purpose of both the diagrams is similar as they are used to capture
the dynamic behavior of a system. However, the specific purpose is more important to
clarify and understand.
Sequence diagrams are used to capture the order of messages flowing from one
object to another. Collaboration diagrams are used to describe the structural
organization of the objects taking part in the interaction. A single diagram is not
sufficient to describe the dynamic aspect of an entire system, so a set of diagrams are
used to capture it as a whole.
Interaction diagrams are used when we want to understand the message flow
and the structural organization. Message flow means the sequence of control flow
from one object to another. Structural organization means the visual organization of
the elements in a system.
The name of the diagram itself clarifies the purpose of the diagram and other
details. It describes different states of a component in a system. The states are specific
to a component/object of a system.
Statechart diagram describes the flow of control from one state to another state.
States are defined as a condition in which an object exists and it changes when some
event is triggered. The most important purpose of Statechart diagram is to model
lifetime of an object from creation to termination.
Statechart diagrams are also used for forward and reverse engineering of a
system. However, the main purpose is to model the reactive system.
Following are the main purposes of using Statechart diagrams −
Package − Package is the only one grouping thing available for gathering structural
and behavioral things.
The relationships used in UML models are described below.

Abstraction: An abstraction relationship is a dependency between model elements that represent the same concept at different levels of abstraction or from different viewpoints. You can add abstraction relationships to a model in several diagrams, including use-case, class, and component diagrams.

Aggregation: An aggregation relationship depicts a classifier as a part of, or as subordinate to, another classifier.

Association: An association relationship is a structural relationship between two model elements that shows that objects of one classifier (actor, use case, class, interface, node, or component) connect and can navigate to objects of another classifier. Even in bidirectional relationships, an association connects two classifiers, the primary (supplier) and secondary (client).

Binding: A binding relationship is a dependency relationship that assigns values to template parameters and generates a new model element from the template.

Communication path: A communication path is a type of association between nodes in a deployment diagram that shows how the nodes exchange messages and signals.

Composition: A composition relationship represents a whole–part relationship and is a type of aggregation. A composition relationship specifies that the lifetime of the part classifier is dependent on the lifetime of the whole classifier.

Control flow: A control flow is a type of activity edge that models the movement of control from one activity node to another.

Dependency: A dependency relationship indicates that changes to one model element (the supplier or independent model element) can cause changes in another model element (the client or dependent model element). The supplier model element is independent because a change in the client does not affect it. The client model element depends on the supplier because a change to the supplier affects the client.

Deploy: A deploy relationship shows the specific component that an instance of a single node uses. In a UML model, a deploy relationship typically appears in deployment diagrams.

Directed association: A directed association relationship is an association that is navigable in only one direction and in which the control flows from one classifier to another (for example, from an actor to a use case). Only one of the association ends specifies navigability.

Extend: An extend relationship between use cases indicates that one use case, the extended use case, can extend another use case, the base use case. An extend relationship has the option of using the extended use case.

Generalization: A generalization relationship indicates that a specialized (child) model element is based on a general (parent) model element. Although the parent model element can have one or more children, and any child model element can have one or more parents, typically a single parent has multiple children. In UML 2.0, several classes can constitute a generalization set of another class. Generalization relationships appear in class, component, and use-case diagrams.

Interface realization: An interface realization relationship is a specialized type of implementation relationship between a classifier and a provided interface. The interface realization relationship specifies that the realizing classifier must conform to the contract that the provided interface specifies.

Include: An include relationship between use cases specifies that an including (or base) use case requires the behavior from another use case (the included use case). In an include relationship, a use case must use the included use case.

Manifestation: A manifestation relationship shows which model elements, such as components or classes, are manifested in an artifact. The artifact manifests, or includes, a specific implementation for the features of one or several physical software components.

Note attachment: A note attachment relationship connects a note or text box to a connector or shape. A note attachment indicates that the note or text box contains information that is relevant to the attached connector or shape.

Object flow: An object flow is a type of activity edge that models the flow of objects and data from one activity node to another.

Realization: A realization relationship exists between two model elements when one of them must realize, or implement, the behavior that the other specifies. The model element that specifies the behavior is the supplier, and the model element that implements the behavior is the client. In UML 2.0, this relationship is normally used to specify those elements that realize or implement the behavior of a component.

Usage: A usage relationship is a dependency relationship in which one model element requires the presence of another model element (or set of model elements) for its full implementation or operation. The model element that requires the presence of another model element is the client, and the model element whose presence is required is the supplier. Although a usage relationship indicates an ongoing requirement, it also indicates that the connection between the two model elements is not always meaningful or present.
Types of UML Diagrams
The current UML standards call for 13 different types of diagrams: class, activity,
object, use case, sequence, package, state, component, communication, composite
structure, interaction overview, timing, and deployment.
These diagrams are organized into two distinct groups: structural diagrams and
behavioral or interaction diagrams.
Composite structure diagram
Deployment diagram
Class Diagram:
Object Diagram:
Figure 6.2.6(b): Object Diagram
Use case diagrams model the functionality of a system using actors and use cases.
Activity Diagram:
Figure 6.2.6(d): website activity Diagram
Sequence diagrams:
State Diagram:
Statechart diagrams, now known as state machine diagrams or state diagrams, describe the dynamic behavior of a system in response to external stimuli. State diagrams are especially useful in modeling reactive objects whose states are triggered by specific events.
Figure 6.2.6(f): State chart Diagram
Component Diagram:
The idea:
The idea of RSA is based on the fact that it is difficult to factorize a large integer. The public key consists of two numbers, where one number is the multiplication of two large prime numbers, and the private key is also derived from the same two prime numbers. So if somebody can factorize the large number, the private key is compromised. Therefore the encryption strength lies entirely in the key size, and if we double or triple the key size, the strength of encryption increases exponentially. RSA keys are typically 1024 or 2048 bits long, but experts believe that 1024-bit keys could be broken in the near future. But till now it seems to be an infeasible task.
Key generation
Step 1: Select two large prime numbers p and q, where p != q.
Step 2: Compute n = p × q and φ(n) = (p − 1)(q − 1).
Step 3: Choose an integer e such that 1 < e < φ(n) and gcd(e, φ(n)) = 1.
Step 4: Compute d such that d × e ≡ 1 (mod φ(n)).
The public key is (n, e) and the private key is d.
Example
An example of generating an RSA key pair is given below. (For ease of understanding, the primes p and q taken here are small values. Practically, these values are very high.)
Let p = 7 and q = 13, so that n = p × q = 91 and φ(n) = (p − 1)(q − 1) = 6 × 12 = 72.
Choose e = 5, which is coprime to 72. The pair of numbers (n, e) = (91, 5) forms the public key and can be made available to anyone whom we wish to be able to send us encrypted messages.
The private exponent is d = 29, since d × e = 29 × 5 = 145 ≡ 1 (mod 72).
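The same example, worked in C# with System.Numerics.BigInteger (a sketch using the small textbook primes above; real keys use much larger primes and proper padding):

using System;
using System.Numerics;

class RsaToyExample
{
    static void Main()
    {
        BigInteger p = 7, q = 13;
        BigInteger n = p * q;                 // 91
        BigInteger phi = (p - 1) * (q - 1);   // 72
        BigInteger e = 5;                     // coprime to 72
        BigInteger d = ModInverse(e, phi);    // 29, since 29 * 5 = 145 ≡ 1 (mod 72)

        BigInteger message = 10;
        BigInteger cipher = BigInteger.ModPow(message, e, n);   // c = m^e mod n
        BigInteger plain = BigInteger.ModPow(cipher, d, n);     // m = c^d mod n

        Console.WriteLine("n={0} e={1} d={2} cipher={3} plain={4}", n, e, d, cipher, plain);
    }

    // Extended Euclidean algorithm: find x such that (a * x) mod m == 1.
    static BigInteger ModInverse(BigInteger a, BigInteger m)
    {
        BigInteger m0 = m, x0 = 0, x1 = 1;
        while (a > 1)
        {
            BigInteger q = a / m;
            BigInteger t = m;
            m = a % m;
            a = t;
            t = x0;
            x0 = x1 - q * x0;
            x1 = t;
        }
        return x1 < 0 ? x1 + m0 : x1;
    }
}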
7. CODING AND IMPLEMENTATION
7.1.1 MasterPage.master:
<DynamicMenuItemStyle BackColor="#B9A3B7" Font-
Bold="True" ForeColor="White" />
<Items>
<asp:MenuItem Text="|||" Value="|||"
Selectable="False"></asp:MenuItem>
<asp:MenuItem NavigateUrl="~/Home.aspx" Text="Home"
Value="Home"></asp:MenuItem>
<asp:MenuItem Text="||" Value="||"
Selectable="False"></asp:MenuItem>
<asp:MenuItem Text="AboutUs" Value="AboutUs"
NavigateUrl="~/AboutUs.aspx"></asp:MenuItem>
<asp:MenuItem Text="||" Value="||"
Selectable="False"></asp:MenuItem>
<asp:MenuItem Text="Registration"
Value="Registration" NavigateUrl="~/frmRegistration.aspx"></asp:MenuItem>
<asp:MenuItem Text="||" Value="||"
Selectable="False"></asp:MenuItem>
<asp:MenuItem Text="Request" Value="Request
Password" Selectable="False">
<asp:MenuItem Text="Password" Value="Password"
NavigateUrl="~/frmRequestPassword.aspx"></asp:MenuItem>
<asp:MenuItem Text="Decrypt" Value="Decrypt"
NavigateUrl="~/frmDecryptPassword.aspx"></asp:MenuItem>
</asp:MenuItem>
<asp:MenuItem Text="||" Value="||"
Selectable="False"></asp:MenuItem>
<asp:MenuItem Text="Login" Value="Login"
Selectable="False">
<asp:MenuItem
NavigateUrl="~/Admin/frmAdminHome.aspx" Text="Admin"
Value="Admin"></asp:MenuItem>
<asp:MenuItem
NavigateUrl="~/User/frmUserHomePage.aspx" Text="User"
Value="User"></asp:MenuItem>
<asp:MenuItem
NavigateUrl="~/frmForgotPassword.aspx" Text="forgot pwd" Value="forgot
pwd"></asp:MenuItem>
</asp:MenuItem>
<asp:MenuItem Text="|||" Value="|||"></asp:MenuItem>
</Items>
</asp:Menu>
</center>
</div>
<div id="content">
<center>
<asp:ContentPlaceHolder ID="ContentPlaceHolder1"
runat="server"></asp:ContentPlaceHolder></center>
</div>
<div id="footer">
<center>Copyright 2017-2018 All Rights Reserved</center>
</div>
</div>
<div align="center"></div>
</form>
</body></html>
7.1.2. Home.aspx:
7.1.3. FrmDecryptPassword.aspx:
<center style="font-size: medium"><strong> Decrypt Password<br />
<br />
</strong></center>
<table>
<tr>
<td>
<table cellpadding="0" cellspacing="0"
style="border:#627AAD;background-color:#E4E8F1;">
<tr>
<td align="right"> </td>
<td align="right"> </td>
<td> </td>
<td style="width: 44px"> </td>
</tr>
<tr>
<td align="right"> </td>
<td align="right"><strong>Enter Id </strong></td>
<td>
<asp:TextBox ID="txtUserName"
runat="server"></asp:TextBox>
</td>
<td style="width: 44px">
<asp:RequiredFieldValidator ID="RFVUserName"
runat="server" ControlToValidate="txtUserName" ErrorMessage="*" ForeColor="Red"
style="font-weight: bold"></asp:RequiredFieldValidator>
</td>
</tr>
<tr>
<td align="right" style="height: 40px"> </td>
<td align="right" style="height: 40px"><strong>Enter
Password </strong></td>
<td style="height: 40px">
<asp:TextBox ID="txtPassword" runat="server"
TextMode="Password"></asp:TextBox>
</td>
<td style="width: 44px; height: 40px;">
<asp:RequiredFieldValidator ID="RequiredFieldValidator1"
runat="server" ControlToValidate="txtPassword" ErrorMessage="*" ForeColor="Red"
style="font-weight: bold"></asp:RequiredFieldValidator>
</td>
</tr>
<tr>
<td>&nbsp;</td>
<td colspan="3">
<asp:ImageButton ID="btnSubmit" runat="server"
Height="21px" ImageUrl="~/Images/submit.jpg" OnClick="btnSubmit_Click1"
Width="101px" />
<asp:ImageButton ID="btnReset" runat="server"
CausesValidation="False" Height="23px" ImageUrl="~/Images/clear.jpg"
OnClick="btnReset_Click" Width="101px" />
</td>
</tr>
<tr>
<td> </td>
<td colspan="3">
<asp:Label ID="lblMsg" runat="server" style="font-
family: Verdana; font-weight: 700;" Visible="false"></asp:Label>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td> </td>
</tr>
</table>
<center> </center>
</asp:Panel>
<cc1:RoundedCornersExtender ID="RoundedCornersExtender1" runat="server"
BorderColor="Black" Radius="15" Corners="All" TargetControlID="pnl">
</cc1:RoundedCornersExtender>
<br />
<br />
</asp:Content>
7.1.4. Login.aspx:
<%@ Page Title="" Language="C#" MasterPageFile="~/MasterPage.master"
AutoEventWireup="true" CodeFile="login.aspx.cs" Inherits="login" %>
<asp:Label ID="lblLogin" runat="server" CssClass="style5" Font-
Size="Medium" ForeColor="Black" style="color: #000000" Text="lblLogin"
Visible="False"></asp:Label>
</b></font>
<br />
</strong>
</center>
<table align="center" border="0" cellpadding="0" cellspacing="0">
<tr>
<%--#ADB9CD--%>
<td bgcolor="#E4E8F1"><font face="verdana,arial" size="2">
<br>
<table border="0" cellpadding="0" cellspacing="0">
<table id="Table2" border="0" cellpadding="2"
cellspacing="0" height="0" style="width: 104%">
<tr>
<td align="right" style="width: 35%; height:
27px;"><b><font face="Verdana"
size="2"> User
Id</font></b></td>
<td align="left" style="height: 27px"
width="60%"><font face="Verdana" size="2"><b>
<asp:TextBox ID="txtLoginId"
runat="server"></asp:TextBox>
<asp:RequiredFieldValidator
ID="RequiredFieldValidator1" runat="server" ControlToValidate="txtLoginId"
ErrorMessage="*"></asp:RequiredFieldValidator>
</b></font></td>
<tr>
<td align="right" style="width: 35%"><b><font
face="Verdana" size="2"> Password</font></b></td>
<td align="left" width="60%"><b><font
face="Verdana" size="2">
<asp:TextBox ID="txtPassWord" runat="server"
TextMode="Password"></asp:TextBox>
</font></b></td>
</tr>
<tr>
<td style="width: 35%"> </td>
<td> </td>
</tr>
<tr>
<td align="center" colspan="2" width="40%">
<asp:ImageButton ID="ImageButton1"
runat="server" Height="28px" ImageUrl="~/Images/login.jpg"
OnClick="ImageButton1_Click" />
<b><font face="Verdana"
size="1"> <font face="verdana,arial" size="2"><asp:ImageButton
ID="ImageButton2" runat="server" CausesValidation="False" Height="29px"
ImageUrl="~/Images/Reset.jpg" OnClick="ImageButton2_Click" Width="93px" />
</font> </font> </b></td>
<tr>
<td colspan="2">
<center>
<asp:LinkButton ID="LinkButton1"
runat="server" Font-Bold="True" OnClick="LinkButton1_Click"
CausesValidation="False">Forgot Password?</asp:LinkButton>
</center>
</td>
</tr>
</tr>
</tr>
</table>
</div>
</table>
<asp:Label ID="lblMsg" runat="server" ForeColor="Red"
Text="lblMsg" Visible="false"></asp:Label>
                            <br />
                            <br />
                            <br />
</font></td>
</tr>
</table>
</center>
</asp:Panel>
<cc1:RoundedCornersExtender ID="RoundedCornersExtender1" runat="server"
BorderColor="Black" Radius="15" Corners="All" TargetControlID="pnl">
</cc1:RoundedCornersExtender>
</center>
<center> </center>
<%--#B9A3B7
#ADB9CD--%>
<center></center>
</asp:Content>
7.1.5. FrmFileUpload.aspx:
<br />
<br />
<br />
<br />
<center>
<table align="center">
<tr>
<td
style="font-family: 'Times New Roman', Times, serif; font-
size: x-large; font-weight: bold; color: #293955; height: 30px;"><asp:Panel
ID="pnl" runat="server" Height="241px" Width="383px">
<center style="font-size: medium">
Upload a File</center>
<center>
<center>
<table align="right">
<tr>
<td colspan="2" style="height: 5px"></td>
</tr>
<tr>
<td style="width: 157px; font-size: small;"><span
style="font-weight: normal">Attribute</span></td>
<td align="left" style="width: 239px">
<asp:DropDownList ID="ddlAttribute"
runat="server" Height="23px" Width="124px">
</asp:DropDownList>
<asp:CompareValidator ID="CompareValidator1"
runat="server" ControlToValidate="ddlAttribute" ErrorMessage="Select"
ForeColor="Red" Operator="NotEqual" ValueToCompare="Select Atribute"
style="font-size: small"></asp:CompareValidator>
</td>
</tr>
<tr>
<td style="width: 157px; font-size: small;"><span
style="font-weight: normal">File Name</span></td>
<td align="left" style="width: 239px">
<asp:TextBox ID="txtFName" runat="server"
Height="23px"></asp:TextBox>
</td>
</tr>
<tr>
<td style="width: 157px; font-size: small;"><span
style="font-weight: normal">File Description</span></td>
<td align="left" style="width: 239px">
<asp:TextBox ID="txtDesc" runat="server"
Height="23px"></asp:TextBox>
<%--<cc1:CalendarExtender ID="CalendarExtender2"
Format="dd/MM/yy"
runat="server" TargetControlID="txtDOB" />--%></td>
</tr>
<tr>
<td style="width: 157px; font-size: small;"><span
style="font-weight: normal">Upload File</span></td>
<td align="left" style="width: 239px">
<asp:FileUpload ID="FileUpload1" runat="server"
Height="23px" />
</td>
</tr>
<tr>
<td style="width: 157px; font-size: small;"><span
style="font-weight: normal">E-mail</span></td>
<td align="left" style="width: 239px">
<asp:TextBox ID="txtEmail" runat="server"
Height="23px"></asp:TextBox>
</td>
</tr>
<tr>
<td style="width: 157px; font-size:
small;"> </td>
<td align="left" style="width: 239px"> </td>
</tr>
<tr>
<td colspan="2" style="height: 35px">
<asp:ImageButton ID="ImageButton1"
runat="server" Height="25px" ImageUrl="~/Images/submit.jpg"
OnClick="ImageButton1_Click" />
<asp:ImageButton ID="ImageButton2"
runat="server" CausesValidation="False" Height="25px"
ImageUrl="~/Images/clear.jpg" OnClick="ImageButton2_Click" />
<%--<input type="submit" id="btn"
value="Register" onclick="return Button1_onclick()" />--%>
</td>
</tr>
<tr>
<td colspan="2" style="height: 35px">
<asp:Label ID="lblMsg" runat="server"
Text="Label" Visible="false" style="font-size: medium"></asp:Label>
</td>
</tr>
</table>
</center>
</center>
</asp:Panel>
<cc1:RoundedCornersExtender ID="RoundedCornersExtender1" runat="server"
BorderColor="Black" Corners="All" Radius="15" TargetControlID="pnl">
</cc1:RoundedCornersExtender>
<br />
</td>
</tr>
</table>
</center>
</asp:Content>
8. SYSTEM TESTING
8.1 Testing Strategies:
Figure 8.2(a): Testing life cycle — unit testing, module testing, sub-system testing, system testing, and acceptance testing (grouped as component testing, integration testing, and user testing)
Unit testing focuses verification effort on the smallest unit of software design,
the module. The unit testing carried out here is white-box oriented, and for some
modules the steps were conducted in parallel.
The basis path testing steps are:
Use the design of the code and draw the corresponding flow graph.
Determine the cyclomatic complexity V(G) of the flow graph, using either
V(G) = E − N + 2, where E is the number of edges and N the number of nodes, or
V(G) = P + 1, where P is the number of predicate (decision) nodes.
Determine a basis set of linearly independent paths, as illustrated in the sketch below.
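As a minimal illustration (not taken from the project code), the hypothetical method below has two decision points; both formulas give the same cyclomatic complexity.

// Illustrative only: a small method and its cyclomatic complexity.
class BasisPathExample
{
    static string ClassifyAge(int age)
    {
        if (age < 0)        // predicate node 1
            return "invalid";
        if (age < 18)       // predicate node 2
            return "minor";
        return "adult";
    }

    // Flow graph (with a single exit node): N = 6 nodes, E = 7 edges, P = 2 predicate nodes.
    // V(G) = E - N + 2 = 7 - 6 + 2 = 3 and V(G) = P + 1 = 2 + 1 = 3,
    // so a basis set of three independent paths must be exercised:
    //   age < 0,  0 <= age < 18,  and age >= 18.
    static void Main()
    {
        System.Console.WriteLine(ClassifyAge(-1));  // path 1
        System.Console.WriteLine(ClassifyAge(10));  // path 2
        System.Console.WriteLine(ClassifyAge(30));  // path 3
    }
}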
CONDITIONAL TESTING:
In this part of the testing, each condition was tested for both its true and false
outcomes, and all the resulting paths were exercised, so that every path that may be
taken under a particular condition is traced to uncover any possible errors.
DATA FLOW TESTING:
This type of testing selects the paths of the program according to the locations of the
definitions and uses of variables. It was applied only where local variables were
declared, using the definition-use chain method, which was particularly useful in
nested statements. A small sketch follows.
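A minimal sketch of a definition-use chain, using a hypothetical method that is not part of the project code (the name Discount and the values are illustrative only):

// Illustrative only: definition-use (du) chains for a local variable.
class DataFlowExample
{
    static int Discount(int amount)
    {
        int rate;                        // declaration
        if (amount > 1000)
            rate = 10;                   // definition d1
        else
            rate = 5;                    // definition d2
        return amount * rate / 100;      // use u1
    }

    // du-chains to cover: (d1, u1) with amount > 1000, and (d2, u1) with amount <= 1000.
    static void Main()
    {
        System.Console.WriteLine(Discount(2000));  // exercises (d1, u1) -> 200
        System.Console.WriteLine(Discount(500));   // exercises (d2, u1) -> 25
    }
}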
LOOP TESTING:
In this type of testing, all the loops were tested at all of their possible limits. The following
exercise was adopted for every loop (a boundary-value sketch follows the list):
All the loops were tested at their limits, just above them, and just below them.
All the loops were skipped at least once.
For nested loops, the innermost loop was tested first, working outwards.
For concatenated loops, the values of the dependent loops were set with the help of the
connected loop.
Unstructured loops were resolved into nested loops or concatenated loops and
tested as above.
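The sketch below, which is not part of the project code, shows the boundary values chosen for a simple counted loop with an assumed limit of 10 elements.

// Illustrative only: boundary-value inputs for a simple counted loop.
class LoopTestingExample
{
    // Sums the first 'count' elements; the loop is the unit under test.
    static int SumFirst(int[] values, int count)
    {
        int sum = 0;
        for (int i = 0; i < count; i++)
            sum += values[i];
        return sum;
    }

    static void Main()
    {
        int[] data = new int[10];                      // assumed loop limit of 10 elements
        for (int i = 0; i < data.Length; i++) data[i] = 1;

        System.Console.WriteLine(SumFirst(data, 0));   // loop skipped entirely
        System.Console.WriteLine(SumFirst(data, 1));   // exactly one pass
        System.Console.WriteLine(SumFirst(data, 9));   // just below the limit
        System.Console.WriteLine(SumFirst(data, 10));  // at the limit
        try { SumFirst(data, 11); }                    // just above the limit
        catch (System.IndexOutOfRangeException) { System.Console.WriteLine("out of range"); }
    }
}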
9. EXPERIMENTAL RESULTS
9.1 EFFICIENCY COMPARISON RESULTS:
In this section, we analyze and compare the efficiency of the proposed scheme with the
previous CP-ABE schemes (that is, Bethencourt et al.'s scheme (BSW), Attrapadung's
scheme (BCP-ABE2), and Yu et al.'s scheme (YWRL)) in theoretical and practical
aspects. Then, the efficiency of the proposed scheme is demonstrated in the network
simulation in terms of the communication cost. We also discuss its efficiency when
implemented with specific parameters and compare these results with those obtained by
the other schemes.
Table 9.1(a): Efficiency Comparison
Figure 9.1(b): Communication cost of the system
9.2 Screenshots:
This About Us page displays information regarding the functioning and enhancement of the
website.
Figure 9.2(c): Registration Form
This is the registration form page, where the user has to fill in his details. Here, user
means doctor. In this page the user enters all his details, as shown in the figure above.
Figure 9.2(e): Registration Form submitted successfully
In this page you can see that the registration form has been submitted successfully and a
unique ID number is generated for the user.
In the request password page, the user has to enter his unique ID number in order to receive
his password through the registered mail ID/database.
Figure 9.2(g): Decrypt Password
In the decrypt password page, the user has to enter his unique ID and the generated password
and then click Submit. In this page you can see that a password is generated for the user
(doctor).
User:
Figure 9.2(i): User Login Page
In this page the user has to log in with his unique ID and generated password.
This is the User (doctor) home page, which provides features such as file upload (patient),
secret key search, file download (patient), manage password (change password), and logout.
Figure 9.2(k): Change Password of user
Admin:
This is the admin login page. The admin should enter his unique ID and password to log in.
Figure 9.2(m): Admin login Display
This is the Admin home page, which provides features such as file upload (patient),
user (login and download history), and logout.
This is the file upload page; the admin uploads patient files to the doctors.
USER:
This is the user page. The user (doctor) searches for files by patient name and unique mail ID
and then clicks Search. When the entered credentials match, a search file key is generated,
which is required to download the files.
Figure 9.2(q): User File download Page
This is the user file download page. The user downloads a file by patient name and the
generated search file key. If the user tries to download the file again, the page shows a
notification that the file has already been downloaded.
Figure 9.2(s): User Upload File
This is the upload file page. The user (doctor) studies the patient's file and then uploads a
file to the admin.
Admin:
Figure 9.2(u): User Download history
This is an admin page showing whether the user has downloaded the file or not; if the user
(doctor) downloads a file, it appears in the user's file download history.
This admin page likewise shows whether the user has logged in or not; if the user (doctor)
logs in, it appears in the user's login history.
First, the key escrow problem is resolved by a key issuing protocol that exploits the
characteristic of the data sharing system architecture. The key issuing protocol
generates and issues user secret keys by performing a secure two-party computation
(2PC) protocol between the KGC and the data-storing center with their own master
secrets. The 2PC protocol deters them from obtaining any master secret information of
each other such that none of them could generate the whole set of user keys alone.
Thus, users are not required to fully trust the KGC and the data-storing center in
order to protect the data they share. The data confidentiality and privacy can be
cryptographically enforced against any curious KGC or data-storing center in the
proposed scheme. Second, immediate user revocation can be done via the proxy
encryption mechanism together with the CP-ABE algorithm. Attribute group keys are
selectively distributed to the valid users in each attribute group, and are then used to
re-encrypt the ciphertext encrypted under the CP-ABE algorithm.
11. CONCLUSION
The enforcement of access policies and the support of policy updates are
important and challenging issues in data sharing systems. In this study, we proposed an
attribute-based data sharing scheme to enforce fine-grained data access control by
exploiting the characteristics of the data sharing system. The proposed scheme features a
key issuing mechanism that removes key escrow during key generation. The user
secret keys are generated through a secure two-party computation such that neither a curious
key generation center nor a data-storing center can derive the private keys individually.
Thus, the proposed scheme enhances data privacy and confidentiality in the data
sharing system against any system managers as well as adversarial outsiders without the
corresponding credentials. The proposed scheme also supports immediate user revocation
on each attribute set while taking full advantage of the scalable access control provided
by ciphertext-policy attribute-based encryption. Therefore, the proposed scheme
achieves more secure and fine-grained data access control in the data sharing system.
We demonstrated that the proposed scheme is efficient and scalable enough to securely
manage user data in the data sharing system.