

A Secure Dynamic Multi-keyword Ranked Search Scheme Over Encrypted Cloud Data
A Project Report
Submitted to the Faculty of Engineering of

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY, KAKINADA

In partial fulfillment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY IN
COMPUTER SCIENCE ENGINEERING
By
M.GNANESH (167Z1A0516)
P.SRAVAN (167Z1A0521)
N.RUBIKA (167Z1A0519)
M.PALLAVI (167Z1A0514)
Under the esteemed Guidance of

Mr. G. KIRAN KUMAR, M.Tech

DEPARTMENT OF COMPUTER SCIENCE ENGINEERING


INDIRA INSTITUTE OF TECHNOLOGY & SCIENCES
(Approved By AICTE and Affiliated To J.N.T.U, KAKINADA) Darimadugu, Markapur -
523316, Prakasam Dist. A.P
2019-2020

INDIRA INSTITUTE OF TECHNOLOGY & SCIENCES,


MARKAPUR
(Approved By AICTE and Affiliated To J.N.T.U, KAKINADA)
Darimadugu, Markapur - 523316, Prakasam Dist. A.P
DEPARTMENT OF COMPUTER SCIENCE ENGINEERING

CERTIFICATE
This is to certify that the project entitled "A Secure Dynamic Multi-keyword Ranked Search Scheme Over Encrypted Cloud Data" is a bonafide work done

By
M.GNANESH (167Z1A0516)

P.SRAVAN (167Z1A0521)

N.RUBIKA (167Z1A0519)

M.PALLAVI (167Z1A0514)

Under my guidance and supervision, and submitted in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Computer Science Engineering, carried out by them in the academic year 2017-2018.

INTERNAL GUIDE: Asst. Prof. Mr. G. KIRAN KUMAR, M.Tech
HEAD OF THE DEPARTMENT: Asst. Prof. Mr. K. SURENDRA REDDY, M.Tech

External Examiner

ACKNOWLEDGEMENT
It is my privilege to express my profound gratitude to my guide Mr. G. KIRAN KUMAR, M.Tech, and to the Head of the Department of Computer Science and Engineering, Mr. K. SURENDRA REDDY, M.Tech, INDIRA INSTITUTE OF TECHNOLOGY AND SCIENCES, for their valuable guidance and encouragement throughout the course of this project work, and for providing all the lab facilities of the Computer Science and Engineering department. I also thank Mr. K.V.H.N. VISHNU VARDHAN, M.Tech, and G. HARA RANI, M.Tech, for their support of every kind during the course of this study; their personal involvement made this study complete.

I take this opportunity to express my sincere thanks to the Principal, Dr. P. KALPANA, Ph.D., INDIRA INSTITUTE OF TECHNOLOGY AND SCIENCES, MARKAPUR, for the cooperation extended throughout this project.

My special thanks to the Chairman of the college, V. HANUMA REDDY, M.B.A., for providing all the amenities needed to complete our project work.

Finally, I am thankful to one and all who gave their wholehearted cooperation to complete this project work.

M.GNANESH (167Z1A0516)

P.SRAVAN (167Z1A0521)
N.RUBIKA (167Z1A0519)
M.PALLAVI (167Z1A0514)
ABSTRACT

Due to the increasing popularity of cloud computing, more and more data owners are
motivated to outsource their data to cloud servers for great convenience and reduced cost in
data management. However, sensitive data should be encrypted before outsourcing for privacy
requirements, which obsoletes data utilization like keyword-based document retrieval. In this
paper, we present a secure multi-keyword ranked search scheme over encrypted cloud data,
which simultaneously supports dynamic update operations like deletion and insertion of
documents. Specifically, the vector space model and the widely-used TF×IDF model are
combined in the index construction and query generation. We construct a special tree-based
index structure and propose a “Greedy Depth-First Search” algorithm to provide efficient
multi-keyword ranked search. The secure KNN algorithm is utilized to encrypt the index and
query vectors, and meanwhile ensure accurate relevance score calculation between encrypted
index and query vectors. In order to resist statistical attacks, phantom terms are added to the
index vector for blinding search results. Due to the use of our special tree-based index
structure, the proposed scheme can achieve sub-linear search time and deal with the deletion
and insertion of documents flexibly. Extensive experiments are conducted to demonstrate the
efficiency of the proposed scheme.

CHAPTER-1
INTRODUCTION
What is cloud computing?
Cloud computing is the use of computing resources (hardware and software) that are delivered as a service
over a network (typically the Internet). The name comes from the common use of a cloud-shaped symbol as
an abstraction for the complex infrastructure it contains in system diagrams. Cloud computing entrusts
remote services with a user's data, software and computation. Cloud computing consists of hardware and
software resources made available on the Internet as managed third-party services. These services typically
provide access to advanced software applications and high-end networks of server computers.

Figure 2.1: Structure of cloud computing

How Cloud Computing Works?


The goal of cloud computing is to apply traditional supercomputing, or high-performance
computing power, normally used by military and research facilities, to perform tens of trillions
of computations per second, in consumer-oriented applications such as financial portfolios, to
deliver personalized information, to provide data storage or to power large, immersive
computer games.
Cloud computing uses networks of large groups of servers, typically running low-cost
consumer PC technology with specialized connections to spread data-processing chores across
them. This shared IT infrastructure contains large pools of systems that are linked together.
Often, virtualization techniques are used to maximize the power of cloud computing.

Characteristics and Services Models:

The salient characteristics of cloud computing based on the definitions provided


by the National Institute of Standards and Technology (NIST) are outlined below:

 On-demand self-service: A consumer can unilaterally provision computing

capabilities, such as server time and network storage, as needed automatically without
requiring human interaction with each service’s provider.
 Broad network access: Capabilities are available over the network and accessed
through standard mechanisms that promote use by heterogeneous thin or thick client
platforms (e.g., mobile phones, laptops, and PDAs).
 Resource pooling: The provider’s computing resources are pooled to serve multiple
consumers using a multi-tenant model, with different physical and virtual resources
dynamically assigned and reassigned according to consumer demand. There is a sense
of location-independence in that the customer generally has no control or knowledge
over the exact location of the provided resources but may be able to specify location at
a higher level of abstraction (e.g., country, state, or data center). Examples of resources
include storage, processing, memory, network bandwidth, and virtual machines.
 Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases
automatically, to quickly scale out and rapidly released to quickly scale in. To the
consumer, the capabilities available for provisioning often appear to be unlimited and
can be purchased in any quantity at any time.
 Measured service: Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some level of abstraction appropriate to the type of
service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage
can be managed, controlled, and reported providing transparency for both the provider
and consumer of the utilized service.

Figure 2.2: Characteristics of cloud computing

Services Models:

Cloud Computing comprises three different service models, namely Infrastructure-as-


a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). The three
service models or layers are complemented by an end user layer that encapsulates the end user
perspective on cloud services. The model is shown in figure below. If a cloud user accesses
services on the infrastructure layer, for instance, she can run her own applications on the
resources of a cloud infrastructure and remain responsible for the support, maintenance, and
security of these applications herself. If she accesses a service on the application layer, these
tasks are normally taken care of by the cloud service provider.

Figure 2.3: Structure of service models

Benefits of cloud computing:

1. Achieve economies of scale – increase volume output or productivity with fewer


people. Your cost per unit, project or product plummets.
2. Reduce spending on technology infrastructure. Maintain easy access to your
information with minimal upfront spending. Pay as you go (weekly, quarterly or
yearly), based on demand.
3. Globalize your workforce on the cheap. People worldwide can access the cloud,
provided they have an Internet connection.
4. Streamline processes. Get more work done in less time with fewer people.
5. Reduce capital costs. There’s no need to spend big money on hardware, software or
licensing fees.
6. Improve accessibility. You have access anytime, anywhere, making your life so much
easier!
7. Monitor projects more effectively. Stay within budget and ahead of completion cycle
times.
8. Less personnel training is needed. It takes fewer people to do more work on a cloud,
with a minimal learning curve on hardware and software issues.
9. Minimize licensing new software. Stretch and grow without the need to buy
expensive software licenses or programs.
10. Improve flexibility. You can change direction without serious “people” or “financial”
issues at stake.
Advantages:

1. Price: Pay for only the resources used.


2. Security: Cloud instances are isolated in the network from other instances for

improved security.
3. Performance: Instances can be added instantly for improved performance. Clients
have access to the total resources of the Cloud’s core hardware.
4. Scalability: Auto-deploy cloud instances when needed.
5. Uptime: Uses multiple servers for maximum redundancies. In case of server failure,
instances can be automatically created on another server.
6. Control: Able to login from any location. Server snapshot and a software library lets
you deploy custom instances.
7. Traffic: Deals with spikes in traffic with quick deployment of additional instances to
handle the load.

CHAPTER-2
LITERATURE SURVEY

1) Security challenges for the public cloud


AUTHORS: K. Ren, C. Wang, Q. Wang et al.
Cloud computing represents today's
most exciting computing paradigm shift in information technology. However, security and
privacy are perceived as primary obstacles to its wide adoption. Here, the authors outline
several critical security challenges and motivate further investigation of security solutions for a
trustworthy public cloud environment.

2) A fully homomorphic encryption scheme


AUTHORS: C. Gentry
We propose the first fully homomorphic encryption scheme, solving an old open problem.
Such a scheme allows one to compute arbitrary functions over encrypted data without the
decryption key—i.e., given encryptions E(m1), ..., E( mt) of m1, ..., m t, one can efficiently
compute a compact ciphertext that encrypts f(m1, ..., m t) for any efficiently computable
function f.
Fully homomorphic encryption has numerous applications. For example, it enables encrypted
search engine queries—i.e., a search engine can give you a succinct encrypted answer to your
(Boolean) query without even knowing what your query was. It also enables searching on
encrypted data; you can store your encrypted data on a remote server, and later have the server
retrieve only files that (when decrypted) satisfy some Boolean constraint, even though the
server cannot decrypt the files on its own. More broadly, it improves the efficiency of secure
multiparty computation.
In our solution, we begin by designing a somewhat homomorphic "bootstrappable" encryption
scheme that works when the function f is the scheme's own decryption function. We then show
how, through recursive self-embedding, bootstrappable encryption gives fully homomorphic
encryption.
3) Public key encryption with keyword search
AUTHORS: D. Boneh, G. Di Crescenzo, R. Ostrovsky, and G. Persiano

We study the problem of searching on data that is encrypted using a public key system.
Consider user Bob who sends email to user Alice encrypted under Alice's public key. An email
gateway wants to test whether the email contains the keyword "urgent" so that it could route
the email accordingly. Alice, on the other hand does not wish to give the gateway the ability to
decrypt all her messages. We define and construct a mechanism that enables Alice to provide a
key to the gateway that enables the gateway to test whether the word "urgent" is a keyword in
the email without learning anything else about the email. We refer to this mechanism as Public
Key Encryption with keyword Search. As another example, consider a mail server that stores
various messages publicly encrypted for Alice by others. Using our mechanism Alice can send
the mail server a key that will enable the server to identify all messages containing some
specific keyword, but learn nothing else. We define the concept of public key encryption with
keyword search and give several constructions.
4) Practical techniques for searches on encrypted data
AUTHORS: D. X. Song, D. Wagner, and A. Perrig
It is desirable to store data on data storage servers such as mail servers and file servers in
encrypted form to reduce security and privacy risks. But this usually implies that one has to
sacrifice functionality for security. For example, if a client wishes to retrieve only documents
containing certain words, it was not previously known how to let the data storage server
perform the search and answer the query, without loss of data confidentiality. We describe our
cryptographic schemes for the problem of searching on encrypted data and provide proofs of
security for the resulting crypto systems. Our techniques have a number of crucial advantages.
They are provably secure: they provide provable secrecy for encryption, in the sense that the
un-trusted server cannot learn anything about the plaintext when only given the cipher text;
they provide query isolation for searches, meaning that the un-trusted server cannot learn
anything more about the plaintext than the search result; they provide controlled searching, so
that the un-trusted server cannot search for an arbitrary word without the user's authorization;
they also support hidden queries, so that the user may ask the un-trusted server to search for a
secret word without revealing the word to the server. The algorithms presented are simple, fast
(for a document of length n, the encryption and search algorithms only need
O(n) stream cipher and block cipher operations), and introduce almost no space and
communication overhead, and hence are practical to use today .

5) Privacy preserving keyword searches on remote encrypted data


AUTHORS: Y.-C. Chang and M. Mitzenmacher
We consider the following problem: a
user U wants to store his files in an encrypted form on a remote file server S. Later the user U
wants to efficiently retrieve some of the encrypted files containing (or indexed by) specific
keywords, keeping the keywords themselves secret and not jeopardizing the security of the
remotely stored files. For example, a user may want to store old e-mail messages encrypted on
a server managed by Yahoo or another large vendor, and later retrieve certain messages while
travelling with a mobile device.
In this paper, we offer solutions for this problem under well-defined security requirements.
Our schemes are efficient in the sense that no public-key cryptosystem is involved. Indeed, our
approach is independent of the encryption method chosen for the remote files. They are also
incremental, in that U can submit new files which are secure against previous queries but still
searchable against future queries.
CHAPTER-3
SYSTEM ANALYSIS
EXISTING SYSTEM:
 A general approach to protect the data confidentiality is to encrypt the data before
outsourcing.
 Searchable encryption schemes enable the client to store the encrypted data to the cloud
and execute keyword search over cipher-text domain. So far, abundant works have
been proposed under different threat models to achieve various search functionality,
such as single keyword search, similarity search, multi-keyword boolean search, ranked
search, multi-keyword ranked search, etc. Among them, multi-keyword ranked search attracts more and more attention for its practical applicability. Recently, some
dynamic schemes have been proposed to support inserting and deleting operations on
document collection. These are significant works as it is highly possible that the data
owners need to update their data on the cloud server.

DISADVANTAGES OF EXISTING SYSTEM:


 Huge cost in terms of data usability. For example, the existing techniques for keyword-based information retrieval, which are widely used on plaintext data, cannot be directly applied to encrypted data. Downloading all the data from the cloud and decrypting it locally is obviously impractical.
 Existing methods are not practical due to their high computational overhead for both the cloud server and the user.

PROPOSED SYSTEM:
 This paper proposes a secure tree-based search scheme over the encrypted cloud data,
which supports multi-keyword ranked search and dynamic operation on the document
collection. Specifically, the vector space model and the widely-used “term frequency
(TF) × inverse document frequency (IDF)” model are combined in the index
construction and query generation to provide multi-keyword ranked search. In order to
obtain high search efficiency, we construct a tree-based index structure and propose a
“Greedy Depth-first Search” algorithm based on this index tree.
 The secure KNN algorithm is utilized to encrypt the index and query vectors, and meanwhile ensure accurate relevance score calculation between the encrypted index and query vectors (a small numerical sketch of this property follows this list).
 To resist different attacks in different threat models, we construct two secure search
schemes: the basic dynamic multi-keyword ranked search (BDMRS) scheme in the
known cipher-text model, and the enhanced dynamic multi-keyword ranked search
(EDMRS) scheme in the known background model.
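
For illustration, the following is a minimal numerical sketch of the inner-product-preserving property that the secure KNN (ASPE-style) encryption relies on. It assumes the MathNet.Numerics package for the matrix algebra; the vector dimension, the splitting bit string S and the random matrices are illustrative values, not the parameters of the actual scheme.

// Minimal numerical sketch of the inner-product-preserving encryption behind secure KNN.
// Assumes the MathNet.Numerics NuGet package for matrix algebra; the dimension n,
// the bit string S and the random matrices are illustrative values only.
using System;
using MathNet.Numerics.LinearAlgebra;

class SecureKnnSketch
{
    static void Main()
    {
        int n = 4;                                    // dimension of index/query vectors
        var rnd = new Random(7);

        // Secret key: a bit string S and two random invertible matrices M1, M2.
        int[] S = { 1, 0, 1, 0 };
        Matrix<double> M1 = Matrix<double>.Build.Random(n, n);
        Matrix<double> M2 = Matrix<double>.Build.Random(n, n);

        // Plaintext index vector p (one per document) and query vector q.
        var p = Vector<double>.Build.DenseOfArray(new double[] { 0.2, 0.0, 0.5, 0.3 });
        var q = Vector<double>.Build.DenseOfArray(new double[] { 1.0, 0.0, 1.0, 0.0 });

        // Split step: where S[j] == 1 the index entry is split randomly (pa + pb = p)
        // and the query entry is duplicated; where S[j] == 0 the roles are reversed.
        var pa = Vector<double>.Build.Dense(n); var pb = Vector<double>.Build.Dense(n);
        var qa = Vector<double>.Build.Dense(n); var qb = Vector<double>.Build.Dense(n);
        for (int j = 0; j < n; j++)
        {
            if (S[j] == 1)
            {
                pa[j] = rnd.NextDouble(); pb[j] = p[j] - pa[j];
                qa[j] = q[j];             qb[j] = q[j];
            }
            else
            {
                pa[j] = p[j];             pb[j] = p[j];
                qa[j] = rnd.NextDouble(); qb[j] = q[j] - qa[j];
            }
        }

        // Encrypt: index halves with the transposed matrices, query halves with the inverses.
        var encIndexA = M1.Transpose() * pa;  var encIndexB = M2.Transpose() * pb;
        var encQueryA = M1.Inverse() * qa;    var encQueryB = M2.Inverse() * qb;

        // The score computed on ciphertexts equals the plaintext inner product p . q.
        double encryptedScore = encIndexA.DotProduct(encQueryA) + encIndexB.DotProduct(encQueryB);
        Console.WriteLine($"plaintext score = {p.DotProduct(q):F4}");
        Console.WriteLine($"encrypted score = {encryptedScore:F4}");
    }
}

Running the sketch prints the same score for the plaintext and the encrypted vectors, which is exactly what allows the cloud server to rank documents without learning the underlying index or query values.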

ADVANTAGES OF PROPOSED SYSTEM:


Due to the special structure of our tree-based index, the proposed search scheme can flexibly
achieve sub-linear search time and deal with the deletion and insertion of documents.
 We design a searchable encryption scheme that supports both the accurate multi-
keyword ranked search and flexible dynamic operation on document collection.
 Due to the special structure of our tree-based index, the search complexity of the proposed scheme is fundamentally kept logarithmic. In practice, the proposed scheme can achieve higher search efficiency by executing our "Greedy Depth-first Search" algorithm (an illustrative sketch of this search follows). Moreover, parallel search can be flexibly performed to further reduce the time cost of the search process.
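
As referenced above, the following is a simplified, plaintext sketch of a Greedy Depth-First Search over a keyword balanced binary tree. The node layout (each internal node storing the element-wise maximum of its children's term-frequency vectors) and the pruning rule are assumptions made for clarity; in the actual scheme the vectors are encrypted with the secure KNN key and scores are computed on ciphertexts.

// Simplified sketch of a Greedy Depth-First Search over a keyword balanced binary tree.
// Assumption for clarity: each internal node stores the element-wise maximum of its
// children's term-frequency vectors, so a node's score upper-bounds every document below it.
using System;
using System.Collections.Generic;

class IndexNode
{
    public double[] Vector;          // TF vector (leaf) or element-wise max of children (internal)
    public string DocId;             // non-null only for leaf nodes
    public IndexNode Left, Right;
    public bool IsLeaf => DocId != null;
}

static class GreedySearch
{
    static double Score(double[] v, double[] query)
    {
        double s = 0;
        for (int i = 0; i < v.Length; i++) s += v[i] * query[i];   // inner-product relevance
        return s;
    }

    // Collects the top-k (docId, score) pairs; 'results' is kept sorted, smallest score first,
    // so results[0] is the current k-th best score and acts as the pruning threshold.
    public static void Search(IndexNode node, double[] query, int k,
                              List<(string DocId, double Score)> results)
    {
        if (node == null) return;
        double s = Score(node.Vector, query);

        // Prune: no document in this subtree can beat the current k-th best score.
        if (results.Count == k && s <= results[0].Score) return;

        if (node.IsLeaf)
        {
            results.Add((node.DocId, s));
            results.Sort((a, b) => a.Score.CompareTo(b.Score));
            if (results.Count > k) results.RemoveAt(0);
            return;
        }

        // Greedy step: descend into the more promising child first.
        double leftScore  = node.Left  != null ? Score(node.Left.Vector, query)  : double.NegativeInfinity;
        double rightScore = node.Right != null ? Score(node.Right.Vector, query) : double.NegativeInfinity;
        IndexNode first   = leftScore >= rightScore ? node.Left  : node.Right;
        IndexNode second  = leftScore >= rightScore ? node.Right : node.Left;
        Search(first,  query, k, results);
        Search(second, query, k, results);
    }
}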

SYSTEM DESIGN
SYSTEM ARCHITECTURE:

Figure 4.2.1: System Architecture

SYSTEM REQUIREMENTS
HARDWARE REQUIREMENTS:

• System : Pentium IV 2.4 GHz.


• Hard Disk : 40 GB.
• Floppy Drive : 1.44 MB.
• Monitor : 15'' VGA Colour.
• Mouse : Logitech.
• RAM : 512 MB.

SOFTWARE REQUIREMENTS:

• Operating system : - Windows XP.


• Coding Language : C#.NET
• Data Base : MS SQL SERVER 2005
INPUT DESIGN :
The input design is the link between the information system and the user. It comprises developing the specifications and procedures for data preparation, the steps necessary to put transaction data into a usable form for processing. This can be achieved by instructing the computer to read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:
 What data should be given as input?
 How the data should be arranged or coded?
 The dialog to guide the operating personnel in providing input.
 Methods for preparing input validations and steps to follow when error occur.
OBJECTIVES:
1. Input Design is the process of converting a user-oriented description of the input into a
computer-based system. This design is important to avoid errors in the data input process and
show the correct direction to the management for getting correct information from the
computerized system
2. It is achieved by creating user-friendly screens for the data entry to handle large volume of
data. The goal of designing input is to make data entry easier and to be free from errors. The
data entry screen is designed in such a way that all the data manipulates can be performed. It
also provides record viewing facilities.
3. When the data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided as and when needed, so that the user is not left in confusion at any instant. Thus the objective of input design is to create an input layout that is easy to follow.

OUTPUT DESIGN:
A quality output is one, which meets the requirements of the end user and presents the
information clearly. In any system results of processing are communicated to the users and to
other systems through outputs. In output design it is determined how the information is to be displayed for immediate need, and also the hard-copy output. It is the most important and direct source of information for the user. Efficient and intelligent output design improves the system's relationship with the user and helps in decision-making.
1. Designing computer output should proceed in an organized, well thought out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy to use and effective. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.
2. Select methods for presenting information.

3. Create document, report, or other formats that contain information produced by the system.
The output form of an information system should accomplish one or more of the following
objectives.
 Convey information about past activities, current status or projections of the
 Future.
 Signal important events, opportunities, problems, or warnings.
 Trigger an action.
 Confirm an action.

SOFTWARE ENVIRONMENT
4.1 Features of .NET:
Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating
XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET
Framework is a language-neutral platform for writing programs that can easily and securely
interoperate. There’s no language barrier with .NET: there are numerous languages available
to the developer including Managed C++, C#, Visual Basic and Java Script. The .NET
framework provides the foundation for components to interact seamlessly, whether locally or
remotely on different platforms. It standardizes common data types and communications
protocols so that components created in different languages can easily interoperate.
“.NET” is also the collective name given to various software components built upon the .NET
platform. These will be both products (Visual Studio.NET and Windows.NET Server, for
instance) and services (like Passport, .NET My Services, and so on).

THE .NET FRAMEWORK:


The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the “execution engine” of .NET. It provides the environment within
which programs run. The most important features are

 Conversion from a low-level assembler-style language, called Intermediate


Language (IL), into code native to the platform being executed on.
 Memory management, notably including garbage collection.
 Checking and enforcing security restrictions on the running code.
 Loading and executing programs, with version control and other such features.
 The following features of the .NET framework are also worth description:
Managed Code:
The code that targets .NET, and which contains certain extra Information -
“metadata” - to describe itself. Whilst both managed and unmanaged code can
run in the runtime, only managed code contains the information that allows the
CLR to guarantee, for instance, safe execution and interoperability.
Managed Data:

With Managed Code comes Managed Data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use Managed Data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not.
Targeting CLR can, depending on the language you’re using, impose certain constraints on the
features available. As with managed and unmanaged code, one can have both managed and
unmanaged data in .NET applications - data that doesn’t get garbage collected but instead is
looked after by unmanaged code.

Common Type System:


The CLR uses something called the Common Type System (CTS) to strictly enforce type-
safety. This ensures that all classes are compatible with each other, by describing types in a
common way. The CTS defines how types work within the runtime, which enables types in one
language to interoperate with types in another language, including cross-language exception
handling. As well as ensuring that types are only used in appropriate ways, the runtime also
ensures that code doesn’t attempt to access memory that hasn’t been allocated to it.

Common Language Specification:


The CLR provides built-in support for language interoperability. To ensure that you
can develop managed code that can be fully used by developers using any programming
language, a set of language features and rules for using them called the Common Language
Specification (CLS) has been defined. Components that follow these rules and expose only
CLS features are considered CLS-compliant.
The Class Library:
.NET provides a single-rooted hierarchy of classes, containing over 7000 types.
The root of the namespace is called System; this contains basic types like Byte, Double,
Boolean, and String, as well as Object. All objects derive from System.Object. As well as
objects, there are value types. Value types can be allocated on the stack, which can provide
useful flexibility. There are also efficient means of converting value types to object types if
and when necessary.
The set of classes is pretty comprehensive, providing collections, file, screen,
and network I/O, threading, and so on, as well as XML and database connectivity.
The class library is subdivided into a number of sets (or namespaces), each
providing distinct areas of functionality, with dependencies between the namespaces kept to a
minimum.

LANGUAGES SUPPORTED BY .NET:


The multi-language capability of the .NET Framework and Visual Studio .NET enables
developers to use their existing programming skills to build all types of applications and XML
Web services. The .NET framework supports new versions of Microsoft’s old favorites Visual
Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions
to the family.
Visual Basic .NET has been updated to include many new and improved
language features that make it a powerful object-oriented programming language. These
features include inheritance, interfaces, and overloading, among others. Visual Basic also now
supports structured exception handling, custom attributes and also supports multi- threading.
Visual Basic .NET is also CLS compliant, which means that any CLS-
compliant language can use the classes, objects, and components you create in Visual Basic
.NET.
Managed Extensions for C++ and attributed programming are just some of the
enhancements made to the C++ language. Managed Extensions simplify the task of migrating
existing C++ applications to the new .NET Framework.
C# is Microsoft’s new language. It’s a C-style language that is essentially “C++
for Rapid Application Development”. Unlike other languages, its specification is just the
grammar of the language. It has no standard library of its own, and instead has been designed
with the intention of using the .NET libraries as its own.
Microsoft Visual J# .NET provides the easiest transition for Java-language
developers into the world of XML Web Services and dramatically improves the
interoperability of Java-language programs with existing software written in a variety of other
programming languages.
Active State has created Visual Perl and Visual Python, which enable .NET-
aware applications to be built in either Perl or Python. Both products can be integrated into the
Visual Studio .NET environment. Visual Perl includes support for Active State’s Perl Dev Kit.
Other languages for which .NET compilers are available include

 FORTRAN
 COBOL
 Eiffel

Figure 4.4.1: The .NET Framework stack (ASP.NET and Windows Forms, XML Web Services, Base Class Libraries, Common Language Runtime, Operating System).

C#.NET is also compliant with CLS (Common Language Specification) and supports
structured exception handling. CLS is set of rules and constructs that are supported by the
CLR (Common Language Runtime). CLR is the runtime environment provided by the
.NET Framework; it manages the execution of the code and also makes the development
process easier by providing services.
C#.NET is a CLS-compliant language. Any objects, classes, or components that are created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of CLS ensures complete interoperability among applications, regardless of the languages used to create the application.
CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET a destructor is written with the ~ClassName() syntax, which the runtime implements as an override of the Finalize method. The destructor is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when the garbage collector reclaims the object; it cannot be called explicitly from user code.
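
A small example of the constructor and destructor syntax described above; the class and messages are purely illustrative.

// Illustrative constructor and destructor (finalizer) in C#.
using System;

class FileHandle
{
    private readonly string _path;

    public FileHandle(string path)      // constructor: initializes the new object
    {
        _path = path;
        Console.WriteLine($"Opened {_path}");
    }

    ~FileHandle()                       // destructor: runs when the garbage collector finalizes the object
    {
        Console.WriteLine($"Released {_path}");
    }
}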

GARBAGE COLLECTION:
Garbage Collection is another new feature in C#.NET. The .NET Framework monitors
allocated resources, such as objects and variables. In addition, the .NET Framework
automatically releases memory for reuse by destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that are not currently in use by
applications. When the garbage collector comes across an object that is marked for
garbage collection, it releases the memory occupied by the object.
OVER LOADING:
Overloading is another feature in C#. Overloading enables us to define multiple
procedures with the same name, where each procedure has a different set of arguments.
Besides using overloading for procedures, we can use it for constructors and properties in
a class.
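
A short illustrative example of method overloading; the Logger class and its overloads are hypothetical.

// Method overloading: one method name, several parameter lists; the compiler
// selects the overload from the argument types at the call site.
using System;

class Logger
{
    public void Write(string message)           => Console.WriteLine(message);
    public void Write(string message, int code) => Console.WriteLine($"[{code}] {message}");
    public void Write(Exception ex)             => Console.WriteLine($"ERROR: {ex.Message}");
}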

MULTI-THREADING:
C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; multithreading can be used to decrease the time taken by an application to respond to user interaction.
STRUCTURED EXCEPTION HANDLING:
C#.NET supports structured exception handling, which enables us to detect and remove
errors at runtime. In C#.NET, we need to use Try…Catch…Finally statements to create
exception handlers. Using Try…Catch…Finally statements, we can create robust and
effective exception handlers to improve the performance of our application.
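
A brief illustrative example of a Try...Catch...Finally handler; the file-reading scenario is hypothetical.

// Structured exception handling with Try...Catch...Finally.
using System;
using System.IO;

class SafeReader
{
    public static string ReadFirstLine(string path)
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader(path);
            return reader.ReadLine();
        }
        catch (FileNotFoundException ex)
        {
            Console.WriteLine($"File missing: {ex.FileName}");
            return null;
        }
        finally
        {
            reader?.Dispose();          // runs whether or not an exception was thrown
        }
    }
}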
THE .NET FRAMEWORK:
The .NET Framework is a new computing platform that simplifies application
development in the highly distributed environment of the Internet.
OBJECTIVES OF .NET FRAMEWORK:
1. To provide a consistent object-oriented programming environment, whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment conflicts and guarantees safe execution of code.
3. To eliminate performance problems.

There are different types of application, such as Windows-based applications and Web-
based applications.

4.3 FEATURES OF SQL-SERVER

The OLAP Services feature available in SQL Server version 7.0 is now called
SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term
Analysis Services. Analysis Services also includes a new data mining component. The
Repository component available in SQL Server version 7.0 is now called Microsoft SQL
Server 2000 Meta Data Services. References to the component now use the term Meta Data
Services. The term repository is used only in reference to the repository engine within Meta
Data Services
A SQL Server database consists of the following types of objects. They are:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

TABLE:
A table is a collection of data about a specific topic.

VIEWS OF TABLE:
We can work with a table in two types,

1. Design View

2. Datasheet View

1. Design View
To build or modify the structure of a table, we work in the table design view. We can specify what kind of data the table will hold.
2. Datasheet View
To add, edit or analyse the data itself, we work in the table's datasheet view mode.
QUERY:
A query is a question that is asked of the data. Access gathers data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
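
Since the project uses C#.NET with MS SQL Server, a query is typically issued from code through ADO.NET. The following is an illustrative sketch only; the connection string, table name (Files) and column names are placeholders, not the project's actual schema.

// Illustrative parameterized query against SQL Server using ADO.NET.
// The connection string, table name and column names are placeholders.
using System;
using System.Data.SqlClient;

class QueryExample
{
    static void Main()
    {
        const string connectionString =
            "Server=localhost;Database=CloudSearch;Integrated Security=true;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT FileName, UploadDate FROM Files WHERE OwnerId = @owner", connection))
        {
            command.Parameters.AddWithValue("@owner", 42);   // parameters prevent SQL injection
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine($"{reader.GetString(0)}  {reader.GetDateTime(1)}");
            }
        }
    }
}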
CHAPTER-4
DATA FLOW DIAGRAM
1. The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on this data, and the output data generated by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It is used to
model the system components. These components are the system process, the data used
by the process, an external entity that interacts with the system and the information
flows in the system.
3. DFD shows how the information moves through the system and how it is modified by a
series of transformations. It is a graphical technique that depicts information flow and
the transformations that are applied as data moves from input to output.
4. DFD is also known as bubble chart. A DFD may be used to represent a system at any
level of abstraction. DFD may be partitioned into levels that represent increasing
information flow and functional detail.

Figure 5.1: Data Flow Diagram (admin and user registration and login, verification and activation of users by the admin, keyword search, file upload and download, and monitoring of user activity).


UML DIAGRAMS :
UML stands for Unified Modeling Language. UML is a standardized general-purpose
modeling language in the field of object-oriented software engineering. The standard is
managed, and was created by, the Object Management Group.
The goal is for UML to become a common language for creating models of object
oriented computer software. In its current form UML comprises two major components: a
Meta-model and a notation. In the future, some form of method or process may also be added
to; or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing, constructing and documenting the artifacts of software systems, as well as for business modeling and other non-software systems.
The UML represents a collection of best engineering practices that have proven
successful in the modeling of large and complex systems.
The UML is a very important part of developing objects oriented software and the
software development process. The UML uses mostly graphical notations to express the design
of software projects.

GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling Language so that they can
develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development process.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations, frameworks,
patterns and components.
7. Integrate best practices.
CHAPTER-6
USE CASE DIAGRAM
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals (represented
as use cases), and any dependencies between those use cases. The main purpose of a use case
diagram is to show what system functions are performed for which actor. Roles of the actors in
the system can be depicted.

Figure 6.1: Use Case Diagram (actors Admin and User; use cases include registration, user activation/approval of requests, searching files via query, download, and viewing/monitoring of user activities).


CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of
static structure diagram that describes the structure of a system by showing the system's
classes, their attributes, operations (or methods), and the relationships among the classes. It
explains which class contains information.

Figure 6.2: Class Diagram (ADMIN and USER classes with attributes such as key, file log and search query, and operations including QUERY(), VIEW(), UPLOAD(), DOWNLOAD(), USER_REQUESTS(), APPROVE() and MONITOR_USER()).


SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram
that shows how processes operate with one another and in what order. It is a construct of a
Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event
scenarios, and timing diagrams.

Figure 6.3: Sequence Diagram (USER and ADMIN lifelines: registration, user activation, login, file upload, search, viewing and downloading files, and monitoring of user activities and details by the admin).


ACTIVITY DIAGRAM:
Activity diagrams are graphical representations of workflows of stepwise activities and actions
with support for choice, iteration and concurrency. In the Unified Modeling Language, activity
diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.

Figure 6.4: Activity Diagram (login with key validation and user activation by the admin, followed by search, view files, upload, download and profile-update activities).


CHAPTER-7
IMPLEMENTATION

MODULES
 Data Owner Module
 Data User Module
 Cloud Server and Encryption Module
 Rank Search Module

MODULES DESCRIPTION
Data Owner Module:
This module helps the owner to register his details and also includes login details. This module helps the owner to upload his file with encryption using the RSA algorithm, which ensures that the files are protected from unauthorized users. The data owner has a collection of documents F = {f1, f2, ..., fn} that he wants to outsource to the cloud server in encrypted form while still keeping the capability to search on them for effective utilization. In our scheme, the data
owner firstly builds a secure searchable tree index I from document collection F, and then
generates an encrypted document collection C for F. Afterwards, the data owner outsources the
encrypted collection C and the secure index I to the cloud server, and securely distributes the
key information of trapdoor generation and document decryption to the authorized data users.
Besides, the data owner is responsible for the update operation of his documents stored in the
cloud server. While updating, the data owner generates the update information locally and
sends it to the server.
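
The following sketch illustrates how the data owner side might encrypt a document before outsourcing. The module above mentions RSA; because RSA can only encrypt small payloads, this sketch assumes the common hybrid pattern of encrypting the file body with AES and wrapping the AES key with the recipient's RSA public key. The method and type names are illustrative.

// Illustrative sketch of document encryption on the data owner side.
// Assumption: hybrid encryption (AES for the file body, RSA to wrap the AES key),
// since RSA alone cannot encrypt arbitrarily large files.
using System;
using System.IO;
using System.Security.Cryptography;

static class OwnerEncryption
{
    public static (byte[] CipherText, byte[] WrappedKey, byte[] IV)
        EncryptDocument(string path, RSA recipientPublicKey)
    {
        byte[] plaintext = File.ReadAllBytes(path);

        using (var aes = Aes.Create())
        {
            aes.GenerateKey();
            aes.GenerateIV();

            byte[] cipherText;
            using (var encryptor = aes.CreateEncryptor())
                cipherText = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);

            // Wrap the symmetric key with the recipient's RSA public key.
            byte[] wrappedKey = recipientPublicKey.Encrypt(aes.Key, RSAEncryptionPadding.OaepSHA256);

            return (cipherText, wrappedKey, aes.IV);
        }
    }
}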

Data User Module:


This module includes the user registration and login details. It helps the client to search for files using the multi-keyword concept and to get an accurate result list based on the user query. The user selects the required file, registers his details, and receives an activation code by email; after entering the activation code, the user can download the Zip file and extract it. Data users are the ones authorized to access the documents of the data owner.
With t query keywords, the authorized user can generate a trapdoor TD according to search
control mechanisms to fetch k encrypted documents from cloud server. Then, the data user can
decrypt the documents with the shared secret key.
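
The following sketch illustrates trapdoor (query vector) generation on the data user side. The keyword dictionary and IDF table are hypothetical inputs; in the full scheme the resulting vector would additionally be extended with phantom terms and encrypted with the secure KNN key before being sent to the cloud server.

// Illustrative sketch of trapdoor (query vector) generation on the data user side.
// The keyword dictionary and IDF table are hypothetical inputs.
using System;
using System.Collections.Generic;
using System.Linq;

static class TrapdoorBuilder
{
    public static double[] BuildQueryVector(
        IEnumerable<string> queryKeywords,
        IReadOnlyList<string> dictionary,              // global keyword dictionary W
        IReadOnlyDictionary<string, double> idf)       // IDF value per keyword
    {
        // Map each dictionary keyword to its position in the vector.
        var position = dictionary
            .Select((word, index) => (Word: word, Index: index))
            .ToDictionary(t => t.Word, t => t.Index);

        var q = new double[dictionary.Count];
        foreach (string kw in queryKeywords)
            if (position.TryGetValue(kw, out int pos))
                q[pos] = idf.TryGetValue(kw, out double v) ? v : 0.0;

        // Normalize so that relevance scores from different queries are comparable.
        double norm = Math.Sqrt(q.Sum(x => x * x));
        if (norm > 0)
            for (int i = 0; i < q.Length; i++) q[i] /= norm;
        return q;
    }
}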
Cloud Server and Encryption Module:
This module is used to help the server encrypt the document using the RSA algorithm and convert the encrypted document into a Zip file with an activation code; the activation code is then sent to the user for download. The cloud server stores the encrypted document collection C and
the encrypted searchable tree index I for data owner. Upon receiving the trapdoor TD from the
data user, the cloud server executes search over the index tree I, and finally returns the
corresponding collection of top- k ranked encrypted documents. Besides, upon receiving the
update information from the data owner, the server needs to update the index I and document
collection C according to the received information. The cloud server in the proposed scheme is
considered as "honest-but-curious", which is the model employed by many works on secure cloud data search.

Rank Search Module:


This module enables the user to search for the files that are searched frequently using ranked search. It allows the user to download the file and use his secret key to decrypt the
downloaded data. This module allows the Owner to view the uploaded files and downloaded
files. The proposed scheme is designed to provide not only multi-keyword query and accurate
result ranking, but also dynamic update on document collections.
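
For orientation, ranked search schemes of this kind typically score a document F_d against a query Q with a normalized TF×IDF relevance score of roughly the following form (the exact normalization used in this project may differ):

Score(F_d, Q) = \sum_{w \in Q} TF'_{d,w} \cdot IDF'_{w}, \quad
TF'_{d,w} = \frac{1 + \ln f_{d,w}}{\sqrt{\sum_{w' \in W_d} (1 + \ln f_{d,w'})^2}}, \quad
IDF'_{w} = \ln\left(1 + \frac{N}{f_w}\right)

where f_{d,w} is the number of occurrences of keyword w in document d, W_d is the set of keywords in d, f_w is the number of documents containing w, and N is the total number of documents in the collection.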
CHAPTER-8
SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, sub-assemblies, assemblies and/or a finished product. It is the
process of exercising software with the intent of ensuring that the Software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of test. Each test type addresses a specific testing requirement.

TYPES OF TESTS:
Unit Testing:
Unit testing involves the design of test cases that validate that the internal program logic
is functioning properly, and that program inputs produce valid outputs. All decision branches
and internal code flow should be validated. It is the testing of individual software units of the application. It is done after the completion of an individual unit, before integration. This is a
structural testing, that relies on knowledge of its construction and is invasive. Unit tests
perform basic tests at component level and test a specific business process, application, and/or
system configuration. Unit tests ensure that each unique path of a business process performs
accurately to the documented specifications and contains clearly defined inputs and expected
results.

Integration testing:
Integration tests are designed to test integrated software components to determine if
they actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of components
is correct and consistent. Integration testing is specifically aimed at exposing the problems that
arise from the combination of components.

Functional test:
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user
manuals.
Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.


Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key functions,
or special test cases. In addition, systematic coverage pertaining to identified business process flows, data fields, predefined processes, and successive processes must be considered for
testing. Before functional testing is complete, additional tests are identified and the effective
value of current tests is determined.

System Test:
System testing ensures that the entire integrated software system meets requirements. It
tests a configuration to ensure known and predictable results. An example of system testing is
the configuration oriented system integration test. System testing is based on process
descriptions and flows, emphasizing pre-driven process links and integration points.

White Box Testing:


White Box Testing is a testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

Black Box Testing:


Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is a testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
6.1 Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.

Test strategy and approach:


Field testing will be performed manually and functional tests will be written in detail.

Test objectives:
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.
Features to be tested:
 Verify that the entries are of the correct format
 No duplicate entries should be allowed
 All links should take the user to the correct page.
6.2 Integration Testing:
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects.
The task of the integration test is to check that components or software applications,
e.g. components in a software system or – one step up – software applications at the company
level – interact without error.

6.3 Acceptance Testing:


User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional requirements.

Test Results:

All the test cases mentioned above passed successfully. No defects encountered.

RESULTS
USER:
STEP 1: INTRODUCTION TO THE ABSTRACT
STEP 2: REGISTRATION FORM
STEP 3: OPEN REGISTRATION FORM
STEP 4: SEARCH FOR ANOTHER PERSON
STEP 5: DOWNLOAD THE FILE
STEP 6: ENTER SECURITY CODE
STEP 7: ADD NEW FILE
STEP 8: EDIT THE PROFILE
STEP 9: LOG IN AS THE PERSON
STEP 10: SELECT THE FILE
STEP 11: EDIT OR DELETE THE FILES
CHAPTER-9
CONCLUSION
In this paper, a secure, efficient and dynamic search scheme is proposed, which supports not
only the accurate multi-keyword ranked search but also the dynamic deletion and insertion of
documents. We construct a special keyword balanced binary tree as the index, and propose a
“Greedy Depth-first Search” algorithm to obtain better efficiency than linear search. In
addition, the parallel search process can be carried out to further reduce the time cost. The
security of the scheme is protected against two threat models by using the secure KNN
algorithm. Experimental results demonstrate the efficiency of our proposed scheme. There are
still many challenging problems in symmetric SE schemes. In the proposed scheme, the data owner is responsible for generating the updating information and sending it to the cloud server. Thus, the data owner needs to store the unencrypted index tree and the information that is
necessary to recalculate the IDF values. Such an active data owner may not be very suitable for
the cloud computing model. It could be a meaningful but difficult future work to design a
dynamic searchable encryption scheme whose updating operation can be completed by cloud
server only, meanwhile reserving the ability to support multi-keyword ranked search. In
addition, as in most works on searchable encryption, our scheme mainly considers the challenge from the cloud server. Actually, there are many security challenges in a multi-user scheme. Firstly, all the users usually keep the same secure key for trapdoor generation in a symmetric SE scheme. In this case, the revocation of a user is a big challenge. If it is needed to
revoke a user in this scheme, we need to rebuild the index and distribute the new secure keys
to all the authorized users. Secondly, symmetric SE schemes usually assume that all the data
users are trustworthy. This is not practical, and a dishonest data user will lead to many security problems. For example, a dishonest data user may search the documents and distribute the decrypted documents to unauthorized ones. Even more, a dishonest data user may distribute his/her secure keys to unauthorized ones. In future work, we will try to improve the SE scheme to handle these challenging problems.
CHAPTER-10
REFERENCES

[1] K. Ren, C.Wang, Q.Wang et al., “Security challenges for the public cloud,” IEEE Internet
Computing,vol. 16, no. 1, pp. 69–73, 2012.

[2] S. Kamara and K. Lauter, “ Cryptographic cloud storage,” in Financial Cryptography and
Data Security. Springer, 2010, pp. 136–149.

[3] C. Gentry, “A fully homomorphic encryption scheme,” Ph.D. dissertation, Stanford


University, 2009.

[4] O. Goldreich and R. Ostrovsky, "Software protection and simulation on oblivious RAMs," Journal of the ACM (JACM), vol. 43, no. 3, pp. 431–473, 1996.

[5] D. Boneh, G. Di Crescenzo, R. Ostrovsky, and G. Persiano, “Public key encryption with
keyword search," in Advances in Cryptology – EUROCRYPT 2004. Springer, 2004, pp. 506–522.

[6] D. Boneh, E. Kushilevitz, R. Ostrovsky, and W. E. Skeith III, "Public key encryption that allows PIR queries," in Advances in Cryptology – CRYPTO 2007. Springer, 2007, pp. 50–67.

[7] D. X. Song, D. Wagner, and A. Perrig, “Practical techniques for searches on encrypted
data,” in Security and Privacy, 2000. S&P 2000.Proceedings. 2000 IEEE Symposium on.
IEEE, 2000, pp. 44–55.

[8] E.-J. Goh et al., "Secure indexes," IACR Cryptology ePrint Archive, vol. 2003, p. 216, 2003.
[9] Y.-C. Chang and M. Mitzenmacher, “Privacy preserving keyword searches on remote
encrypted data,” in Proceedings of the Third international conference on Applied
Cryptography and Network Security. Springer-Verlag, 2005, pp. 442–455.
[10] R. Curtmola, J. Garay, S. Kamara, and R. Ostrovsky, “Searchable symmetric encryption:
improved definitions and efficient constructions,” in Proceedings of the 13th ACM conference
on Computer and communications security. ACM, 2006, pp. 79–88.

[11] J. Li, Q. Wang, C. Wang, N. Cao, K. Ren, and W. Lou, “Fuzzy keyword search over
encrypted data in cloud computing,” in INFOCOM, 2010 Proceedings IEEE. IEEE, 2010, pp.
1–5.

[12] M. Kuzu, M. S. Islam, and M. Kantarcioglu, “Efficient similarity search over encrypted
data,” in Data Engineering (ICDE), 2012 IEEE 28th International Conference on. IEEE, 2012,
pp. 1156–1167.

[13] C. Wang, K. Ren, S. Yu, and K. M. R. Urs, “Achieving usable and privacy-assured
similarity search over outsourced cloud data,” in INFOCOM, 2012 Proceedings IEEE. IEEE,
2012, pp. 451–459.

[14] B. Wang, S. Yu, W. Lou, and Y. T. Hou, “Privacy-preserving multi keyword fuzzy
search over encrypted data in the cloud,” in IEEE INFOCOM, 2014.

[15] P. Golle, J. Staddon, and B. Waters, "Secure conjunctive keyword search over encrypted data," in Applied Cryptography and Network Security. Springer, 2004, pp. 31–45.

[16] Y. H. Hwang and P. J. Lee, “Public key encryption with conjunctive keyword search and
its extension to a multi-user system,” in Proceedings of the First international conference on
Pairing-Based Cryptography. Springer-Verlag, 2007, pp. 2–22.

[17] L. Ballard, S. Kamara, and F. Monrose, "Achieving efficient conjunctive keyword searches over
encrypted data,” in Proceedings of the 7th international conference on Information and
Communications Security. Springer-Verlag, 2005, pp. 414–426.

[18] D. Boneh and B. Waters, “Conjunctive, subset, and range queries on encrypted data,” in
Proceedings of the 4th conference on Theory of cryptography. Springer-Verlag, 2007, pp.
535–554.
[19] B. Zhang and F. Zhang, “An efficient public key encryption with conjunctive-subset
keywords search,” Journal of Network and Computer Applications, vol. 34, no. 1, pp. 262–
267, 2011.

[20] J. Katz, A. Sahai, and B. Waters, “Predicate encryption supporting disjunctions,


polynomial equations, and inner products,” in Advances in Cryptology–EUROCRYPT 2008.
Springer, 2008, pp. 146–162.

[21] E. Shen, E. Shi, and B. Waters, “Predicate privacy in encryption systems,” in


Proceedings of the 6th Theory of Cryptography Conference on Theory of Cryptography.
Springer-Verlag, 2009, pp. 457–473.

[22] A. Lewko, T. Okamoto, A. Sahai, K. Takashima, and B. Waters, “Fully secure functional
encryption: attribute-based encryption and (hierarchical) inner product encryption,” in
Proceedings of the 29th Annual international conference on Theory and Applications of
Cryptographic Techniques. Springer-Verlag, 2010, pp. 62–91.

[23] A. Swaminathan, Y. Mao, G.-M. Su, H. Gou, A. L. Varna, S. He, M. Wu, and D. W. Oard,
“Confidentiality-preserving rank-ordered search,” in Proceedings of the 2007 ACM workshop
on Storage security and survivability. ACM, 2007, pp. 7–12.

[24] S. Zerr, D. Olmedilla, W. Nejdl, and W. Siberski, "Zerber+R: Top-k retrieval from a
confidential index,” in Proceedings of the 12th International Conference on Extending
Database Technology: Advances in Database Technology. ACM, 2009, pp. 439–449.

[25] C. Wang, N. Cao, K. Ren, and W. Lou, “Enabling secure and efficient ranked keyword
search over outsourced cloud data,” Parallel and Distributed Systems, IEEE Transactions on,
vol. 23, no. 8, pp. 1467–1479, 2012.

[26] N. Cao, C. Wang, M. Li, K. Ren, and W. Lou, “Privacy-preserving multi-keyword ranked
search over encrypted cloud data,” in IEEE INFOCOM, April 2011, pp. 829–837.

[27] W. Sun, B. Wang, N. Cao, M. Li, W. Lou, Y. T. Hou, and H. Li, “Privacy-preserving
multi-keyword text search in the cloud supporting similarity-based ranking,” in Proceedings
of the 8th ACM SIGSAC symposium on Information, computer and communications security. ACM,
2013, pp. 71–82.

[28] C. Orencik, M. Kantarcioglu, and E. Savas, “A practical and secure multi-keyword search
method over encrypted cloud data,” in Cloud Computing (CLOUD), 2013 IEEE Sixth
International Conference on. IEEE, 2013, pp. 390–397.

[29] W. Zhang, S. Xiao, Y. Lin, T. Zhou, and S. Zhou, “Secure ranked multi-keyword search
for multiple data owners in cloud computing,” in Dependable Systems and Networks (DSN),
2014 44th Annual IEEE/IFIP International Conference on. IEEE, 2014, pp. 276–286.

[30] S. Kamara, C. Papamanthou, and T. Roeder, “Dynamic searchable symmetric


encryption,” in Proceedings of the 2012 ACM conference on Computer and communications
security. ACM, 2012, pp. 965–976.

[31] S. Kamara and C. Papamanthou, “Parallel and dynamic searchable symmetric


encryption,” in Financial Cryptography and Data Security. Springer, 2013, pp. 258–274.

[32] D. Cash, S. Jarecki, C. Jutla, H. Krawczyk, M.-C. Roşu, and M. Steiner, "Highly-
scalable searchable symmetric encryption with support for boolean queries,” in Advances in
Cryptology–CRYPTO 2013. Springer, 2013, pp. 353–373.

[33] D. Cash, J. Jaeger, S. Jarecki, C. Jutla, H. Krawczyk, M.-C. Rosu, and M. Steiner,
“Dynamic searchable encryption in very large databases: Data structures and implementation,”
in Proc. of NDSS, vol. 14, 2014.

[34] C. D. Manning, P. Raghavan, and H. Schütze, Introduction to Information Retrieval.


Cambridge University press Cambridge, 2008, vol. 1.

[35] B. Gu and V. S. Sheng, "Feasibility and finite convergence analysis for accurate on-line ν-support vector learning," IEEE Transactions on Neural Networks and Learning Systems, vol.
24, no. 8, pp. 1304–1315, 2013.
[36] X. Wen, L. Shao, W. Fang, and Y. Xue, “Efficient feature selection and classification for
vehicle detection.”
[37] H. Delfs and H. Knebl, Introduction to cryptography: principles and applications.
Springer, 2007.

[38] W. K. Wong, D. W.-L. Cheung, B. Kao, and N. Mamoulis, "Secure kNN computation on encrypted databases," in Proceedings of the 2009 ACM SIGMOD International Conference on
Management of data. ACM, 2009, pp. 139–152.

[39] "Request for Comments," http://www.rfc-editor.org/index.html.

