Authentication and Key Agreement Based on Anonymous Identity for Peer-to-Peer Cloud
Abstract:
Cross-cloud data migration is one of the prevailing challenges faced by
mobile users, and it is an essential process when users switch their mobile phones
to a different provider. However, due to the insufficient local storage and
computational capabilities of smart phones, it is often very difficult for users to
back up all data from the original cloud server to their mobile phones and then
upload the downloaded data to the new cloud provider. To solve this
problem, we propose an efficient data migration model between cloud providers
and construct a mutual authentication and key agreement scheme based on elliptic
curve certificate-free cryptography for peer-to-peer cloud. The proposed
scheme helps to develop trust between different cloud providers and lays a
foundation for the realization of cross-cloud data migration. The mathematical
verification and security correctness of our scheme are evaluated against notable
existing data migration schemes, which demonstrates that our proposed scheme
exhibits a better performance than other state-of-the-art schemes in terms of the
achieved reduction in both computational and communication cost.
Proposed system:
To our knowledge, this is the first authentication and key agreement
scheme for peer cloud servers. Important contributions of this paper include the
following.
We propose a peer-to-peer cloud authentication and key agreement
(PACAKA) scheme based on anonymous identity to solve the problem of
trust between cloud servers. Based on elliptic curve certificate-free
cryptography, our scheme can establish secure session keys (see the sketch
at the end of this list).
The novelty of our scheme lies in the fact that it eliminates the need for a
trusted authority (TA) and simplifies operation while maintaining security. In
our scheme, the cloud servers enable the data owners in need of data
migration services to act as the trusted third party, so that the servers can verify
each other and establish trusted session keys after each of the involved users
performs some computation independently.
Our scheme protects the identity privacy of both cloud service providers and
users. It is worthy of note that the two cloud servers involved in the migration
process use anonymous identities for mutual authentication and key agreement.
This strategy not only protects the identity privacy of the cloud service
providers, but also makes it impossible for the involved cloud service
providers to gain unnecessary information, such as the brands of the old and
new mobile phones belonging to the users. Thus, our methodology maintains
the privacy of the users by not revealing their personal choices.
Our scheme provides identity traceability to trace malicious cloud servers. If
the cloud service providers exhibit any errors or illegal operations during the
service process, users can trace back to the real identity of the corresponding
cloud server based on its anonymous identity.
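As a concrete illustration of the key-agreement building block, the following C# sketch shows two peer cloud servers deriving a shared session key with elliptic curve Diffie-Hellman using .NET's ECDiffieHellmanCng class. This is only a minimal sketch of the underlying elliptic-curve primitive under our own assumptions (class and variable names are ours); the full PACAKA protocol additionally binds anonymous identities into the exchanged messages, uses certificate-free keys, and relies on the data owner as the trusted third party, none of which is shown here.

using System;
using System.Security.Cryptography;

class KeyAgreementSketch
{
    static void Main()
    {
        // Each peer cloud server generates an elliptic-curve key pair.
        using (var serverA = new ECDiffieHellmanCng())
        using (var serverB = new ECDiffieHellmanCng())
        {
            serverA.KeyDerivationFunction = ECDiffieHellmanKeyDerivationFunction.Hash;
            serverA.HashAlgorithm = CngAlgorithm.Sha256;
            serverB.KeyDerivationFunction = ECDiffieHellmanKeyDerivationFunction.Hash;
            serverB.HashAlgorithm = CngAlgorithm.Sha256;

            // Each side derives the session key from the peer's public key;
            // only public keys cross the network.
            byte[] keyAtA = serverA.DeriveKeyMaterial(serverB.PublicKey);
            byte[] keyAtB = serverB.DeriveKeyMaterial(serverA.PublicKey);

            // Both sides now hold the same secret session key.
            Console.WriteLine(Convert.ToBase64String(keyAtA) ==
                              Convert.ToBase64String(keyAtB));   // True
        }
    }
}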
Existing system:
In order to realize data sharing in the cloud, a few schemes have used proxy re-
encryption techniques. For example, Liang and Cao proposed an attribute-based
proxy re-encryption scheme to enable users to achieve authorization in access
control environments. However, Liang and Au [10] pointed out that this scheme
does not provide adaptive security and CCA security. Sun et al. [12]
introduced a new proxy broadcast re-encryption (PBRE) scheme and proved its
security against chosen-ciphertext attack (CCA) in the random oracle model under
the decisional n-BDHE assumption. Ge and Liu [13] proposed a revocable
identity-based broadcast proxy re-encryption (RIB-BPRE) security notion to solve the
key revocation problem. In this RIB-BPRE scheme, the proxy can revoke a set of
delegatees, specified by the delegator, from the re-encryption key. They also pointed
out that existing identity-based broadcast proxy re-encryption (IB-BPRE) schemes do
not take advantage of cloud computing, which causes inconvenience to cloud users.
Liu et al. [14] proposed a secure multi-owner data sharing scheme for dynamic
groups in the cloud; based on group signatures and dynamic broadcast encryption
technology, any cloud user can share their data anonymously with others. Yuan et
al. [15] proposed a cloud user data integrity checking scheme based on polynomial
authentication tags and proxy tag update technology, which supports multi-user
modification and resists collusion attacks, among other features.
Abstract
With cloud computing, users can remotely store their data in the cloud
and use on-demand high-quality applications. Through data outsourcing, users are
relieved of the burden of data storage and maintenance. When users put their
data (of large size) on the cloud, protecting the data's integrity is challenging,
so enabling public audit for cloud data storage security is important: users can ask an
external audit party to check the integrity of their outsourced data. In
developing data security for data possession at untrusted cloud storage servers,
we are often limited by the resources at the cloud server as well as at the client.
Given that the data sizes are large and are stored at remote servers, accessing the
entire file can be expensive in I/O costs to the storage server. Transmitting the
file across the network to the client can also consume heavy bandwidth. Since
growth in storage capacity has far outpaced the growth in data
access as well as network bandwidth, accessing and transmitting the entire
archive even occasionally greatly limits the scalability of the network resources.
Furthermore, the I/O needed to establish the data proof interferes with the on-
demand bandwidth of the server used for normal storage and retrieval purposes.
The third party auditor (TPA) is a designated entity that manages the remote data in a
global manner.
CHAPTER 1
INTRODUCTION
Cloud Computing has been envisioned as the next-generation architecture of IT enterprise, due to
its long list of unprecedented advantages in IT history: on-demand self-service, ubiquitous
network access, location-independent resource pooling, rapid resource elasticity, usage-based
pricing and transference of risk. As a disruptive technology with profound implications, Cloud
Computing is transforming the very nature of how businesses use information technology.
One fundamental aspect of this paradigm shift is that data is being
centralized or outsourced into the Cloud. From users' perspective, including both individuals and
enterprises, storing data remotely into the cloud in a flexible on-demand manner brings appealing
benefits: relief of the burden of storage management, universal data access independent of
geographical location, and avoidance of capital expenditure on hardware, software, and
personnel maintenance, etc. While these advantages of using clouds are unarguable, the Cloud is
opaque: as separate administrative entities, the internal operation details of
cloud service providers (CSP) may not be known by cloud users, so data outsourcing also
means relinquishing users' ultimate control over the fate of their data.
As a result, the correctness of the data in the cloud is being put at risk due to
the following reasons. First of all, although the infrastructures under the cloud are
much more powerful and reliable than personal computing devices, they still
face a broad range of both internal and external threats to data integrity.
Examples of outages and security breaches of noteworthy cloud services appear
from time to time. Secondly, for their own benefit, there exist various
motivations for cloud service providers to behave unfaithfully towards the cloud
users regarding the status of their outsourced data. Examples include cloud service
providers, for monetary reasons, reclaiming storage by discarding data that has not
been or is rarely accessed, or even hiding data loss incidents so as to maintain a
reputation. In short, although outsourcing data into the cloud is economically
attractive for the cost and complexity of long-term large-scale data storage, it does
not offer any guarantee on data integrity and availability. This problem, if not
properly addressed, may impede the successful deployment of the cloud
architecture.
Considering the large size of the outsourced data and the user’s constrained resource
capability, the ability to audit the correctness of the data in a cloud environment can be
formidable and expensive for the cloud users. Therefore, to fully ensure the data security and
save the cloud users’ computation resources, it is of critical importance to enable public
auditability for cloud data storage so that the users may resort to a third party auditor (TPA), who
has expertise and capabilities that the users do not, to audit the outsourced data when needed.
Based on the audit result, TPA could release an audit report, which would not only help users to
evaluate the risk of their subscribed cloud data services, but also be beneficial for the cloud
service provider to improve their cloud-based service platform. In a word, enabling public risk
auditing protocols will play an important role in this nascent cloud.
CHAPTER 2
LITERATURE SURVEY
2.1 Using Third Party Auditor for Cloud Data Security: A Review, Ashish
Bhagat
Cloud data security is a major concern for the client while using the cloud services
provided by the service provider. There can be security issues and conflicts between the
client and the service provider; to resolve those issues, a third party can be used as an auditor. In
this paper, we have analysed various mechanisms to ensure reliable data storage using cloud
services. The paper mainly focuses on the way of providing computing resources as a service rather
than a product, with utilities provided to users over the internet. The cloud is a platform where
data owners remotely store their data. The main goal of the cloud computing concept is to
secure and protect the data which comes under the ownership of users. The security of the cloud
computing environment is an exclusive research area which requires further development from both
the academic and research communities. In the corporate world there are a huge number of clients
accessing and modifying data. In the cloud, applications and services move
to centralized huge data centers, and the services and management of this data may not be trustworthy;
in the cloud environment the computing resources are under the control of the service provider, and the
third party auditor ensures data integrity over the outsourced data. A third party auditor may not only
read but also change the data; therefore a mechanism should be provided to solve this
problem. We examine the problem of contradiction between the client and the CSP, and a new potential
security scheme used to solve it. The purpose of this paper is to bring greater clarity to the landscape
of cloud data security and its solutions at the user level using encryption algorithms, which
assure the data owner and client that their data remain intact.
2.2 A Review of Approaches to Achieve Data Storage Correctness in Cloud
Computing Using Trusted Third Party Auditor, Patel, H
“CLOUD COMPUTING” is one of the emerging research areas that is being used
effectively at the industry level. One of the major contributions of cloud computing is to make all
the resources available at one place in the form of a cluster and to perform resource allocation based on
requests performed by different users. We define the user request in the form of a requirement
query. Cloud computing enables devices to exchange data such as text files as well as
business information with the help of the internet. Technically, it is completely distinct from an
infrared connection. It uses new models such as IaaS, PaaS and SaaS for the transmission and storage
of large amounts of information, and has become a propulsion of fiber-optics accelerating towards
40G/100G. Its goal is to provide secure, quick, convenient data storage and network computing service
centered on the internet. In this paper we consider cloud computing: its introduction, its
evolution, virtualization, service delivery models, cloud deployment models, its working, and the future
development of cloud computing.
Cloud computing is one of today's most exciting technologies due to its ability to reduce
costs associated with computing while increasing flexibility and scalability for computer
processes. During the past few years, cloud computing has grown from being a promising
business idea to one of the fastest growing parts of the IT industry. IT organizations have
expressed concern about critical issues (such as security) that exist with the widespread
implementation of cloud computing. These types of concerns originate from the fact that data is
stored remotely from the customer's location; in fact, it can be stored at any location. Security, in
particular, is one of the most argued-about issues in the cloud computing field; several
enterprises look at cloud computing warily due to projected security risks. The risks of
compromised security and privacy may be lower overall, however, with cloud computing than
they would be if the data were to be stored on individual machines instead of in a so-called
"cloud" (the network of computers used for remote storage and maintenance). Comparison of the
benefits and risks of cloud computing with those of the status quo are necessary for a full
evaluation of the viability of cloud computing. Consequently, some issues arise that clients need
to consider as they contemplate moving to cloud computing for their businesses. In this paper I
summarize reliability, availability, and security issues for cloud computing (RAS issues), and
propose feasible and available solutions for some of them.
2.4 Third Party Auditing For Security Data Storage in cloud through digital
signature using RSA, Govinda V, and Gurunathaprasad
Cloud computing is a model where computing resources are rendered on a rental basis with
the use of clusters of commodity computers. In one of the services offered by the cloud, viz. Storage
as a Service, users outsource their data to the cloud without having direct possession or control over it.
As the cloud service provider is not completely trustworthy, this raises issues such as data security
and privacy. Achieving secure cloud data storage is one of the major security issues. The issue
can be addressed in two directions, viz. approaches which make use of a trusted third party auditor
(TTPA) and those which do not. In this paper, we review various recently proposed approaches
to ensure data storage correctness without using a TTPA.
2.5 Draft NIST Working Definition of Cloud Computing, P. Mell and T. Grance
Cloud computing is an emerging computing model which has evolved as a result of the
maturity of underlying prerequisite technologies. There are differences in perspective as to when
a set of underlying technologies becomes a “cloud” model. In order to categorize cloud
computing services, and to expect some level of consistent characteristics to be associated with
the services, cloud adopters need a consistent frame of reference. The NIST Cloud Computing
Reference Architecture and Taxonomy document defines a standard reference architecture and
taxonomy that provide the USG agencies with a common and consistent frame of reference for
comparing cloud services from different service providers when selecting and deploying cloud
services to support their mission requirements. At a certain level of abstraction, a cloud adopter
does not need to repeatedly interpret the technical representation of cloud services available from
different vendors. Rather the use of a common reference architecture by the cloud service
providers can be an efficient tool that ensures consistent categorization of the services offered.
2.6 Enabling Public Auditability And Data Dynamics For Storage Security in
Cloud Computing, Qian Wang and Cong Wang and Kui Ren
This approach proposes a privacy-preserving public auditing system for data storage
security in Cloud Computing, where the TPA can perform the storage auditing without demanding
the local copy of data. We utilize the homomorphic authenticator and random mask technique
to guarantee that the TPA would not learn any knowledge about the data content stored on the cloud
server during the efficient auditing process, which not only relieves the cloud user
of the tedious and possibly expensive auditing task, but also alleviates the users' fear of their
outsourced data being leaked. Considering that the TPA may concurrently handle multiple audit sessions from
different users for their outsourced data files, we further extend our privacy-preserving public
auditing protocol into a multi-user setting, where TPA can perform the multiple auditing tasks in
a batch manner, i.e., simultaneously. Extensive security and performance analysis shows that the
proposed schemes are provably secure and highly efficient. We believe all these advantages of
the proposed schemes will shed light on economies of scale for Cloud Computing.
3.3 DATA FLOW DIAGRAM
[Data flow diagram: a new user registers; an existing user logs in. If the user exists, login proceeds to key generation, after which the user can upload files, download files, or edit the profile.]
4.1 MODULES
4.1.2. Batch Auditing
4.1.5. Sentinels
a. Setup Phase
b. Audit Phase
4.1.6. Verification Phase
4.1.2 BATCH AUDITING
With the establishment of privacy-preserving public auditing in Cloud Computing, the TPA may
concurrently handle multiple auditing delegations upon different users' requests. Individually
auditing these tasks can be tedious and very inefficient for the TPA. Batch auditing not only allows
the TPA to perform the multiple auditing tasks simultaneously, but also greatly reduces the computation
cost on the TPA side.
Hence, supporting data dynamics for privacy-preserving public risk auditing is also of paramount
importance. Now we show how our main scheme can be adapted to build upon the existing work to
support data dynamics, including block level operations of modification, deletion and insertion. We
can adopt this technique in our design to achieve privacy-preserving public risk auditing with
support of data dynamics.
4.1.5. SENTINELS
In this scheme, unlike in the key-hash approach, only a single key can be used
irrespective of the size of the file or the number of files whose retrievability it wants to verify. Also,
the archive needs to access only a small portion of the file F, unlike the key-hash scheme, which
requires the archive to process the entire file F for each protocol verification. If the prover has
modified or deleted a substantial portion of F, then with high probability it will also have suppressed
a number of sentinels.
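To make the sentinel mechanism concrete, the following minimal C# sketch reflects our own assumptions rather than the exact scheme: the class and method names are ours, and a real construction derives the sentinel positions and values from a single secret key with a pseudo-random function and hides them inside an encrypted file so they are indistinguishable from data. The verifier plants sentinels before upload; during an audit it challenges a few random sentinel positions, so an archive that has discarded a substantial part of the file is caught with high probability.

using System;
using System.Collections.Generic;

class SentinelSketch
{
    // Plant 'count' sentinel bytes at secret random positions in the file.
    // Random is used here only for illustration; a real scheme uses a keyed PRF.
    static Dictionary<int, byte> PlantSentinels(byte[] file, int count, int seed)
    {
        var rnd = new Random(seed);
        var sentinels = new Dictionary<int, byte>();
        while (sentinels.Count < count)
        {
            int pos = rnd.Next(file.Length);
            if (sentinels.ContainsKey(pos)) continue;
            byte value = (byte)rnd.Next(256);
            file[pos] = value;        // embed the sentinel in the file
            sentinels[pos] = value;   // the verifier remembers it locally
        }
        return sentinels;
    }

    // Audit: challenge a few sentinel positions; if the archive has discarded
    // part of the file, some sentinels will be missing or wrong.
    static bool Audit(byte[] storedFile, Dictionary<int, byte> sentinels,
                      int challenges, int seed)
    {
        var rnd = new Random(seed);
        var positions = new List<int>(sentinels.Keys);
        for (int i = 0; i < challenges; i++)
        {
            int pos = positions[rnd.Next(positions.Count)];
            if (storedFile[pos] != sentinels[pos]) return false; // corruption detected
        }
        return true;
    }
}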
4.1.6. VERIFICATION PHASE
The verifier, before storing the file at the archive, preprocesses the file, appends some
metadata to it, and stores it at the archive. At the time of verification the verifier uses this metadata
to verify the integrity of the data. It is important to note that our proof of data integrity protocol
just checks the integrity of the data, i.e. whether the data has been illegally modified or deleted. It does not
prevent the archive from modifying the data.
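The chapter does not fix a particular metadata construction, so the sketch below shows one hedged possibility of our own: the verifier appends a keyed HMAC-SHA256 tag as the verification metadata before archiving, and at verification time recomputes the tag over the stored data with its secret key. Consistent with the text above, this detects illegal modification or deletion but does not prevent it.

using System;
using System.Linq;
using System.Security.Cryptography;

class MetadataVerificationSketch
{
    // Preprocess: append HMAC-SHA256(key, data) to the file before archiving.
    static byte[] AppendMetadata(byte[] data, byte[] key)
    {
        using (var hmac = new HMACSHA256(key))
        {
            byte[] tag = hmac.ComputeHash(data);
            return data.Concat(tag).ToArray();
        }
    }

    // Verify: split off the 32-byte tag and recompute it over the data part.
    static bool VerifyIntegrity(byte[] stored, byte[] key)
    {
        const int tagLen = 32;
        byte[] data = stored.Take(stored.Length - tagLen).ToArray();
        byte[] tag = stored.Skip(stored.Length - tagLen).ToArray();
        using (var hmac = new HMACSHA256(key))
        {
            return hmac.ComputeHash(data).SequenceEqual(tag);
        }
    }
}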
CHAPTER 5
REQUIREMENT SPECIFICATION
Hard Disk : 40 GB
Mouse : Logitech
RAM : 512 MB
Features OF .Net
“.NET” is also the collective name given to various software components built
upon the .NET platform. These will be both products (Visual Studio.NET and Windows.NET
Server, for instance) and services (like Passport, .NET My Services, and so on).
The CLR is described as the “execution engine” of .NET. It provides the environment within
which programs run. The most important features are
Conversion from a low-level assembler-style language, called Intermediate
Language (IL), into code native to the platform being executed on.
Memory management, notably including garbage collection.
Checking and enforcing security restrictions on the running code.
Loading and executing programs, with version control and other such
features.
The following features of the .NET framework are also worth description:
Managed Code
The code that targets .NET, and which contains certain extra Information - “metadata” -
to describe itself. Whilst both managed and unmanaged code can run in the runtime, only
managed code contains the information that allows the CLR to guarantee, for instance, safe
execution and interoperability.
Managed Data
With Managed Code comes Managed Data. The CLR provides memory allocation
and deallocation facilities, and garbage collection. Some .NET languages use Managed Data by
default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not.
Targeting CLR can, depending on the language you’re using, impose certain constraints on the
features available. As with managed and unmanaged code, one can have both managed and
unmanaged data in .NET applications - data that doesn’t get garbage collected but instead is
looked after by unmanaged code.
The CLR uses something called the Common Type System (CTS) to strictly enforce
type-safety. This ensures that all classes are compatible with each other, by describing types in a
common way. The CTS defines how types work within the runtime, which enables types in one
language to interoperate with types in another language, including cross-language exception
handling. As well as ensuring that types are only used in appropriate ways, the runtime also
ensures that code doesn't attempt to access memory that hasn't been allocated to it.
Common Language Specification
The CLR provides built-in support for language interoperability. To ensure that you can
develop managed code that can be fully used by developers using any programming language, a
set of language features and rules for using them called the Common Language Specification
(CLS) has been defined. Components that follow these rules and expose only CLS features are
considered CLS-compliant.
The set of classes is pretty comprehensive, providing collections, file, screen, and
network I/O, threading, and so on, as well as XML and database connectivity.
The class library is subdivided into a number of sets (or namespaces), each
providing distinct areas of functionality, with dependencies between the namespaces kept to a
minimum.
The multi-language capability of the .NET Framework and Visual Studio .NET
enables developers to use their existing programming skills to build all types of applications and
XML Web services. The .NET framework supports new versions of Microsoft’s old favorites
Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new
additions to the family.
Visual Basic .NET has been updated to include many new and improved language
features that make it a powerful object-oriented programming language. These features include
inheritance, interfaces, and overloading, among others. Visual Basic .NET now also supports
structured exception handling, custom attributes, and multithreading.
Visual Basic .NET is also CLS compliant, which means that any CLS-compliant
language can use the classes, objects, and components you create in Visual Basic .NET.
Managed Extensions for C++ and attributed programming are just some of the
enhancements made to the C++ language. Managed Extensions simplify the task of migrating
existing C++ applications to the new .NET Framework.
Active State has created Visual Perl and Visual Python, which enable .NET-aware
applications to be built in either Perl or Python. Both products can be integrated into the Visual
Studio .NET environment. Visual Perl includes support for Active State’s Perl Dev Kit.
[.NET Framework architecture diagram: languages such as FORTRAN, COBOL and Eiffel target the framework; ASP.NET (Web Forms), Windows Forms and XML Web Services sit above the Base Class Libraries, the Common Language Runtime, and the Operating System.]
C#.NET is also compliant with the CLS (Common Language Specification) and supports
structured exception handling. The CLS is a set of rules and constructs that are supported by the
CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET
Framework; it manages the execution of code and also makes the development process
easier by providing services. C#.NET is a CLS-compliant language: any objects, classes, or
components created in C#.NET can be used in any other CLS-compliant language. In
addition, we can use objects, classes, and components created in other CLS-compliant
languages in C#.NET. The use of the CLS ensures complete interoperability among applications,
regardless of the languages used to create them.
Constructors are used to initialize objects, whereas destructors are used to destroy them.
In other words, destructors are used to release the resources allocated to the object. In
C#.NET the Finalize method serves this purpose. The Finalize method is used to
complete the tasks that must be performed when an object is destroyed, and it is called
automatically when an object is destroyed. In addition, the Finalize method can be called
only from the class it belongs to or from derived classes.
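As a brief illustration (our own example, not taken from the project code), a C# destructor is emitted by the compiler as an override of Object.Finalize and runs automatically when the garbage collector destroys the object:

using System;

class ResourceHolder
{
    public ResourceHolder()
    {
        // constructor: acquire/initialize resources here
        Console.WriteLine("Resource acquired");
    }

    // destructor (finalizer): the compiler emits this as an override of
    // Object.Finalize; it is called automatically during garbage collection
    ~ResourceHolder()
    {
        Console.WriteLine("Resource released");
    }
}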
GARBAGE COLLECTION
Garbage Collection is another new feature in C#.NET. The .NET Framework monitors
allocated resources, such as objects and variables. In addition, the .NET Framework
automatically releases memory for reuse by destroying objects that are no longer in use. In
C#.NET, the garbage collector checks for the objects that are not currently in use by
applications. When the garbage collector comes across an object that is marked for garbage
collection, it releases the memory occupied by the object.
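A small illustrative snippet (our own example): once the last reference to an object is dropped, the object becomes eligible for collection. GC.Collect() is called here only to make the effect observable; production code normally leaves collection to the runtime.

using System;

class GcDemo
{
    static void Main()
    {
        byte[] buffer = new byte[10000000];            // large allocation
        Console.WriteLine(GC.GetTotalMemory(false));   // memory currently in use

        buffer = null;                                 // drop the only reference
        GC.Collect();                                  // force a collection (demo only)
        GC.WaitForPendingFinalizers();

        Console.WriteLine(GC.GetTotalMemory(true));    // memory after reclamation
    }
}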
OVERLOADING
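Overloading lets several methods share one name while differing in their parameter lists, and the compiler selects the appropriate method from the argument types. A minimal example of our own:

using System;

class Calculator
{
    // Overloaded Add methods: same name, different parameter lists.
    public int Add(int a, int b) { return a + b; }
    public double Add(double a, double b) { return a + b; }
    public int Add(int a, int b, int c) { return a + b + c; }
}

class Program
{
    static void Main()
    {
        var calc = new Calculator();
        Console.WriteLine(calc.Add(2, 3));       // calls Add(int, int)
        Console.WriteLine(calc.Add(2.5, 3.5));   // calls Add(double, double)
        Console.WriteLine(calc.Add(1, 2, 3));    // calls Add(int, int, int)
    }
}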
MULTITHREADING:
There are different types of applications, such as Windows-based applications and Web-based
applications, and multithreading allows an application to perform several tasks at a time.
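A minimal sketch (our own example) using System.Threading, in which a worker thread runs concurrently with the main thread:

using System;
using System.Threading;

class ThreadDemo
{
    static void Worker()
    {
        for (int i = 0; i < 3; i++)
        {
            Console.WriteLine("Worker step " + i);
            Thread.Sleep(100);   // simulate some work
        }
    }

    static void Main()
    {
        Thread t = new Thread(Worker);  // create a second thread
        t.Start();                      // Worker now runs concurrently with Main
        Console.WriteLine("Main continues while Worker runs");
        t.Join();                       // wait for the worker to finish
    }
}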
5.3 SQL SERVER
The OLAP Services feature available in SQL Server version 7.0 is now called
SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term
Analysis Services. Analysis Services also includes a new data mining component. The
Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server
2000 Meta Data Services. References to the component now use the term Meta Data Services.
The term repository is used only in reference to the repository engine within Meta Data Services.
The main database objects are:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
TABLE:
VIEWS OF TABLE:
1. Design View
2. Datasheet View
DESIGN VIEW
To build or modify the structure of a table we work in the table design view. We can
specify what kind of data will be held.
DATASHEET VIEW
To add, edit or analyse the data itself, we work in the table's datasheet view mode.
QUERY:
A query is a question that has to be asked of the data. Access gathers data that answers the
question from one or more tables. The data that makes up the answer is either a dynaset (if you edit
it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the
dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it,
such as deleting or updating.
CHAPTER 6
IMPLEMENTATION AND RESULTS
6.1 CODING
6.1.1. Login
using System;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Data.SqlClient;

// Note: con2, adp, ds, a and b are page-level members initialized elsewhere in
// the original source, and the event-handler names were lost in extraction and
// are reconstructed here.
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        tex_logusr.Text = "";
        tex_logpriky.Text = "";
    }
}

protected void btn_login_Click(object sender, EventArgs e)
{
    con2.Open();
    adp.Fill(ds);   // load the register row matching the entered login id
    if (ds.Tables[0].Rows.Count > 0)
    {
        a = ds.Tables[0].Rows[0]["usrid"].ToString();
        b = ds.Tables[0].Rows[0]["nam"].ToString();
        Session["us"] = tex_logusr.Text;
        Session["pr"] = tex_logpriky.Text;
        Session["c"] = a;
        Session["d"] = b;
        if (ds.Tables[0].Rows[0]["loginid"].ToString() == tex_logusr.Text)
        {
            // both the login id and the privacy key must match
            if (ds.Tables[0].Rows[0]["privacykey"].ToString() == tex_logpriky.Text)
            {
                Response.Redirect("uploadfiles.aspx");
            }
            else
            {
                MsgBox.Show("Invalid user");
            }
        }
        else
        {
            // the original fragment also redirected to ownerpage.aspx in a
            // branch whose exact condition was lost; shown here as the
            // non-matching case
            Response.Redirect("ownerpage.aspx");
        }
    }
    else
    {
        MsgBox.Show("Invalid user");
    }
    con2.Close();
}
6.1.2. Register
using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.IO;
using System.Text;
using System.Web.Mail;
int m, len, min, max;
string key;

// Note: cs (a helper class instance), randomnumber and sb are page-level
// members initialized elsewhere in the original source; the handler names
// are reconstructed.
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        txtRandom.Visible = false;
        lb_usrid.Text = Convert.ToString(cs.userid());   // next user id
        Tex_key.Visible = false;
        Tex_paskey.Visible = false;
    }
}

protected void btn_register_Click(object sender, EventArgs e)
{
    if (Page.IsValid)
    {
        try
        {
            m = cs.userid();
            txtRandom.Text = Convert.ToString(randomnumber);   // generated random key
            Session["ran"] = txtRandom.Text;
            Session["useid"] = lb_usrid.Text;
            Session["nam"] = Tex_nam.Text;
            Session["pwd"] = Tex_paswrd.Text;
            Literal2.Visible = false;
            Literal2.Text = sb.ToString();
            Session["ke"] = Tex_key.Text;
            Session["pasky"] = Tex_paskey.Text;
            key = txtRandom.Text;
            Session["mail"] = Tex_email.Text;
            // endResponse: false avoids the ThreadAbortException that the
            // default Redirect would raise inside this try/catch
            Response.Redirect("registrdetails.aspx", false);
        }
        catch (Exception ex)
        {
            MsgBox.Show(ex.Message);
        }
        finally
        {
            // clear the form fields
            Tex_nam.Text = "";
            Tex_usrnam.Text = "";
            Tex_paswrd.Text = "";
            Tex_email.Text = "";
            Tex_addrs.Text = "";
            Tex_city.Text = "";
            Tex_dob.Text = "";
            Tex_mblno.Text = "";
        }
    }
}
6.1.3. Register Details
using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.IO;
using System.Text;
using System.Web.Mail;
// Note: le1, le2, sb1 and the GmailSender helper are members defined elsewhere
// in the original source; this page shows and mails the generated login details.
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        TextBox1.Visible = false;
        TextBox2.Visible = false;
        TextBox3.Visible = false;
        Panel1.Visible = false;
        Label6.Text = (string)Session["useid"];
        le1 = (string)Session["nam"];
        Label1.Text = le1.Length.ToString();
        le2 = (string)Session["pwd"];
        Label5.Text = le2.Length.ToString();
        Label7.Text = (string)Session["ran"];
        TextBox1.Text = (string)Session["mail"];
        TextBox2.Text = (string)Session["ke"];
        TextBox3.Text = (string)Session["pasky"];
        Literal1.Visible = false;
        Literal1.Text = sb1.ToString();
    }
}

protected void btn_send_Click(object sender, EventArgs e)
{
    // mail the generated login details to the user's registered address
    if (GmailSender.SendMail("[email protected]", "5f08031987", TextBox1.Text,
        "Hi your login details are below", Literal1.Text))
    {
        lblmsg.Visible = true;   // confirm that the mail was sent
    }
    // also show the generated key and privacy key on the page
    Panel1.Visible = true;
    Label10.Text = (string)Session["ke"];
    Label11.Text = (string)Session["pasky"];
}
6.1.4. Class File
using System;
using System.Data;
using System.Configuration;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Data.SqlClient;
public class class1
{
    // Note: the connection objects (con, con1), the commands and adapters
    // (cmd, cmd1..cmd9, ad1..ad7) and the id/result fields used below are
    // class-level members initialized elsewhere in the original source.
    public class1()
    {
    }

    public void userregister(int uid, string nm, string usrnam, string paswrd, string Email,
        string addrs, string city, string dob, string mobile, string logid, string prikey)
    {
        try
        {
            con.Open();
            cmd.Connection = con;
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "registerform";   // stored procedure that inserts the user
            cmd.Parameters.AddWithValue("@usrid", uid);
            cmd.Parameters.AddWithValue("@nam", nm);
            cmd.Parameters.AddWithValue("@username", usrnam);
            cmd.Parameters.AddWithValue("@password", paswrd);
            cmd.Parameters.AddWithValue("@email", Email);
            cmd.Parameters.AddWithValue("@address", addrs);
            cmd.Parameters.AddWithValue("@city", city);
            cmd.Parameters.AddWithValue("@dob", dob);
            cmd.Parameters.AddWithValue("@mobileno", mobile);
            cmd.Parameters.AddWithValue("@loginid", logid);
            cmd.Parameters.AddWithValue("@privacykey", prikey);
            cmd.ExecuteNonQuery();
        }
        catch (Exception ex)
        {
            MsgBox.Show(ex.Message);
        }
        finally
        {
            con.Close();
        }
    }
    public void uploadfile(byte[] fibytes, string fityp, string dattim, string uid,
        string nam, string fiid, string fnam)
    {
        try
        {
            con.Open();
            // cmd2 (insert into the user's own file table) is prepared elsewhere;
            // cmd6 inserts the same record into the common file table
            cmd6 = new SqlCommand("insert into commnfile values('" + uid + "','" + nam + "','" +
                fiid + "',@uploadfiles,'" + fnam + "','" + fityp + "','" + dattim + "')", con);
            cmd2.Parameters.AddWithValue("@uploadfiles", fibytes);
            cmd6.Parameters.AddWithValue("@uploadfiles", fibytes);
            cmd2.ExecuteNonQuery();
            cmd6.ExecuteNonQuery();
        }
        catch (Exception ex)
        {
            MsgBox.Show(ex.Message);
        }
        finally
        {
            con.Close();
        }
    }
    public void uploadimg(byte[] fibytes, string fityp1, string dattim, string uid,
        string nam, string fiid, string fnam)
    {
        try
        {
            con1.Open();
            con.Open();
            // cmd4 (insert into the user's own image table) is prepared elsewhere;
            // cmd7 inserts the same record into the common file table
            cmd7 = new SqlCommand("insert into commnfile values('" + uid + "','" + nam + "','" +
                fiid + "',@uploadimg,'" + fnam + "','" + fityp1 + "','" + dattim + "')", con);
            cmd4.Parameters.AddWithValue("@uploadimg", fibytes);
            cmd7.Parameters.AddWithValue("@uploadimg", fibytes);
            cmd4.ExecuteNonQuery();
            cmd7.ExecuteNonQuery();
        }
        catch (Exception ex)
        {
            MsgBox.Show(ex.Message);
        }
        finally
        {
            con1.Close();
            con.Close();
        }
    }
    // The next three methods compute sequential ids; their original names were
    // lost in extraction, so the names below are reconstructed placeholders.
    public int nextFileId2()
    {
        con1.Open();
        id2 = Convert.ToString(cmd5.ExecuteScalar());   // current maximum id
        if (id2 == "") { fid2 = 1; } else { fid2 = Convert.ToInt16(id2); fid2 = fid2 + 1; }
        con1.Close();
        return fid2;
    }

    public int nextFileId()
    {
        con.Open();
        id = Convert.ToString(cmd1.ExecuteScalar());
        if (id == "") { fid = 1; } else { fid = Convert.ToInt16(id); fid = fid + 1; }
        con.Close();
        return fid;
    }

    public int nextFileId1()
    {
        con.Open();
        id1 = Convert.ToString(cmd3.ExecuteScalar());
        if (id1 == "") { fid1 = 1; } else { fid1 = Convert.ToInt16(id1); fid1 = fid1 + 1; }
        con.Close();
        return fid1;
    }

    // Builds the login id from name parts (signature reconstructed; the
    // surrounding code was lost in extraction).
    public string makeLoginId(string n1, string n2, string n3)
    {
        len1 = Convert.ToString(n2.Length);
        logid = Convert.ToString(n2 + n1 + len1 + n3);
        return logid;
    }

    // Builds the privacy key (most of this method's body was lost in
    // extraction; only the surviving lines are shown).
    public string makePrivacyKey(string s3)
    {
        len2 = Convert.ToString(s3.Length);
        return prky;
    }
    // The remaining helpers lost their signatures in extraction; the names and
    // parameters below are reconstructed placeholders around the surviving bodies.
    public void execUpdate()
    {
        con.Open();
        cmd7.ExecuteNonQuery();
        con.Close();
    }

    public DataSet selectAll()
    {
        con.Open();
        con1.Open();
        ad1.Fill(set);
        con.Close();
        con1.Close();
        return set;
    }

    public DataTable selectdd()
    {
        con.Open();
        cmd8.Fill(dt1);
        con.Close();
        return dt1;
    }

    public DataTable selectdd1()
    {
        con.Open();
        ad3.Fill(dt);
        con.Close();
        return dt;
    }

    public DataSet selectUserName(string unam)
    {
        con.Open();
        SqlDataAdapter ad4 = new SqlDataAdapter("select username from register where nam='" +
            unam + "'", con);
        ad4.Fill(set1);
        con.Close();
        return set1;
    }

    public DataSet selectSet2()
    {
        con.Open();
        ad5.Fill(set2);
        con.Close();
        return set2;
    }

    public DataSet selectSet1()
    {
        con.Open();
        ad6.Fill(set1);
        con.Close();
        return set1;
    }
    public DataSet selectdd2()
    {
        con.Open();
        cmd9.Fill(dt2);
        con.Close();
        return dt2;
    }

    // name reconstructed; the original signature was lost in extraction
    public DataSet selectdd3()
    {
        con.Open();
        ad7.Fill(dt3);
        con.Close();
        return dt3;
    }
}
6.2 SCREEN SHOTS
CHAPTER 7
FUTURE WORK
We can further extend our privacy-preserving public auditing protocol into a multi-user
setting, where TPA can perform the multiple auditing tasks in a batch manner, i.e.,
simultaneously. Extensive security and performance analysis shows that the proposed schemes
are provably secure and highly efficient. We believe all these advantages of the proposed
schemes will shed light on economies of scale for Cloud Computing.
CHAPTER 8
REFERENCES
[2] R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, "Network information flow," IEEE Transactions on Information Theory, 46(4):1204–1216, Jul. 2000.