Clustering & Cloud Computing

TGPCET/CSE

Unit-4
Q1) What are different cloud security challenges? (7M)(S-17)
Cloud computing security challenges fall into three broad categories:
Data Protection: Securing your data both at rest and in transit
User Authentication: Limiting access to data and monitoring who accesses the data
Disaster and Data Breach: Contingency Planning
Data Protection: Implementing a cloud computing strategy means placing critical data in the hands of a third party, so
ensuring the data remains secure both at rest (data residing on storage media) as well as when in transit is of paramount
importance. Data needs to be encrypted at all times, with clearly defined roles when it comes to who will be managing the
encryption keys. In most cases, the only way to truly ensure confidentiality of encrypted data that resides on a cloud
provider's storage servers is for the client to own and manage the data encryption keys.
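As a concrete illustration of client-held keys, here is a minimal C# sketch (using the standard System.Security.Cryptography API; the key-handling scheme is an assumption for illustration, not part of the original text) in which data is encrypted before it ever reaches the provider's storage:

using System;
using System.IO;
using System.Security.Cryptography;

// Minimal sketch: the client encrypts with a key it owns, so the cloud
// provider only ever stores ciphertext. Secure key storage (e.g., an
// on-premises key vault) is assumed and not shown.
public static class ClientSideEncryption
{
    public static byte[] Encrypt(byte[] plaintext, byte[] key)
    {
        using (Aes aes = Aes.Create())
        {
            aes.Key = key;            // client-managed key, never sent to the provider
            aes.GenerateIV();         // fresh IV for every message
            using (var ms = new MemoryStream())
            {
                ms.Write(aes.IV, 0, aes.IV.Length);   // prepend IV so the client can decrypt later
                using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                {
                    cs.Write(plaintext, 0, plaintext.Length);
                }
                return ms.ToArray();  // this ciphertext is what gets uploaded
            }
        }
    }
}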
User Authentication: Data resting in the cloud needs to be accessible only by those authorized to do so, making it critical to
both restrict and monitor who will be accessing the company's data through the cloud. In order to ensure the integrity of user
authentication, companies need to be able to view data access logs and audit trails to verify that only authorized users are
accessing the data. These access logs and audit trails additionally need to be secured and maintained for as long as the
company needs or legal purposes require. As with all cloud computing security challenges, it's the responsibility of the
customer to ensure that the cloud provider has taken all necessary security measures to protect the
customer's data and the access to that data.

Contingency Planning

With the cloud serving as a single centralized repository for a company's mission-critical data, the risks of
having that data compromised due to a data breach or temporarily made unavailable due to a natural
disaster are real concerns. Much of the liability for the disruption of data in a cloud ultimately rests with the
company whose mission-critical operations depend on that data, although liability can and should be
negotiated in a contract with the services provider prior to commitment. A comprehensive security
assessment from a neutral third-party is strongly recommended as well.
Companies need to know how their data is being secured and what measures the service provider will be
taking to ensure the integrity and availability of that data should the unexpected occur. Additionally,
companies should also have contingency plans in place in the event their cloud provider fails or goes
bankrupt. Can the data be easily retrieved and migrated to a new service provider or to a non-cloud strategy
if this happens? And what happens to the data and the ability to access that data if the provider gets
acquired by another company?
========================================================================

Q.2) Explain virtual machine security in cloud computing.(7M)(S-17) Ans:


2011 ended with the popularization of an idea: bringing VMs (virtual machines) onto the cloud. Recent years have seen great advancements in both cloud computing and virtualization. On one hand there is the ability to pool various resources to provide software-as-a-service, infrastructure-as-a-service and platform-as-a-service. At its most basic, this is what describes cloud computing. On the other hand, we have virtual machines that provide agility, flexibility, and scalability to the cloud resources by allowing the vendors to copy, move, and manipulate their VMs at will. The term virtual machine essentially describes sharing the resources of one single physical computer into various computers within itself. VMware and VirtualBox are very commonly used virtual systems on desktops. Cloud computing effectively stands for many computers pretending to be one computing environment. Obviously, cloud computing would have many virtualized systems to maximize resources.
Keeping this information in mind, we can now look into the security issues that arise within a cloud-
computing scenario. As more and more organizations follow the “Into the Cloud” concept, malicious
hackers keep finding ways to get their hands on valuable information by manipulating safeguards and
breaching the security layers (if any) of cloud environments. One issue is that the cloud-computing scenario
is not as transparent as it claims to be. The service user has no clue about how his information is processed
and stored. In addition, the service user cannot directly control the flow of data/information storage and
processing. The service provider usually is not aware of the details of the service running on his or her
environment. Thus, possible attacks on the cloud-computing environment can be classified into:
Resource attacks:

These kinds of attacks include manipulating the available resources into mounting a large-scale botnet
attack. These kinds of attacks target either cloud providers or service providers.
Data attacks: These kinds of attacks include unauthorized modification of sensitive data at nodes, or
performing configuration changes to enable a sniffing attack via a specific device etc. These attacks are
focused on cloud providers, service providers, and also on service users.
Denial of Service attacks: The creation of a new virtual machine is not a difficult task, and thus, creating
rogue VMs and allocating huge spaces for them can lead to a Denial of Service attack for service providers
when they opt to create a new VM on the cloud. This kind of attack is generally called virtual machine
sprawling.
Backdoor: Another threat on a virtual environment empowered by cloud computing is the use of backdoor
VMs that leak sensitive information and can destroy data privacy.
Having virtual machines would indirectly allow anyone with access to the host disk files of the VM to take a snapshot or illegal copy of the whole system. This can lead to corporate espionage and piracy of legitimate products.
With so many obvious security issues (and a lot more can be added to the list), we need to enumerate some
steps that can be used to secure virtualization in cloud computing.
The most neglected aspect of any organization is its physical security. An advanced social engineer can take
advantage of weak physical-security policies an organization has put in place. Thus, it’s important to have a
consistent, context-aware security policy when it comes to controlling access to a data center. Traffic
between the virtual machines needs to be monitored closely by using at least a few standard monitoring
tools.
After thoroughly enhancing physical security, it's time to check security on the inside. A well-configured gateway should be able to enforce security when any virtual machine is reconfigured, migrated, or added. This will help prevent VM sprawl and rogue VMs. Another approach that might help enhance internal security is the use of third-party validation checks, performed in accordance with security standards.
Checking virtual systems for integrity increases the capabilities for monitoring and securing environments. One of the primary focuses of this integrity check should be the seamless integration of existing virtual systems like VMware and VirtualBox. This would lead to file integrity checking and increased protection against data losses within VMs. Involving agentless anti-malware intrusion detection and prevention in one single virtual appliance (unlike isolated point security solutions) would contribute greatly towards VM integrity checks. This will greatly reduce operational overhead while adding zero footprint.
A server on a cloud may be used to deploy web applications, and in this scenario an OWASP top-ten vulnerability check will have to be performed. Data on a cloud should be encrypted with suitable encryption and data-protection algorithms. Using these algorithms, we can check the integrity of the user profile or system profile trying to access disk files on the VMs. Profiles lacking in security protections can be considered infected by malware. Working with a system ratio of one user to one machine would also greatly reduce risks in virtual computing platforms. To enhance the security aspect even more, after a particular environment is used, it's best to sanitize the system (reload) and destroy all the residual data. Using incoming IP addresses to determine scope on Windows-based machines, and using SSH configuration settings on Linux machines, will help maintain a secure one-to-one connection.
========================================================================

Q3) Explain Identity Access Management? (7M)(S-17) Ans:


Identity and access management (IAM) is a framework for business processes that facilitates the
management of electronic or digital identities. The framework includes the organizational policies for
managing digital identity as well as the technologies needed to support identity management.
With IAM technologies, IT managers can control user access to critical information within their
organizations. Identity and access management products offer role-based access control, which lets system
administrators regulate access to systems or networks based on the roles of individual users within the
enterprise.
In this context, access is the ability of an individual user to perform a specific task, such as view, create or
modify a file. Roles are defined according to job competency, authority and responsibility within the
enterprise.
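The role-based model described above reduces to a small lookup: permissions attach to roles, and an access decision is a role-membership check. The following C# sketch is illustrative only; the role names and permission set are assumptions, not taken from any particular IAM product.

using System;
using System.Collections.Generic;

// Minimal RBAC sketch: permissions are granted to roles, never directly
// to users, so an access decision is a dictionary lookup.
enum Permission { ViewFile, CreateFile, ModifyFile }

static class AccessManager
{
    // Hypothetical role-to-permission mapping, for illustration only.
    static readonly Dictionary<string, HashSet<Permission>> RolePermissions =
        new Dictionary<string, HashSet<Permission>>
        {
            { "Auditor",  new HashSet<Permission> { Permission.ViewFile } },
            { "Engineer", new HashSet<Permission> { Permission.ViewFile, Permission.CreateFile, Permission.ModifyFile } }
        };

    public static bool IsAllowed(string role, Permission p)
    {
        HashSet<Permission> perms;
        return RolePermissions.TryGetValue(role, out perms) && perms.Contains(p);
    }
}

For example, AccessManager.IsAllowed("Auditor", Permission.ModifyFile) returns false, while the same check for "Engineer" returns true.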
Systems used for identity and access management include single sign-on systems, multifactor
authentication and access management. These technologies also provide the ability to securely store
identity and profile data as well as data governance functions to ensure that only data that is necessary and
relevant is shared.
These products can be deployed on premises, provided by a third party vendor via a cloud-based
subscription model or deployed in a hybrid cloud.
========================================================================

Q.4) What do you mean by cloud contract? Explain in details(7M)(S-17) Ans:


"Cloud computing” means accessing computer capacity and programming facilities online or "in the
cloud". Customers are spared the expense of purchasing, installing and maintaining hardware and software
locally.
Customers can easily expand or reduce IT capacity according to their needs. This essentially transforms
computing into an on-demand utility. An added boon is that data can be accessed and processed from
anywhere via the Internet.
Unfortunately, consumers and companies are often reluctant to take advantage of cloud computing services
either because contracts are unclear or are unbalanced in favour of service providers. Existing regulations
and national contract laws may not always be adapted to cloud-based services. Protection of personal data
in a cloud environment also needs to be addressed. Adapting contract law is therefore an important part of
the Commission’s cloud computing strategy.
Safe and fair contracts for cloud computing

The Commission is working towards cloud computing contracts that contain safe and fair terms and
conditions for all parties. On 18 June 2013, the Commission set up a group of experts to define safe and fair
conditions and identify best practices for cloud computing contracts. The Commission has
also launched a comparative study on cloud computing contracts to supplement the work of the Expert
Group.
========================================================================

Q5) Justify cloud security challenges in detail.(7M)(W-17)(W-18)

Cloud computing security challenges fall into three broad categories: data protection (securing data both at rest and in transit), user authentication (limiting access to data and monitoring who accesses it), and contingency planning for disasters and data breaches. The detailed justification of each category is identical to the answer to Q1 above.

========================================================================

Q.6) Explain the contracting models in cloud.(6M)(W-17)(W-16)

Organizations can only reap the advantages of Cloud computing once the contract for such a
service has been agreed and is water-tight. This article provides a guide for what contract
managers need to consider when negotiating a deal for their organizations’ ‘Cloud’.

Today it is possible to contract for ‘on-demand’ computing which is internet-based whereby shared
resources and information are provided, i.e., the phenomenon known as ‘Cloud’ computing. In
addition to economies of scale enjoyed by multiple users, one of the big advantages is that work
can be allocated depending upon the time zone users are operating in. Thus, entities operating
during European business hours would not be operating at the same time that users would be
operating in North America. The result is a reduced overall cost of the resources including reduced
operating and maintenance expenses. In addition, multiple users can obtain access to a single
server without the need to purchase licenses for different applications. Moreover, it has the
advantages of a ‘pay-as-you-go’ model as well as a host of other potential advantages which have
been referred to as ‘agility’, ‘device and location independence’, ‘scalability and elasticity’, etc.
Nevertheless, questions remain and Cloud computing presents potential commercial and
contractual pitfalls for the unwary.

It is clear that despite some unanswered questions, computing resources delivered over the internet
are here today and here to stay. Analogous to a utility, the advantages of the ‘Cloud’ allow the cost
of the infrastructure, platform and service delivery to be shared amongst many users. But does this
in any way change basic contracting principles and time honored sound contract management
practices? The short answer is “No”. However, this does not detract from the fact that contracting
for and managing contracts for Cloud computing services can be a challenge. The complexity
associated with such contracts can be reduced by addressing some early threshold questions.

Definitions are of vital importance in any contract, including ones for Cloud computing.

A key concern is data security. Thus, it is important to define what is meant by ‘data’ and
distinguish between ‘personal data’ and ‘other data’. A distinction can be made between data that
is identified to or provided by the customer and information that is derived from the use of that
data, e.g., metadata. Careful attention should be paid to how the contract defines ‘consent’ to use
derived data. Generally, any such consent should be explicit and based upon a meaningful understanding of how the derived data is going to be used.

Security standards might warrant different levels of security depending upon the nature of the data.
Likewise, what is meant by ‘security’? The tendency is to define security only in technical terms,
but security should be defined to include a broad range of data protection obligations. There are, of
course, many other potential key terms that warrant careful
definition in contracts for Cloud computing services. However, this is nothing new to the field of
good contracting and sound contract management practices.

Naturally, if personal or confidential information is going to be entrusted to a third party, the


recipient must comply with appropriate contractual controls and statutory requirements regarding
privacy and confidentiality. This is why taking the time to define things carefully is so important.
Simply asserting that those security considerations will be ‘reasonable’ or comply with ‘industry
standards’ falls short of what is necessary. Abstract promises should be rejected in favor of
specific protocols and clear audit requirements as well as the obligation to comply with specific
legal requirements. This is true for all transactions.

‘Notice’ provisions are common in contracts. It follows that if you are contracting for computing
resources delivered over the internet you’d want clearly defined notice provisions that would
require notice of any security breaches as well as any discovery requests made in the context of
litigation. ‘Storage’ is also a key concept and term to be addressed and warrants special attention.
From a risk management standpoint you’d also want to understand the physical location of the
equipment and data storage. Perhaps geographical distance and diversity is both a challenge and an
opportunity in terms of risk management.

Defining success is always a challenge in any contract. The enemy of all good contracts is
ambiguity. When it comes to ‘availability’, users should avoid notions that the service provider
will use its ‘best efforts’ and exercise ‘reasonable care’. Clear availability targets are preferred
since there must be a way to measure availability. Usually, availability measured in terms of an expressed percentage ends up being difficult if not impossible to understand, let alone enforce; for example, "99.9% availability" still permits roughly 43 minutes of downtime per month, which few readers can picture at a glance. Expressing availability in terms of units of time (e.g., a specified number of minutes per day of down time) is preferable.

Early on, it makes sense to focus on the deployment model that works best for your organization. These fall
into three basic categories: Private Cloud, Public Cloud and Hybrid Cloud. As the name suggests, a Private
Cloud is an infrastructure operated for or by a single entity whether managed by that entity or some other
third party. A Private Cloud can, of course, be hosted internally or externally. A Public Cloud is when
services are provided over a network that is open to the public and may be free. Examples of Public Cloud
providers include Google, Amazon and Microsoft, whose services are generally available via the internet. A Hybrid Cloud (sometimes referred to as a 'Community Cloud') is composed of both private and public clouds whose operators have decided to enter into some shared arrangement.
Consider the type of contractual arrangement. Is the form of contract essentially a ‘service’, ‘license’,
‘lease’ or some other form of contractual arrangement? Service agreements, licenses and leases have
different structures. Perhaps the contract for Cloud computing services contains aspects of all these
different types of agreements, including ones for IT infrastructure. Best to consider this early. Yet, such
considerations are common to all contracting efforts.

A threshold question is whether the data being stored or processed is being sent out of the country
and any associated special legal compliance issues. However, essentially all the normal contractual
concerns apply to contracts involving the ‘Cloud’. These include termination or suspension as well as the
return of data in the case of threats to security or data integrity. Likewise, data ownership, data comingling,
access to data, service provider viability, integration risks, publicity, service levels, disaster recovery and
changes in control or ownership are all important in all contracts involving personal, sensitive or
proprietary information, including contracts involving Cloud computing services.

How such services are taxed at the local or even international levels also presents some interesting questions, the answers to which may vary by jurisdiction and over time. However, the tax implications of cross-border transactions by multinationals are hardly a new topic.

Although the issues are many, they are closely related to what any good negotiator or contract manager
would consider early on. Developing a checklist can often be a useful exercise, especially when dealing
with a new topic like Cloud computing.

Q.7) Write short notes on any three.(13M)(W-17)(W-16)(W-18)


a) Host Level Security
b) Network Level Security
c) Application Level Security
d) Virtual Machine Security

a) Host Security

Host security describes how your server is set up for the following tasks:

o Preventing attacks.
o Minimizing the impact of a successful attack on the overall system.
o Responding to attacks when they occur.

It always helps to have software with no security holes. Good luck with that! In the real world, the best
approach for preventing attacks is to assume your software has security holes. As I noted earlier in this
chapter, each service you run on a host presents a distinct attack vector into the host. The more attack
vectors, the more likely an attacker will find one with a security exploit. You must therefore minimize the
different kinds of software running on a server.
Given the assumption that your services are vulnerable, your most significant tool in preventing attackers
from exploiting a vulnerability once it becomes known is the rapid rollout of security patches. Here’s where
the dynamic nature of the cloud really alters what you can do from a security perspective. In a traditional
data center, rolling out security patches across an entire infrastructure is time-consuming and risky. In the
cloud, rolling out a patch across the infrastructure takes three simple steps:

1. Patch your AMI with the new security fixes.


2. Test the results.
3. Relaunch your virtual servers.

b) Network Level Security


All data on the network needs to be secured. Strong network traffic encryption techniques such as Secure Socket Layer (SSL) and Transport Layer Security (TLS) can be used to prevent leakage of sensitive information. Several key security elements such as data security, data integrity, authentication and authorization, data confidentiality, web application security, virtualization vulnerability, availability, backup, and data breaches should be carefully considered to keep the cloud up and running continuously.
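To make the SSL/TLS point concrete, the C# sketch below wraps a plain TCP connection in a TLS channel using the framework's SslStream class; the host name and port are placeholders, not taken from the original text.

using System;
using System.Net.Security;
using System.Net.Sockets;

// Minimal sketch: negotiate TLS over a TCP connection so that data in
// transit is encrypted. "cloud.example.com" is a hypothetical endpoint.
class SecureChannelDemo
{
    static void Main()
    {
        using (var client = new TcpClient("cloud.example.com", 443))
        using (var ssl = new SslStream(client.GetStream()))
        {
            // Performs the TLS handshake and validates the server certificate chain.
            ssl.AuthenticateAsClient("cloud.example.com");
            // From here, ssl behaves like any stream, but every byte is
            // encrypted on the wire, preventing leakage of sensitive data.
        }
    }
}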

c) Application Level Security

Studies indicate that most websites are secured at the network level while there may be security
loopholes at the application level which may allow information access to unauthorized users.
Software and hardware resources can be used to provide security to applications. In this way,
attackers will not be able to get control over these applications and change them. XSS attacks, Cookie Poisoning, Hidden Field Manipulation, SQL Injection attacks, DoS attacks, and Google Hacking are some examples of threats to application level security, which result from the unauthorized usage of the applications.
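Of the threats listed above, SQL injection in particular has a standard code-level defense: never concatenate user input into SQL text. The hedged C# sketch below reuses the Ulogin table defined later in these notes; the connection string parameter is an assumption.

using System.Data.SqlClient;

// Minimal sketch: a parameterized query keeps user input out of the SQL
// text, which defeats classic SQL injection.
class SafeQueryDemo
{
    static bool UserExists(string connectionString, string userId)
    {
        using (var con = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Ulogin WHERE UserId = @uid", con))
        {
            cmd.Parameters.AddWithValue("@uid", userId);  // input is bound, never concatenated
            con.Open();
            return (int)cmd.ExecuteScalar() > 0;
        }
    }
}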

d) Virtual Machine Security


Recent years have seen great advancements in both cloud computing and virtualization. On one hand there is the ability to pool various resources to provide software-as-a-service, infrastructure-as-a-service and platform-as-a-service. At its most basic, this is what describes cloud computing. On the other hand, we have virtual machines that provide agility, flexibility, and scalability to the cloud resources by allowing the vendors to copy, move, and manipulate their VMs at will. The term virtual machine essentially describes sharing the resources of one single physical computer into various computers within itself. VMware and VirtualBox are very commonly used virtual systems on desktops. Cloud computing effectively stands for many computers pretending to be one computing environment. Obviously, cloud computing would have many virtualized systems to maximize resources.

Keeping this information in mind, we can now look into the security issues that arise within a cloud-
computing scenario. As more and more organizations follow the “Into the Cloud” concept, malicious
hackers keep finding ways to get their hands on valuable information by manipulating
safeguards and breaching the security layers (if any) of cloud environments. One issue is that the cloud-
computing scenario is not as transparent as it claims to be. The service user has no clue about how his
information is processed and stored. In addition, the service user cannot directly control the flow of
data/information storage and processing. The service provider usually is not aware of the details of the
service running on his or her environment. Thus, possible attacks on the cloud-computing environment can be classified into:

1. Resource attacks: These kinds of attacks include manipulating the available resources into
mounting a large-scale botnet attack. These kinds of attacks target either cloud providers or service
providers.

2. Data attacks: These kinds of attacks include unauthorized modification of sensitive data at nodes,
or performing configuration changes to enable a sniffing attack via a specific device etc. These
attacks are focused on cloud providers, service providers, and also on service users.

3. Denial of Service attacks: The creation of a new virtual machine is not a difficult task, and thus,
creating rogue VMs and allocating huge spaces for them can lead to a Denial of Service attack for
service providers when they opt to create a new VM on the cloud. This kind of attack is
generally called virtual machine sprawling.

4. Backdoor: Another threat on a virtual environment empowered by cloud computing is the use of
backdoor VMs that leak sensitive information and can destroy data privacy.

5. Having virtual machines would indirectly allow anyone with access to the host disk files of the VM to take a snapshot or illegal copy of the whole system. This can lead to corporate espionage and piracy of legitimate products.

Q.8)Write short note on Infrastructure security in cloud Computing.(6M)(W-16)

Cloud infrastructure refers to a virtual infrastructure that is delivered or accessed via a network or
the internet. This usually refers to the on-demand services or products being delivered through the
model known as infrastructure as a service (IaaS), a basic delivery model of cloud computing. This
is a highly automated offering where computing resources complemented with storage and
networking services are provided to the user. In essence, users have an IT infrastructure that they
can use for themselves without ever having to pay for the construction of a physical infrastructure.

Cloud Infrastructure

Cloud infrastructure is one of the most basic products delivered by cloud computing services
through the IaaS model. Through the service, users can create their own IT infrastructure complete
with processing, storage and networking fabric resources that can be configured in any way, just as
with a physical data center enterprise infrastructure. In most cases, this provides more flexibility in
infrastructure design, as it can be easily set up, replaced or deleted as opposed to a physical one,
which requires manual work, especially when network connectivity needs to be modified or
reworked.
A cloud infrastructure includes virtual machines and components such as:

 Virtual servers
 Virtual PCs
 Virtual network switches/hubs/routers
 Virtual memory
 Virtual storage clusters

All of these elements combine to create a full IT infrastructure that works just as well as a
physical one, but boasts such benefits as:

 Low barrier to entry


 Low capital requirement
 Low total cost of ownership
 Flexibility
 Scalability

Q9) Explain Identity Access management. (7M)(W16)(W-18)


Security in any system involves primarily ensuring that the right entity gets access to only the
authorized data in the authorized format at an authorized time and from an authorized location.
Identity and access management (IAM) is of prime importance in this regard as far as Indian
businesses are concerned. This effort should be complemented by the maintenance of audit trails
for the entire chain of events from users logging in to the system, getting authenticated and
accessing files or running applications as authorized.

Even in a closed, internal environment with a well-established “trust boundary”, managing an


Active Directory server, an LDAP server or other alternatives, is no easy task. And for IAM in the
cloud, the challenges and problems are magnified many times over. An Indian organization
moving to the cloud could typically have applications hosted on the cloud and a database
maintained internally, with users logging on and getting authenticated internally on a local Active
Directory server. Just imagine attempting single sign-on (SSO) functionality in such a scenario!
Cloud delivery models comprising mainly SaaS, PaaS and IaaS require seamless integration
between cloud services and the organization’s IAM practices, processes and procedures, in a
scalable, effective and efficient manner.

Identity provisioning challenges

The biggest challenge for cloud services is identity provisioning. This involves secure and timely
management of on-boarding (provisioning) and off-boarding (deprovisioning) of users in the
cloud.

When a user has successfully authenticated to the cloud, a portion of the system resources in terms
of CPU cycles, memory, storage and network bandwidth is allocated. Depending on the capacity
identified for the system, these resources are made available on the system even if
no users have been logged on. Based on projected capacity requirements, cloud architects may
decide on a 1:4 scale or even 1:2 or lower ratios. If projections are exceeded and more users log on,
the system performance may be affected drastically. Simultaneously, adequate measures need to be
in place to ensure that as usage of the cloud drops, system resources are made available for other
objectives; else they will remain unused and constitute a dead investment.

TGPCET/CSE

Unit-5

Q.1) Explain object oriented concept in C#.net.(6M)(S-17)(W-16)(W-18)
Ans:
Introduction to Object Oriented Programming (OOP) concepts in C#: Abstraction, Encapsulation,
Inheritance and Polymorphism.

OOP Features

Object Oriented Programming (OOP) is a programming model where programs are organized around
objects and data rather than action and logic.
OOP allows decomposition of a problem into a number of entities called objects and then builds data and
functions around these objects.
 The software is divided into a number of small units called objects. The data and functions are built
around these objects.
 The data of the objects can be accessed only by the functions associated with that object.
 The functions of one object can access the functions of another object.
OOP has the following important features.
Class
A class is the core of any modern Object Oriented Programming language such as C#. In
OOP languages it is mandatory to create a class for representing data.

A class is a blueprint of an object that contains variables for storing data and functions to perform
operations on the data.
A class will not occupy any memory space and hence it is only a logical representation of data.
To create a class, you simply use the keyword "class" followed by the class name:
class Employee
{
}

Object

Objects are the basic run-time entities of an object oriented system. They may represent a person, a place
or any item that the program must handle.
"An object is a software bundle of related variable and methods." "An
object is an instance of a class"
A class will not occupy any memory space. Hence to work with the data represented by the class you must
create a variable for the class, that is called an object.

When an object is created using the new operator, memory is allocated for the class in the heap, the
object is called an instance and its starting address will be stored in the object in stack memory.
When an object is created without the new operator, memory will not be allocated in the heap, in other
words an instance will not be created and the object in the stack contains the value null.
When an object contains null, then it is not possible to access the members of the class using that object.
Syntax to create an object of class Employee:

Employee objEmp = new Employee();
All the programming languages supporting Object Oriented Programming support these three main concepts:
1) Encapsulation
2) Inheritance
3) Polymorphism
Abstraction
Abstraction is "To represent the essential feature without representing the background details."
Abstraction lets you focus on what the object does instead of how it does it.
Abstraction provides you a generalized view of your classes or objects by providing relevant
information.

Abstraction is the process of hiding the working style of an object, and showing the information of an
object in an understandable manner.

Encapsulation

Wrapping up a data member and a method together into a single unit (in other words class) is called
Encapsulation.
Encapsulation is like enclosing in a capsule. That is enclosing the related operations and data related to an
object into that object.

Encapsulation is like your bag in which you can keep your pen, book, etc. It means this is the property of encapsulating members and functions.

Encapsulation means hiding the internal details of an object, in other words how an object does
something.
Encapsulation prevents clients from seeing its inside view, where the behaviour of the abstraction is
implemented.
Encapsulation is a technique used to protect the information in an object from another object.

Hide the data for security such as making the variables private, and expose the property to access the
private data that will be public.
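A short illustrative sketch of this in C# (the Employee/salary example is an assumption, chosen to match the class used earlier):

class Employee
{
    private double salary;      // hidden internal state

    public double Salary        // public property mediates all access
    {
        get { return salary; }
        set { if (value >= 0) salary = value; }   // reject invalid writes
    }
}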

Inheritance

When a class acquires the properties (members) of another class, it is known as inheritance.

Inheritance is a process of object reusability.
Polymorphism

Polymorphism means one name, many forms. One function behaves in different forms. In other words, "many forms of a single object are called polymorphism."
=======================================================================

Q.2) Write a program in C#.net to demonstrate object oriented concepts: Design user interface.(7M)(S-17)
Ans:

1)
using System;

namespace CLASS_DEMO
{
    class person
    {
        private string name;
        int age;
        double salary;

        public void getdata()
        {
            Console.Write("Enter Your Name:-");
            name = Console.ReadLine();
            Console.Write("Enter the age:-");
            age = Convert.ToInt32(Console.ReadLine());
            Console.Write("Enter the salary:-");
            salary = Convert.ToDouble(Console.ReadLine());
        }

        public void putdata()
        {
            Console.WriteLine("NAME==>" + name);
            Console.WriteLine("AGE==>" + age);
            Console.WriteLine("SALARY==>" + salary);
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            person obj = new person();
            obj.getdata();
            obj.putdata();
            Console.ReadLine();
        }
    }
}

2)
using System;

namespace single_inheritance
{
    class Animal
    {
        public void Eat()
        {
            Console.WriteLine("Every animal eats something.");
        }

        public void dosomething()
        {
            Console.WriteLine("Every animal does something.");
        }
    }

    class cat : Animal
    {
        static void Main(string[] args)
        {
            cat objcat = new cat();
            objcat.Eat();
            objcat.dosomething();
            Console.ReadLine();
        }
    }
}
3)
using System;

namespace Method_Overloading_1
{
    class Area
    {
        static public int CalculateArea(int len, int wide)
        {
            return len * wide;
        }

        static public double CalculateArea(double valone, double valtwo)
        {
            return 0.5 * valone * valtwo;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            int length = 10;
            int breath = 22;
            double tbase = 2.5;
            double theight = 1.5;
            Console.WriteLine("Area of Rectangle:-" + Area.CalculateArea(length, breath));
            Console.WriteLine("Area of triangle:-" + Area.CalculateArea(tbase, theight));
            Console.ReadLine();
        }
    }
}
========================================================================

Q.3) Explain the Architecture of ADO.Net(6M)(S-17)(W-17)(W-16)(W-18)


Ans:

ADO.NET

Fig: ADO.NET architecture

ADO.NET consists of a set of objects that expose data access services to the .NET environment. It is a data access technology from the Microsoft .Net Framework, which provides communication between relational and non-relational systems through a common set of components.
System.Data namespace is the core of ADO.NET and it contains classes used by all data providers.
ADO.NET is designed to be easy to use, and Visual Studio provides several wizards and other features that
you can use to generate ADO.NET data access code.
Data Providers and DataSet

Fig: ADO.NET Data Providers and DataSet

The two key components of ADO.NET are Data Providers and DataSet. The Data Provider classes are meant to work with different kinds of data sources. They are used to perform all data-management operations on specific databases. The DataSet class provides mechanisms for managing data when it is disconnected from the data source.

Fig: Data Providers

The .Net Framework includes mainly three Data Providers for ADO.NET: the Microsoft SQL Server Data Provider, the OLEDB Data Provider and the ODBC Data Provider. SQL Server uses the SqlConnection object, OLEDB uses the OleDbConnection object and ODBC uses the OdbcConnection object respectively.
A data provider contains Connection, Command, DataAdapter, and DataReader objects. These four objects provide the functionality of Data Providers in ADO.NET.
Connection

The Connection Object provides physical connection to the Data Source. Connection object needs the
necessary information to recognize the data source and to log on to it properly, this information is provided
through a connection string.
Command
The Command Object uses to perform SQL statement or stored procedure to be executed at the Data
Source. The command object provides a number of Execute methods that can be used to perform the SQL
queries in a variety of fashions.
DataReader
The DataReader object provides stream-based, forward-only, read-only retrieval of query results from the Data Source, and does not update the data. The DataReader requires a live connection with the database and provides a very efficient way of consuming all or part of the result set.
DataAdapter
The DataAdapter object populates a DataSet object with results from a Data Source. It is a special class whose purpose is to bridge the gap between the disconnected DataSet objects and the physical data source.
DataSet
Fig: DataSet structure

DataSet provides a disconnected representation of result sets from the Data Source, and it is completely
independent from the Data Source. DataSet provides much greater flexibility when dealing with related
Result Sets.
A DataSet contains rows, columns, primary keys, constraints, and relations with other DataTable objects. It consists of a collection of DataTable objects that you can relate to each other with DataRelation objects. The DataAdapter object provides a bridge between the DataSet and the Data Source.
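As a minimal sketch tying these objects together, the following C# program (the connection string and Ulogin table are assumptions, reused from the login example in Q.4 below) shows connected access through Connection/Command/DataReader and disconnected access through DataAdapter/DataSet:

using System;
using System.Data;
using System.Data.SqlClient;

class AdoNetDemo
{
    static void Main()
    {
        // Hypothetical connection string; adjust server and database names.
        string cs = "Data Source=.;Initial Catalog=abcd;Integrated Security=True";

        // Connected access: Connection + Command + DataReader.
        using (var con = new SqlConnection(cs))
        {
            con.Open();
            var cmd = new SqlCommand("SELECT UserId FROM Ulogin", con);
            using (SqlDataReader rdr = cmd.ExecuteReader())   // forward-only, read-only
            {
                while (rdr.Read())
                    Console.WriteLine(rdr["UserId"]);
            }
        }

        // Disconnected access: the DataAdapter fills a DataSet, after which
        // the data remains usable without a live connection.
        var adapter = new SqlDataAdapter("SELECT * FROM Ulogin", cs);
        var ds = new DataSet();
        adapter.Fill(ds, "Ulogin");
        Console.WriteLine(ds.Tables["Ulogin"].Rows.Count + " row(s) cached locally");
    }
}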
========================================================================

Q.4) Write and explain code in ASP.NET to create a login page.(7M)(S-17)
Ans:
Introduction

This article demonstrates how to create a login page in an ASP.NET Web Application, using C# connectivity to SQL Server. The article starts with the creation of the database and table in SQL Server. Afterwards, it demonstrates how to design the ASP.NET login page. In the end, the article discusses how to connect the ASP.NET Web Application to SQL Server.
Prerequisites

VS2010/2012/2013/15/17, SQL Server 2005/08/2012

Project used version

VS2013, SQL SERVER 2012

Step 1

Creating a database and a table


To create a database, write the query in SQL Server:

CREATE DATABASE abcd                          -- abcd is the database name
USE abcd                                      -- select/use the database

CREATE TABLE Ulogin                           -- Ulogin is the table name
(
    UserId VARCHAR(50) PRIMARY KEY NOT NULL,  -- primary key does not accept null values
    Password VARCHAR(100) NOT NULL
)

INSERT INTO Ulogin VALUES ('Krish', 'kk@321') -- insert a row into the Ulogin table

Let’s start designing the login view in the ASP.NET Web Application. I am using a simple design, since visual design is not the purpose of this article. Start by opening VS (any version), go to File, select New, and select Web Site. You can also use the shortcut key (Shift+Alt+N). When you are done, expand Solution Explorer, right-click on your project name, select Add, and click Add New Item (for better help, refer to the screenshot given below). Select Web Form; you can change the Web Form name if you want, or save it as it is. Default.aspx is added to my project. Now, let’s design my default.aspx page: inside the <div> tag, insert a table with the required rows and columns and set the layout style of the table. If you want all tools set in the center, go to Table properties and click Style, text align.
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs"
Inherits="_Default" %>

<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title></title>
<style type="text/css">
.auto-style1 { width:
100%;
}
</style>
</head>
<body>
<form id="form1" runat="server">
<div>

<table class="auto-style1">
<tr>
<td colspan="6" style="text-align: center; vertical-align: top">
</tr>
</table>

</div>
</form>
</body>
</html>
Afterwards, drag and drop two labels, two textboxes and one button into the design view. Set the password textbox's TextMode property to Password.
The complete source code is given below.

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs"


Inherits="_Default" %>

<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title></title>
<style type="text/css">
.auto-style1 { width:
100%;
}
</style>
</head>
<body>
<form id="form1" runat="server">
<div>

<table class="auto-style1">
<tr>
<td colspan="6" style="text-align: center; vertical-align: top">
<asp:Label ID="Label1" runat="server" Font-Bold="True" Font-Size="XX-Large" Font-
Underline="True" Text="Log In "></asp:Label>
</td>
</tr>
<tr>
<td> </td>
<td style="text-align: center">
<asp:Label ID="Label2" runat="server" Font-Size="X-Large" Text="UserId
:"></asp:Label>
</td>
<td style="text-align: center">
<asp:TextBox ID="TextBox1" runat="server" Font-Size="X-Large"></asp:TextBox>
</td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td style="text-align: center">
<asp:Label ID="Label3" runat="server" Font-Size="X-Large" Text="Password
:"></asp:Label>
</td>
<td style="text-align: center">
<asp:TextBox ID="TextBox2" runat="server" Font-Size="X-Large"
TextMode="Password"></asp:TextBox>
</td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td> </td>
<td style="text-align: center">
<asp:Button ID="Button1" runat="server" BorderStyle="None" Font-Size="X-Large"
OnClick="Button1_Click" Text="Log In" />
</td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr>
<td> </td>
<td> </td>
<td>
<asp:Label ID="Label4" runat="server" Font-Size="X-Large"></asp:Label>
</td>
<td> </td>
<td> </td>
<td> </td>
</tr>
</table>

</div>
</form>
</body>
</html>
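The markup above wires the button to a Button1_Click handler, but the code-behind is not shown in this excerpt. A plausible minimal sketch of that handler is given below, assuming the connection string is stored in web.config under the name "abcd"; a parameterized query is used so the login check is not vulnerable to SQL injection. (In production, passwords should be stored hashed, not in plain text.)

using System;
using System.Configuration;
using System.Data.SqlClient;

public partial class _Default : System.Web.UI.Page
{
    protected void Button1_Click(object sender, EventArgs e)
    {
        // Assumed: a <connectionStrings> entry named "abcd" in web.config.
        string cs = ConfigurationManager.ConnectionStrings["abcd"].ConnectionString;

        using (var con = new SqlConnection(cs))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Ulogin WHERE UserId = @uid AND Password = @pwd", con))
        {
            cmd.Parameters.AddWithValue("@uid", TextBox1.Text);  // user id textbox
            cmd.Parameters.AddWithValue("@pwd", TextBox2.Text);  // password textbox
            con.Open();
            bool ok = (int)cmd.ExecuteScalar() > 0;
            Label4.Text = ok ? "Login successful" : "Invalid user id or password";
        }
    }
}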

Q.5) Give the anatomy of an .ASPX page & also explain with an example how to create a web page using ASP.Net.(13M)(W-17)(W-16)(W-18)

ASP.NET is a web development platform, which provides a programming model, a


comprehensive software infrastructure and various services required to build up robust
web applications for PCs, as well as mobile devices. ASP.NET works on top of the HTTP protocol, and uses the HTTP commands and policies to set up bilateral browser-to-server communication and cooperation. ASP.NET is a part of the Microsoft .Net platform. ASP.NET
applications are compiled codes, written using the extensible and reusable components or
objects present in .Net framework. These codes can use the entire hierarchy of classes in
.Net framework.

The ASP.NET application codes can be written in any of the following languages:

 C#

 Visual Basic.Net

 Jscript

 J#

ASP.NET is used to produce interactive, data-driven web applications over the internet. It
consists of a large number of controls such as text boxes, buttons, and labels for assembling,
configuring, and manipulating code to create HTML pages.

ASP.NET Web Forms Model

ASP.NET web forms extend the event-driven model of interaction to the web applications. The
browser submits a web form to the web server and the server returns a full markup page or
HTML page in response.

All client side user activities are forwarded to the server for stateful processing. The server
processes the output of the client actions and triggers the reactions.

Now, HTTP is a stateless protocol. ASP.NET framework helps in storing the information
regarding the state of the application, which consists of:
 Page state

 Session state

The page state is the state of the client, i.e., the content of various input fields in the web form.
The session state is the collective information obtained from various pages the user visited and
worked with, i.e., the overall session state. To clear the concept, let us take an example of a
shopping cart.
User adds items to a shopping cart. Items are selected from a page, say the items page, and the
total collected items and price are shown on a different page, say the cart page. Only HTTP
cannot keep track of all the information coming from various pages. ASP.NET session state and
server side infrastructure keeps track of the information collected globally over a session.
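A minimal sketch of the shopping-cart idea in ASP.NET Web Forms (the control names and the List<string> cart model are assumptions for illustration):

using System;
using System.Collections.Generic;

// On the items page: each click adds an item to the session-scoped cart.
public partial class ItemsPage : System.Web.UI.Page
{
    protected void AddToCart_Click(object sender, EventArgs e)
    {
        var cart = Session["Cart"] as List<string> ?? new List<string>();
        cart.Add("Item42");        // hypothetical selected item
        Session["Cart"] = cart;    // survives across page requests
    }
}

// On the cart page: the items collected on other pages are still there.
public partial class CartPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        var cart = Session["Cart"] as List<string>;
        int total = (cart == null) ? 0 : cart.Count;   // session state, not HTTP, carried this
    }
}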

The ASP.NET runtime carries the page state to and from the server across page requests while
generating ASP.NET runtime codes, and incorporates the state of the server side components in
hidden fields.

This way, the server becomes aware of the overall application state and operates in a two-tiered connected way.

The ASP.NET Component Model

The ASP.NET component model provides various building blocks of ASP.NET pages. Basically
it is an object model, which describes:

 Server side counterparts of almost all HTML elements or tags, such as <form> and
<input>.

 Server controls, which help in developing complex user-interface. For example, the
Calendar control or the Gridview control.

ASP.NET is a technology, which works on the .Net framework that contains all web-related
functionalities. The .Net framework is made of an object-oriented hierarchy. An ASP.NET web
application is made of pages. When a user requests an ASP.NET page, the IIS delegates the
processing of the page to the ASP.NET runtime system.

The ASP.NET runtime transforms the .aspx page into an instance of a class, which inherits from
the base class page of the .Net framework. Therefore, each ASP.NET page is an object and all its
components i.e., the server-side controls are also objects.

Components of.Net Framework 3.5

Before going to the next session on Visual Studio.Net, let us go through at the various
components of the .Net framework 3.5. The following table describes the components of the
.Net framework 3.5 and the job they perform:

Components and their Description

(1) Common Language Runtime or CLR

It performs memory management, exception handling, debugging, security checking, thread


execution, code execution, code safety, verification, and compilation. The code that is directly managed by the CLR is called managed code. When the managed code is compiled, the compiler converts the source code into a CPU-independent intermediate language (IL) code. A Just-In-Time (JIT) compiler compiles the IL code into native code, which is CPU specific.

(2) .Net Framework Class Library

It contains a huge library of reusable types: classes, interfaces, structures, and enumerated values, which are collectively called types.

(3) Common Language Specification

It contains the specifications for the .Net supported languages and implementation of language
integration.

(4) Common Type System

It provides guidelines for declaring, using, and managing types at runtime, and cross-language
communication.

(5) Metadata and Assemblies

Metadata is the binary information describing the program, which is either stored in a portable
executable file (PE) or in the memory. Assembly is a logical unit consisting of the assembly
manifest, type metadata, IL code, and a set of resources like image files.

(6) Windows Forms

Windows Forms contain the graphical representation of any window displayed in the application.
(7) ASP.NET and ASP.NET AJAX

ASP.NET is the web development model and AJAX is an extension of ASP.NET for developing and
implementing AJAX functionality. ASP.NET AJAX contains the components that allow the
developer to update data on a website without a complete reload of the page.

(8) ADO.NET

It is the technology used for working with data and databases. It provides access to data sources like
SQL server, OLE DB, XML etc. The ADO.NET allows connection to data sources for retrieving,
manipulating, and updating data.

(9) Windows Workflow Foundation (WF)

It helps in building workflow-based applications in Windows. It contains activities, workflow


runtime, workflow designer, and a rules engine.

(10) Windows Presentation Foundation

It provides a separation between the user interface and the business logic. It helps in developing
visually stunning interfaces using documents, media, two and three dimensional graphics,
animations, and more.

(11) Windows Communication Foundation (WCF)

It is the technology used for building and executing connected systems.

(12) Windows CardSpace

It provides safety for accessing resources and sharing personal information on the internet.
(13) LINQ

It imparts data querying capabilities to .Net languages using a syntax which is similar to the traditional query language SQL.
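A minimal C# sketch of a LINQ query over an in-memory collection, using the SQL-like query syntax the text refers to (the data is made up for illustration):

using System;
using System.Linq;

class LinqDemo
{
    static void Main()
    {
        int[] marks = { 45, 72, 88, 31, 90 };

        // Declarative, SQL-like query: filter, order, project.
        var passed = from m in marks
                     where m >= 40
                     orderby m descending
                     select m;

        foreach (var m in passed)
            Console.WriteLine(m);   // prints 90, 88, 72, 45
    }
}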

Create a web page using ASP.Net

Tasks illustrated in this walkthrough include:

 Creating a file system Web Forms application project.


 Familiarizing yourself with Visual Studio.
 Creating an ASP.NET page.
 Adding controls.
 Adding event handlers.
 Running and testing a page from Visual Studio.

Prerequisites

In order to complete this walkthrough, you will need:

 Microsoft Visual Studio 2013 or Microsoft Visual Studio Express 2013 for Web. The
.NET Framework is installed automatically.
Note

Microsoft Visual Studio 2013 and Microsoft Visual Studio Express 2013 for Web will often
be referred to as Visual Studio throughout this tutorial series.

If you are using Visual Studio, this walkthrough assumes that you selected the Web
Development collection of settings the first time that you started Visual Studio. For more
information.

In this part of the walkthrough, you will create a Web application project and add a new page to it.
You will also add HTML text and run the page in your browser.
To create a Web application project

1. Open Microsoft Visual Studio.


2. On the File menu, select New Project.
The New Project dialog box appears.

3. Select the Templates -> Visual C# -> Web templates group on the left.
4. Choose the ASP.NET Web Application template in the center column.
5. Name your project BasicWebApp and click the OK button.
6. Next, select the Web Forms template and click the OK button to create the project.

Visual Studio creates a new project that includes prebuilt functionality based on the Web
Forms template. It not only provides you with a Home.aspx page, an About.aspx page, and a Contact.aspx page, but also includes membership functionality that registers users and saves their credentials so that they can log in to your website. When a
new page is created, by default Visual Studio displays the page in Source view,
where you can see the page's HTML elements. The following illustration shows what you
would see in Source view if you created a new Web page named BasicWebApp.aspx.
A Tour of the Visual Studio Web Development Environment

Before you proceed by modifying the page, it is useful to familiarize yourself with the Visual
Studio development environment. The following illustration shows you the windows and tools that
are available in Visual Studio and Visual Studio Express for Web.

This diagram shows default windows and window locations. The View menu allows you to
display additional windows, and to rearrange and resize windows to suit your preferences. If
changes have already been made to the window arrangement, what you see will not match the
illustration.

The Visual Studio environment

Familiarize yourself with the Web designer


Examine the above illustration and match the text to the following list, which describes the most
commonly used windows and tools. (Not all windows and tools that you see are listed here, only
those marked in the preceding illustration.)

 Toolbars. Provide commands for formatting text, finding text, and so on. Some toolbars are
available only when you are working in Design view.
 Solution Explorer window. Displays the files and folders in your Web application.
 Document window. Displays the documents you are working on in tabbed windows. You
can switch between documents by clicking tabs.
 Properties window. Allows you to change settings for the page, HTML elements, controls,
and other objects.
 View tabs. Present you with different views of the same document. Design view is a near-
WYSIWYG editing surface. Source view is the HTML editor for the page. Split
view displays both the Design view and the Source view for the document. You will work
with the Design and Source views later in this walkthrough. If you prefer to open Web
pages in Design view, on the Tools menu, click Options, select the HTML Designer node,
and change the Start Pages In option.
 ToolBox. Provides controls and HTML elements that you can drag onto your page.
Toolbox elements are grouped by common function.
 Server Explorer. Displays database connections. If Server Explorer is not visible, on the
View menu, click Server Explorer.

Creating a new ASP.NET Web Forms Page

When you create a new Web Forms application using the ASP.NET Web
Application project template, Visual Studio adds an ASP.NET page (Web Forms page) named
Default.aspx, as well as several other files and folders. You can use
the Default.aspx page as the home page for your Web application. However, for this
walkthrough, you will create and work with a new page.
To add a page to the Web application

1. Close the Default.aspx page. To do this, click the tab that displays the file name and then
click the close option.
2. In Solution Explorer, right-click the Web application name (in this tutorial the
application name is BasicWebApp), and then click Add -> New Item.
The Add New Item dialog box is displayed.
3. Select the Visual C# -> Web templates group on the left. Then, select Web
Form from the middle list and name it FirstWebPage.aspx.

4. Click Add to add the web page to your project. Visual Studio creates the new page and
opens it.

Adding HTML to the Page

In this part of the walkthrough, you will add some static text to the page.
To add text to the page

1. At the bottom of the document window, click the Design tab to switch to Design view.

Design view displays the current page in a WYSIWYG-like way. At this point, you do not
have any text or controls on the page, so the page is blank except for a dashed line that
outlines a rectangle. This rectangle represents a div element on the page.

2. Click inside the rectangle that is outlined by a dashed line.


3. Type Welcome to Visual Web Developer and press ENTER twice.

The following illustration shows the text you typed in Design view.
4. Switch to Source view.

You can see the HTML in Source view that you created when you typed in
Design view.

Running the Page

Before you proceed by adding controls to the page, you can first run it.
To run the page

1. In Solution Explorer, right-click FirstWebPage.aspx and select Set as Start Page.


2. Press CTRL+F5 to run the page.

The page is displayed in the browser. Although the page you created has a file-name
extension of .aspx, it currently runs like any HTML page.
To display a page in the browser you can also right-click the page in Solution
Explorerand select View in Browser.

3. Close the browser to stop the Web application.


Adding and Programming Controls

You will now add server controls to the page. Server controls, such as buttons, labels, text boxes, and
other familiar controls, provide typical form-processing capabilities for your Web Forms pages.
However, you can program the controls with code that runs on the server, rather than the client.
To add controls to the page

1. Click the Design tab to switch to Design view.


2. Put the insertion point at the end of the Welcome to Visual Web Developer text and
press ENTER five or more times to make some room in the div element box.
3. In the Toolbox, expand the Standard group if it is not already expanded. Note
that you may need to expand the Toolbox window on the left to view it.
4. Drag a TextBox control onto the page and drop it in the middle of the div element box that
has Welcome to Visual Web Developer in the first line.
5. Drag a Button control onto the page and drop it to the right of the TextBox control.
6. Drag a Label control onto the page and drop it on a separate line below the
Buttoncontrol.
7. Put the insertion point above the TextBox control, and then type Enter your name: .

This static HTML text is the caption for the TextBox control. You can mix static HTML
and server controls on the same page. The following illustration shows how the three
controls appear in Design view.

Setting Control Properties

Visual Studio offers you various ways to set the properties of controls on the page. In this part
of the walkthrough, you will set properties in both Design view and Source view.
To set control properties

1. First, display the Properties window by selecting View menu -> Other
Windows -> Properties Window. Alternatively, press F4 to display the
Properties window.
2. Select the Button control, and then in the Properties window, set the value
of Text to Display Name. The text you entered appears on the button in the designer, as
shown in the following illustration.

3. Switch to Source view.

Source view displays the HTML for the page, including the elements that Visual Studio has
created for the server controls. Controls are declared using HTML-like syntax, except that
the tags use the prefix asp: and include the attribute runat="server".

Control properties are declared as attributes. For example, when you set
the Text property for the Button control in the previous step, you were actually setting
the Text attribute in the control's markup.
Note

All the controls are inside a form element, which also has the attribute
runat="server". The runat="server" attribute and the asp: prefix for control tags mark
the controls so that they are processed by ASP.NET on the server when the page runs. Code
outside of <form runat="server"> and <script runat="server"> elements is sent
unchanged to the browser, which is why the ASP.NET code must be inside an
element whose opening tag contains the runat="server" attribute. (A markup sketch
appears after these steps.)
4. Next, you will add an additional property to the Label control. Put the insertion point
directly after asp:Label in the <asp:Label> tag, and then press SPACEBAR.
A drop-down list appears that displays the list of available properties you can set for a
Label control. This feature, referred to as IntelliSense, helps you in Source view with the
syntax of server controls, HTML elements, and other items on the page. The following
illustration shows the IntelliSense drop-down list for the Label control.
5. Select ForeColor and then type an equal sign.

IntelliSense displays a list of colors.


Note

You can display an IntelliSense drop-down list at any time by pressing CTRL+J when
viewing code.

6. Select a color for the Label control's text. Make sure you select a color that is dark
enough to read against a white background.

The ForeColor attribute is completed with the color that you have selected, including the
closing quotation mark.
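As promised above, here is a rough sketch of the markup for the three controls at this stage. The IDs are the Visual Studio defaults, and the ForeColor value is whatever color you picked, so treat the exact attributes as illustrative:

<form id="form1" runat="server">
    <div>
        Enter your name:
        <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
        <asp:Button ID="Button1" runat="server" Text="Display Name" />
        <br />
        <asp:Label ID="Label1" runat="server" ForeColor="Blue" Text="Label"></asp:Label>
    </div>
</form>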
Programming the Button Control

For this walkthrough, you will write code that reads the name that the user enters into the text box
and then displays the name in the Label control.
Add a default button event handler
1. Switch to Design view.
2. Double-click the Button control.
By default, Visual Studio switches to a code-behind file and creates a skeleton event
handler for the Button control's default event, the Click event. The code-behind file
separates your UI markup (such as HTML) from your server code (such as C#).
The cursor is positioned so that you can add code for this event handler.
Note
Double-clicking a control in Design view is just one of several ways you can create
event handlers.
3. Inside the Button1_Click event handler, type Label1 followed by a period (.).
When you type the period after the ID of the label (Label1), Visual Studio displays a list
of available members for the Label control, as shown in the following illustration. A
member is commonly a property, method, or event.
4. Finish the Click event handler for the button so that it reads as shown in the following
code example.

C#

protected void Button1_Click(object sender, System.EventArgs e)


{
Label1.Text = TextBox1.Text + ", welcome to Visual Studio!";
}
VB

Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs)
    Label1.Text = TextBox1.Text & ", welcome to Visual Studio!"
End Sub

5. Switch back to viewing the Source view of your HTML markup by right-
clicking FirstWebPage.aspx in the Solution Explorer and selecting View Markup.
6. Scroll to the <asp:Button> element. Note that the <asp:Button> element now has the
attribute onclick="Button1_Click".

Event handler methods can have any name; the name you see is the default name created by
Visual Studio. The important point is that the name used for the
OnClick attribute in the HTML must match the name of a method defined in the code-behind.
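For example, the button's markup at this point reads roughly as follows (exact attribute order may vary):

<asp:Button ID="Button1" runat="server" Text="Display Name" OnClick="Button1_Click" />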
Running the Page

You can now test the server controls on the page.


To run the page

1. Press CTRL+F5 to run the page in the browser. If an error occurs, recheck the steps
above.
2. Enter a name into the text box and click the Display Name button.

The name you entered is displayed in the Label control. Note that when you click the
button, the page is posted to the Web server. ASP.NET then recreates the page, runs your
code (in this case, the Button control's Click event handler runs), and then sends the new
page to the browser. If you watch the status bar in the browser, you can see that the page
is making a round trip to the Web server each time you click the button.

3. In the browser, view the source of the page you are running by right-clicking on the
page and selecting View source.

In the page source code, you see HTML without any server code. Specifically, you do not
see the <asp:> elements that you were working with in Source view. When the page runs,
ASP.NET processes the server controls and renders HTML elements to the page that
perform the functions that represent the control. For example,
the <asp:Button> control is rendered as the HTML <input type="submit"> element.

4. Close the browser.


Working with Additional Controls

In this part of the walkthrough, you will work with the Calendar control, which displays dates a
month at a time. The Calendar control is a more complex control than the button, text box, and
label you have been working with and illustrates some further capabilities of server controls.
To add a Calendar control

1. In Visual Studio, switch to Design view.


2. From the Standard section of the Toolbox, drag a Calendar control onto the page and
drop it below the div element that contains the other controls.

The calendar's smart tag panel is displayed. The panel displays commands that make it easy
for you to perform the most common tasks for the selected control. The following
illustration shows the Calendar control as rendered in Design view.

3. In the smart tag panel, choose Auto Format.

The Auto Format dialog box is displayed, which allows you to select a formatting scheme
for the calendar. The following illustration shows the Auto Format dialog box for the
Calendar control.
4. From the Select a scheme list, select Simple and then click OK.
5. Switch to Source view.

You can see the <asp:Calendar> element. This element is much longer than the elements
for the simple controls you created earlier. It also includes subelements, such as
<WeekEndDayStyle>, which represent various formatting settings. The following
illustration shows the Calendar control in Source view. (The exact markup that you see in
Source view might differ slightly from the illustration.)
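A trimmed sketch of what that markup can look like follows; the attribute values below are illustrative, not the exact output of the Simple scheme:

<asp:Calendar ID="Calendar1" runat="server" BackColor="White"
    BorderColor="#999999" DayNameFormat="Shortest">
    <SelectedDayStyle BackColor="#666666" Font-Bold="True" ForeColor="White" />
    <WeekendDayStyle BackColor="#FFFFCC" />
</asp:Calendar>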

Programming the Calendar Control

In this section, you will program the Calendar control to display the currently selected date.
To program the Calendar control

1. In Design view, double-click the Calendar control.

A new event handler is created and displayed in the code-behind file


named FirstWebPage.aspx.cs.
2. Finish the SelectionChanged event handler with the following code.

C#

protected void Calendar1_SelectionChanged(object sender, System.EventArgs e)


{
Label1.Text = Calendar1.SelectedDate.ToLongDateString();
}
VB

Protected Sub Calendar1_SelectionChanged(ByVal sender As Object, ByVal e As System.EventArgs)
    Label1.Text = Calendar1.SelectedDate.ToLongDateString()
End Sub

The above code sets the text of the label control to the selected date of the calendar
control.
Running the Page

You can now test the calendar.


To run the page

1. Press CTRL+F5 to run the page in the browser.


2. Click a date in the calendar.

The date you clicked is displayed in the Label control.

3. In the browser, view the source code for the page.

Note that the Calendar control has been rendered to the page as a table, with each day as a
td element.

4. Close the browser.

Q.7) Why is there a need for ADO.NET? Explain how to use ADO.NET in any web
application. (8M)(W-17)

ADO.NET provides a comprehensive caching data model for marshalling data between
applications and services, with facilities to optimistically update the original data sources. This
enables developers to begin with XML while leveraging existing skills with SQL and the relational
model.

Although the ADO.NET model is different from the existing ADO model, the same basic concepts
include provider, connection and command objects. By combining the continued use of SQL with
similar basic concepts, current ADO developers should be able to migrate to ADO.NET over a
reasonable period of time.
Create a simple data application by using ADO.NET
When you create an application that manipulates data in a database, stored procedures. By
following this topic, you can discover how to interact with a database from within a simple
Windows Forms "forms over data" application by using Visual C# or Visual Basic and ADO.NET.
All .NET data technologies—including datasets, LINQ to SQL, and Entity Framework—
ultimately perform steps that are very similar to those shown in this article.
This article demonstrates a simple way to get data out of a database in a fast manner. If your
application needs to modify data in non-trivial ways and update the database, you should consider
using Entity Framework and using data binding to automatically sync user interface controls to
changes in the underlying data.
Important
To keep the code simple, it doesn't include production-ready exception handling.
Prerequisites
To create the application, you'll need:
 Visual Studio.
 SQL Server Express LocalDB. If you don't have SQL Server Express LocalDB, install it
before you continue.
Create the sample database by following these steps:
1. In Visual Studio, open the Server Explorer window.
2. Right-click on Data Connections and choose Create New SQL Server Database.
3. In the Server name text box, enter (localdb)\mssqllocaldb.
4. In the New database name text box, enter Sales, then choose OK.
The empty Sales database is created and added to the Data Connections node in Server
Explorer.
5. Right-click on the Sales data connection and select New Query. A
query editor window opens.
6. Copy the Sales Transact-SQL script to your clipboard.
7. Paste the T-SQL script into the query editor, and then choose the Execute button. After a
short time, the query finishes running and the database objects are created. The database
contains two tables: Customer and Orders. These tables contain no data initially, but you
can add data when you run the application that you'll create. The database also contains
four simple stored procedures.
Create the forms and add controls
1. Create a project for a Windows Forms application, and then name it SimpleDataApp.
Visual Studio creates the project and several files, including an empty Windows form
that's named Form1.
2. Add two Windows forms to your project so that it has three forms, and then give them the
following names:
 Navigation
 NewCustomer
 FillOrCancel
For each form, add the text boxes, buttons, and other controls that appear in the following
illustrations. For each control, set the properties that the tables describe. Note
The group box and the label controls add clarity but aren't used in the code.
Navigation form

Control          Properties
Button           Name = btnGoToAdd
Button           Name = btnGoToFillOrCancel
Button           Name = btnExit

NewCustomer form

Control          Properties
TextBox          Name = txtCustomerName
TextBox          Name = txtCustomerID; ReadOnly = True
Button           Name = btnCreateAccount
NumericUpDown    Name = numOrderAmount; DecimalPlaces = 0; Maximum = 5000
DateTimePicker   Name = dtpOrderDate; Format = Short
Button           Name = btnPlaceOrder
Button           Name = btnAddAnotherAccount
Button           Name = btnAddFinish

FillOrCancel form

Control          Properties
TextBox          Name = txtOrderID
Button           Name = btnFindByOrderID
DateTimePicker   Name = dtpFillDate; Format = Short
DataGridView     Name = dgvCustomerOrders; ReadOnly = True; RowHeadersVisible = False
Button           Name = btnCancelOrder
Button           Name = btnFillOrder
Button           Name = btnFinishUpdates

Store the connection string

When your application tries to open a connection to the database, your application must have
access to the connection string. To avoid entering the string manually on each form, store the
string in the App.config file in your project, and create a method that returns the string when the
method is called from any form in your application.

You can find the connection string by right-clicking on the Sales data connection in Server
Explorer and choosing Properties. Locate the ConnectionString property, then use
Ctrl+A, Ctrl+C to select and copy the string to the clipboard.

1. If you're using C#, in Solution Explorer, expand the Properties node under the project,
and then open the Settings.settings file. If you're using Visual Basic, in Solution
Explorer, click Show All Files, expand the My Project node, and then open the
Settings.settings file.
2. In the Name column, enter connString.
3. In the Type list, select (Connection String).
4. In the Scope list, select Application.
5. In the Value column, enter your connection string (without any outside quotes), and then
save your changes.
Note

In a real application, you should store the connection string securely, as described in
Connection strings and configuration files.
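A minimal sketch of such a method follows; it assumes the connString setting created above, and the class name is illustrative:

C#

internal static class AppSettings
{
    /// <summary>
    /// Returns the connection string stored in Settings.settings,
    /// so no form has to hard-code it.
    /// </summary>
    internal static string GetConnectionString()
    {
        return Properties.Settings.Default.connString;
    }
}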
Write the code for the forms

This section contains brief overviews of what each form does. It also provides the code that
defines the underlying logic when a button on the form is clicked.
Navigation form

The Navigation form opens when you run the application. The Add an account button opens the
NewCustomer form. The Fill or cancel orders button opens the FillOrCancel form. The Exit
button closes the application.
Make the Navigation form the startup form

If you're using C#, in Solution Explorer, open Program.cs, and then change the
Application.Run line to this: Application.Run(new Navigation());

If you're using Visual Basic, in Solution Explorer, open the Properties window, select the
Application tab, and then select SimpleDataApp.Navigation in the Startup form list.
Create auto-generated event handlers

Double-click the three buttons on the Navigation form to create empty event handler methods.
Double-clicking the buttons also adds auto-generated code in the Designer code file that enables a
button click to raise an event.
Add code for the Navigation form logic

In the code page for the Navigation form, complete the method bodies for the three button click
event handlers as shown in the following code.
C#

/// <summary>
/// Opens the NewCustomer form.
/// </summary>
private void btnGoToAdd_Click(object sender, EventArgs e)
{
    Form frm = new NewCustomer();
    frm.Show();
}

/// <summary>
/// Opens the FillOrCancel form as a dialog box.
/// </summary>
private void btnGoToFillOrCancel_Click(object sender, EventArgs e)
{
Form frm = new FillOrCancel();
frm.ShowDialog();
}

/// <summary>
/// Closes the application (not just the Navigation form).
/// </summary>
private void btnExit_Click(object sender, EventArgs e)
{
this.Close();
}
NewCustomer form

When you enter a customer name and then select the Create Account button, the NewCustomer
form creates a customer account, and SQL Server returns an IDENTITY value as the new
customer ID. You can then place an order for the new account by specifying an amount and an
order date and selecting the Place Order button.
Create auto-generated event handlers

Create an empty Click event handler for each button on the NewCustomer form by double-
clicking on each of the four buttons. Double-clicking the buttons also adds auto-generated code
in the Designer code file that enables a button click to raise an event.
Add code for the NewCustomer form logic

To complete the NewCustomer form logic, follow these steps.

1. Bring the System.Data.SqlClient namespace into scope so that you don't have to
fully qualify the names of its members.

C#

using System.Data.SqlClient;

2. Add some variables and helper methods to the class as shown in the following code.

C#

// Storage for IDENTITY values returned from database.


private int parsedCustomerID;
private int orderID;

/// <summary>
/// Verifies that the customer name text box is not empty.
/// </summary>
private bool IsCustomerNameValid()
{
if (txtCustomerName.Text == "")
{
MessageBox.Show("Please enter a name.");
return false;
}
else
{
return true;
}
}

/// <summary>
/// Verifies that a customer ID and order amount have been provided.
/// </summary>
private bool IsOrderDataValid()
{
// Verify that CustomerID is present.
if (txtCustomerID.Text == "")
{
MessageBox.Show("Please create customer account before placing order.");
return false;
}
// Verify that Amount isn't 0.
else if ((numOrderAmount.Value < 1))
{
MessageBox.Show("Please specify an order amount.");
return false;
}
else
{
// Order can be submitted.
return true;
}
}

/// <summary>
/// Clears the form data.
/// </summary>
private void ClearForm()
{
txtCustomerName.Clear();
txtCustomerID.Clear();
dtpOrderDate.Value = DateTime.Now;
numOrderAmount.Value = 0;
this.parsedCustomerID = 0;
}
3. Complete the method bodies for the four button click event handlers as shown in the
following code.

C#

/// <summary>
/// Creates a new customer by calling the Sales.uspNewCustomer stored procedure.
/// </summary>
private void btnCreateAccount_Click(object sender, EventArgs e)
{
if (IsCustomerNameValid())
{
// Create the connection.
using (SqlConnection connection = new
SqlConnection(Properties.Settings.Default.connString))
{
// Create a SqlCommand, and identify it as a stored procedure.
using (SqlCommand sqlCommand = new
SqlCommand("Sales.uspNewCustomer", connection))
{
sqlCommand.CommandType = CommandType.StoredProcedure;

// Add input parameter for the stored procedure and specify what to use as its
value.
sqlCommand.Parameters.Add(new SqlParameter("@CustomerName",
SqlDbType.NVarChar, 40));
sqlCommand.Parameters["@CustomerName"].Value =
txtCustomerName.Text;

// Add the output parameter.


sqlCommand.Parameters.Add(new SqlParameter("@CustomerID",
SqlDbType.Int));
sqlCommand.Parameters["@CustomerID"].Direction =
ParameterDirection.Output;

try
{
connection.Open();

// Run the stored procedure.


sqlCommand.ExecuteNonQuery();

// Customer ID is an IDENTITY value from the database.


this.parsedCustomerID =
(int)sqlCommand.Parameters["@CustomerID"].Value;

// Put the Customer ID value into the read-only text box.


this.txtCustomerID.Text = Convert.ToString(parsedCustomerID);
}
catch
{
MessageBox.Show("Customer ID was not returned. Account could not be
created.");
}
finally
{
connection.Close();
}
}
}
}
}

/// <summary>
/// Calls the Sales.uspPlaceNewOrder stored procedure to place an order.
/// </summary>
private void btnPlaceOrder_Click(object sender, EventArgs e)
{
// Ensure the required input is present.
if (IsOrderDataValid())
{
// Create the connection.
using (SqlConnection connection = new
SqlConnection(Properties.Settings.Default.connString))
{
// Create SqlCommand and identify it as a stored procedure.
using (SqlCommand sqlCommand = new
SqlCommand("Sales.uspPlaceNewOrder", connection))
{
sqlCommand.CommandType = CommandType.StoredProcedure;

// Add the @CustomerID input parameter, which was obtained from


uspNewCustomer.
sqlCommand.Parameters.Add(new SqlParameter("@CustomerID",
SqlDbType.Int));
sqlCommand.Parameters["@CustomerID"].Value = this.parsedCustomerID;

// Add the @OrderDate input parameter.


sqlCommand.Parameters.Add(new SqlParameter("@OrderDate",
SqlDbType.DateTime, 8));
sqlCommand.Parameters["@OrderDate"].Value = dtpOrderDate.Value;

// Add the @Amount order amount input parameter.


sqlCommand.Parameters.Add(new SqlParameter("@Amount",
SqlDbType.Int));
sqlCommand.Parameters["@Amount"].Value = numOrderAmount.Value;

// Add the @Status order status input parameter.


// For a new order, the status is always O (open).
sqlCommand.Parameters.Add(new SqlParameter("@Status",
SqlDbType.Char, 1));
sqlCommand.Parameters["@Status"].Value = "O";

// Add the return value for the stored procedure, which is the order ID.
sqlCommand.Parameters.Add(new SqlParameter("@RC", SqlDbType.Int));
sqlCommand.Parameters["@RC"].Direction =
ParameterDirection.ReturnValue;

try
{
//Open connection.
connection.Open();

// Run the stored procedure.


sqlCommand.ExecuteNonQuery();

// Display the order number.


this.orderID = (int)sqlCommand.Parameters["@RC"].Value;
MessageBox.Show("Order number " + this.orderID + " has been
submitted.");
}
catch
{
MessageBox.Show("Order could not be placed.");
}
finally
{
connection.Close();
}
}
}
}
}

/// <summary>
/// Clears the form data so another new account can be created.
/// </summary>
private void btnAddAnotherAccount_Click(object sender, EventArgs e)
{
this.ClearForm();
}

/// <summary>
/// Closes the form/dialog box.
/// </summary>
private void btnAddFinish_Click(object sender, EventArgs e)
{
this.Close();
}
FillOrCancel form

The FillOrCancel form runs a query to return an order when you enter an order ID and then
click the Find Order button. The returned row appears in a read-only data grid. You can mark
the order as canceled (X) if you select the Cancel Order button, or you can mark the order as
filled (F) if you select the Fill Order button. If you select the Find Order button again, the
updated row appears.
Create auto-generated event handlers

Create empty Click event handlers for the four buttons on the FillOrCancel form by double-
clicking the buttons. Double-clicking the buttons also adds auto-generated code in the Designer
code file that enables a button click to raise an event.
Add code for the FillOrCancel form logic

To complete the FillOrCancel form logic, follow these steps.


1. Bring the following two namespaces into scope so that you don't have to fully qualify the
names of their members.

C#

using System.Data.SqlClient;
using System.Text.RegularExpressions;

2. Add a variable and helper method to the class as shown in the following code.

C#

// Storage for the order ID value.


private int parsedOrderID;

/// <summary>
/// Verifies that an order ID is present and contains valid characters.
/// </summary>
private bool IsOrderIDValid()
{
// Check for input in the Order ID text box.
if (txtOrderID.Text == "")
{
MessageBox.Show("Please specify the Order ID.");
return false;
}

// Check for characters other than integers.


else if (Regex.IsMatch(txtOrderID.Text, @"^\D*$"))
{
// Show message and clear input.
MessageBox.Show("Customer ID must contain only numbers.");
txtOrderID.Clear();
return false;
}
else
{
// Convert the text in the text box to an integer to send to the database.
parsedOrderID = Int32.Parse(txtOrderID.Text);
return true;
}
}
3. Complete the method bodies for the four button click event handlers as shown in the
following code.

C#
/// <summary>
/// Executes a t-SQL SELECT statement to obtain order data for a specified
/// order ID, then displays it in the DataGridView on the form.
/// </summary>
private void btnFindByOrderID_Click(object sender, EventArgs e)
{
if (IsOrderIDValid())
{
using (SqlConnection connection = new
SqlConnection(Properties.Settings.Default.connString))
{
// Define a t-SQL query string that has a parameter for orderID.
const string sql = "SELECT * FROM Sales.Orders WHERE orderID =
@orderID";

// Create a SqlCommand object.


using (SqlCommand sqlCommand = new SqlCommand(sql, connection))
{
// Define the @orderID parameter and set its value.
sqlCommand.Parameters.Add(new SqlParameter("@orderID",
SqlDbType.Int));
sqlCommand.Parameters["@orderID"].Value = parsedOrderID;

try
{
connection.Open();

// Run the query by calling ExecuteReader().


using (SqlDataReader dataReader = sqlCommand.ExecuteReader())
{
// Create a data table to hold the retrieved data.
DataTable dataTable = new DataTable();

// Load the data from SqlDataReader into the data table.
dataTable.Load(dataReader);

// Display the data from the data table in the data grid view.
this.dgvCustomerOrders.DataSource = dataTable;
// Close the SqlDataReader.
dataReader.Close();
}
}
catch
{
MessageBox.Show("The requested order could not be loaded into the
form.");
}
finally
{
// Close the connection.
connection.Close();
}
}
}
}
}

/// <summary>
/// Cancels an order by calling the Sales.uspCancelOrder
/// stored procedure on the database.
/// </summary>
private void btnCancelOrder_Click(object sender, EventArgs e)
{
if (IsOrderIDValid())
{
// Create the connection.
using (SqlConnection connection = new
SqlConnection(Properties.Settings.Default.connString))
{
// Create the SqlCommand object and identify it as a stored procedure.
using (SqlCommand sqlCommand = new
SqlCommand("Sales.uspCancelOrder", connection))
{
sqlCommand.CommandType = CommandType.StoredProcedure;

// Add the order ID input parameter for the stored procedure.


sqlCommand.Parameters.Add(new SqlParameter("@orderID",
SqlDbType.Int));
sqlCommand.Parameters["@orderID"].Value = parsedOrderID;

try
{
// Open the connection.
connection.Open();
// Run the command to execute the stored procedure.
sqlCommand.ExecuteNonQuery();
}
catch
{
MessageBox.Show("The cancel operation was not completed.");
}
finally
{
// Close connection.
connection.Close();
}
}
}
}
}

/// <summary>
/// Fills an order by calling the Sales.uspFillOrder stored
/// procedure on the database.
/// </summary>
private void btnFillOrder_Click(object sender, EventArgs e)
{
if (IsOrderIDValid())
{
// Create the connection.
using (SqlConnection connection = new
SqlConnection(Properties.Settings.Default.connString))
{
// Create command and identify it as a stored procedure.
using (SqlCommand sqlCommand = new SqlCommand("Sales.uspFillOrder",
connection))
{
sqlCommand.CommandType = CommandType.StoredProcedure;

// Add the order ID input parameter for the stored procedure.


sqlCommand.Parameters.Add(new SqlParameter("@orderID",
SqlDbType.Int));
sqlCommand.Parameters["@orderID"].Value = parsedOrderID;

// Add the filled date input parameter for the stored procedure.
sqlCommand.Parameters.Add(new SqlParameter("@FilledDate",
SqlDbType.DateTime, 8));
sqlCommand.Parameters["@FilledDate"].Value = dtpFillDate.Value;

try
{
connection.Open();

// Execute the stored procedure.


sqlCommand.ExecuteNonQuery();
}
catch
{
MessageBox.Show("The fill operation was not completed.");
}
finally
{
// Close the connection.
connection.Close();
}
}
}
}
}

/// <summary>
/// Closes the form.
/// </summary>
private void btnFinishUpdates_Click(object sender, EventArgs e)
{
this.Close();
}
Q.8) Explain step by step how to create a console application using ADO.NET? Consider any
example. (7M)(W-16)

using System;

class Program
{
    static void Main(string[] args)
    {
        int num1;
        int num2;
        string operand;
        float answer;

        Console.Write("Please enter the first integer: ");
        num1 = Convert.ToInt32(Console.ReadLine());

        Console.Write("Please enter an operand (+, -, /, *): ");
        operand = Console.ReadLine();

        Console.Write("Please enter the second integer: ");
        num2 = Convert.ToInt32(Console.ReadLine());

        switch (operand)
        {
            case "-":
                answer = num1 - num2;
                break;
            case "+":
                answer = num1 + num2;
                break;
            case "/":
                // Cast to float so the division isn't truncated to an integer.
                answer = (float)num1 / num2;
                break;
            case "*":
                answer = num1 * num2;
                break;
            default:
                answer = 0;
                break;
        }
        Console.WriteLine(num1.ToString() + " " + operand + " " + num2.ToString() + " = " + answer.ToString());
        Console.ReadLine();
    }
}
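The code above covers the console mechanics only; since the question also asks about ADO.NET, a minimal sketch of querying a database from a console application follows. The connection string, table, and column names are illustrative and assume the Sales database created in Q.7:

C#

using System;
using System.Data.SqlClient;

class AdoConsoleDemo
{
    static void Main()
    {
        // Illustrative connection string; substitute your own server and database.
        string connString =
            @"Data Source=(localdb)\mssqllocaldb;Initial Catalog=Sales;Integrated Security=True";

        using (SqlConnection connection = new SqlConnection(connString))
        using (SqlCommand command = new SqlCommand(
            "SELECT CustomerID, CustomerName FROM Sales.Customer", connection))
        {
            connection.Open();

            // Read the rows one by one and print them to the console.
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader["CustomerID"], reader["CustomerName"]);
                }
            }
        }
        Console.ReadLine();
    }
}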
Q.9) Write a program in C# to design a calculator as a console-based application. (6M)(W-18)

using System;

class Program
{
    static void Main(string[] args)
    {
        int num1;
        int num2;
        string operand;
        float answer;

        Console.Write("Please enter the first integer: ");
        num1 = Convert.ToInt32(Console.ReadLine());

        Console.Write("Please enter an operand (+, -, /, *): ");
        operand = Console.ReadLine();

        Console.Write("Please enter the second integer: ");
        num2 = Convert.ToInt32(Console.ReadLine());

        switch (operand)
        {
            case "-":
                answer = num1 - num2;
                break;
            case "+":
                answer = num1 + num2;
                break;
            case "/":
                // Cast to float so the division isn't truncated to an integer.
                answer = (float)num1 / num2;
                break;
            case "*":
                answer = num1 * num2;
                break;
            default:
                answer = 0;
                break;
        }
        Console.WriteLine(num1.ToString() + " " + operand + " " + num2.ToString() + " = " + answer.ToString());
        Console.ReadLine();
    }
}

Tulsiramji Gaikwad-Patil College of Engineering and Technology
Wardha Road, Nagpur-441 108
NAAC Accredited

Department of Computer Science & Engineering

Semester: B.E. Eighth Semester (CBS)
Subject: Clustering & Cloud Computing
Unit-6
Solution

Q.1) How is a cloud application deployed on to the Windows Azure cloud? (7M)(S-17)
Ans: Deploying an Application on the Windows Azure Portal

To deploy an application on a Microsoft Data Center you need to have a Windows Azure account. Windows
Azure is a paid service; however, you can start with a free trial. To register for a free account, follow the
steps below.
Register for a Free Account
Step 1
You will be asked to log in using a Live ID. Provide your Live ID and log in. If you don't have a Live ID,
create one to work with the Windows Azure Free Trial.
Next, proceed through the screens to create the free account.

After successful registration you will get a registration-success message. Then go back
to Visual Studio, right-click the Windows Azure project, and select Package.
Next, choose Service Configuration as Cloud and Build Configuration as Release, and click Package.

After successful packaging you can see the Service Package file and the Cloud Service Configuration file in the
folder explorer. We need to upload these two files to deploy the application on the Microsoft Data Center.

You will be navigated to the Live login page. Provide the same Live ID and password you used to create the Free
Trial. After successful authentication you will be navigated to the Management Portal.

To deploy on a Microsoft Data Center, first you need to create a Hosted Service. To create a Hosted Service,
from the left tab select Hosted Service, Storage, Account and CDN.
At the top you will get three options; their purpose is clear from their names.

Click on New Hosted Service to create a hosted service. Provide the information below to create the hosted
service.
Choose the subscription name. It should be the same subscription you registered in the previous step.

Enter the name of the service.

Enter the URL of the service. This URL needs to be unique; you will access the application at this URL. So
this application will be available at debugmodemyfirstservice.cloudapp.net.
Choose a region from the drop-down. (A later post gets into the details of affinity groups.) In the
Deployment options, choose Deploy to production environment.
Give a deployment name.
Next, to upload the package, select Browse Locally. Navigate to the folder
YourFolderName\MyFirstWindowsAzureApplication\MyFirstWindowsAzureApplication\bin\Release\app.publish
and choose the files.
For now, for simplicity, don't add any certificate; click OK to create a hosted service with the package of
the application created in the last step. You will get a warning message. Click Yes on the warning and proceed.

Now wait 5 to 10 minutes for your application to become ready to use. Once the service is ready, you
can see the Ready status for the Web Role.
Once the status is Ready, you have successfully created and deployed your first web application on Windows Azure.

========================================================================

Q.2) What is provisioning in cloud computing? How can a virtual machine be provisioned in the Azure cloud?
(6M)(S-17)
Ans:
Provisioning in cloud computing is the allocation of a provider's resources and services (compute, storage,
network) to a customer, typically on demand. Using System Center Virtual Machine Manager (VMM), virtual
machines can be provisioned as follows.

1. Provisioning VMM

VMM 2012 R2 must be deployed in order to provision VMs.


VMM requirements can be found at this link.
VMM step by step deployment guide can be found here.

2. Configure VMM with Hosts

Configure Host Groups as per your resources and Add Hosts to the appropriate host groups. Information
can be found here.

3. Configure VMM Networking


Deploy Logical Networks and IP Pools / Network Sites, Deploy VLANS / NVGRE where appropriate and
Deploy Virtual Networks. Information can be found at this link.
4. Configure VMM Templates

Configure Hardware Profiles, Configure Guest OS Profile and Deploy VMM Templates.

5. Configure SPF

Configure the Service Account, deploy SPF, and ensure the SPF account is a VMM admin and a member of all
the appropriate groups.

Q.3) Explain how Windows Azure maximizes data availability and minimizes security risks. (7M)
Ans:
Downtime is a fundamental metric for measuring productivity in a data warehouse, but this number does
little to help you understand the basis of a system's availability. Focusing too much on the end-of-month
number can perpetuate a bias toward a reactive view of availability. Root-cause analysis is important for
preventing specific past problems from recurring, but it doesn't prevent new issues from causing future
downtime.
Minimize risk, maximize availability

Potentially more dangerous is the false sense of security encouraged by historically high availability. Even
perfect availability in the past provides no assurance that you are prepared to handle the risks that may lie
just ahead or to keep pace with the changing needs of your system and users.
So how can you shift your perspective to a progressive view of providing for availability needs on a
continual basis? The answer is availability management—a proactive approach to availability that applies
risk management concepts to minimize the chance of downtime and prolonged outages. Teradata
recommends four steps for successful availability management.
#1: Understand the risks

Effective availability management begins with understanding the nature of risk. "There are a variety of
occurrences that negatively impact the site, system or data, which can reduce the availability experienced
by end users. We refer to these as risk events," explains Kevin Lewis, director of Teradata Customer
Services Offer Management.
The features of effective availability management:

> Improves system productivity and quality of support

> Encourages partnering to meet strategic and tactical availability needs

> Recognizes all sources and impacts of availability risk

> Applies a simple, holistic approach to risk mitigation

> Facilitates communication between operations and management

> Includes benchmarking using an objective, best-practice assessment


> Establishes a clear improvement roadmap to meet evolving needs

The more vulnerable a system is to risk events, the greater the potential for extended outages or reduced
availability and, consequently, lost business productivity.
Data warehousing risk events can range from the barely detectable to the inconvenient to the catastrophic.
Risk events can be sorted into three familiar categories of downtime based on their type of impact:
Planned downtime is a scheduled system outage, usually during low-usage or non-critical periods
(e.g., upgrades/updates, planned maintenance, testing).
Unplanned downtime is an unanticipated loss of system, data or application access (e.g., utility
outages, human error, planned downtime overruns).

Degraded downtime is "low quality" availability in which the system is available, but performance
is slow and inefficient (e.g., poor workload management, capacity exhaustion).

Although unplanned downtime is usually the most painful, companies have a growing need to reduce
degraded and planned downtime as well. Given the variety of risk causes and impacts, follow the next step
to reduce your system's vulnerability to risk events.
#2: Assess and strategize

Although the occurrences of risk events to the Teradata system are often uncontrollable, applying a good
availability management framework mitigates their impact. To meet strategic and tactical availability
objectives, Teradata advocates a holistic system of seven attributes to address all areas that affect system
availability. These availability management attributes are the tangible real-world IT assets, tools, people
and processes that can be budgeted, assigned, administered and supervised to support system availability.
They are:
Environment. The equipment layout and physical conditions within the data center that houses the
infrastructure, including temperature, airflow, power quality and data center cleanliness

Infrastructure. The IT assets, the network architecture and configuration connecting them, and their
compatibility with one another. These assets include the production system; dual systems; backup, archive
and restore (BAR) hardware and software; test and development systems; and disaster recovery systems
Technology. The design of each system, including hardware and software versions, enabled utilities
and tools, and remote connectivity

Support level. Maintenance coverage hours, response times, proactive processes, support tools
employed and the accompanying availability reports

Operations. Operational procedures and support personnel used in the daily administration of the
system and database

Data protection. Processes and product features that minimize or eliminate data loss, corruption and
theft; this includes system security, fallback, hot standby nodes, hot standby disks and large cliques

Recoverability. Strategies and processes to regularly back up and archive data and to restore data
and functionality in case of data loss or disaster
As evident in this list of attributes, supporting availability goes beyond maintenance service level
agreements and downtime reporting. These attributes incorporate multiple technologies, service providers,
support functions and management areas. This span necessitates an active partnership between Teradata
and the customer to ensure all areas are adequately addressed. In addition to being
comprehensive, these attributes provide the benefit of a common language for communicating, identifying
and addressing availability management needs.
Answer the sample best-practice questions for each attribute. A "no" response to any yes/no question
represents an availability management gap. Other questions will help you assess your system's overall
availability management.
Dan Odette, Teradata Availability Center of Expertise leader, explains: "Discussing these attributes with
customers makes it easier for them to understand their system availability gaps and plan an improvement
roadmap. This approach helps customers who are unfamiliar with the technical details of the Teradata
system or IT support best practices such as the Information Technology Infrastructure Library [ITIL]."
#3: Weigh the odds

To reduce the risk of downtime and/or prolonged outages, your availability management capabilities must
be sufficient to meet your usage needs. (See figure 1, left.)

According to Chris Bowman, Teradata Technical Solutions architect, "Teradata encourages customers to
obtain a more holistic view of their system availability and take appropriate action based on benchmarking
across all of the attributes." In order to help customers accomplish this, Teradata offers an Availability
Assessment service. "We apply Teradata technological and ITIL service management best practices to
examine the people, processes, tools and architectural solutions across the seven attributes to identify
system availability risks," Bowman says.

Collect. Data is collected across all attributes, including environmental measurements, current
hardware/software configurations, historic incident data and best-practice conformity by all personnel that
support and administer the Teradata system. This includes customer management and staff, Teradata
support services, and possibly other external service providers. Much of this data can be collected remotely
by Teradata, though an assigned liaison within the customer organization is requested to facilitate access to
the system and coordinate any personnel interviews.
Analyze. Data is consolidated and analyzed by an availability management expert who has a strong
understanding of the technical details within each attribute and their collective impact on availability.
During this stage, the goal is to uncover gaps that may not be apparent because of a lack of best-practice
knowledge or organizational "silos." Silos are characterized by a lack of cross- functional coordination due
to separate decision-making hierarchies or competing organizational objectives.
Recommend. The key deliverable of an assessment is a clear list of practical recommendations for
availability management improvements. To have the maximum positive impact, recommendations must
include:
> An unbiased, expert perspective of the customer's specific availability management situation

> Mitigation suggestions to prevent the recurrence of historical outages

> Quantified benchmarking across all attributes to pinpoint the areas of greatest vulnerability to risk events

> Corrective actions provided for every best-practice shortfall

> Operations-level improvement actions with technical details to facilitate tactical implementation

> Management-level guidance in the form of a less technical, executive scorecard to facilitate decision
making and budget prioritization

Teradata collects data across all attributes and analyzes the current effectiveness of your availability
management. The result is quantified benchmarking and actionable recommendations.
#4: Plan the next move

The recommendations from the assessment provide the basis for an availability management improvement
roadmap. "Cross-functional participation by both operations and management levels is crucial for
maximizing the knowledge transfer of the assessment findings and ensuring follow-through," Odette says.
Typically, not all of the recommendations can be implemented at once because of resource and budget
constraints, so it's common to take a phased approach. Priorities are based on the assessment benchmarks,
the customer's business objectives, the planned evolution for use of the Teradata system and cost-to-benefit
considerations. Many improvements can be effectively cost-free to implement but still have a big impact.
For example, adjusting equipment layout can improve airflow, which in turn can reduce heat-related
equipment failures. Or, having the system/database administrators leverage the full capabilities of tools
already built into Teradata can prevent or reduce outages. Lewis adds, "More significant improvements
such as a disaster recovery capability or dual active systems may require greater investment and effort, but
incremental steps can be planned and enacted over time to ensure availability management keeps pace with
the customer's evolving needs." An effective availability management strategy requires a partnership
between you, as the customer, and Teradata. Together, we can apply a comprehensive framework of best
practices to proactively manage risk and provide for your ongoing availability needs.
Q.4) Give the steps to create a virtual machine. (7M)(W-17)

Azure storage is one of the cloud computing PaaS (Platform as a Service) services provided by the
Microsoft Azure team. The storage option is one of the best computing services provided by Azure,
as it supports both legacy application development using Azure SQL and modern application
development using Azure No-SQL table storage. Storage in Azure can be broadly classified into
two categories based on the type of data that we are going to save:
1. Relational data storage
2. Non-relational data storage
Relational Data Storage:
Relational data can be saved in the cloud using Azure SQL storage.
Azure SQL Storage:

This kind of storage option is used when we want to store relational data in the cloud.
This is one of the PAAS offerings from Azure built based on the SQL server relational database
technology. Quick scalability and Pay as you Use options of SQL azure encourages an
organization to store their relational data into the cloud. This type of storage option
enables the developers/organizations to migrate on-premise SQL data to Azure SQL and vice
versa for greater availability, reliability, durability, NoSqlscalability and data protection.
Non-Relational Data Storage:

This kind of cloud storage option enables users to store their documents, media files, and
NoSQL data in the cloud, accessed using REST APIs. In order to work with this
kind of data, we should have a storage account in Azure. The storage account structure can be shown
below. Storage account wraps all the storage options provided by the azure like Blob storage,
Queue storage, file storage, NoSQL storage. Access keys are used to authenticate storage account.
Azure provides four types of storage options based on the data type.
1. Blob storage
2. Queue storage
3. Table storage
4. File storage
Blob Storage is used to store unstructured data such as text or binary data that can be accessed
using HTTP or HTTPS from anywhere in the world.
Common usage of blob storage are as follows:
-For streaming audio and video
-For serving documents and images directly to the browser
-For storing data for big data analysis
-For storing data for backup, restore and disaster recovery.
Queue storage is used to transfer a large amount of data between cloud apps asynchronously. This
kind of storage is mainly used to transfer the data between apps for asynchronous communication
between cloud components. File storage is used when we want to store and share files using the SMB
protocol. With Azure File storage, applications running in Azure virtual machines or cloud
services can mount a file share in the cloud, just as a desktop application mounts a typical SMB
share. Azure Table storage is not Azure SQL relational data storage; Table storage is
Microsoft's No-SQL database, which stores data as key-value pairs. This kind of storage is used to
store a large amount of data for future analysis with Hadoop
support. The following advantages make Azure storage popular in the market.
Scalability:
We can start with a small blob and increase its size on demand, without limit, without affecting the
production environment.
Secure and Reliable:
Security for Azure storage data can be provided in two ways: by using storage account access
keys, and by server-level and client-level encryption.
Durability & High availability:

Azure storage uses replication in order to give high availability (99.99% uptime) and durability.
Replication maintains different copies of your data in different locations or regions, based on the
replication option [Locally redundant storage, Zone-redundant storage, Geo-redundant storage,
Read-access geo-redundant storage] chosen at the time of creating a storage account.
How is cloud storage different from an on-premise data center?
Simplicity, scalability, maintenance and accessibility of data are the features we expect from any
public cloud storage; these are the main assets of Azure cloud storage, and they are very difficult
to get in on-premise datacenters.
Simplicity: We can easily create and set up storage objects in Azure.
Scalability: Storage capacity is highly scalable and elastic.
Accessibility: Data in Azure storage is easily searchable and accessible through the latest web
technologies like HTTP and REST APIs. Multiprotocol (HTTP, TCP, etc.) data access for modern
applications makes Azure stand out from the crowd.
Maintenance and backup of data: You need not bother about datacenter maintenance or data backup;
everything is taken care of by the Azure team. Azure's replication concept maintains different
copies of data at different geo locations, so we can protect our data even if a natural disaster
occurs. High availability and disaster recovery are good features provided by Azure storage which
we cannot get in on-premise datacenters.

Q.5) Explain how to deploy an application using a Windows Azure subscription. (9M)(W-17)(W-16)
Deploying a Web App from PowerShell
To get started with PowerShell, refer to the 'PowerShell' chapter in the tutorial. In order to
deploy a website from PowerShell you will need the deployment package. You can get this from
your website developers, or, if you are into web deployment, you will know how to create a
deployment package yourself. In the following sections, first you will learn how to create a
deployment package in Visual Studio and then, using PowerShell cmdlets, you will deploy the
package on Azure.
Create a Deployment Package
Step 1 − Go to your website in Visual Studio.

Step 2 − Right-click on the name of the application in the solution explorer. Select ‘Publish’.

Step 3 − Create a new profile by selecting ‘New Profile’ from the dropdown. Enter the name of
the profile. There might be different options in dropdown depending on if the websites are
published before from the same computer.
Step 4 − On the next screen, choose ‘Web Deploy Package’ in Publish Method.

Step 5 − Choose a path to store the deployment package. Enter the name of the site and click Next.
Step 6 − On the next screen, leave the defaults on and select ‘publish’.

After it’s done, inside the folder in your chosen location, you will find a zip file which is what you
need during deployment.
Create a Website in Azure using PowerShell
Step 1 − Enter the following cmdlet to create a website, replacing the website name and location
with your own values. This command is going to create a website under the free subscription. You
can change the subscription after the website is created.

New-AzureWebsite -name "mydeploymentdemo" -location "East US"


If the cmdlet is successful, you will see all the information as shown in the above image. You can
see the URL of your website; in this example it is mydeploymentdemo.azurewebsites.net.
Step 2 − You can visit the URL to make sure everything has gone right.
Deploy Website using Deployment Package
Once the website is created in Azure, you just need to copy your website’s code. Create the zip
folder (deployment package) in your local computer.
Step 1 − Enter the following cmdlet to deploy your website.
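The cmdlet itself is missing from the original answer. A likely form, assuming the classic Azure PowerShell module's Publish-AzureWebsiteProject cmdlet and an illustrative package path, is:

Publish-AzureWebsiteProject -Name "mydeploymentdemo" -Package "C:\WebPackages\mydeploymentdemo.zip"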

In the above cmdlet, the name of the website just created is given, along with the path of the zip
file on the computer.
Step 2 − Go to your website’s URL. You can see the website as shown in the following image.

=================================================================

Q.6) Write about worker role & web role while configuring an application in windows
Azure.(5M)(W-17) (W-16)
In Azure, a Cloud Service Role is a collection of managed, load-balanced, Platform-as-a-Service
virtual machines that work together to perform common tasks. Cloud Service Roles are managed
by the Azure fabric controller and provide the ultimate combination of scalability, control, and
customization.

What is a Web Role?

Web Role is a Cloud Service role in Azure that is configured and customized to run web
applications developed on programming languages/technologies that are supported by
Internet Information Services (IIS), such as ASP.NET, PHP, Windows Communication
Foundation and Fast CGI.
What is a Worker Role?

A Worker Role is any role in Azure that runs application- and service-level tasks, which generally do not require IIS; in Worker Roles, IIS is not installed by default. They are mainly used to perform supporting background processes alongside Web Roles, such as automatically compressing uploaded images, running scripts when something changes in the database, fetching new messages from a queue and processing them, and more.

Differences between the Web and Worker Roles

The main difference between the two is that:
 a Web Role automatically deploys and hosts your app through IIS
 a Worker Role does not use IIS and runs your app standalone

Being deployed and delivered through the Azure Service Platform, both can be managed in the
same way and can be deployed on a same Azure Instance.

In most scenarios, Web Role and Worker Role instances work together and are often used by an
application simultaneously. For example, a web role instance might accept requests from users,
then pass them to a worker role instance for processing.
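To make that handoff concrete, a web role can enqueue work items into Azure Queue storage for a worker role to pick up. The following is only a sketch, assuming the classic Azure Storage PowerShell cmdlets; the account name, key and queue name are hypothetical:

# Hypothetical storage account name, key and queue name
$key = "<storage-account-key>"
$ctx = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey $key
$queue = Get-AzureStorageQueue -Name "image-tasks" -Context $ctx
# The web role enqueues a task; a worker role instance polls the same queue and processes it
$msg = New-Object -TypeName Microsoft.WindowsAzure.Storage.Queue.CloudQueueMessage -ArgumentList "resize:photo1.png"
$queue.CloudQueue.AddMessage($msg)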

Monitoring Web and Worker Roles

The Azure Portal provides basic monitoring for Azure Web and Worker Roles. Users who require advanced monitoring, auto-scaling or self-healing features for their cloud role instances can turn to third-party tools such as CloudMonix, which, along with features designed to keep Cloud Services stable, provides dashboards, historical reporting and various integrations with popular ITSM and other IT tools.

Q.7) Explain types of storage in windows azure. (7M)(w-16)(W-18)

Azure Storage Types

With an Azure Storage account, you can choose from two kinds of storage services: Standard Storage, which includes the Blob, Table, Queue and File storage types, and Premium Storage, which provides SSD-backed Azure VM disks.

Standard Storage account

With a Standard Storage account, a user gets access to Blob Storage, Table Storage, File Storage and Queue Storage. Let’s explain each of these a bit further.

Azure Blob Storage

Blob Storage is storage for unstructured data such as pictures, videos, music files, documents, raw data and log data, along with their metadata. Blobs are stored in a directory-like structure called a “container”. If you are familiar with AWS S3, containers work much the same way as S3 buckets. You can store any number of blob files up to a total size of 500 TB per account and, like S3, you can also apply security policies. Blob storage can also be used for data or device backup.

The Blob Storage service comes with three types of blobs: block blobs, append blobs and page blobs. Block blobs suit documents, image files and video storage. Append blobs are similar to block blobs but are optimized for append operations such as logging. Page blobs are optimized for frequent random read-write operations, which is why Azure VMs use them to store OS and data disks.

To access a blob from storage, the URI should be:

http://<storage-account-name>.blob.core.windows.net/<container-name>/<blob-name>

For example, to access a movie called RIO in the bluesky container of an account called carlos, request:

http://carlos.blob.core.windows.net/bluesky/RIO.avi

Note that container names are always in lowercase.
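As a quick illustration, and assuming the bluesky container permits public (anonymous) read access, the blob from the example could be downloaded directly over HTTP from PowerShell:

# Downloads the example blob; works only if the container allows anonymous read
Invoke-WebRequest -Uri "http://carlos.blob.core.windows.net/bluesky/RIO.avi" -OutFile "RIO.avi"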

Azure Table Storage


Table storage, as the name indicates, is preferred for tabular data, which is ideal for key-value
NoSQL data storage. Table Storage is massively scalable and extremely easy to use. Like other
NoSQL data stores, it is schema-less and accessed via a REST API. A query to table storage might
look like this:

http://<storage account>.table.core.windows.net/<table>
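For instance, such a query could be issued from PowerShell. This is only a sketch: the account and table names are hypothetical, and a pre-generated Shared Access Signature is assumed, since Table Storage requests must be authenticated:

# $sas is a hypothetical, pre-generated SAS token (a query string beginning with '?')
$sas = "<sas-token>"
Invoke-RestMethod -Uri "https://myaccount.table.core.windows.net/Customers$sas" -Headers @{ Accept = "application/json;odata=nometadata" }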
Azure File Storage

Azure File Storage is meant primarily for legacy applications. Azure VMs and services share their data via mounted file shares, while on-premises applications access the files using the File Service REST API. Azure File Storage offers file shares in the cloud using the standard SMB protocol and supports both SMB 3.0 and SMB 2.1.
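As an illustration, a file share can be mounted on a Windows machine over SMB; the account name, share name and key below are placeholders:

# Maps drive Z: to a hypothetical share; the storage account key is passed as the credential
net use Z: \\myaccount.file.core.windows.net\myshare /u:AZURE\myaccount <storage-account-key>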

Azure Queue Storage

The Queue Storage service is used to exchange messages between components either in the cloud or on-premises (compare to Amazon’s SQS). You can store large numbers of messages to be shared between independent components of applications and communicated asynchronously via HTTP or HTTPS. Typical use cases of Queue Storage include processing backlog messages or exchanging messages between Azure Web Roles and Worker Roles.
A request to retrieve messages from Queue Storage might look like this:
http://<account>.queue.core.windows.net/<queue-name>/messages
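As a sketch, such a request could be made from PowerShell; the account and queue names are hypothetical, and a pre-generated SAS token is assumed for authentication:

# GET retrieves (and temporarily hides) the next message from the hypothetical queue
$sas = "<sas-token>"
Invoke-RestMethod -Uri "https://myaccount.queue.core.windows.net/image-tasks/messages$sas" -Method Get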
Premium Storage account:

The Azure Premium Storage service is the most recent storage offering from Microsoft, in which data is stored on Solid State Drives (SSDs) for better IO and throughput. Premium Storage supports only page blobs.
==============================================================

Q.8) Explain the complete Azure life cycle in detail.(6M)(W-18)

The development lifecycle of software that uses the Azure platform mainly follows two processes:

Application Development

During the application development stage, the code for Azure applications is most commonly built locally on a developer’s machine. Microsoft has also added a newer service to Azure App Service named Azure Functions. Functions are a representation of ‘serverless’ computing and allow developers to build application code directly through the Azure portal using references to a number of different Azure services.
The application development process includes two phases: 1) Construct + Test and 2) Deploy + Monitor.

Construct & Test
In the development and testing phase, a Windows Azure application is built in the Visual Studio IDE (2010 or above). Developers working on non-Microsoft platforms who want to start using Azure services can certainly do so with their existing development tools: community-built Eclipse plugins and SDKs for Java, PHP or Ruby are available and make this possible.

Visual Studio Code is a tool created as part of Microsoft’s efforts to better serve developers and their need for lighter yet powerful, highly configurable tools. This source code editor is available for Windows, Mac and Linux. It comes with built-in support for JavaScript, TypeScript and Node.js, and it has a rich ecosystem of extensions and runtimes for other languages such as C++, C#, Python, PHP and Go.

That said, Visual Studio provides developers with the best development platform to build
Windows Azure applications or consume Azure services.

Visual Studio and the Azure SDK provide the ability to create and deploy project infrastructure
and code to Azure directly from the IDE. A developer can define the web host, website and
database for an app and deploy them along with the code without ever leaving Visual Studio.

Microsoft also provides a specialized Azure Resource Group deployment project template in Visual Studio that supplies all the needed resources to make a deployment in a single, repeatable operation. Azure Resource Group projects work with preconfigured and customized JSON templates, which contain all the information needed for the resources to be deployed on Azure; a minimal deployment sketch follows this paragraph. In most scenarios, where multiple developers or development teams work simultaneously on the same Azure solution, configuration management is an essential part of the development lifecycle.
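As a sketch of how such a template might be deployed from PowerShell, assuming the AzureRM module and hypothetical resource group and template file names:

# Creates a hypothetical resource group and deploys a JSON template into it
New-AzureRmResourceGroup -Name "MyAppResources" -Location "East US"
New-AzureRmResourceGroupDeployment -ResourceGroupName "MyAppResources" -TemplateFile ".\azuredeploy.json"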
Q.9) Write down the steps involved in deployment of an application to windows Azure
cloud.(7M)(W-18)

Downtime is a fundamental metric for measuring productivity in a data warehouse, but this number does little to help you understand the basis of a system's availability. Focusing too much on the end-of-month number can perpetuate a bias toward a reactive view of availability. Root-cause analysis is important for preventing specific past problems from recurring, but it doesn't prevent new issues from causing future downtime.
Minimize risk, maximize availability

Potentially more dangerous is the false sense of security encouraged by historically high availability. Even
perfect availability in the past provides no assurance that you are prepared to handle the risks that may lie
just ahead or to keep pace with the changing needs of your system and users.
So how can you shift your perspective to a progressive view of providing for availability needs on a
continual basis? The answer is availability management—a proactive approach to availability that applies
risk management concepts to minimize the chance of downtime and prolonged outages. Teradata
recommends four steps for successful availability management.
#1: Understand the risks

Effective availability management begins with understanding the nature of risk. "There are a variety of
occurrences that negatively impact the site, system or data, which can reduce the availability experienced
by end users. We refer to these as risk events," explains Kevin Lewis, director of Teradata Customer
Services Offer Management.
The features of effective availability management:

> Improves system productivity and quality of support
> Encourages partnering to meet strategic and tactical availability needs
> Recognizes all sources and impacts of availability risk
> Applies a simple, holistic approach to risk mitigation
> Facilitates communication between operations and management
> Includes benchmarking using an objective, best-practice assessment
> Establishes a clear improvement roadmap to meet evolving needs

The more vulnerable a system is to risk events, the greater the potential for extended outages or reduced
availability and, consequently, lost business productivity.

Data warehousing risk events can range from the barely detectable to the inconvenient to the catastrophic.
Risk events can be sorted into three familiar categories of downtime based on their type of impact:
Planned downtime is a scheduled system outage, usually during low-usage or non-critical
periods (e.g., upgrades/updates, planned maintenance, testing).
Unplanned downtime is an unanticipated loss of system, data or application access (e.g.,
utility outages, human error, planned downtime overruns).

Degraded downtime is "low quality" availability in which the system is available, but performance
is slow and inefficient (e.g., poor workload management, capacity exhaustion).
Although unplanned downtime is usually the most painful, companies have a growing need to reduce
degraded and planned downtime as well. Given the variety of risk causes and impacts, follow the next step
to reduce your system's vulnerability to risk events.
#2: Assess and strategize

Although the occurrences of risk events to the Teradata system are often uncontrollable, applying a good
availability management framework mitigates their impact. To meet strategic and tactical availability
objectives, Teradata advocates a holistic system of seven attributes to address all areas that affect system
availability. These availability management attributes are the tangible real-world IT assets, tools, people
and processes that can be budgeted, assigned, administered and supervised to support system availability.
They are:
Environment. The equipment layout and physical conditions within the data center that houses the
infrastructure, including temperature, airflow, power quality and data center cleanliness
Infrastructure. The IT assets, the network architecture and configuration connecting them, and their
compatibility with one another. These assets include the production system; dual systems; backup, archive
and restore (BAR) hardware and software; test and development systems; and disaster recovery systems
Technology. The design of each system, including hardware and software versions, enabled utilities
and tools, and remote connectivity
Support level. Maintenance coverage hours, response times, proactive processes, support tools
employed and the accompanying availability reports
Operations. Operational procedures and support personnel used in the daily administration of the
system and database
Data protection. Processes and product features that minimize or eliminate data loss, corruption and
theft; this includes system security, fallback, hot standby nodes, hot standby disks and large cliques
Recoverability. Strategies and processes to regularly back up and archive data and to restore data
and functionality in case of data loss or disaster

As evident in this list of attributes, supporting availability goes beyond maintenance service level
agreements and downtime reporting. These attributes incorporate multiple technologies, service providers,
support functions and management areas. This span necessitates an active partnership between Teradata
and the customer to ensure all areas are adequately addressed. In addition to being comprehensive, these
attributes provide the benefit of a common language for communicating, identifying and addressing
availability management needs.
Answering sample best-practice questions for each attribute helps reveal gaps: a "no" response to any yes/no question represents an availability management gap, while broader questions help you assess your system's overall availability management.
Dan Odette, Teradata Availability Center of Expertise leader, explains: "Discussing these attributes with
customers makes it easier for them to understand their system availability gaps and plan an improvement
roadmap. This approach helps customers who are unfamiliar with the technical details of the Teradata
system or IT support best practices such as the Information Technology Infrastructure Library [ITIL]."
#3: Weigh the odds

To reduce the risk of downtime and/or prolonged outages, your availability management capabilities must be sufficient to meet your usage needs.
According to Chris Bowman, Teradata Technical Solutions architect, "Teradata encourages customers to
obtain a more holistic view of their system availability and take appropriate action based on benchmarking
across all of the attributes." In order to help customers accomplish this, Teradata offers an Availability
Assessment service. "We apply Teradata technological and ITIL service management best practices to
examine the people, processes, tools and architectural solutions across the seven attributes to identify
system availability risks," Bowman says.
Collect. Data is collected across all attributes, including environmental measurements, current
hardware/software configurations, historic incident data and best-practice conformity by all personnel that
support and administer the Teradata system. This includes customer management and staff, Teradata
support services, and possibly other external service providers. Much of this data can be collected remotely
by Teradata, though an assigned liaison within the customer organization is requested to facilitate access to
the system and coordinate any personnel interviews.
Analyze. Data is consolidated and analyzed by an availability management expert who has a strong
understanding of the technical details within each attribute and their collective impact on availability.
During this stage, the goal is to uncover gaps that may not be apparent because of a lack of best-practice
knowledge or organizational "silos." Silos are characterized by a lack of cross-functional coordination due to separate decision-making hierarchies or competing organizational objectives.
Recommend. The key deliverable of an assessment is a clear list of practical recommendations for
availability management improvements. To have the maximum positive impact, recommendations must
include:
An unbiased, expert perspective of the customer's specific availability management situation
Mitigation suggestions to prevent the recurrence of historical outages
Quantified benchmarking across all attributes to pinpoint the areas of greatest vulnerability to risk events
Corrective actions provided for every best-practice shortfall

Operations-level improvement actions with technical details to facilitate tactical implementation

Management-level guidance in the form of a less technical, executive scorecard to facilitate decision
making and budget prioritization

Teradata collects data across all attributes and analyzes the current effectiveness of your availability
management. The result is quantified benchmarking and actionable recommendations.
#4: Plan the next move

The recommendations from the assessment provide the basis for an availability management improvement
roadmap.
"Cross-functional participation by both operations and management levels is crucial for maximizing the
knowledge transfer of the assessment findings and ensuring follow-through," Odette says.
Typically, not all of the recommendations can be implemented at once because of resource and budget
constraints, so it's common to take a phased approach. Priorities are based on the assessment benchmarks,
the customer's business objectives, the planned evolution for use of the Teradata system and cost-to-benefit
considerations.
Many improvements can be effectively cost-free to implement but still have a big impact. For example,
adjusting equipment layout can improve airflow, which in turn can reduce heat-related equipment failures.
Or, having the system/database administrators leverage the full capabilities of tools already built into
Teradata can prevent or reduce outages. Lewis adds, "More significant improvements such as a disaster
recovery capability or dual active systems may require greater investment and effort, but incremental steps
can be planned and enacted over time to ensure availability management keeps pace with the customer's
evolving needs."
An effective availability management strategy requires a partnership between you, as the customer, and
Teradata. Together, we can apply a comprehensive framework of best practices to proactively manage risk
and provide for your ongoing availability needs.

Q.10) What are the steps for creating a simple cloud application using Azure. Explain
with the help of an example.(6M)(W-18)

1. Installation of Windows Azure SDK
2. Developing First Windows Azure Web Application
3. Deploying application locally in Development Storage Fabric
4. Registration for free Windows Azure Trial
5. Deployment of the Application in Microsoft Data Center

I will start fresh with the installation of the Windows Azure SDK and conclude this post with the deployment of a simple application to a Windows Azure Hosted Service. I am not going to create a complex application, since the purpose of this post is to walk through all the steps from installation, development and debugging to deployment. In further posts we will get into more complex applications. Proceed through the rest of the post to create your first application for Windows Azure.