
UNIT 1 - FUNDAMENTAL CLOUD COMPUTING AND VIRTUALIZATION

3.1. CLOUD COMPUTING: ORIGIN AND INFLUENCES


The history of cloud computing covers the history of client-server computing, distributed computing, and cloud computing itself.
Before cloud computing came into existence, the client-server architecture was used, where all the data and control resided on the server side. If a single user wanted to access some data or run a program, the user first had to connect to the server and only then gained the appropriate access, after which work could proceed. This model had many disadvantages.
After client-server computing, distributed computing came into existence. In this type of computing all computers are networked together so that users can share their resources when needed. It also has certain limitations, so in order to remove the limitations faced in distributed systems, cloud computing emerged.
The concepts of cloud computing grew out of these earlier forms of computing and were later implemented.
Around 1961, John McCarthy suggested in a speech at MIT that computing could be sold like a utility, just like water or electricity. It was a brilliant idea, but like many brilliant ideas it was ahead of its time: for the next few decades, despite interest in the model, the technology simply was not ready for it.
In time, however, the technology caught up with the idea:
In 1999, Salesforce.com started delivering applications to users through a simple website. The applications were delivered to enterprises over the Internet, and in this way the dream of computing sold as a utility came true.
In 2002, Amazon started Amazon Web Services, providing services such as storage, computation and even human intelligence. However, only with the launch of the Elastic Compute Cloud in 2006 did a truly commercial service open to everybody exist.
In 2009, Google Apps also started to provide cloud computing enterprise applications.
Of course, all the big players are present in the cloud computing evolution; some arrived earlier, some later.
In 2009, Microsoft launched Windows Azure, and companies like Oracle and HP have since joined the game. This shows that today, cloud computing has become mainstream.
CLOUD COMPUTING
Cloud computing refers to the delivery of computing services over the internet ("the cloud") to provide on-demand access to a wide range of resources (networking, compute, storage, security, etc.) under a "pay-as-you-go" model.
Business Drivers:
 Many of the characteristics, models, and mechanisms covered later originated from, and were inspired by, the following business drivers.
 It is important to note that these influences shaped clouds and the overall cloud computing market from both ends.
 These drivers motivate organizations to adopt cloud computing in support of their business automation and cost reduction requirements.
 They have correspondingly motivated other organizations to become providers of cloud environments and cloud technology vendors in order to create and meet the demand that fulfills consumer needs.
Capacity Planning (Challenge) is the process of determining and fulfilling future demands of an
organization’s IT resources, products, and services. Within this context, capacity represents the
maximum amount of work that an IT resource is capable of delivering in a given period of time.
Different capacity planning strategies exist:
 Lead Strategy - adding capacity to an IT resource in anticipation of demand
 Lag Strategy - adding capacity when the IT resource reaches its full capacity
 Match Strategy - adding IT resource capacity in small increments, as demand increases
Planning for capacity can be challenging because it requires estimating usage load fluctuations.
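To make the three strategies concrete, here is a minimal Python sketch that applies each one to an assumed demand curve. The workload numbers, capacity step sizes, and function names are purely illustrative assumptions, not taken from the text above.

# Illustrative comparison of the three capacity planning strategies.
# Demand figures and capacity step sizes are made-up example values.

demand = [40, 55, 75, 90, 120, 150]   # expected workload per period
STEP = 50                              # capacity added per expansion

def lead(capacity, expected_next):
    # Lead: add capacity in anticipation of demand.
    return capacity + STEP if expected_next > capacity else capacity

def lag(capacity, current_load):
    # Lag: add capacity only once the resource reaches full capacity.
    return capacity + STEP if current_load >= capacity else capacity

def match(capacity, current_load):
    # Match: add capacity in small increments as demand increases.
    return capacity + 10 if current_load > capacity else capacity

cap_lead = cap_lag = cap_match = 50
for i, load in enumerate(demand):
    nxt = demand[i + 1] if i + 1 < len(demand) else load
    cap_lead = lead(cap_lead, nxt)
    cap_lag = lag(cap_lag, load)
    cap_match = match(cap_match, load)
    print(f"period {i}: load={load} lead={cap_lead} lag={cap_lag} match={cap_match}")

Running the loop shows the trade-off: the lead strategy stays ahead of demand (risking over-provisioning), while the lag and match strategies only react once capacity is reached (risking temporary shortfalls).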
Cost Reduction (IT Budgeting)
 Capex – Capital Expenditure
 Opex – Operational Expenditure
Two costs need to be accounted for: the cost of acquiring new infrastructure, and the cost of its ongoing ownership. Operational overhead represents a considerable share of IT budgets, often exceeding up-front investment costs.
Common forms of infrastructure-related operating overhead include the following:
 Technical personnel required to keep the environment operational
 Upgrades and patches that introduce additional testing and deployment cycles
 Utility bills and capital expense investments for power and cooling
 Security and access control measures that need to be maintained and enforced to
protect infrastructure resources
 Administrative and accounts staff that may be required to keep track of licenses and
support arrangements
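As a rough illustration of how CapEx and OpEx combine, the following Python sketch compares a hypothetical on-premise deployment against a pay-per-use cloud alternative over a fixed period. Every figure in it is an assumed example value, not a quoted price.

# Hypothetical cost comparison: on-premise (CapEx + OpEx) vs. cloud (pay-per-use).
YEARS = 3

# On-premise: up-front purchase plus recurring operational overhead.
capex = 120_000                 # servers, storage, networking (assumed)
opex_per_year = 45_000          # staff, power/cooling, patching, licenses (assumed)
on_premise_total = capex + opex_per_year * YEARS

# Cloud: no up-front investment, usage billed monthly.
monthly_usage_cost = 6_500      # assumed measured-usage bill
cloud_total = monthly_usage_cost * 12 * YEARS

print(f"on-premise over {YEARS} years: ${on_premise_total:,}")
print(f"cloud over {YEARS} years:      ${cloud_total:,}")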
Organizational Agility
Businesses need the ability to adapt and evolve to successfully face change caused by both
internal and external factors. Organizational agility is the measure of an organization’s
responsiveness to change.
 Scaling its IT resources – keeping them highly available and reliable
 Up-front infrastructure investments
 Business automation (on market demand, competitive pressure, strategic business goals)

Technology Innovations
 Distributed Computing
 Grid Computing
 Cluster Computing
 Utility Computing

 Cloud Computing – Virtual

Technology Innovations Vs. Enabling Technologies


 Broadband Networks and Internet Architecture
 Data Center Technology
 (Modern) Virtualization Technology
 Web Technology
 Multitenant Technology
 Service Technology
KEY POINTS
• The primary business drivers that exposed the need for cloud computing and led to its
formation Include capacity planning, cost reduction, and organizational agility.
• The primary technology innovations that influenced and inspired key distinguishing features
and aspects of cloud computing include clustering, grid computing, and traditional forms of
virtualization.
3.2. BASIC CONCEPTS AND TERMINOLOGY

Cloud: A cloud refers to a distinct IT environment that is designed for the purpose of remotely
provisioning scalable and measured IT resources.

Fig: (a) Top-level view of the cloud architecture; (b) detailed view of the cloud architecture.

Cloud computing architecture is a combination of service-oriented architecture and event-driven architecture. Cloud computing architecture is divided into the following two parts -
 Front End - It contains client-side interfaces and applications that are required to access
the cloud computing platforms.
 Back End - It manages all the resources that are required to provide cloud computing
services. It includes a huge amount of data storage, security mechanism, virtual
machines, deploying models, servers, traffic control mechanisms, etc.

IT Resource: An IT resource is a physical or virtual IT-related artifact that can be either software-based, such as a virtual server or a custom software program, or hardware-based, such as a physical server or a network device.
A cloud symbol can be used to define a boundary for a cloud-based environment that hosts and provisions a set of IT resources. The displayed IT resources are consequently considered to be cloud-based IT resources.

Technology architectures and various interaction scenarios involving IT resources

• The IT resources shown within the boundary of a given cloud symbol usually do not represent
all of the available IT resources hosted by that cloud. Subsets of IT resources are generally
highlighted to demonstrate a particular topic.

• Focusing on the relevant aspects of a topic requires many of these diagrams to intentionally
provide abstracted views of the underlying technology architectures. This means that only a
portion of the actual technical details are shown.

Note: The virtual server IT resource displayed above is hosted by a physical server. Physical servers are sometimes referred to as physical hosts (or just hosts) in reference to the fact that they are responsible for hosting virtual servers.

On-Premise: As a distinct and remotely accessible environment, a cloud represents an option for the deployment of IT resources. An IT resource that is hosted in a conventional IT enterprise within an "organizational boundary" (that does not specifically represent a cloud) is considered to be located on the premises of the IT enterprise, or on-premise for short.

Note the following key points:


• An on-premise IT resource can access and interact with a cloud-based IT resource.
• An on-premise IT resource can be moved to a cloud, thereby changing it to a cloud-based IT
resource.
• Redundant deployments of an IT resource can exist in both on-premise and cloud-based
environments.

Cloud Consumers and Cloud Providers: The party that provides cloud-based IT resources is
the cloud provider. The party that uses cloud-based IT resources is the cloud consumer. These
terms represent roles usually assumed by organizations in relation to clouds and corresponding
cloud provisioning contracts.
Scaling: Scaling, from an IT resource perspective, represents the ability of the IT resource to
handle increased or decreased usage demands.
The following are types of scaling:
• Horizontal Scaling – scaling out and scaling in
• Vertical Scaling – scaling up and scaling down

Horizontal Scaling: The allocating or releasing of IT resources that are of the same type is
referred to as horizontal scaling . The horizontal allocation of resources is referred to as scaling
out and the horizontal releasing of resources is referred to as scaling in. Horizontal scaling is a
common form of scaling within cloud environments.

An IT resource (Virtual Server A) is scaled out by adding more of the same IT resources (Virtual
Servers B and C).

Vertical Scaling: When an existing IT resource is replaced by another with higher or lower capacity, vertical scaling is considered to have occurred. Specifically, replacing an IT resource with another that has a higher capacity is referred to as scaling up, and replacing an IT resource with another that has a lower capacity is considered scaling down. Vertical scaling is less common in cloud environments due to the downtime required while the replacement is taking place.

An IT resource (a virtual server with two CPUs) is scaled up by replacing it with a more powerful
IT resource with increased capacity for data storage (a physical server with four CPUs).
A brief overview of common pros and cons associated with horizontal and vertical scaling.
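The following Python sketch illustrates the two scaling types on a toy pool of virtual servers. The server names, CPU counts, and helper functions are illustrative assumptions rather than any platform's actual API.

# Toy model of horizontal and vertical scaling applied to a pool of virtual servers.
pool = [{"name": "Virtual Server A", "cpus": 2}]

def scale_out(pool, count=1):
    # Horizontal scaling: allocate more IT resources of the same type.
    for i in range(count):
        pool.append({"name": f"Virtual Server {chr(ord('B') + i)}", "cpus": 2})
    return pool

def scale_in(pool, count=1):
    # Horizontal scaling: release IT resources of the same type.
    return pool[:-count] if count < len(pool) else pool[:1]

def scale_up(server, new_cpus):
    # Vertical scaling: replace the resource with a higher-capacity one
    # (typically requires downtime while the replacement takes place).
    return {"name": server["name"], "cpus": new_cpus}

pool = scale_out(pool, count=2)            # A is joined by B and C (scaling out)
pool[0] = scale_up(pool[0], new_cpus=4)    # A replaced by a 4-CPU server (scaling up)
pool = scale_in(pool, count=1)             # C is released again (scaling in)
print(pool)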

Cloud Service: A cloud service is any IT resource that is made remotely accessible via a cloud. A cloud service can exist as a simple Web-based software program with a technical interface invoked via the use of a messaging protocol, or as a remote access point for administrative tools and other IT resources.

A cloud service that exists as a virtual server is also being accessed from outside of the cloud’s
boundary. The cloud service on the right may be accessed by a human user that has remotely
logged on to the virtual server.

NOTE: Cloud service usage conditions are typically expressed in a service-level agreement (SLA), which is the part of a service contract between a cloud provider and a cloud consumer that describes the Quality of Service features, behaviors, and limitations of a cloud-based service or other provisions.

An SLA provides details of various measurable characteristics such as uptime, security, availability, reliability, and performance.
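One way to read an uptime figure in an SLA is to convert the availability percentage into the downtime it permits. The short Python sketch below does that conversion; the percentages used are assumed examples, not any particular provider's guarantees.

# Converting an SLA availability guarantee into allowed downtime per month.
def allowed_downtime_minutes(availability_pct, period_hours=24 * 30):
    # Minutes of downtime permitted per period for a given availability percentage.
    return period_hours * 60 * (1 - availability_pct / 100)

for sla in (99.0, 99.9, 99.95, 99.99):
    print(f"{sla}% availability -> {allowed_downtime_minutes(sla):.1f} minutes of downtime per month")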
Cloud Service Consumer: Cloud service consumers can include software programs and services capable of remotely accessing cloud services via published service contracts, as well as workstations, laptops and mobile devices running software capable of remotely accessing other IT resources positioned as cloud services.

3.3. GOALS AND BENEFITS: Common measurable benefits to cloud consumers include:

• On-demand access to pay-as-you-go computing resources on a short-term basis (such as processors by the hour), and the ability to release these computing resources when they are no longer needed.

• The perception of having unlimited computing resources that are available on demand,
thereby reducing the need to prepare for provisioning.

• The ability to add or remove IT resources at a fine-grained level, such as modifying available
storage disk space by single gigabyte increments.

• Abstraction of the infrastructure so applications are not locked into devices or locations and
can be easily moved if needed.

Example: Large project tasks can complete as quickly as their application software can scale. Using 100 servers for one hour costs the same as using one server for 100 hours.

This “elasticity” of IT resources, achieved without requiring steep initial investments to create a
large-scale computing infrastructure, can be extremely compelling.
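The cost equivalence behind this elasticity claim can be checked with a few lines of Python, assuming a flat hypothetical per-server-hour price:

# "100 servers for 1 hour costs the same as 1 server for 100 hours",
# under an assumed flat per-server-hour rate.
rate_per_server_hour = 0.10                 # illustrative price

burst = 100 * 1 * rate_per_server_hour      # 100 servers for one hour
steady = 1 * 100 * rate_per_server_hour     # one server for 100 hours
assert burst == steady
print(f"burst=${burst:.2f}, steady=${steady:.2f}")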

The decision to proceed with a cloud computing adoption strategy will involve much more than
a simple comparison between the cost of leasing and the cost of purchasing.

Example: The financial benefits of dynamic scaling and the risk transference of both over-provisioning (under-utilization) and under-provisioning (over-utilization) must also be accounted for.

Increased Scalability: By providing pools of IT resources, along with tools and technologies
designed to leverage them collectively, clouds can instantly and dynamically allocate IT
resources to cloud consumers, on-demand or via the cloud consumer’s direct configuration.
This empowers cloud consumers to scale up & scale down (automatically or manually) their
cloud-based IT resources to accommodate processing fluctuations.

The ability of IT resources to always meet and fulfill unpredictable usage demands avoids
potential loss of business that can occur when usage thresholds are met.

NOTE: When associating the benefit of Increased Scalability with the capacity planning
strategies introduced earlier in the Business Drivers section, the Lag and Match Strategies are
generally more applicable due to a cloud’s ability to scale IT resources on-demand.

Increased Availability and Reliability

The availability and reliability of IT resources are directly associated with business benefits. Outages limit the time an IT resource can be "open for business" for its customers:
 Outages limit its usage and revenue-generating potential.
 Runtime failures that are not immediately corrected can have a more significant impact during high-volume usage periods.

Cloud environments provide extensive support for increasing the availability of a cloud-based IT resource to minimize or even eliminate outages, and for increasing its reliability so as to minimize the impact of runtime failure conditions.

Specifically:

• An IT resource with increased availability is accessible for longer periods of time (for example,
22 hours out of a 24 hour day). Cloud providers generally offer “resilient” IT resources for which
they are able to guarantee high levels of availability.

• An IT resource with increased reliability is able to better avoid and recover from exception
conditions. The modular architecture of cloud environments provides extensive failover support
that increases reliability.

It is important that organizations carefully examine the SLAs offered by cloud providers when
considering the leasing of cloud-based services and IT resources. Although many cloud
environments are capable of offering remarkably high levels of availability and reliability, it
comes down to the guarantees made in the SLA that typically represent their actual contractual
obligations.

SUMMARY OF KEY POINTS

• Cloud environments are comprised of highly extensive infrastructure that offers pools of IT
resources that can be leased using a pay-for-use model whereby only the actual usage of the IT
resources is billable. When compared to equivalent on-premise environments, clouds provide
the potential for reduced initial investments and operational costs proportional to measured
usage.

• The inherent ability of a cloud to scale IT resources enables organizations to accommodate unpredictable usage fluctuations without being limited by pre-defined thresholds that may turn away usage requests from customers. Conversely, the ability of a cloud to decrease required scaling is a feature that relates directly to the proportional costs benefit.

• By leveraging cloud environments to make IT resources highly available and reliable, organizations are able to increase quality-of-service guarantees to customers and further reduce or avoid potential loss of business resulting from unanticipated runtime failures.

3.4. RISKS AND CHALLENGES

Several of the most critical cloud computing challenges pertaining mostly to cloud consumers
that use IT resources located in public clouds are presented and examined.

Increased Security Vulnerabilities: Cloud computing introduces several security vulnerabilities that organizations must address to protect their data and applications. While cloud service providers invest heavily in security measures, the shared responsibility model means that users also have a role in securing their cloud-based assets.

1. Data Breaches: Cloud storage and databases may become targets for cybercriminals
aiming to steal sensitive information. Weak authentication, misconfigured permissions,
and unpatched vulnerabilities can expose data to unauthorized access.
2. Insecure APIs: Cloud services rely on Application Programming Interfaces (APIs) to
interact with applications and data. If these APIs have security flaws or are not properly
protected, attackers can exploit them to gain unauthorized access or manipulate data.
3. Denial-of-Service (DoS) Attacks: Cloud resources can be targeted with DoS attacks,
overwhelming the service and causing disruptions for legitimate users. Service
unavailability can have severe consequences for businesses that rely on cloud
applications.
4. Data Loss: Despite robust backup systems, data loss can still occur due to technical
failures, accidental deletions, or cyberattacks. Organizations need to implement proper
data backup and disaster recovery strategies.
5. Shared Infrastructure Risks: Cloud providers use shared infrastructure to host multiple
customers' data and applications. If one customer's environment is compromised, there
is a risk of cross-tenant data breaches.
6. Compliance and Regulatory Risks: Storing data in the cloud may raise compliance
challenges, especially when data crosses international borders. Organizations must
ensure their cloud providers adhere to relevant regulations and standards.
7. Inadequate Identity and Access Management (IAM): Weak or misconfigured IAM
practices can lead to unauthorized access, privilege escalation, and data exposure.
Properly managing user identities and access permissions is crucial.

Data security becomes shared with the cloud provider. The remote usage of IT resources
requires an expansion of trust boundaries by the cloud consumer to include the external cloud.

Another consequence of overlapping trust boundaries relates to the cloud provider’s privileged
access to cloud consumer data.

The overlapping of trust boundaries and the increased exposure of data can provide malicious
cloud consumers with greater opportunities to attack IT resources and steal or damage
business data.

Two organizations accessing the same cloud service are required to extend their respective
trust boundaries to the cloud, resulting in overlapping trust boundaries. It can be challenging
for the cloud provider to offer security mechanisms that accommodate the security
requirements of both cloud service consumers.
Reduced Operational Governance Control: Cloud consumers are usually allotted a level of
governance control that is lower than that over on-premise IT resources. This can introduce
risks associated with how the cloud provider operates its cloud, as well as the external
connections that are required for communication between the cloud and the cloud consumer.

• An unreliable cloud provider may not maintain the guarantees it makes in the SLAs that were
published for its cloud services. This can jeopardize the quality of the cloud consumer solutions
that rely on these cloud services.

• Longer geographic distances between the cloud consumer and cloud provider can require
additional network hops that introduce fluctuating latency and potential bandwidth
constraints.

An unreliable network connection compromises the quality of communication between cloud consumer and cloud provider environments. A cloud governance system is established through SLAs, given the "as-a-service" nature of cloud computing. A cloud consumer must keep track of the actual service level being offered and the other warranties that are made by the cloud provider.

Limited Portability Between Cloud Providers: Due to a lack of standards within the cloud computing industry, public clouds are commonly proprietary to various extents. For cloud consumers that have custom-built solutions with dependencies on these proprietary environments, it can be challenging to move from one cloud provider to another.
A cloud consumer’s application has a decreased level of portability when assessing a potential
migration from Cloud A to Cloud B, because the cloud provider of Cloud B does not support the
same security technologies as Cloud A.

Several factors contribute to limited portability between cloud providers:

1. Proprietary Technologies: Cloud providers often develop proprietary technologies and services that are unique to their platforms. These proprietary features might not have direct equivalents in other cloud environments, making it challenging to migrate applications without significant modifications or rewrites.
2. Networking and Connectivity: Cloud providers offer unique networking and
connectivity solutions, making it challenging to replicate the same network
configurations and connectivity across different providers. This can impact application
performance and overall user experience.
3. Service Dependencies: Applications deployed on one cloud provider may become
tightly integrated with various platform-specific services. Rebuilding or replacing these
services on another provider's platform may not be straightforward, leading to
additional development efforts and complexities.
4. Licensing and Legal Issues: Some applications or components may be subject to
licensing restrictions or legal agreements that limit their use on specific cloud platforms.
This can hinder the portability of certain workloads between providers.
5. Cloud Provider Policies: Each cloud provider has its own pricing models, billing
structures, and terms of service. Translating and managing these policies when moving
workloads to a different provider can be time-consuming and may lead to unexpected
cost variations.
6. Training and Familiarity: IT teams and developers become accustomed to a particular
cloud provider's tools, interfaces, and processes. Moving to a new provider might
require additional training and adjustment to the new environment.

Multi-Regional Compliance and Legal Issues: Multi-regional compliance and legal issues are
significant concerns for organizations using cloud computing services, especially when dealing
with data that crosses international borders. Different countries and regions have varying data
protection and privacy laws, which can create complexities for cloud users.

To address multi-regional compliance and legal challenges, organizations must adopt a comprehensive approach that includes:

 Conducting thorough research on the data protection and privacy laws of the regions
where they operate and where their cloud provider's data centers are located.
 Choosing cloud providers that are compliant with relevant international and regional
standards and regulations.
 Ensuring proper data classification and control measures to manage data based on its
sensitivity and regulatory requirements.
 Implementing appropriate data encryption and access controls to protect data privacy
and security.
 Drafting clear and comprehensive contracts with cloud providers that outline
responsibilities, data handling processes, and compliance requirements.
 Regularly monitoring changes in laws and regulations in different regions and adjusting
cloud strategies accordingly.

CLOUD KEY POINTS

• Cloud environments can introduce distinct security challenges, some of which pertain to
overlapping trust boundaries imposed by a cloud provider sharing IT resources with multiple
cloud consumers.

• A cloud consumer’s operational governance can be limited within cloud environments due to
the control exercised by a cloud provider over its platforms.

• The portability of cloud-based IT resources can be inhibited by dependencies upon proprietary characteristics imposed by a cloud.

• The geographical location of data and IT resources can be out of a cloud consumer’s control
when hosted by a third-party cloud provider. This can introduce various legal and regulatory
compliance concerns.
ROLES & RESPONSIBILITIES OF CLOUD COMPUTING

In cloud computing, roles and boundaries define the responsibilities and limitations of various
entities involved in the cloud environment, including cloud service providers (CSPs) and cloud
customers (organizations or individuals using the cloud services).

1. Cloud Service Provider (CSP):


 Role: The CSP is the entity that owns and operates the cloud infrastructure and
provides cloud services to customers. They manage and maintain the underlying
hardware, software, and networking required to deliver cloud services.
 Responsibilities: CSPs are responsible for ensuring the availability, performance,
and security of the cloud infrastructure. They also handle maintenance, updates,
and scaling of the resources.
 Boundaries: The CSP's responsibility typically ends at the infrastructure level, and
they may not be responsible for securing customer data and applications.
2. Cloud Customer (Organization or Individual):
 Role: The cloud customer is the entity that uses the cloud services provided by
the CSP. They deploy and manage their applications, data, and workloads in the
cloud environment.
 Responsibilities: Cloud customers are responsible for managing and securing
their data, applications, and configurations within the cloud environment. They
must implement security measures and access controls to protect their assets.
 Boundaries: The customer is responsible for the security and management of
their data and applications but may rely on the CSP for the underlying
infrastructure's security.
3. Shared Responsibility Model:
 Role: The shared responsibility model defines the division of security
responsibilities between the CSP and the cloud customer.
 Responsibilities: The CSP is responsible for securing the cloud infrastructure,
including the physical data centers, networking, and virtualization layers. The
cloud customer is responsible for securing their data, applications, and operating
systems running on the cloud platform.
 Boundaries: The boundary between CSP and customer responsibilities depends on the service model (IaaS, PaaS, SaaS) and the specific offerings of the CSP (see the sketch after this list).
4. Third-Party Providers:
 Role: In some cases, cloud customers may use third-party providers to enhance
their cloud services, such as third-party security services or monitoring tools.
 Responsibilities: Third-party providers are responsible for the specific services
they offer and the agreed-upon scope of work.
 Boundaries: The boundaries are defined by the terms of the contract or service-
level agreement between the cloud customer and the third-party provider.
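As a rough sketch of the shared responsibility model referenced above (not any provider's official matrix), the boundary by service model can be expressed as a simple lookup table in Python. The layer names and the IaaS/PaaS/SaaS split used here are illustrative assumptions.

# Rough sketch of how the shared responsibility boundary shifts with the service model.
RESPONSIBILITY = {
    # layer               IaaS        PaaS        SaaS
    "physical/network":   ("provider", "provider", "provider"),
    "virtualization":     ("provider", "provider", "provider"),
    "operating system":   ("customer", "provider", "provider"),
    "runtime/middleware": ("customer", "provider", "provider"),
    "application":        ("customer", "customer", "provider"),
    "data & access":      ("customer", "customer", "customer"),
}

def who_secures(layer, model):
    # Return 'provider' or 'customer' for a layer under IaaS, PaaS, or SaaS.
    idx = {"IaaS": 0, "PaaS": 1, "SaaS": 2}[model]
    return RESPONSIBILITY[layer][idx]

print(who_secures("operating system", "IaaS"))   # customer patches the guest OS
print(who_secures("operating system", "PaaS"))   # provider manages the platform OS
print(who_secures("data & access", "SaaS"))      # data and access always remain with the customer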
CLOUD COMPUTING CHARACTERISTICS

Cloud computing exhibits several key characteristics that differentiate it from traditional on-
premises computing models.

1. On-Demand Self-Service: Cloud computing allows users to provision and manage computing resources (e.g., virtual machines, storage, applications) as needed, without requiring human intervention from the service provider. Users can access and deploy resources with minimal delays.
2. Broad Network Access: Cloud services are accessible over the internet from a wide
range of devices, including desktop computers, laptops, smartphones, and tablets. Users
can access cloud resources from anywhere with an internet connection.
3. Resource Pooling: Cloud providers pool computing resources to serve multiple users
concurrently. Resources, such as processing power, storage, and network bandwidth,
are dynamically allocated based on demand. This pooling allows for better resource
utilization and cost efficiency.
4. Rapid Elasticity: Cloud services can rapidly scale up or down to meet changing demand.
Users can easily adjust their resource allocation, often automatically, in response to
varying workloads. This scalability ensures efficient resource utilization and cost
optimization.
5. Measured Service: Cloud systems automatically control and optimize resource use,
providing transparency for both the provider and the customer. Usage of cloud
resources is monitored, measured, and reported, enabling accurate billing and usage
tracking.
6. Multi-Tenancy: Cloud providers host multiple customers (or tenants) on the same
physical infrastructure, with each customer's data and applications logically isolated
from others. This multi-tenancy enables cost sharing and resource efficiency.
7. Ubiquitous Access to Services: Cloud services can be accessed from various devices and
locations, offering flexibility and convenience for users to access their data and
applications without being tied to a specific physical location.
8. High Availability and Redundancy: Cloud providers typically offer high availability by
replicating data and services across multiple data centers. This redundancy helps ensure
continuous availability and data integrity in case of hardware failures or disasters.
9. Managed Service: Cloud providers handle the management of the underlying
infrastructure, including hardware, networking, and maintenance. This allows users to
focus on using and developing applications rather than managing the physical
infrastructure.

Note:- These essential characteristics define the core principles of cloud computing and form
the basis for the benefits that cloud services offer, such as flexibility, cost-effectiveness,
scalability, and reduced IT management burden. Understanding these characteristics is crucial
for organizations considering adopting cloud computing to make informed decisions about the
most suitable cloud deployment models and service offerings for their specific needs.
3.1 Virtualization allows the creation of a secure, customizable, and isolated execution
environment for running applications. For example, we can run Windows OS on top of a virtual
machine, which itself is running on Linux OS.
Virtualization is a technique that allows sharing a single physical instance of a resource or an application among multiple customers and organizations. It does this by assigning a logical name to a physical resource and providing a pointer to that physical resource when demanded.
Virtualization technologies have gained renewed interest recently due to the confluence of several phenomena:
 Increased performance and computing capacity
- Modern machines provide enough computing power to host virtual machines.
 Underutilized hardware and software resources
- Much of the increased performance and computing capacity goes unused.
 Lack of space
- Continuous need for additional capacity.
 Greening initiatives
- Reduce carbon footprints.
- Reducing the number of servers reduces power consumption.
 Rise of administrative costs
- Power and cooling costs are higher than the cost of the IT equipment itself.
3.2 CHARACTERISTICS OF VIRTUALIZED ENVIRONMENTS
Virtualization is a broad concept that refers to the creation of a virtual version of something,
whether hardware, a software environment, storage, or a network.
In a virtualized environment there are three major components: Guest, Host, and Virtualization
layer.
 The Guest represents the system component that interacts with the virtualization layer
rather than with the host, as would normally happen.
 The Host represents the original environment where the guest is supposed to be
managed.
 The Virtualization layer is responsible for recreating the same or a different
environment where the guest will operate

Cloud VM: Virtualization allows multiple virtual machines (VMs) or containers to run on a single physical server, effectively abstracting the underlying hardware resources and providing a flexible and scalable infrastructure for cloud computing. Virtualization provides a great opportunity to build elastically scalable systems that can provision additional capability with minimum costs.
Virtualization offers several features or characteristics as listed below:
 Distribution of resources
 Accessibility of server resources
 Resource Isolation
 Security and authenticity
 Aggregation

Increased Security: Cloud computing can lead to increased security when implemented
correctly. While security concerns have historically been a major challenge for cloud adoption,
cloud service providers have made significant advancements in addressing security issues.
When properly configured and managed, cloud environments can provide robust security
measures that surpass those of many on-premises setups.
- Ability to control the execution of a guest
- Guest is executed in emulated environment.
- Virtual Machine Manager control and filter the activity of the guest.
- Hiding of resources & having no effect on other users/guest environment.
Managed execution: Virtualization of the execution environment not only allows increased
security, but a wider range of features also can be implemented. In particular sharing,
aggregation, emulation, and isolation are the most relevant features

Portability in virtualization and cloud computing refers to the ability to move applications,
workloads, and data seamlessly between different virtualized environments or cloud platforms.
It allows organizations to avoid vendor lock-in and provides the flexibility to adapt to changing
business requirements or leverage the strengths of various cloud providers. Portability is an
important consideration for cloud users who want to avoid dependency on a single cloud
provider and maintain control over their applications and data.
3.3 TAXONOMY OF VIRTUALIZATION TECHNIQUES

Virtualization covers a wide range of emulation techniques that are applied to different areas of
computing.
Virtualization is mainly used to emulate the execution environment, storage, and networks. The
execution environment is classified into two:
Execution environments: provide support for the execution of programs, e.g., an operating system or an application.
 Process-level techniques are implemented on top of an existing operating system,
which has full control of the hardware.
 System-level techniques are implemented directly on hardware and do not require - or
require a minimum of support from - an existing operating system

Storage: Storage virtualization is a system administration practice that allows decoupling the
physical organization of the hardware from its logical representation.

Networks: Network virtualization combines hardware appliances and specific software for the
creation and management of a virtual network.

Among these categories, execution virtualization constitutes the oldest, most popular, and
most developed area. Therefore, it deserves major investigation and a further categorization.
We can divide these execution virtualization techniques into two major categories by
considering the type of host they require.
3.3.1.1 Machine reference model

The Machine Reference Model (MRM) in virtualization refers to the abstract representation of the underlying physical hardware and its capabilities, as seen by the virtualization layer.

 Machine reference model


 It defines the interfaces between the levels of abstractions, which hide implementation
details. Virtualization techniques actually replace one of the layers and intercept the
calls that are directed toward it.
 Hardware is expressed in terms of the Instruction Set Architecture (ISA).

 ISA for the processor, registers, memory, and interrupt management.

 Application Binary Interface (ABI) separates the OS layer from the application and
libraries which are managed by the OS.

 System calls are defined at this level and allow portability of applications and libraries across operating systems.
 API – it interfaces applications to libraries and/or the underlying OS.

 The layered approach simplifies the development and implementation of a computing system.

 ISA has been divided into two security classes:–

 Nonprivileged instructions: instructions that can be used without interfering with other tasks because they do not access shared resources, e.g., arithmetic and floating- and fixed-point instructions.

 Privileged instructions: They are executed under specific restrictions and are mostly
used for sensitive operations, which expose (behavior-sensitive) or modify (control
sensitive) the privileged state.
 Behavior-sensitive = operate on the I/O
 Control-sensitive = alter the state of the CPU register.
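This split between nonprivileged and privileged (sensitive) instructions is what a hypervisor exploits in the classic trap-and-emulate approach: nonprivileged instructions run directly on the hardware, while sensitive ones trap to the virtual machine manager, which simulates them against the guest's virtual state. The Python sketch below shows that control flow in a highly simplified form; the instruction names and handler behavior are illustrative assumptions.

# Simplified trap-and-emulate flow: nonprivileged instructions execute directly;
# privileged/sensitive ones trap to the virtual machine manager (VMM).
NONPRIVILEGED = {"add", "sub", "fmul"}     # e.g. arithmetic, floating point
PRIVILEGED = {"io_write", "set_cr0"}       # behavior- / control-sensitive

def vmm_handle(instr, guest_state):
    # The VMM simulates the sensitive instruction against the guest's
    # virtual state instead of the real hardware.
    guest_state[instr] = "emulated"
    return guest_state

def execute(instr, guest_state):
    if instr in NONPRIVILEGED:
        return "executed directly on hardware"
    if instr in PRIVILEGED:
        vmm_handle(instr, guest_state)      # hardware trap hands control to the VMM
        return "trapped and emulated by VMM"
    return "illegal instruction"

state = {}
for i in ("add", "io_write", "set_cr0"):
    print(i, "->", execute(i, state))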
 Privileged Hierarchy:
"Security Ring" in virtualization refers to a layered security approach that aims to protect and
isolate different components within a virtualized environment. Each security ring represents a
level of trust, with inner rings having higher levels of privilege and access, while outer rings
have lower privilege and more restricted access.

This model helps establish security boundaries and prevent unauthorized access to critical
components. The security ring concept is commonly used in operating systems and
virtualization platforms to ensure the integrity and isolation of virtual machines (VMs) and
other components.

Here is a typical representation of the security rings in virtualization:

Ring 0 (Innermost Circle):


 Ring 0, known as "Kernel Mode" or "Hypervisor Mode," is the most privileged level in the security ring hierarchy.
 The hypervisor or virtualization platform operates in Ring 0, managing the hardware
resources and providing the foundation for virtualization.
 The hypervisor has direct access to physical resources and controls VM creation,
scheduling, and resource allocation.
Ring 1:

 Ring 1 contains the components that interact closely with the hypervisor and have
higher privileges than the guest operating systems.
 Some virtualization technologies may use a second-level hypervisor or "hypervisor
helper" components in Ring 1 to enhance VM performance or security.
Ring 2:

 Components in Ring 2 typically include device drivers and low-level virtualization services that interact with VMs and the hypervisor.
 Device drivers in Ring 2 facilitate communication between the VMs and the underlying
physical hardware.
Ring 3 (Outermost Circle):

 Ring 3, known as "User Mode," is the least privileged level in the security ring hierarchy.
 Guest operating systems and applications run in Ring 3.
 The hypervisor enforces isolation between VMs and restricts direct access to hardware
resources from Ring 3, providing security and preventing interference between VMs.


3.3.1.2 Hardware-level virtualization

Hardware-level virtualization is a virtualization technique that provides an abstract execution environment in terms of computer hardware on top of which a guest operating system can be run.
Hardware-level virtualization is also called system virtualization, since it provides an ISA to virtual machines, which is the representation of the hardware interface of a system.
Hypervisors : A fundamental element of hardware virtualization is the hypervisor, or virtual
machine manager (VMM). It recreates a hardware environment in which guest operating
systems are installed. There are two major types of hypervisors: Type I and Type II
Type I : hypervisors run directly on top of the hardware. Therefore, they take the place of the
operating systems and interact directly with underlying hardware . This type of hypervisor is
also called a native virtual machine since it runs natively on hardware.

Type II: hypervisors require the support of an operating system to provide virtualization services. This means that they are programs managed by the operating system, which interacts with the hardware on behalf of the guest operating systems. This type of hypervisor is also called a hosted virtual machine since it is hosted within an operating system.
Three properties have to be satisfied for VM :
• Equivalence. A guest running under the control of a virtual machine manager should exhibit
the same behavior as when it is executed directly on the physical host.
• Resource control. The virtual machine manager should be in complete control of virtualized
resources.
• Efficiency. A statistically dominant fraction of the machine instructions should be executed
without intervention from the virtual machine manager.

Fig: A hypervisor reference architecture.


Hardware Virtualization Techniques
 Full virtualization: Full virtualization refers to the ability to run a program, most likely an operating system, directly on top of a virtual machine and without any modification, as though it were run on the raw hardware. To make this possible, virtual machine managers are required to provide a complete emulation of the entire underlying hardware.
 Para-virtualization: This is a not-transparent virtualization solution that allows implementing thin virtual machine managers. Paravirtualization techniques expose a software interface to the virtual machine that is slightly modified from the host and, as a consequence, guests need to be modified. The aim of paravirtualization is to provide the capability to demand the execution of performance-critical operations directly on the host.
 Partial virtualization : Partial virtualization provides a partial emulation of the
underlying hardware, thus not allowing the complete execution of the guest operating
system in complete isolation. Partial virtualization allows many applications to run
transparently, but not all the features of the operating system can be supported, as
happens with full virtualization
Operating System-Level Virtualization:
It offers the opportunity to create different and separated execution environments for
applications that are managed concurrently. Differently from hardware virtualization, there is
no virtual machine manager or hypervisor, and the virtualization is done within a single
operating system, where the OS kernel allows for multiple isolated user space instances
3.3.1.3 Programming language-level virtualization
 Programming language-level virtualization is mostly used to achieve ease of deployment
of applications, managed execution, and portability across different platforms and
operating systems
 The main advantage of programming-level virtual machines, also called process virtual
machines, is the ability to provide a uniform execution environment across different
platforms. Programs compiled into bytecode can be executed on any operating system
and platform for which a virtual machine able to execute that code has been provided
3.3.1.4 Application-level virtualization
 The application-level virtualization is used when there is a desire to virtualize only one
application .
 Application virtualization software allows users to access and use an application from a
separate computer than the one on which the application is installed.
3.3.2 Other types of virtualization
Other types of virtualization provide an abstract environment to interact with. These
mainly cover storage, networking, and client/server interaction.
Storage virtualization: Storage virtualization is a technology that abstracts physical
storage resources and presents them as a virtualized pool of storage, decoupled from
the underlying hardware. It allows for the centralized management and utilization of
storage across multiple storage devices, making it easier to provision, allocate, and
manage storage resources. Storage virtualization enhances storage efficiency, simplifies
data management, and provides flexibility in deploying and scaling storage in modern
data centers and storage infrastructures.
Key aspects of storage virtualization include:
 Virtual Storage,
 Centralized Management,
 Data Protection and Replication
Network virtualization: Network Virtualization is a process of logically grouping physical
networks and making them operate as single or multiple independent networks called
Virtual Networks. It combines hardware appliances and specific software for the
creation and management of a virtual network
Key aspects of network virtualization include:
 Virtual Networks,
 Network Hypervisor or Controller,
 Traffic Isolation and Segmentation.
Desktop virtualization: Desktop virtualization, also known as Virtual Desktop
Infrastructure (VDI), is a technology that virtualizes and centralizes the desktop
computing environment. It enables users to access their desktops and applications from
remote devices, such as thin clients, laptops, or even smart phones, over a network
connection. Instead of running the desktop environment locally on their devices, users
interact with a virtual desktop that runs on a centralized server or data center.
Key aspects of desktop virtualization include:
 Virtual Desktops
 User Session Management
 Web Server Hosting
Application server virtualization: Application server virtualization, also known as application virtualization, is a technology that enables applications to run in isolated environments separate from the underlying operating system and hardware. It allows applications to be encapsulated, along with their dependencies and runtime environment, into a virtual container. These containers can then be executed on different systems without the need for installation or modification of the host operating system, and this additionally supports load balancing.
Key aspects of application server virtualization include:
 Virtual Application (or) Serverless Applications
 Application Streaming
 App Updates and Rollback

3.4 VIRTUALIZATION AND CLOUD COMPUTING


Virtualization migration refers to the process of moving virtualized resources, such as virtual
machines (VMs) or containers, from one physical host or cloud environment to another. It is
a crucial aspect of virtualization management and allows organizations to optimize resource
usage, achieve better performance, enhance availability, and adapt to changing business
needs.

The migrate phase is where the actual process of moving data, applications, and other
workloads to the cloud occurs. This phase can involve a variety of techniques, including lift-
and-shift (moving an application to the cloud without modification), refactoring (modifying
an application to take advantage of cloud-native features), or even completely rebuilding
applications.

There are several types of virtualization migrations, each serving different purposes and
scenarios:
1. Live Migration: Live migration, also known as VM migration or VMotion (in the case of
VMware), allows VMs to be moved from one physical host to another without any
disruption to the running applications or services. Live migration ensures continuous
availability and seamless resource management during server maintenance, load
balancing, or hardware upgrades.
2. Storage Migration: Storage migration involves moving virtualized data, such as VM disks
or container images, from one storage system to another. This type of migration can be
useful for load balancing storage resources, migrating to higher-performance storage, or
consolidating data onto a more efficient storage solution.
3. Cross-Hypervisor Migration: Cross-hypervisor migration involves moving VMs between
different virtualization platforms or hypervisors. It allows organizations to switch to a
different virtualization technology without the need to reconfigure or rebuild their VMs.
4. P2V (Physical to Virtual) Migration: P2V migration is the process of converting physical
machines into virtual machines. It involves creating VMs that replicate the configuration
and contents of the physical servers, allowing organizations to consolidate physical
servers onto virtual infrastructure.
5. V2V (Virtual to Virtual) Migration: V2V migration refers to the movement of VMs from
one virtualization platform to another. It may be necessary when changing virtualization
providers or consolidating VMs onto a single virtualization platform.
6. Application Container Migration: For containerized applications, container migration
involves moving containers from one host to another within a container orchestration
platform, such as Kubernetes. Container migration ensures application availability and
resource optimization within the container cluster.
Benefits of Migrating to the Cloud
 Scalability
 Cost
 Performance
 Digital experience

3.5 Pros and cons of virtualization

Each numbered item pairs an advantage (Pro) with a corresponding disadvantage (Con):

1. Pro – Resource Utilization: Virtualization allows for better utilization of hardware resources by running multiple virtual machines (VMs) or containers on a single physical host. This leads to increased efficiency and cost savings.
   Con – Overhead: Virtualization introduces some performance overhead due to the virtualization layer, which may slightly impact application performance.
2. Pro – Cost Savings: Virtualization reduces hardware requirements, leading to lower hardware costs, reduced power consumption, and savings on data center space and cooling.
   Con – Complexity: Managing virtualized environments can be more complex than traditional environments, requiring specialized skills and knowledge.
3. Pro – Flexibility and Scalability: Virtualization makes it easy to scale resources up or down based on demand. VMs and containers can be easily provisioned, cloned, or removed to adapt to changing workloads.
   Con – Resource Contention: If not properly managed, multiple VMs or containers on the same host may compete for resources, leading to resource contention and performance issues.
4. Pro – Isolation and Security: Virtualization provides isolation between VMs or containers, enhancing security. If one VM is compromised, it does not affect others, limiting the impact of security breaches.
   Con – Single Point of Failure: If the virtualization host fails, multiple VMs or containers running on it may be affected, potentially leading to a significant outage.
5. Pro – Application Compatibility: Virtualization allows running multiple operating systems and applications on the same physical host, making it easier to support diverse workloads.
   Con – Licensing Costs: Some virtualization solutions may require additional licensing costs, which can impact the overall cost-effectiveness.
6. Pro – High Availability: Virtualization platforms offer features like live migration and fault tolerance, enabling continuous operation and minimizing downtime during hardware maintenance or failures.
   Con – Compatibility Issues: Some applications or hardware may not be fully compatible with virtualization, leading to potential challenges during migration.
7. Pro – Backup and Disaster Recovery: Virtualization simplifies backup and recovery processes, as VMs and containers can be easily backed up, restored, or replicated to a secondary site for disaster recovery purposes.
   Con – Backup and Recovery Complexity: While virtualization simplifies backup and recovery, managing backup schedules and storage requirements for multiple VMs or containers can still be complex.
8. Pro – Cloud Adoption: Virtualization forms the foundation of cloud computing, enabling the creation of virtualized cloud services and resources.
   Con – Security Risks: While virtualization enhances security in many ways, improper configuration or vulnerabilities in the virtualization layer can introduce new security risks.

3.5.1 Performance degradation


Performance degradation is one of the potential disadvantages of virtualization. While
virtualization offers many benefits, the presence of a virtualization layer between the
hardware and the virtual machines (VMs) or containers can introduce some
performance overhead.
 Maintaining the status of virtual processors
 Support of privileged instructions (trap and simulate privileged instructions)
 Support of paging within VM
 Console functions

3.5.2 Inefficiency and degraded user experience


Inefficiency and degraded user experience are potential challenges that can arise in
virtualized environments. While virtualization offers numerous benefits, such as
resource consolidation and flexibility, certain factors can lead to performance issues and
a suboptimal user experience.
 Inefficient Resource Programs
 Network Congestion:
 Guest OS Configuration

To address inefficiency and degraded user experience in virtualized environments, organizations should adopt best practices for resource allocation, performance monitoring, and workload balancing.
3.5.3 Security holes and new threats
 Virtualization opens the door to a new and unexpected form of phishing.
 The capability of emulating a host in a completely transparent manner led the way to
malicious programs that are designed to extract sensitive information from the guest.
 Virtualization introduces new security considerations and potential vulnerabilities
that organizations must address to ensure the integrity and protection of their
virtualized environments.
 In hardware virtualization, malicious programs can preload themselves before the operating system and act as a thin virtual machine manager toward the guest operating system.

Examples of these kinds of malware are BluePill and SubVirt. BluePill, malware targeting
the AMD processor family, moves the execution of the installed OS within a virtual
machine.

3.6 Technology examples


A wide range of virtualization technology is available, especially for virtualizing computing environments. Under this topic we discuss the most relevant technologies and approaches utilized in the field, including cloud-specific solutions.

3.6.1 Xen: paravirtualization

 Xen is an open source hypervisor based on paravirtualization. It is the most popular


application of paravirtualization.

 Xen has been extended to be compatible with full virtualization using hardware-assisted virtualization. It enables guest operating systems to execute with high performance.
 This is achieved by avoiding the performance loss incurred when executing instructions that require significant handling, and by modifying the portions of the guest operating system executed by Xen with reference to the execution of such instructions.
Xen primarily supports the x86 architecture, which is the most widely used architecture on commodity machines and servers.
The Xen architecture maps onto a classic x86 privilege model. A Xen-based system is managed by the Xen hypervisor, which is executed in the most privileged mode and controls the access of guest operating systems to the underlying hardware.
Guest operating systems run within domains, which represent virtual machine instances. In addition, specific control software, which has privileged access to the host and manages all the other guest operating systems, runs in a special domain called Domain 0. This is the only domain loaded once the virtual machine manager has fully booted, and it hosts an HTTP server that serves requests for virtual machine creation, configuration, and termination. This component constitutes an early form of a distributed virtual machine manager (VMM), which is an essential part of cloud computing systems delivering Infrastructure-as-a-Service (IaaS) solutions.
x86 implementations support four distinct security levels, termed rings: Ring 0, Ring 1, Ring 2, and Ring 3. Ring 0 represents the level with the most privilege and Ring 3 the level with the least privilege. Almost all of the frequently used operating systems, except for OS/2, use only two levels: Ring 0 for kernel code and Ring 3 for user applications and non-privileged OS programs. This gives Xen the opportunity to implement paravirtualization: it enables Xen to leave the Application Binary Interface (ABI) unchanged, thus allowing a simple shift to Xen-virtualized solutions from an application perspective.
Because of the structure of the x86 instruction set, some instructions allow code executing in Ring 3 to jump into Ring 0 (kernel mode). Such an operation is performed at the hardware level and therefore, within a virtualized environment, it results in a trap or a silent fault, preventing the normal operation of the guest OS, which now runs in Ring 1.
This condition is generally triggered by a subset of the system calls. To avoid it, the operating system implementation must be modified and all sensitive system calls re-implemented with hypercalls. Hypercalls are the specific calls exposed by the virtual machine interface of Xen; by using them, the Xen hypervisor is able to catch the execution of all sensitive instructions, manage them, and return control to the guest OS by means of a supplied handler.
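The flow can be pictured as a dispatch table of handlers that the hypervisor runs on the guest's behalf. The following Python snippet is only a conceptual sketch of that idea (real hypercalls are low-level traps implemented in the hypervisor and the modified guest kernel, not Python); the hypercall name, arguments, and handler are invented purely for illustration.

    # Conceptual illustration only: a "hypervisor" exposes named handlers
    # (hypercalls); a paravirtualized "guest" calls them instead of executing
    # privileged instructions directly.
    HYPERCALL_TABLE = {}

    def hypercall(name):
        """Register a handler that the hypervisor runs on behalf of a guest."""
        def register(fn):
            HYPERCALL_TABLE[name] = fn
            return fn
        return register

    @hypercall("update_page_table")          # hypothetical hypercall name
    def update_page_table(domain_id, entry):
        # The hypervisor validates the request before touching privileged
        # state, then returns control to the guest.
        print(f"domain {domain_id}: page-table entry {entry:#x} updated safely")

    def guest_kernel_operation():
        # Instead of issuing the privileged instruction (which would fault in
        # Ring 1), the modified guest kernel invokes the hypercall.
        HYPERCALL_TABLE["update_page_table"](domain_id=1, entry=0x1000)

    guest_kernel_operation()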
Paravirtualization requires the operating system codebase to be modified, and hence not all operating systems can be used as guests in a Xen-based environment. This limitation applies where hardware-assisted virtualization is not available, since hardware assistance allows the hypervisor to run in Ring -1 and the guest OS in Ring 0. Hence, Xen exhibits some limitations in terms of legacy hardware and legacy operating systems.
In fact, legacy operating systems cannot be modified to run safely in Ring 1, since their codebase is not accessible, and at the same time the underlying hardware provides no support for executing them in a mode more privileged than Ring 0. Open-source operating systems such as Linux can easily be modified, since their code is publicly available, and Xen provides full support for virtualizing them, whereas Windows components are generally not compatible with Xen unless hardware-assisted virtualization is available. As new operating system releases are designed with virtualization in mind and new hardware supports x86 virtualization extensions, this problem is gradually being resolved.
Advantages
 XenServer is built on the open-source Xen hypervisor and uses a combination of hardware-based virtualization and paravirtualization. This tightly coupled collaboration between the operating system and the virtualization platform yields a lighter, more flexible hypervisor that delivers its functionality in an optimized manner.
 Xen efficiently balances large workloads across CPU, memory, disk I/O, and network I/O. It offers two modes for handling these workloads: one oriented toward performance enhancement and one toward data density.
 It also comes equipped with a special storage feature called Citrix StorageLink, which allows a system administrator to use the native features of storage arrays from vendors such as HP, NetApp, and Dell EqualLogic.
 It also supports multiple processors, live migration of virtual machines from one host to another, physical-to-virtual and virtual-to-virtual conversion tools, centralized multi-server management, and real-time performance monitoring for Windows and Linux guests.
Disadvantages
 Xen is more reliable on Linux than on Windows.
 Xen relies on third-party components to manage resources such as drivers, storage, backup, recovery, and fault tolerance.
 A Xen deployment can become burdensome for the Linux kernel system as time passes.
 Xen may sometimes increase the load on resources through high I/O rates and cause starvation of other VMs.

3.6.2 VMware: full virtualization


VMware’s technology is based on the concept of full virtualization, where the underlying hardware is replicated and made available to the guest operating system.
Binary Translation with Full Virtualization
Depending on implementation technologies, hardware virtualization can be classified into two categories: full virtualization and host-based virtualization.
Full virtualization does not need to modify the host OS. It relies on binary translation to trap and to virtualize the execution of certain sensitive, nonvirtualizable instructions.
A virtualization software layer is built between the host OS and guest OS.
These two classes of VM architecture are introduced next.
1. Full Virtualization: With full virtualization, noncritical instructions run on the hardware
directly while critical instructions are discovered and replaced with traps into the VMM to be
emulated by software. Both the hypervisor and VMM approaches are considered full
virtualization. Why are only critical instructions trapped into the VMM? This is because binary
translation can incur a large performance overhead. Noncritical instructions do not control
hardware or threaten the security of the system, but critical instructions do. Therefore, running
noncritical instructions on hardware not only can promote efficiency, but also can ensure
system security.
2. Binary Translation of Guest OS Requests Using a VMM
This approach was implemented by VMware and many other software companies. VMware
puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the instruction stream and
identifies the privileged, control- and behavior-sensitive instructions. When these instructions
are identified, they are trapped into the VMM, which emulates the behavior of these
instructions. The method used in this emulation is called binary translation. Therefore, full virtualization combines binary translation and direct execution. The guest OS is completely decoupled from the underlying hardware. Consequently, the guest OS is unaware that it is being virtualized.
The performance of full virtualization may not be ideal, because it involves binary translation, which is rather time-consuming. In particular, the full virtualization of I/O-intensive applications is really a big challenge. Binary translation employs a code cache to store translated hot instructions to improve performance, but it increases the cost of memory usage. At the time of this writing, the performance of full virtualization on the x86 architecture is typically 80 percent to 97 percent that of the host machine.
3.6.2.1 Virtualization solutions

Desktop Host-Based Virtualization


The virtualization environment is created by an application installed in the host operating system, which provides guest operating systems with full hardware virtualization of the underlying hardware. This is done by installing a specific driver in the host operating system that provides two main services:
• It deploys a virtual machine manager that can run in privileged mode.
• It provides hooks for the VMware application to process specific I/O requests, eventually relaying such requests to the host operating system via system calls.

Figures: VMware Workstation architecture; VMware GSX Server architecture.


Workstation virtualization: VMware Workstation's architecture provides a powerful and user-
friendly virtualization solution for desktop environments, enabling users to create and manage
multiple VMs with different operating systems on a single physical host. The combination of the
hypervisor, VMM, virtual hardware layer, and VMware Tools allows for seamless and efficient
virtualization of guest operating systems within the VMware Workstation environment.
Server virtualization: VMware provided solutions for server virtualization with different
approaches over time. Initial support for server virtualization was provided by VMware GSX
server, which replicates the approach used for end-user computers and introduces remote
management and scripting capabilities.

Figures: VMware Cloud Solution stack; VMware ESXi server architecture.

The VMware Cloud Solution stack includes several products and services that help organizations build, manage, and optimize their cloud infrastructure.
A pool of virtualized servers is tied together and remotely managed as a whole by VMware
vSphere.
1. VMware vSphere: This is the foundation of VMware's cloud infrastructure suite. vSphere
is a virtualization platform that enables the creation and management of virtual
machines (VMs) on physical servers, providing robust resource management and
scalability.
2. VMware vCenter: This management platform allows administrators to centrally manage
and monitor VMware vSphere environments, including VMs, hosts, and data centers.
VMware ESXi is a bare-metal hypervisor and the foundation of VMware's virtualization technology. It allows you to run multiple virtual machines (VMs) on a single physical server. Here is an overview of the ESXi architecture:
Virtual Machines: ESXi allows the creation and execution of multiple virtual machines. Each VM
is an isolated software container that runs its own operating system and applications.
Virtual Networking: ESXi provides virtual networking capabilities, enabling the creation of
virtual switches, port groups, and network adapters for VMs. It also supports VLANs, NIC
teaming, and other advanced network features.
Virtual Storage: ESXi enables the creation of virtual storage using various storage technologies such as VMFS (Virtual Machine File System) or NFS (Network File System). Virtual disks are stored as files on the underlying physical storage but are presented to VMs as local disks.
3.6.3 Hyper-V :Hyper-V provides Windows users the ability to start their own virtual machine.
In this virtual machine, a complete hardware infrastructure with RAM, hard disk space,
processor power, and other components can be virtualized. A separate operating system runs
on this basis, which does not necessarily have to be Windows. It is very popular, for example, to
run an open-source distribution of Linux in a virtual machine.
The physical host system can be mapped to multiple virtual guest systems (child partitions) that share the host hardware (parent partition). Microsoft has created its own hypervisor, Hyper-V, for this purpose.
Microsoft Hyper-V’s Architecture
Hyper-V allows x64 versions of Windows to host one or more virtual machines, which in turn
contain a fully configured operating system. These “child” systems are treated as partitions. The
term is otherwise known from hard disk partitioning - and Hyper-V virtualization works in a
similar way. Each virtual machine is an isolated unit next to the “parent” partition, the actual
operating system.
The individual partitions are orchestrated by the hypervisor. The subordinate partitions can be
created and managed via an interface (Hypercall API) in the parent system. However, the
isolation is always maintained. Child systems are assigned virtual hardware resources but can
never access the physical hardware of the parent.
To request hardware resources, child partitions use VMBus. This is a channel that
enables communication between partitions. Child systems can request resources from the
parent, but theoretically they can also communicate with each other.
The partitions run services that handle the requests and responses that run over the VMBus.
The host system runs the Virtualization Service Provider (VSP), the subordinate partitions run
the Virtualization Service Clients (VSC).
Working:-

Hyper-V features a Type 1 hypervisor-based architecture. The hypervisor virtualizes processors


and memory. It provides mechanisms for the virtualization stack in the root partition to manage
child partitions, virtual machines (VMs) and expose services such as I/O (input/output) devices
to the VMs.

The root partition owns and has direct access to the physical I/O devices. The virtualization
stack in the root partition provides a memory manager for VMs, management APIs, and
virtualized I/O devices. It also implements emulated devices, such as the integrated device
electronics (IDE) disk controller and PS/2 input device port. And it supports Hyper-V-specific
synthetic devices for increased performance and reduced overhead.

The Hyper-V-specific I/O architecture consists of virtualization service providers (VSPs) in the
root partition and virtualization service clients (VSCs) in the child partition. Each service is
exposed as a device over VM Bus, which acts as an I/O bus and enables high-performance
communication between VMs that use mechanisms such as shared memory. The guest
operating system's Plug and Play manager enumerates these devices, including VM Bus, and
loads the appropriate device drivers, virtual service clients. Services other than I/O are also
exposed through this architecture.
The hypervisor is the component that directly manages the underlying hardware (processors
and memory). It is logically defined by the following components:

• Hypercalls interface. This is the entry point for all the partitions for the execution of
sensitive instructions. This is an implementation of the paravirtualization approach already
discussed with Xen. This interface is used by drivers in the partitioned operating system to
contact the hypervisor using the standard Windows calling convention. The parent partition
also uses this interface to create child partitions.
• Memory service routines (MSRs). These are the set of functionalities that control the
memory and its access from partitions. By leveraging hardware-assisted virtualization, the
hypervisor uses the Input/Output Memory Management Unit (I/O MMU or IOMMU) to fast-
track access to devices from partitions by translating virtual memory addresses.
• Advanced programmable interrupt controller (APIC). This component represents the
interrupt controller, which manages the signals coming from the underlying hardware when
some event occurs (timer expired, I/O ready, exceptions and traps).
• Scheduler. This component schedules the virtual processors to run on available physical
processors. The scheduling is controlled by policies that are set by the parent partition.
• Address manager. This component is used to manage the virtual network addresses that
are allocated to each guest operating system.
• Partition manager. This component is in charge of performing partition creation,
finalization, destruction, enumeration, and configurations. Its services are available through
the hypercalls interface API previously discussed.

Child partition: Any virtual machine that is created by the root partition.
Parent partition: The partition that executes the host operating system and implements the virtualization stack that complements the activity of the hypervisor in running guest operating systems.
Guest: Software that is running in a partition. It can be a full-featured operating system or a small, special-purpose kernel. The hypervisor is guest-agnostic.
Hypervisor: A layer of software that sits above the hardware and below one or more operating systems. Its primary job is to provide isolated execution environments called partitions. Each partition has its own set of virtualized hardware resources (central processing unit or CPU, memory, and devices). The hypervisor controls and arbitrates access to the underlying hardware.
Root partition: The partition that is created first and owns all the resources that the hypervisor does not, including most devices and system memory. The root partition hosts the virtualization stack and creates and manages the child partitions.
Hyper-V-specific device: A virtualized device with no physical hardware analog, so guests may need a driver (virtualization service client) for that Hyper-V-specific device. The driver can use the virtual machine bus (VMBus) to communicate with the virtualized device software in the root partition.
Virtual machine: A virtual computer that was created by software emulation and has the same characteristics as a real computer.
Virtual network switch: (also referred to as a virtual switch) A virtual version of a physical network switch. A virtual network can be configured to provide access to local or external network resources for one or more virtual machines.
Virtual processor: A virtual abstraction of a processor that is scheduled to run on a logical processor. A virtual machine can have one or more virtual processors.
Virtualization service client (VSC): A software module that a guest loads to consume a resource or service. For I/O devices, the virtualization service client can be a device driver that the operating system kernel loads.
Virtualization service provider (VSP): A provider exposed by the virtualization stack in the root partition that provides resources or services such as I/O to a child partition.
Virtualization stack: A collection of software components in the root partition that work together to support virtual machines. The virtualization stack works with and sits above the hypervisor. It also provides management capabilities.
VMBus: A channel-based communication mechanism used for inter-partition communication and device enumeration on systems with multiple active virtualized partitions. The VMBus is installed with Hyper-V Integration Services.
UNIT II - UNDERSTANDING CLOUD MODELS AND ARCHITECTURES

Cloud Models: NIST model


Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to
a shared pool of configurable computing resources (e.g., networks, servers, storage,
applications, and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction. This cloud model is composed of five
essential characteristics, three service models, and four deployment models.

Essential Characteristics:
On-demand self-service. A consumer can unilaterally provision computing capabilities, such as
server time and network storage, as needed automatically without requiring human interaction
with each service provider.
Broad network access. Capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g.,
mobile phones, tablets, laptops, and workstations).
Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.
Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand. To the
consumer, the capabilities available for provisioning often appear to be unlimited and can be
appropriated in any quantity at any time.
Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
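A measured, pay-as-you-go service ultimately comes down to metering plus simple arithmetic. The sketch below, with invented unit prices, shows how metered usage might be turned into a monthly charge; the rates and figures are illustrative only.

    # Invented unit prices for illustration only.
    PRICE_PER_VM_HOUR = 0.05      # dollars per hour of compute
    PRICE_PER_GB_MONTH = 0.02     # dollars per GB-month of storage
    PRICE_PER_GB_EGRESS = 0.09    # dollars per GB of outbound bandwidth

    def monthly_bill(vm_hours, storage_gb, egress_gb):
        """Turn metered resource usage into a pay-as-you-go charge."""
        return (vm_hours * PRICE_PER_VM_HOUR
                + storage_gb * PRICE_PER_GB_MONTH
                + egress_gb * PRICE_PER_GB_EGRESS)

    # One VM for 720 hours, 100 GB stored, 50 GB transferred out.
    print(f"${monthly_bill(720, 100, 50):.2f}")   # -> $42.50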
Cloud Cube model
The four dimensions of the Cloud Cube Model are shown in Figure 1.2 and listed here:

Physical location of the data: Internal (I) / External (E) determines your organization’s
boundaries.
Ownership: Proprietary (P) / Open (O) is a measure of not only the technology ownership, but
of interoperability, ease of data transfer, and degree of vendor application lock-in.

Security boundary: Perimeterised (Per) / De-perimeterised (D-p) is a measure of whether the operation is inside or outside the security boundary or network firewall.

Sourcing: Insourced or Outsourced indicates whether the service is provided by the customer or by the service provider.

Deployment models: Public, Private, Hybrid and Community


Cloud Deployment Model functions as a virtual computing environment with a deployment
architecture that varies depending on the amount of data you want to store and who has
access to the infrastructure.
Public cloud. The cloud infrastructure is provisioned for open use by the general public. It may
be owned, managed, and operated by a business, academic, or government organization, or
some combination of them. It exists on the premises of the cloud provider.
Private cloud. The cloud infrastructure is provisioned for exclusive use by a single organization
comprising multiple consumers (e.g., business units). It may be owned, managed, and operated
by the organization, a third party, or some combination of them, and it may exist on or off
premises.
Community cloud. The cloud infrastructure is provisioned for exclusive use by a specific
community of consumers from organizations that have shared concerns (e.g., mission, security
requirements, policy, and compliance considerations). It may be owned, managed, and
operated by one or more of the organizations in the community, a third party, or some
combination of them, and it may exist on or off premises.
Hybrid cloud. The cloud infrastructure is a composition of two or more distinct cloud
infrastructures (private, community, or public) that remain unique entities, but are bound
together by standardized or proprietary technology that enables data and application
portability (e.g., cloud bursting for load balancing between clouds)

1. Public cloud
2. Private cloud
3. Community cloud
4. Hybrid cloud

Service models: IaaS, PaaS and SaaS

Software as a Service (SaaS). The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
Example: BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk, Cisco WebEx, Slack, and GoToMeeting.
Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the
cloud infrastructure consumer-created or acquired applications created using programming
languages, libraries, services, and tools supported by the provider. The consumer does not
manage or control the underlying cloud infrastructure including network, servers, operating
systems, or storage, but has control over the deployed applications and possibly configuration
settings for the application-hosting environment.
Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine,
Apache Stratos, Magento Commerce Cloud, and OpenShift.
Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision
processing, storage, networks, and other fundamental computing resources where the
consumer is able to deploy and run arbitrary software, which can include operating systems
and applications. The consumer does not manage or control the underlying cloud infrastructure
but has control over operating systems, storage, and deployed applications; and possibly
limited control of select networking components (e.g., host firewalls).
Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google Compute
Engine (GCE), Rackspace, and Cisco Metacloud.
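To make the IaaS model concrete, the short sketch below shows how a consumer might programmatically provision a virtual server from an IaaS provider (here Amazon EC2 via the boto3 library). The region, AMI ID, and instance type are placeholders, and valid AWS credentials are assumed.

    import boto3

    # Connect to the EC2 service in a chosen region (placeholder region).
    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # Provision one small virtual machine; the image ID below is a placeholder
    # and must be replaced with a real AMI available in your account.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    print("Launched instance:", instances[0].id)

The consumer controls the operating system and software inside the instance, while the provider manages the physical infrastructure underneath, which is exactly the division of responsibility described above.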

Understanding Cloud Architecture:


Cloud computing builds on the architecture developed for staging large distributed network applications on the Internet. Cloud architecture can couple software running on virtualized hardware in multiple locations to provide an on-demand service to user-facing hardware and software.
Many descriptions of cloud computing describe it in terms of two architectural layers:
1. A client as a front end
2. The “cloud” as a backend
Cloud computing architecture is a combination of service-oriented architecture and event-
driven architecture.

Components of Cloud Computing Architecture:
 Client Infrastructure: Client Infrastructure is a front-end component. It provides a GUI (Graphical User Interface) to interact with the cloud.

 Application: The application may be any software or platform that a client wants to
access.
 Service: A cloud service manages which type of service you access according to the client’s requirement.
 Runtime Cloud: Runtime Cloud provides the execution and runtime environment to the
virtual machines.
 Storage: Storage is one of the most important components of cloud computing. It
provides a huge amount of storage capacity in the cloud to store and manage data.
 Infrastructure: It provides services on the host level, application level, and network level.
Cloud infrastructure includes hardware and software components such as servers,
storage, network devices, virtualization software, and other storage resources that are
needed to support the cloud computing model.
 Management: Management is used to manage components such as application, service,
runtime cloud, storage, infrastructure, and other security issues in the backend and
establish coordination between them.
 Security: Security is an in-built back end component of cloud computing. It implements
a security mechanism in the back end.

Cloud computing offers the following three types of services:


i. Software as a Service (SaaS)
ii. Platform as a Service (PaaS)
iii. Infrastructure as a Service (IaaS)
Infrastructure as a Service (IaaS)
 In the IaaS model, all the required infrastructure is designed and deployed to meet the requirements of a given software solution.
Ex: designing the infrastructure for a website portal with an appropriate database and security.
 A PaaS or SaaS service provider gets the same benefits from a composable system that a user does; these include, among others:
 Easier to assemble systems
 Cheaper system development
 More reliable operation
 A larger pool of qualified developers
 A logical design methodology

Exploring the Cloud Computing Stack: Composability, Infrastructure, Platforms, Virtual


Appliances, Communication Protocols, Applications;

Composability:
A composable system uses components to assemble services that can be tailored for a specific
purpose using standard parts. A composable component must be:
 Modular: It is a self-contained and independent unit that is cooperative, reusable, and
replaceable.
 Stateless: A transaction is executed without regard to other transactions or requests.

Infrastructure
Infrastructure as a Service (IaaS) providers rely on virtual machine technology to deliver servers that can run applications. VM instances have characteristics that can often be described in terms of real servers delivering a certain number of microprocessor (CPU) cycles, memory access, and network bandwidth to customers.

The software that runs in the virtual machines is what defines the utility of the cloud computing system.

Figure: the portion of the cloud computing stack that is designated as the server.

Here, apart from the APIs, everything is encapsulated into the VM server, but the actual use of the APIs depends on the programmer and the project runtime.

Platforms: Platform as a Service (PaaS) providers offer services meant to provide developers with different capabilities, for example:
 Salesforce.com’s Force.com Platform
 Windows Azure Platform
 Google Apps and the Google App Engine
These three services offer all the hosted hardware and software needed to build and deploy Web applications or services that are custom built by the developer.

A platform in the cloud is a software layer that is used to create higher levels of service.

Platforms often come replete with tools and utilities to aid in application design and deployment; most often we find developer tools for team collaboration, testing tools, instrumentation for measuring program performance and attributes, versioning, database and Web service integration, and storage tools.

Virtual Appliances: Applications such as a Web server or database server that can run on a virtual machine image are referred to as virtual appliances.

A virtual appliance may expose itself to users through an API; likewise, an application built in the cloud using a platform service would encapsulate the service through its own API. Many platforms offer user interface development tools based on HTML, JavaScript, or some other technology. As the Web becomes more media-oriented, many developers have chosen to work with rich Internet environments such as Adobe Flash, Flex, or AIR, or alternatives such as Windows Silverlight.

A virtual appliance is software that installs as middleware onto a virtual machine.


VMware’s Virtual Appliance marketplace:

 VirtualBox: a virtual machine technology, now owned by Oracle, that can run various operating systems and serves as a host for a variety of virtual appliances.
 Vmachines: a site with desktop, server, and security-related operating systems that run on VMware.

Communication Protocols: The cloud uses services available over the Internet, communicating using the standard Internet protocol suite underpinned by the HTTP and HTTPS transfer protocols.

For inter-process communication (IPC), many client/server protocols have been applied to distributed networking over the years. Various forms of RPC (Remote Procedure Call) implementations (including DCOM, Java RMI, and CORBA) attempt to solve the problem of engaging remote services.

Protocols often used when connecting to virtual machines:

1. RDP (port 3389) – log in to a Windows VM
2. SSH (port 22) – log in to a Linux VM
3. HTTP (port 80) – allow web traffic
4. HTTPS (port 443) – allow web traffic in secure mode
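As a small illustration of the SSH entry above, the following sketch uses the paramiko library to open an SSH session to a Linux VM on port 22 and run a command. The host address, username, and private-key path are placeholders for whatever your VM actually uses.

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    # Placeholder address, user, and private-key path for the target Linux VM.
    client.connect("203.0.113.10", port=22, username="clouduser",
                   key_filename="/home/me/.ssh/id_rsa")

    # Run a simple command on the VM and print its output.
    stdin, stdout, stderr = client.exec_command("uptime")
    print(stdout.read().decode())
    client.close()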

Applications: By nature, all websites and distributed APIs are written with web technologies, and these are the applications designed to run on the web. Web applications vary widely, but the common idea is to host an application publicly over the Internet using the web.

CONNECTING TO THE CLOUD:


Clients can connect to a cloud service in a number of different ways. These are the two most
common means:
1. A Web browser
2. A proprietary application
A cloud application can run on a server, a PC, a mobile device, or a cell phone. What is common to all these application types is that they exchange data over an inherently insecure and transient medium.
There are three basic methods for securely connecting over a connection:

 Use a secure protocol to transfer data such as SSL (HTTPS), FTPS, or IPsec, or connect
using a secure shell such as SSH to connect a client to the cloud.
 Create a virtual connection using a virtual private network (VPN), or with a remote data
transfer protocol such as Microsoft RDP or Citrix ICA, where the data is protected by a
tunneling mechanism.
 Encrypt the data so that even if the data is intercepted or sniffed, the data will not be
meaningful.
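The third method above, encrypting the data itself, can be sketched with Python's cryptography library: even if the ciphertext is intercepted in transit, it is meaningless without the key. The key handling here is deliberately simplified and only illustrative.

    from cryptography.fernet import Fernet

    # Generate a symmetric key; in practice this would be shared securely
    # between the client and the cloud service out of band.
    key = Fernet.generate_key()
    f = Fernet(key)

    # Encrypt the payload before sending it over the (untrusted) network.
    token = f.encrypt(b"account=1234; balance=5000")

    # Only a holder of the key can recover the original data.
    print(f.decrypt(token))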
THE JOLICLOUD NETBOOK OS
Joli OS, developed by Jolicloud, provides file sharing and access to Web applications (apps) and
desktops from the cloud. Based on the Ubuntu Linux kernel, Joli OS was designed to give
netbook and low-end processors the ability to utilize Web app and basic computing services
without hardware upgrades.

 Joli OS is installed as a thin client on a host desktop and provisions a variety of Web apps
from the cloud, including standard Web browsers, Gmail, Dropbox, Google Docs and
Flickr.
 Joli OS hosts a number of apps that may be accessed and easily added to the cloud
desktop via the default launcher. Joli OS also provides social bookmarking capabilities
for user sharing of popular apps and services.

Jolicloud concentrates on building a social platform with automatic software updates and
installs. The application launcher is built in HTML 5 and comes preinstalled with Gmail, Skype,
Twitter, Firefox, and other applications.

Any HTML 5 browser can be used to work with the Jolicloud interface. Jolicloud maintains a library or App Directory of over 700 applications as part of an app store. When you click to select an application, the company both installs and updates the application going forward, just as the iPhone manages applications on that device.

Figure: The Jolicloud interface.


CHROMIUM OS - THE BROWSER AS AN OPERATING SYSTEM.
Chrome OS is an operating system developed by Google. It is designed primarily for use with
web applications and cloud computing, and it is based on the open-source Chromium OS
project. Chrome OS is known for its simplicity, speed, and security. Here are some key features
and benefits of Chrome OS:
1. Web-Centric: Chrome OS is centered around the web, and most of its applications and
services are cloud-based. This means that you can access your files, documents, and
applications from any device with an internet connection, making it highly convenient
and portable.
2. Fast Boot Times: Chrome OS is optimized for quick boot times. It allows you to start up
your Chromebook or Chrome OS device in a matter of seconds, making it ideal for users
who need instant access to their information.
3. Automatic Updates: Chrome OS automatically updates itself in the background,
ensuring that you are always using the latest version with the latest security patches
and features. This helps keep your device secure and up-to-date without requiring any
manual intervention.
4. Security: Chrome OS is designed with security in mind. Each application runs in its own
sandbox, isolating it from other parts of the system, reducing the risk of malware and
other security threats. Additionally, features like Verified Boot and automatic updates
help protect against potential vulnerabilities.
5. User-Friendly Interface: Chrome OS has a simple and intuitive interface that is easy to
navigate, making it accessible for both experienced and novice users.
6. Lightweight and Efficient: Chrome OS is lightweight, which means it can run smoothly
on low-powered hardware, making it ideal for budget-friendly devices like
Chromebooks.
7. Google Ecosystem Integration: Chrome OS integrates seamlessly with Google's
ecosystem, including Google Drive, Gmail, Google Docs, and other Google services. This
integration allows for easy synchronization and access to your data across different
devices.
8. Offline Capabilities: While Chrome OS is heavily focused on cloud-based applications,
many apps have offline capabilities, allowing you to work or entertain yourself even
when you're not connected to the internet.
9. Affordability: Chromebooks, which are laptops running Chrome OS, are often more
affordable than traditional laptops, making them an attractive option for budget-
conscious users.
10. Guest Mode: Chrome OS includes a guest mode feature, allowing others to use your
Chromebook without accessing your personal data.
It's worth noting that while Chrome OS is well-suited for users who primarily work and live in
the browser and use web applications, it may not be suitable for everyone, especially those
who require specialized software or heavy offline capabilities. However, it has gained popularity
in education, business, and casual use cases due to its simplicity, security, and ease of use.
UNIT III

Understanding Cloud Services and Applications Infrastructure as a Service (IaaS):


IaaS workloads, Pods, aggregation, and silos;

Defining Infrastructure as a Service (IaaS)

Infrastructure as a Service (IaaS) is a cloud computing service model that provides


virtualized computing resources over the internet. In an IaaS environment, instead of
owning and managing physical hardware, businesses and individuals can rent or lease
virtualized infrastructure components, such as virtual machines (VMs), storage, and
networking, from a cloud service provider. These resources are typically hosted in data
centers and can be accessed and managed remotely.

IaaS workloads

The fundamental unit of virtualized client in an IaaS deployment is called a workload. A


workload simulates the ability of a certain type of real or physical server to do an amount
of work. The work done can be measured by the number of Transactions Per Minute (TPM)
or a similar metric against a certain type of system. In addition to throughput, a workload
has certain other attributes such as Disk I/Os measured in Input/Output Per Second IOPS,
the amount of RAM consumed under load in MB, network throughput and latency, and so
forth. In a hosted application environment, a client’s application runs on a dedicated server
inside a server rack or perhaps as a standalone server in a room full of servers. In cloud
computing, a provisioned server called an instance is reserved by a customer, and the
necessary amount of computing resources needed to achieve that type of physical server is
allocated to the client’s needs.
NOTE: The diagram shows how three virtual private server instances are partitioned in an IaaS stack. The three workloads require three different sizes of computers: small, medium, and large.
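The idea of matching a workload's measured requirements to an instance size can be sketched as follows. The capacity figures and the selection rule are invented for illustration only; real providers publish their own instance types and limits.

    # Illustrative instance capacities (TPM, IOPS, RAM in MB) -- not real limits.
    SIZES = {
        "small":  {"tpm": 1_000,  "iops": 500,    "ram_mb": 2_048},
        "medium": {"tpm": 10_000, "iops": 3_000,  "ram_mb": 8_192},
        "large":  {"tpm": 50_000, "iops": 10_000, "ram_mb": 32_768},
    }

    def pick_size(tpm, iops, ram_mb):
        """Return the smallest size whose capacity covers every requirement."""
        for name, cap in SIZES.items():
            if tpm <= cap["tpm"] and iops <= cap["iops"] and ram_mb <= cap["ram_mb"]:
                return name
        raise ValueError("workload exceeds the largest available instance")

    # A workload needing 4,000 TPM, 1,200 IOPS and 4 GB of RAM maps to "medium".
    print(pick_size(tpm=4_000, iops=1_200, ram_mb=4_096))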

Consider a transactional e-commerce (website) system, for which a typical stack contains the following components:

 Web server
 Application server
 File server
 Database
 Transaction engine

This website system has several different workloads operating at once: queries against the database, processing of business logic, and serving up clients’ Web pages.

IMPORTANT NOTE: Amazon Web Services offers a classic Service-Oriented Architecture (SOA) approach to IaaS, where the SOA approach is used to build distributed applications.

Infrastructure as a Service (IaaS) is a versatile cloud computing model that can support a
wide range of workloads across different industries and use cases. Here are some common
IaaS workloads:

1. Web Hosting: IaaS is often used to host websites and web applications. Users can
create virtual machines, configure web servers, and scale resources based on traffic
demands.

2. Development and Testing Environments: Developers and quality assurance


teams can use IaaS to quickly provision and manage virtual environments for
software development, testing, and debugging purposes.
3. Data Storage and Backup: IaaS providers offer scalable and durable storage
solutions that are ideal for data storage and backup. Users can store large amounts
of data and take advantage of features like data redundancy and automatic backups.

4. Big Data and Analytics: IaaS is well-suited for big data processing and analytics
workloads. Users can deploy clusters of virtual machines to analyze large datasets
and run data processing frameworks like Hadoop or Spark.

5. High-Performance Computing (HPC): IaaS can support HPC workloads, such as


scientific simulations, weather forecasting, and molecular modeling, by providing
access to high-performance computing clusters.

6. Virtual Desktop Infrastructure (VDI): Organizations can use IaaS to deploy virtual
desktops for remote or distributed teams, reducing the need for physical hardware
and providing secure access to desktop environments.

7. E-commerce: E-commerce websites and applications can leverage IaaS to handle


spikes in traffic during sales events, ensuring high availability and performance.

8. Container Orchestration: IaaS can be used as the underlying infrastructure for


container orchestration platforms like Kubernetes, enabling the deployment and
management of containerized applications at scale.

These are just some examples of the diverse workloads that can be supported by
Infrastructure as a Service. The flexibility and scalability of IaaS make it a valuable option
for organizations looking to optimize their IT infrastructure and meet specific computing
needs.
Pods, aggregation, and silos

Pods, aggregation, and silos are concepts often used in different contexts, including
technology, business, and organizational structures. Here's an explanation of each term:

1. Pods:

 Technology: In the context of container orchestration, like Kubernetes, a


"pod" is the smallest deployable unit that can contain one or more
containers. Containers within a pod share the same network namespace and
storage volumes, making them suitable for co-located services that need to
communicate closely or share data. Pods are used to group related
containers and ensure they run on the same host.

2. Aggregation:

 Technology: Aggregation refers to the process of collecting and summarizing


data from multiple sources into a single view or dataset. It's commonly used
in data analysis, reporting, and monitoring to simplify complex data
structures and make it easier to work with the information.

 Business: In a business context, aggregation can also refer to the


consolidation of data or resources to achieve economies of scale. For
example, an aggregator in the travel industry might collect flight and hotel
information from various sources and present it in one place for users to
book conveniently.

3. Silos:

 Technology: In technology and data management, "silos" refer to isolated or


separated systems or databases that do not easily share data or resources
with other systems. This lack of integration can lead to inefficiencies and
difficulties in accessing and utilizing data across different parts of an
organization.

 Business/Organizational: In a broader context, "silos" can refer to isolated


departments or teams within an organization that don't collaborate
effectively with one another. This can lead to communication barriers and
hinder the overall productivity and innovation of the organization.
In summarize, "pods" refer to a technical concept used in container orchestration,
"aggregation" is about collecting and summarizing data or resources from various sources,
and "silos" pertain to isolated or separated systems, departments, or teams that do not
collaborate efficiently.
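As promised under the "pods" item above, here is a minimal sketch of creating a single-container pod with the official Kubernetes Python client. It assumes a working kubeconfig and cluster; the pod name, container name, and image are placeholder examples.

    from kubernetes import client, config

    # Load cluster credentials from the local kubeconfig file.
    config.load_kube_config()

    # Describe a pod holding one nginx container (names/image are examples).
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="web-pod"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(name="web", image="nginx:1.25")]
        ),
    )

    # Ask the cluster to schedule the pod in the "default" namespace.
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)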

Platform as a Service (PaaS)

Platform as a Service (PaaS) is a cloud computing service model that provides a platform and environment in which developers can build, deploy, and manage customized applications without having to manage the underlying infrastructure.

Ex:- Google’s App Engine

Platforms can be based on specific types of development languages, application frameworks, or other constructs. A PaaS offering provides the tools and development environment to deploy applications on another vendor’s platform. Often a PaaS tool is a fully integrated development environment; that is, all the tools and services are part of the PaaS service.

Ex: any third-party web-hosting solution.

To be useful as a cloud computing offering, PaaS systems must offer a way to create user interfaces, and thus support standards such as HTML, JavaScript, or other rich media technologies. In a PaaS model, customers may interact with the software to enter and retrieve data, perform actions, get results, and, to the degree that the vendor allows it, customize the platform involved. The customer takes no responsibility for maintaining the hardware, the software, or the development of the applications and is responsible only for his interaction with the platform. The vendor is responsible for all the operational aspects of the service, for maintenance, and for managing the product lifecycle.
PaaS abstracts the complexities of infrastructure management, allowing developers to
focus on coding and application development. Here are key characteristics and components
of Platform as a Service:

1. Application Development Platform: PaaS provides a set of tools, frameworks,


libraries, and runtime environments that developers can use to develop, test, and
deploy their applications. This includes programming languages like Java, Python,
and .NET, as well as databases, web servers, and development tools.

2. Middleware and Services: PaaS often includes middleware services like databases
(DBaaS), messaging systems, caching, and identity management. These services are
pre-configured and readily available for developers, reducing the time and effort
required to set up and manage these components.

3. Scalability and Elasticity: PaaS platforms typically offer automatic scaling


capabilities. Applications can scale horizontally by adding more instances or
vertically by increasing resource allocation, ensuring that the application can handle
varying workloads efficiently.

4. Deployment and Management: PaaS platforms provide tools and services for
deploying applications to the cloud. Developers can easily manage application
lifecycles, update code, and roll back changes as needed.

5. DevOps and Collaboration: PaaS encourages collaboration between development


and operations teams. It often integrates with DevOps tools, enabling continuous
integration and continuous deployment (CI/CD) pipelines for streamlined
application delivery.

6. Abstraction of Infrastructure: PaaS abstracts the underlying infrastructure,


including servers, storage, and networking. Developers don't need to worry about
provisioning, configuring, or maintaining these components, allowing them to focus
solely on application development.

7. Multi-Tenancy: PaaS platforms are typically multi-tenant, meaning multiple users


and applications can share the same underlying infrastructure while remaining
isolated and secure.

8. Cost-Efficiency: PaaS often follows a pay-as-you-go pricing model, where users are
billed based on the resources and services they consume. This can result in cost
savings compared to managing on-premises infrastructure.

9. Security and Compliance: PaaS providers implement security measures and


compliance standards to protect applications and data. However, users are still
responsible for securing their application code and configurations.

10. Vendor Lock-In: Adopting a specific PaaS platform may tie developers to that
provider's ecosystem and APIs. Careful consideration is needed to assess the
potential vendor lock-in and the portability of applications.
Software as a Service (SaaS): SaaS characteristics, Open SaaS and SOA, Salesforce.com
and CRM SaaS;

Software as a Service (SaaS) is a cloud computing model that delivers software applications
over the internet on a subscription basis. In this model, software is hosted and maintained
by a third-party provider, making it accessible to users from any device with an internet
connection.

Software as a Service (SaaS) applications are cloud-based software solutions. These


applications cover a wide range of functionality and are accessible from various devices
with an internet connection.

 Microsoft 365 (formerly Office 365): Includes software like Word, Excel,
PowerPoint, and cloud-based collaboration tools.
 Google Workspace (formerly G Suite): Offers applications like Google Docs,
Sheets, and Gmail for productivity and communication.
 Salesforce: A popular CRM platform that helps businesses manage sales, customer
interactions, and marketing.
 WordPress.com: A popular platform for website creation and content
management.
 Google Analytics: Provides web analytics and reporting on website and app
performance.
 SAP Business ByDesign: A cloud-based ERP solution for small and medium-sized
enterprises.
 Zoom: A widely used video conferencing and communication platform.

SaaS characteristics
All Software as a Service (SaaS) applications share the following characteristics:

1. The software is available over the Internet globally through a browser on demand.

2. The typical license is subscription-based or usage-based and is billed on a recurring basis. In a small number of cases a flat fee may be charged, often coupled with a maintenance fee.

Table below shows how different licensing models compare.

3. The software and the service are monitored and maintained by the vendor, regardless of
where all the different software components are running. There may be executable client-
side code, but the user isn’t responsible for maintaining that code or its interaction with the
service.

4. Reduced distribution and maintenance costs and minimal end-user system costs
generally make SaaS applications cheaper to use than their shrink-wrapped versions.

5. Such applications feature automated upgrades, updates, and patch management and
much faster rollout of changes.

6. SaaS applications often have a much lower barrier to entry than their locally installed
competitors, a known recurring cost, and they scale on demand (a property of cloud
computing in general).

7. All users have the same version of the software, so each user’s software is compatible with another’s.

8. SaaS supports multiple users and provides a shared data model through a single-instance, multi-tenancy model.

SaaS ecosystem offers advantages such as reduced upfront costs, ease of deployment, and
accessibility. It is widely used by businesses of all sizes and has transformed the way
software is delivered and consumed.

Open SaaS and SOA

Open SaaS (Open Software as a Service): Open SaaS refers to a specific approach within the
Software as a Service (SaaS) model that emphasizes flexibility, customization, and
openness. Unlike traditional SaaS solutions that offer fixed, closed, and often proprietary
software, Open SaaS provides a more open and extensible platform. This allows users to
tailor the software to their specific needs and integrate it with other applications or
services.

Key characteristics of Open SaaS include:

1. Customization: Open SaaS platforms allow users to customize and configure the
software to meet their unique requirements. This might include adjusting
workflows, adding new features, or modifying existing ones.

2. Integration: Open SaaS solutions offer open APIs (Application Programming


Interfaces) that enable seamless integration with other software and services. This
is particularly valuable for businesses that rely on multiple software tools.

3. Community Collaboration: Open SaaS often fosters a community of developers


and users who can contribute to the platform's development and share
customizations and extensions.

4. Flexibility: Users have the flexibility to adapt the software to evolving business
needs, which is beneficial for industries and organizations with specialized
requirements.
Service-Oriented Architecture (SOA): Service-Oriented Architecture (SOA) is an
architectural style for designing and building software systems. It focuses on organizing
software components as services, which are independent, self-contained units of
functionality. These services can communicate with each other over a network, and they
are designed to be reusable and interoperable. SOA principles are not limited to SaaS; they
can be applied in various software development contexts, including on-premises systems.

Key concepts in SOA include:

1. Services: Services in SOA are modular, self-contained, and well-defined units of


functionality. They can be accessed and used by other software components.

2. Interoperability: SOA emphasizes the importance of making services


interoperable, allowing different software systems to communicate and work
together seamlessly.

3. Reusability: Services are designed to be reusable across various applications and


scenarios, reducing duplication of effort and improving efficiency.

4. Standards: SOA often relies on standardized protocols and technologies to enable


communication and integration between services.

A considerable amount of SaaS software is based on open source software. When open
source software is used in a SaaS, you may hear it referred to as Open SaaS.

The advantages of using open source software are that systems are much cheaper to deploy
because you don’t have to purchase the operating system or software, there is less vendor
lock-in, and applications are more portable.

The popularity of open source software, from Linux to APACHE, MySQL, and Perl (the
LAMP platform) on the Internet, and the number of people who are trained in open source
software make Open SaaS an attractive proposition.

The impact of Open SaaS will likely translate into better profitability for the companies that
deploy open source software in the cloud, resulting in lower development costs and more
robust solutions.
Three essential components:

 An interactive user interface, which is usually created with HTML/XHTML, Ajax,


JavaScript, or CSS.
 Web services that can be accessed using an API, and whose data can be bound and
transported by Web service protocols such as SOAP, REST, XML/HTTP, XML/RPC,
and JSON/RPC.
 Data transfer in the form of XML, KML (Keyhole Markup Language), JSON (JavaScript
Object Notation), or the like.
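The second and third components (an API-accessible web service and JSON data transfer) can be sketched with a simple REST call. The endpoint URL and field names below are hypothetical.

    import requests

    # Hypothetical REST endpoint; any JSON-over-HTTPS web service works the same way.
    resp = requests.get("https://api.example.com/v1/orders",
                        params={"status": "open"})
    resp.raise_for_status()

    # The JSON payload is decoded into ordinary Python objects.
    orders = resp.json()
    for order in orders:
        print(order["id"], order["total"])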

Salesforce.com and CRM SaaS .

Salesforce.com is a well-known provider of Customer Relationship Management (CRM)


software delivered as a Software as a Service (SaaS). Salesforce is a pioneer and one of the
market leaders in the CRM industry, offering a wide range of cloud-based CRM solutions for
businesses of all sizes. Here's an overview of Salesforce and its CRM SaaS offerings:

Salesforce.com: Salesforce.com, often referred to simply as Salesforce, is a cloud-based


customer relationship management software company founded in 1999. It has grown to
become one of the most prominent SaaS providers, particularly in the CRM domain.
Salesforce's CRM platform is known for its flexibility, scalability, and extensive set of
features. Key aspects of Salesforce.com include:

1. CRM Solutions: Salesforce offers a suite of CRM solutions that cover sales,
marketing, customer service, and analytics. These solutions are designed to help
businesses manage and analyze customer interactions and data.
2. Cloud-Based Delivery: Salesforce CRM is delivered as a cloud service, allowing
users to access it from anywhere with an internet connection. This cloud-based
approach eliminates the need for businesses to set up and maintain on-premises
CRM software and infrastructure.

3. Customization: Salesforce provides extensive customization options, enabling


businesses to tailor the CRM platform to their specific needs. This includes creating
custom fields, workflows, and applications.

4. Integration: Salesforce offers a wide range of pre-built integrations and an open


API, making it easy to connect with other business applications, including marketing
automation, e-commerce, and productivity tools.

5. Automation: Salesforce CRM includes automation features, such as workflow


automation and process automation, to streamline repetitive tasks and improve
efficiency.

6. AI and Analytics: Salesforce incorporates artificial intelligence (AI) and analytics to


help businesses make data-driven decisions, predict customer behaviors, and
optimize their sales and marketing efforts.

7. Community and Marketplace: Salesforce has a thriving community of users,


developers, and partners. The Salesforce AppExchange is a marketplace for third-
party applications and integrations built on the Salesforce platform.

8. Security and Compliance: Salesforce places a strong emphasis on security and


compliance, providing tools and features to protect customer data and ensure
regulatory compliance.

Salesforce CRM offers several editions tailored to different business needs and sizes,
including small businesses, mid-sized enterprises, and large corporations.

Identity as a Service (IDaaS): Identity, Networked identity service classes, Identity


system codes of conduct, IDaaS interoperability; Compliance as a Service (CaaS).

Defining Identity as a Service (IDaaS)

Identity as a Service (IDaaS) is a cloud-based service that provides identity and access
management solutions as a service. IDaaS is designed to help organizations manage and
secure user identities and control access to their systems and resources. It offers a range of
features and tools for identity verification, authentication, authorization, and user
provisioning, all delivered via the cloud. Here are the key components and aspects of IDaaS:

1. User Authentication: IDaaS platforms offer various authentication methods,


including username and password, multi-factor authentication (MFA), single sign-on
(SSO), and biometrics, to verify the identity of users accessing applications and
systems.
2. Authorization and Access Control: IDaaS solutions enable organizations to define
and enforce access policies, ensuring that users have the appropriate permissions to
access specific resources. This includes role-based access control (RBAC) and fine-
grained access controls.

3. Single Sign-On (SSO): SSO allows users to access multiple applications and services
with a single set of login credentials. With IDaaS, users can authenticate once and
gain access to multiple resources without the need to re-enter their credentials.

4. Identity Federation: IDaaS supports identity federation, which allows users to


access resources across multiple organizations without the need to create separate
accounts for each organization. Federation is often used for business-to-business
(B2B) and business-to-consumer (B2C) scenarios.

5. Security and Compliance: IDaaS solutions offer security features like encryption,
threat detection, and real-time monitoring to protect user identities and data. They
also help organizations comply with data privacy and regulatory requirements.

6. Multi-Tenancy: IDaaS providers offer multi-tenancy support, allowing organizations to manage user identities for different departments, subsidiaries, or customers within a single platform.

IDaaS is particularly valuable for businesses and organizations looking to enhance security,
streamline user management, and provide a better user experience for both employees and
customers.

What is an identity?

An identity refers to the digital representation of a user, service, or entity that is interacting
with cloud resources and services. Identity management in the cloud is crucial for
controlling access, ensuring security, and managing permissions within cloud
environments.

1. User Identity: User identities are associated with individual users or employees
who need access to cloud resources. User identities are typically linked to user
accounts, which are used to authenticate and authorize access.

2. Single Sign-On (SSO): SSO is a mechanism that allows users to access multiple
cloud services and applications with a single set of login credentials. It simplifies the
authentication process and enhances security by reducing the need for users to
remember multiple passwords.

3. Access Control: Identity and access management (IAM) is a critical aspect of cloud
security. It involves defining policies and rules that specify what each identity (user
or service) is allowed to do within the cloud environment. These permissions are
typically defined using roles, groups, and policies.
4. Multi-Factor Authentication (MFA): MFA adds an additional layer of security to
identity verification by requiring users to provide multiple forms of authentication,
such as something they know (password) and something they have (a mobile app or
hardware token).

5. Token-Based Authentication: In the cloud, access to resources is often controlled using tokens. When a user or service is authenticated, they receive a token that can be presented to gain access to resources. These tokens are short-lived and can be revoked if needed (see the short sketch after this list).

6. Role-Based Access Control (RBAC): RBAC is a method for controlling access based
on roles and permissions. Users or services are assigned roles, and these roles
determine what actions they can perform within the cloud environment.
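
To make the token and RBAC ideas above concrete, here is a minimal, self-contained Python sketch. It is not the API of any particular IDaaS product; the signing key, role table, and function names are invented for illustration. It issues a short-lived signed token after authentication and then checks a requested action against a role-based permission table.

import hmac, hashlib, json, time, base64

SECRET = b"demo-signing-key"          # assumption: a shared signing secret for this sketch only
ROLE_PERMISSIONS = {                  # hypothetical role-to-permission mapping (RBAC)
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def issue_token(user, role, ttl=300):
    """Create a short-lived, signed token after the user has authenticated."""
    payload = json.dumps({"sub": user, "role": role, "exp": time.time() + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def check_access(token, action):
    """Verify the token signature and expiry, then apply the RBAC policy."""
    body, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(body)
    if not hmac.compare_digest(sig, hmac.new(SECRET, payload, hashlib.sha256).hexdigest()):
        return False                  # tampered token
    claims = json.loads(payload)
    if time.time() > claims["exp"]:
        return False                  # expired token (tokens are deliberately short-lived)
    return action in ROLE_PERMISSIONS.get(claims["role"], set())

token = issue_token("alice", "editor")
print(check_access(token, "write"))   # True  - editors may write
print(check_access(token, "delete"))  # False - delete requires the admin role

A real IDaaS platform would use standard token formats (such as SAML assertions or JWTs) and a policy engine rather than a hard-coded dictionary.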

Networked identity service classes refer to different categories or types of identity services used to manage and control access to resources, applications, and data in a networked environment. These services help organizations establish and maintain secure and efficient identity and access management solutions. Here are some common networked identity service classes:

 Identity as a Service (IDaaS) may include any of the following:

 Authentication services (identity verification)
 Directory services
 Federated identity
 Identity governance
 Identity and profile management
 Policies, roles, and enforcement
 Provisioning (external policy administration)
 Registration
 Risk and event monitoring, including audits
 Single sign-on services (pass-through authentication)

Identity system codes of conduct

Identity system codes of conduct are ethical guidelines and principles that organizations,
service providers, and individuals involved in identity management should follow. These
codes of conduct help ensure the responsible and ethical use of identity information and
systems, as well as protect the privacy, security, and rights of individuals.

In working with IDaaS software, evaluate IDaaS applications on the following basis:

 User control for consent: Users control their identity and must consent to the use
of their information.
 Minimal Disclosure: The minimal amount of information should be disclosed for an
intended use.
 Justifiable access: Only parties who have a justified use of the information
contained in a digital identity and have a trusted identity relationship with the
owner of the information may be given access to that information.
 Directional Exposure: An ID system must support bidirectional identification for a
public entity so that it is discoverable and a unidirectional identifier for private
entities, thus protecting the private ID.
 Interoperability: A cloud computing ID system must interoperate with other
identity services from other identity providers.
 Unambiguous human identification: An IDaaS application must provide an
unambiguous mechanism for allowing a human to interact with a system while
protecting that user against an identity attack.
 Consistency of Service: An IDaaS service must be simple to use, consistent across
all its uses, and able to operate in different contexts using different technologies.

IDaaS interoperability
 User authentication
 Authorization markup languages

Interoperability in the context of Identity as a Service (IDaaS) refers to the ability of different IDaaS solutions, identity providers, and identity-related systems to work together seamlessly and exchange identity information and authentication data effectively. It is crucial for ensuring that users can access various applications, services, and resources across multiple platforms.
Cloud computing IDaaS applications must rely on a set of developing industry standards to
provide interoperability. The following are among the more important of these services:

 User-centric authentication (usually in the form of information cards): The OpenID and CardSpace specifications support this type of data object.
 The XACML Policy Language: This is a general-purpose authorization policy
language that allows a distributed ID system to write and enforce custom policy
expressions. XACML can work with SAML; when SAML presents a request for ID
authorization, XACML checks the ID request against its policies and either allows or
denies the request.
 The SPML Provisioning Language: This is an XML request/response language that is used to integrate and interoperate service provisioning requests. SPML is a standard of OASIS's Provisioning Services Technical Committee (PSTC) that conforms to the SOA architecture.
 The XDAS Audit System: The Distributed Audit Service provides accountability for users accessing a system, and the detection of security policy violations when attempts are made to access the system by unauthorized users or by users accessing the system in an unauthorized way.
User authentication

OpenID is a developing industry standard for authenticating “end users” by storing their digital identity
in a common format.

Any software application that complies with the standard accepts an OpenID that is authenticated by a trusted provider. An impressive group of cloud computing vendors serve as identity providers (OpenID providers), including Facebook, Google, and others.

These are samples of trusted providers and their URL formats:

 Blogger: <username>.blogger.com or <blogname>.blogspot.com
 MySpace: myspace.com/<username>
 Google: https://www.google.com/accounts/o8/id
 Google Profile: google.com/profiles/<username>
 Microsoft: accountservices.passport.net/
 MyOpenID: <username>.myopenid.com
 Verisign: <username>.pip.verisignlabs.com
 WordPress: <username>.wordpress.com
 Yahoo!: openid.yahoo.com

Authorization markup languages

Authorization markup languages are used to define and manage access control policies within various
systems and applications. These markup languages provide a standardized way to specify permissions
and access rights for users or entities within a given system. Here are some of the commonly used
authorization markup languages:

1. XACML (eXtensible Access Control Markup Language): XACML is an OASIS standard that
provides a flexible and extensible framework for access control policies. It allows administrators
to define policies for authorization, including rules for granting or denying access based on
various attributes and conditions.

2. SAML (Security Assertion Markup Language): SAML is an XML-based standard for exchanging
authentication and authorization data between parties, particularly between an identity
provider (IdP) and a service provider (SP). While SAML is primarily focused on authentication, it
includes authorization-related assertions as well.
3. ABAC (Attribute-Based Access Control): ABAC is a model for access control where access
decisions are based on attributes associated with the user, the resource, and the environment.
While not a specific markup language, ABAC policies can be expressed using languages like
XACML.

4. ALFA (Abbreviated Language for Authorization): ALFA is a specialized language designed for
writing access control policies for XACML. It simplifies the process of defining policies by
providing a more human-readable and concise syntax.

5. REL (Request and Evaluation Language): REL is used in the context of XACML and is a language
for specifying the authorization requests and decision evaluation logic. It allows for specifying
the conditions under which a request should be granted or denied.

6. NGAC (Next Generation Access Control) Policy Language: NGAC is a policy language used to
define access control policies based on attributes and relationships. It provides a framework for
defining and enforcing fine-grained access control policies.
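
As a rough illustration of how attribute-based (XACML-style) decisions work, the following Python sketch evaluates a request against a small rule set. It is only an analogy: a real XACML policy is written in XML and evaluated by a dedicated policy engine, and the attribute names and rules below are invented for this example.

# A minimal Python analogue of an attribute-based (XACML-style) access decision.
POLICIES = [
    {"effect": "Permit", "action": "read",
     "condition": lambda a: a["department"] == a["resource_owner_dept"]},
    {"effect": "Permit", "action": "write",
     "condition": lambda a: a["role"] == "manager" and a["time_of_day"] < 18},
]

def evaluate(request):
    """Return Permit if any applicable rule's condition holds, otherwise Deny."""
    for rule in POLICIES:
        if rule["action"] == request["action"] and rule["condition"](request):
            return rule["effect"]
    return "Deny"

req = {"action": "write", "role": "manager", "time_of_day": 14,
       "department": "sales", "resource_owner_dept": "sales"}
print(evaluate(req))   # Permit - the write rule's attributes are satisfied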
Compliance as a Service (CaaS) is a cloud-based service model that focuses on helping organizations manage and maintain compliance with relevant regulatory, industry-specific, and internal requirements. CaaS leverages cloud technology and services to streamline and automate compliance processes, making it more efficient and cost-effective for businesses.

In order to implement CaaS, some companies are organizing what might be referred to as
“vertical clouds,” clouds that specialize in a vertical market. Examples of vertical clouds
that advertise CaaS capabilities include the following:

 Athenahealth (http://www.athenahealth.com/) for the medical industry
 Bankserv (http://www.bankserv.com/) for the banking industry
 ClearPoint PCI Compliance-as-a-Service for merchant transactions under the Payment Card Industry Data Security Standard
 FedCloud (http://www.fedcloud.com/) for government
 Rackserve PCI Compliant Cloud (http://www.rackspace.com/; another PCI CaaS service)
Capacity Planning: Capacity planning is a critical process in IT and infrastructure
management that involves assessing and managing resources to ensure that a system or
application can meet performance and scalability requirements. To effectively conduct
capacity planning, it's essential to define baselines, metrics, and consider various aspects of
system and network capacity. Here are key concepts related to capacity planning:

Capacity planning is an iterative process with the following steps:

1. Determine the characteristics of the present system.
2. Measure the workload for the different resources in the system: CPU, RAM, disk, network, and so forth.
3. Load the system until it is overloaded, determine when it breaks, and specify what is required to maintain acceptable performance. Knowing when systems fail under load and which factor(s) are responsible for the failure is the critical step in capacity planning.
4. Predict the future based on historical trends and other factors.
5. Deploy or tear down resources to meet your predictions.
6. Iterate Steps 1 through 5 repeatedly.

Defining Baseline and Metrics

A baseline represents the reference point or starting level for measuring performance,
utilization, or any other relevant metric related to an IT system or infrastructure

Key components of a baseline include:

 Resource Utilization: CPU, memory, disk, network usage, etc.
 Performance Metrics: Response times, throughput, transaction rates, etc.
 Workload Patterns: Usage patterns during peak and off-peak times.

Developers often build cloud-based applications and Web sites on a LAMP solution stack, so let's use such an application as an example:

 Linux, the operating system
 Apache HTTP Server (http://httpd.apache.org/), the Web server based on the work of the Apache Software Foundation
 MySQL (http://www.mysql.com), the database server developed by the Swedish company MySQL AB, now owned by Oracle Corporation through its acquisition of Sun Microsystems
 PHP (http://www.php.net/), the Hypertext Preprocessor scripting language developed by The PHP Group

LAMP is good to use as an example because it offers a system with two applications
(APACHE and MySQL) that can be combined or run separately on servers.
Baseline Measurements:

Let’s assume that a capacity planner is working with a system that has a Web site based on
APACHE, and let’s assume the site is processing database transactions using MySQL.

There are two important overall workload metrics in this LAMP system:

1. Page views or hits on the Web site, as measured in hits per second
2. Transactions completed on the database server, as measured by transactions per
second or perhaps by queries per second
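
A baseline for these two metrics can be derived from measurements taken over a fixed window. The short Python sketch below uses assumed counts from the Apache access log and the MySQL transaction counter; the numbers are illustrative only.

# Hypothetical numbers for a LAMP site: counts taken from the Apache access log and
# the MySQL transaction counter over a one-hour measurement window.
measurement_window_s = 3600
apache_hits          = 900_000     # page views served in the window (assumed)
mysql_transactions   = 180_000     # completed DB transactions in the window (assumed)

hits_per_second         = apache_hits / measurement_window_s
transactions_per_second = mysql_transactions / measurement_window_s

print(f"Baseline: {hits_per_second:.0f} hits/s, {transactions_per_second:.0f} transactions/s")
# Baseline: 250 hits/s, 50 transactions/s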

System Metrics: System metrics are quantitative measures that assess the performance
and resource utilization of a system. Common system metrics include CPU utilization,
memory usage, disk I/O, network bandwidth, and response time.

A machine instance (physical or virtual) is primarily defined by four essential resources:

1. CPU
2. Memory (RAM)
3. Disk
4. Network connectivity
Load Testing: Load testing involves simulating user or application traffic to evaluate how a
system performs under different levels of load. It helps determine how well a system can
handle increased workloads.

Load testing seeks to answer the following questions:

1. What is the maximum load that my current system can support?
2. Which resource(s) in the current system limit the system's performance? This parameter is referred to as the resource ceiling, and it depends upon a server's configuration.
3. Can I alter the configuration of my server in order to increase capacity?
4. How does this server’s performance relate to your other servers that might have
different characteristics?

You may want to consider these load generation tools as well:

 HP LoadRunner (https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-126-17^8_4000_100__)
 IBM Rational Performance Tester (http://www-01.ibm.com/software/awdtools/tester/performance/)
 JMeter (http://jakarta.apache.org/jmeter)
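
For a rough sense of what such tools do, the following Python sketch issues a batch of concurrent HTTP requests against a test server and reports throughput and mean latency. It is a toy load generator, not a substitute for the tools above; the target URL and request counts are assumptions, and it should only be pointed at a system you own.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://localhost:8080/"   # hypothetical test server under your control
REQUESTS = 200
CONCURRENCY = 20

def fetch(_):
    start = time.time()
    try:
        urllib.request.urlopen(TARGET, timeout=5).read()
        return time.time() - start   # latency of a successful request
    except Exception:
        return None                  # failures are counted separately

t0 = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(fetch, range(REQUESTS)))
elapsed = time.time() - t0

ok = [r for r in results if r is not None]
if ok:
    print(f"{len(ok)}/{REQUESTS} requests succeeded in {elapsed:.1f}s "
          f"({len(ok)/elapsed:.1f} req/s, mean latency {sum(ok)/len(ok):.3f}s)")
else:
    print("all requests failed")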

Resource Ceilings: Resource ceilings are predefined limits set for various system resources
(e.g., CPU, memory, disk space) to prevent resource exhaustion and maintain system
stability.
Server and Instance Types: Server and instance types refer to the specifications of the
hardware or virtual machines (VMs) used to host applications and services. These
specifications include CPU, memory, storage, and network capacity.

Amazon Machine Instances (AMIs) are described as follows:

 Micro Instance: 633 MB memory, 1 to 2 EC2 Compute Units (1 virtual core, using 2 CUs for short
periodic bursts) with either a 32-bit or 64-bit platform
 Small Instance (Default): 1.7GB memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2
Compute Unit), 160GB instance storage (150GB plus 10GB root partition), 32-bit platform, I/O
Performance: Moderate, and API name: m1.small
 High-Memory Quadruple Extra Large Instance: 68.4GB of memory, 26 EC2 Compute Units (8
virtual cores with 3.25 EC2 Compute Units each), 1,690GB of instance storage, 64-bit platform,
I/O Performance: High, and API name: m2.4xlarge
 High-CPU Extra Large Instance: 7GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5
EC2 Compute Units each), 1,690GB of instance storage, 64-bit platform, I/O Performance: High,
API name: c1.xlarge

Network Capacity and Scaling: Network capacity refers to the ability of a network
infrastructure to handle data traffic, including bandwidth, latency, and packet processing
capacity. Monitoring network metrics is essential for capacity planning.

If any cloud-computing system resource is difficult to plan for, it is network capacity. There
are three aspects to assessing network capacity:

1. Network traffic to and from the network interface at the server, be it a physical or
virtual interface or server
2. Network traffic from the cloud to the network interface
3. Network traffic from the cloud through your ISP to your local network interface
(your computer)

Cloud’s network performance, which is a measurement of WAN traffic. A WAN’s capacity is a function of
many factors: l Overall system traffic (competing services)

1. Routing and switching protocols l Traffic types (transfer protocols)


2. Network interconnect technologies (wiring)
3. The amount of bandwidth that the cloud vendor purchased from an Internet backbone provider

Scaling: Scaling involves adjusting the capacity of a system or network to accommodate changing workloads. It can be vertical scaling (adding more resources to an existing component) or horizontal scaling (adding more instances or nodes).
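
The horizontal-scaling decision often reduces to simple arithmetic: divide the expected peak demand by what one instance can sustain, leaving some headroom. A small worked example with assumed figures is shown below.

import math

# Assumed figures: load testing showed each instance sustains about 120 requests/s
# before hitting its resource ceiling, and we want to keep 30% spare capacity.
peak_demand_rps  = 1000
per_instance_rps = 120
headroom         = 0.30

instances_needed = math.ceil(peak_demand_rps / (per_instance_rps * (1 - headroom)))
print(instances_needed)   # 12 instances to carry 1,000 req/s with 30% headroom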

Effective capacity planning requires continuous monitoring of system metrics, load testing
under various conditions, and adjusting resources and infrastructure as needed to ensure
optimal performance and scalability. It's an ongoing process that helps organizations avoid
performance issues, downtime, and resource bottlenecks as their systems grow and evolve.
UNIT IV – EXPLORING PLATFORM AS A SERVICE (PaaS)

PaaS Application Frameworks: Drupal, Eccentex AppBase 3.0, LongJump, Squarespace, WaveMaker
and Wolf Frameworks.

Platform as a Service (PaaS) application frameworks are cloud-based platforms that provide developers
with the tools, libraries, and infrastructure needed to build, deploy, and manage applications.

Application frameworks provide a means for creating SaaS hosted applications using a unified
development environment or an integrated development environment (IDE).

These frameworks provide much of the underlying infrastructure and allow developers to focus on writing code and building applications rather than worrying about server management or hardware provisioning.

Example: Many Web sites are now based on the notion of information management and are referred to as content management systems (CMS). Treating a Web site as a CMS adds a number of special features to the concept, including rich user interaction, multiple data sources, and extensive customization and extensibility.

Here are some popular PaaS application frameworks:


 Google App Engine
 Microsoft Azure App Service
 Salesforce Lightning Platform
Common Characteristics PaaS Application Frameworks:
 They separate data-handling from presentation (user interface).
 They offer tools for establishing business objects or entities and the relationships between
them.
 They support the incorporation of business rules, logic, and actions.
 They provide tools for creating data entry controls (forms), views, and reports.
 They provide instrumentation, tools for measuring application performance.
 They support packaging and deployment of applications.

Drupal

Drupal (http://drupal.org/) is a content management system (CMS) that is used as the backend to a large number of Web sites worldwide.

The software is an open-source project that was created in the PHP programming language. Drupal is really a programming environment for managing content, and it has elements of blogging and collaboration software as part of its distribution. Drupal is included in this section because it is a highly extensible way to create Web sites with rich features. Drupal has a large developer community that has created nearly 6,000 third-party add-ons called contrib modules.

Technology: -

 Drupal applications run on any Web server that can run PHP 4.4.0 or later. The most common deployments are on Apache, but Drupal also runs on Microsoft IIS and other Web servers.
 Drupal must be used with a database. Because XAMPP/LAMP installations are a standard Web deployment platform, the database most often used is MySQL, although other SQL databases work equally well.

The Drupal core by itself contains a number of modules that provide for the following:

 Auto-updates
 Blogs, forums, polls, and RSS feeds
 Multiple site management
 OpenID authentication
 Performance optimization through caching and throttling
 Search
 User interface creation tools
 User-level access controls and profiles
 Themes
 Traffic management
 Workflow control with events and triggers
 Analytics and Reporting

The Drupal CMS was chosen as an example of this type of PaaS because it is so extensively used and has
broad industry impact, and it is a full-strength developer tool.

Eccentex AppBase 3.0 (https://www.eccentex.com/)

Eccentex released AppBase 3.0, a platform designed for building and deploying business process management (BPM) and case management applications. AppBase is known for its low-code/no-code capabilities, which allow organizations to create and customize applications without extensive coding.

Here are some key features and capabilities of Eccentex's AppBase 3.0:

1. Low-Code Development: AppBase offers a low-code development environment, which enables users to design and build applications with minimal coding. This makes it accessible to a broader range of users, including business analysts and non-technical stakeholders.

2. BPM and Case Management: AppBase is particularly well-suited for BPM and case management
applications. It provides tools for modeling and automating business processes and cases,
helping organizations streamline their operations and improve efficiency.

3. Integration Capabilities: It supports integration with various data sources, systems, and third-
party services. This enables organizations to connect AppBase with their existing IT
infrastructure and other software solutions.

4. User Interface Customization: AppBase provides tools for customizing the user interface,
allowing organizations to create applications with tailored interfaces that match their branding
and user experience requirements.

5. Analytics and Reporting: The platform includes features for tracking and analyzing the
performance of processes and cases. Users can generate reports and dashboards to gain insights
into their operations.

6. Security and Compliance: Security is a critical aspect of AppBase. It offers role-based access
control and helps organizations maintain compliance with industry regulations and data
protection standards.

7. Cloud and On-Premises Deployment: Organizations can choose to deploy AppBase in the cloud
or on-premises, providing flexibility in how they host and manage their applications.

8. Configurable Workflows: Users can define and configure workflows to match their specific
business processes, making it possible to adapt to changing requirements and new challenges.

AppBase includes a set of different tools for building these applications, including the following:

 Business Objects Build: This object database has the ability to create rich data objects and
create relationships between them.
 Presentation Builder: This user interface (UI) builder allows you to drag and drop visual controls
for creating Web forms and data entry screens and to include the logic necessary to automate
what the user sees.
 Business Process Designer: This tool is used to create business logic for your application. With it,
you can manage workflow, integrate modules, create rules, and validate data.
 Dashboard Designer: This instrumentation tool displays the real-time parameters of your
application in a visual form.
 Report Builder: This output design tool lets you sort, aggregate, display, and format report
information based on the data in your application.
 Security Roles Management: This allows you to assign access rights to different objects in the
system, to data sets, fields, desktop tabs, and reports. Security roles can be assigned in groups
without users, and users can be added later as the application is deployed.

Applications that you create are deployed with the AppBase Application Revision Management console.
The applications you create in AppBase, according to the company, may be integrated with Amazon S3
Web Services (storage), Google AppEngine (PaaS), Microsoft Windows Azure (PaaS), Facebook, and
Twitter.
LongJump

LongJump was a cloud-based Platform as a Service (PaaS) and application platform that allowed
organizations to build and deploy web-based applications without the need for extensive coding. It was
known for its rapid application development and customization capabilities.

 LongJump creates browser-based Web applications that are database-enabled.
 LongJump comes with an Object Model Viewer, forms, reports, layout tools, dashboards, and site management tools.
 Access control is based on role- and rule-based access, and it allows for data-sharing between
teams and between tenants.
 LongJump comes with a security policy engine that has user and group privileges,
authentication, IP range blocking, SSO, and LDAP interoperability.
 Applications are packaged using a packaging framework that can support a catalog system, XML
package file descriptions, and a distribution engine.
 LongJump extends Java and uses a Model-View-Controller architecture (MVC) for its framework
in the Developer Suite.
 The platform uses Java Server Pages (JSP), Java, and JavaScript for its various components and
its actions with objects built with Java classes. Objects created in custom classes are referenced
using POJO (Plain Old Java Object).
 Localization is supported using a module called the Translation Workbench that includes
specified labels, errors, text, controls, and messaging text files (and header files) that allow them
to be modified by a translation service to support additional languages.
 The development environment supports the Eclipse (http://www.eclipse.org/) plug-in for creating widgets using Java standard edition.
Squarespace

Squarespace is a popular Software as a Service (SaaS) platform that allows individuals and businesses to
create and maintain websites. SaaS refers to a software distribution model where applications are
hosted by a third-party provider and made available to customers over the internet.

Squarespace (http://www.squarespace.com) is an example of a next-generation Web site builder and deployment tool that has elements of a PaaS development environment.

The applications are built using visual tools and deployed on hosted infrastructure.
Squarespace presents as:
 A blogging tool
 A social media integration tool
 A photo gallery
 A form builder and data collector
 An item list manager
 A traffic and site management and analysis tool

The platform has more than 20 core modules that you can add to your Web site. Squarespace sites can be managed with the company's iPhone app.

Note: With Squarespace, users have created some visually beautiful sites, mostly personal Web sites, portfolios, and business brand-identity sites. Squarespace positions itself as a competitor to full content management systems that offer a variety of useful features.

WaveMaker

WaveMaker (http://www.wavemaker.com/) is a visual rapid application development environment for creating Java-based Web and cloud Ajax applications.

The software is open-source and offered under the Apache license.

WaveMaker is a WYSIWYG (What You See is What You Get) drag-and-drop environment that runs inside
a browser.

The architecture WaveMaker uses to build applications is described as the Model-View-Controller (MVC) system of application architecture.

Note: In this regard, WaveMaker has some similarities to PowerBuilder (http://www.sybase.com/products/internetappdevttools/powerbuilder).

WaveMaker is a framework that creates applications that can interoperate with other Java frameworks
and LDAP systems, including the following:

 Dojo Toolkit 1.0 (http://dojotoolkit.org/), a JavaScript library or toolbox
 LDAP directories
 Microsoft Active Directory
 POJO (Plain Old Java Object)
 Spring Framework (http://www.springsource.org/)

The visual builder tool is called Visual Ajax Studio, and the development server is called the WaveMaker Rapid Deployment Server for Java applications.

 When you develop within the Visual Ajax Studio, a feature called LiveLayout allows you to create applications while viewing live data.
 The data schema is prepared within a part of the tool called LiveForms.
 Mashups can be created using the Mashup Tool, which integrates applications using Java
Services, SOAP, REST, and RSS to access databases.
 Applications developed in WaveMaker run on standard Java servers such as Tomcat,
DojoToolkit, Spring, and Hibernate.

NOTE:- A new version of WaveMaker also runs on Amazon EC2, and the development environment can
be loaded on an EC2 instance as one of its machine images.

Wolf Frameworks (http://www.wolfframeworks.com/)

Many application frameworks like Google AppEngine and the Windows Azure Platform are tied to the
platform on which they run. So we can’t build an AppEngine application and port it to Windows Azure
without completely rewriting the application.

There isn’t any particular necessity to build an application framework in this way, but it suits the
purpose of these particular vendors:

 Google to have a universe of Google applications that build on the Google infrastructure,
 Microsoft to provide another platform on which to extend .NET Framework applications

If you are building an application on top of an IaaS vendor such as AWS, GoGrid, or Rackspace, what you as a developer really want are application development frameworks that are open, standards-based, and portable.

Wolf Frameworks is an example of a PaaS vendor offering a platform on which you can build an SaaS
solution that is open and cross-platform.

Wolf Frameworks is based on the three core Windows SOA standard technologies of cloud computing:

 AJAX (Asynchronous JavaScript and XML)
 XML
 .NET Framework
Wolf Frameworks home page.

Wolf Essentials:

 Wolf Frameworks uses a C# engine and supports both Microsoft SQL Server and MySQL
database.
 Applications that you build in Wolf are browser-based applications
 Wolf applications can be built without the need to write technical code.
 Wolf allows application data to be written to the client’s database server and data can be
imported or exported from a variety of data formats.
 In Wolf, the design of a software application is expressed in XML. Wolf supports forms, search, business logic and rules, charts, reports, dashboards, and both custom and external Web pages.

Security:-

 Connections to the datacenter are over 128-bit encrypted SSL, with authentication, access control, and a transaction history and audit trail.
 Security to multiple modules can be made available through a Single Sign-On (SSO) mechanism.
WOLF platform architecture features enable Wolf developers to create a classic multitenant SOA application without the need for high-level developer skills. These applications are interoperable, portable from one Windows virtual machine to another, and support embedded business applications. You can store your Wolf applications on a private server or in the Wolf cloud.

Exploring Platform as a Service using Google Web Services:

Exploring Google applications involves understanding and using the various services, tools, and resources that Google provides for application development, deployment, and management.

Here is an overview of some popular Google applications:

1. Google Search: The most well-known Google application, the search engine helps users find
information on the internet.

2. Gmail: Google's email service, offering features like threaded conversations, powerful search
capabilities, and integration with other Google services.

3. Google Drive: A cloud storage service that allows users to store and share files. It includes
Google Docs, Sheets, and Slides for document creation and collaboration.

4. Google Docs: An online word processing application that enables collaborative editing and
sharing of documents.

5. Google Sheets: A cloud-based spreadsheet application for creating, editing, and sharing
spreadsheets.

6. Google Slides: An online presentation tool for creating and sharing slideshows.

7. Google Calendar: A web-based calendar application that allows users to schedule events, set
reminders, and share calendars with others.

8. Google Photos: A cloud-based service for storing, organizing, and sharing photos and videos.

9. Google Maps: A mapping service that provides directions, local business information, and
street-level views.

10. Google Chrome: A popular web browser developed by Google known for its speed, simplicity,
and synchronization features.

11. Google Meet: A video conferencing service that allows users to host virtual meetings, webinars,
and collaborative sessions.

12. Google Classroom: An online platform designed for educational purposes, enabling teachers to
create and manage classes, assignments, and communication with students.

13. Google Analytics: A web analytics service that tracks and reports website traffic, providing
insights into user behavior and website performance.

14. Google Translate: A language translation service that supports text, speech, and image
translation across multiple languages.

Surveying the Google Application Portfolio

Google has a diverse and extensive application portfolio that spans various categories, including
productivity, communication, collaboration, and entertainment.

These services run worldwide on Google's one million plus servers in nearly 30 datacenters. Roughly 17 of the 48 services listed leverage Google's search engine in some specific way. Some of these search-related sites search through selected content such as Books, Images, Scholar, Trends, and more. Other sites, such as Blog Search, Finance, and News, take the search results and format them into an aggregation.

INDEXED SEARCH

Google Search consists of two key components: indexing and ranking. Understanding how these
processes work can provide insights into how Google retrieves and presents search results.

1. Crawling: Google uses automated programs called crawlers or spiders to browse the web and
discover new and updated pages.

2. Indexing: Once a page is discovered, Google's crawler analyzes the content, including text, images, and other elements. The information is then added to Google's index, a massive database containing information about all the pages the crawler has visited.
3. Ranking Signals: Google's algorithms analyze various factors to determine the relevance and
quality of a page. These factors are known as ranking signals.

 Examples of ranking signals include keywords, content quality, page speed, mobile-
friendliness, backlinks, and user experience.
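
The crawl-index-rank pipeline can be illustrated with a toy inverted index. The Python sketch below is a conceptual illustration only; Google's actual index and ranking algorithms are vastly more sophisticated, and the pages and scoring rule here are invented.

from collections import defaultdict

# Three "crawled" pages (toy stand-ins for what a crawler would fetch).
pages = {
    "page1": "cloud computing delivers services over the internet",
    "page2": "virtualization is a foundation of cloud computing",
    "page3": "search engines crawl index and rank pages",
}

# Indexing: build an inverted index mapping each word to the pages that contain it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

# Ranking: a toy signal -- score pages by how many query words they contain.
def search(query):
    scores = defaultdict(int)
    for word in query.lower().split():
        for url in index.get(word, ()):
            scores[url] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("cloud computing"))   # page1 and page2 match; page3 does not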

THE DARK WEB

Online content that isn’t indexed by search engines belongs to what has come to be called the “Deep
Web”

The Deep Web includes:

 Database generated Web pages or dynamic content


 Pages without links
 Private or limited access Web pages and sites
 Information contained in sources available through executable code such as JavaScript
 Documents and files that aren’t in a form that can be searched, which includes not only media
files, but information in non-standard file formats

AGGREGATION AND DISINTERMEDIATION

Aggregation involves the collection and organization of information from various sources into a
centralized platform or service. Google, as a search engine and a suite of services, is an excellent
example of an aggregation platform.

1. Google Search:
 Google aggregates information from billions of web pages and presents relevant results
based on user queries.
2. Google News:
 Aggregates news articles from various sources, categorizes them, and presents them in
one place.
3. Google Maps:
 Aggregates geographical data, business information, and user-generated content to
provide a comprehensive mapping service.
4. Google Shopping:
 Aggregates product information and prices from various online retailers to help users
compare and shop.

Disintermediation involves the removal of intermediaries or middlemen in a process, allowing direct interaction between producers and consumers. While Google often aggregates information, it can also facilitate disintermediation in certain contexts.
1. Direct Access to Information:
 Through Google Search, users can access information on websites directly without the
need for traditional intermediaries like libraries or physical directories.
2. Online Advertising: Google's advertising platforms allow businesses to directly reach their target
audience without relying on traditional advertising agencies.
3. Google Reviews:
 Users can access reviews and ratings directly on Google, potentially bypassing
traditional review sites or intermediaries.
4. YouTube Creators:
 YouTube, owned by Google, provides a platform for content creators to reach audiences
directly without traditional media intermediaries.
5. Google Workspace:
 With tools like Google Drive and Google Docs, collaboration and communication can
occur directly between team members, reducing the need for intermediaries in
document management.
ENTERPRISE OFFERINGS

As Google has built out its portfolio, it has released special versions of its products for the enterprise. The following are among Google's products aimed at the enterprise market:

 GOOGLE COMMERCE SEARCH (http://www.google.com/commercesearch/): This is a search service for online retailers that markets their products in their site searches, with a number of navigation, filtering, promotion, and analytical functions.
 GOOGLE SITE SEARCH (http://www.google.com/sitesearch/): Google sells its search engine customized for enterprises under the Google Site Search service banner. The user enters a search string in the site's search box, and Google returns the results from that site.
 GOOGLE SEARCH APPLIANCE (http://www.google.com/enterprise/gsa): This server can be deployed within an organization to speed up searching, with support for Google Analytics and Google Sitemaps.
 GOOGLE MINI (http://www.google.com/enterprise/mini/): The Mini is the smaller version of the GSA that stores up to 300,000 indexed documents.
GOOGLE TRANSLATE
Google Translate is a free online language translation service developed by Google. It allows users to
translate text, documents, and web pages from one language to another.

Google Translate can be accessed directly at http://translate.google.com/, where you can select the language pair to be translated. You can do the following:
 Enter text directly into the text box, and click the Translate button to have the text translated. If
you select the Detect Language option, Translate tries to determine the language automatically
and translate it into English.
 Enter a URL for a Web page to have Google display a copy of the translated Web page.
 Enter a phonetic equivalent for script languages.
 Upload a document to the page to have it translated
Features of Google Translate
1. Language Translation:
 Google Translate supports the translation of text between numerous languages. It
covers a wide range of languages, including major world languages and many regional or
less common languages.
2. Text Translation:
 Users can enter text in one language, and Google Translate will provide the
corresponding translation in the selected target language. It supports both written and
typed input.
3. Document Translation:
 Google Translate allows users to upload documents for translation. It supports various
file formats, including Word documents, PDFs, and more.
4. Website Translation:
 Users can enter the URL of a website, and Google Translate will attempt to translate the
entire webpage into the selected language. This feature is useful for getting a general
understanding of the content on a foreign-language website.
5. Speech Translation:
 Google Translate can translate spoken words and phrases. Users can speak into their
device's microphone, and the service will provide the translated text and, in some cases,
an option to listen to the translation.
6. Handwriting Input:
 For some languages, Google Translate allows users to draw characters on a touchscreen
device, and it will attempt to recognize and translate the handwritten text.

GOOGLE ANALYTICS
Google Analytics is a web analytics service provided by Google that allows website owners, marketers,
and analysts to track and report on website traffic. It provides valuable insights into how users interact with a website, enabling businesses to make informed decisions to improve their online presence and
marketing efforts.
1. Website Traffic Analysis:
 Google Analytics tracks the number of visitors, page views, and sessions on a website. It
provides an overview of overall traffic and user engagement.
2. Audience Insights:
 The platform provides detailed information about the website's audience, including
demographics, interests, geographic location, and the devices used to access the site.
3. Acquisition Reports:
 This section shows how users found the website. It breaks down traffic sources into
channels such as organic search, direct, referral, and paid search. It also tracks campaign
performance.
4. Behavior Reports:
 Google Analytics tracks user behavior on the site, showing popular pages, average time
spent on the site, and the sequence of pages visited. It helps in understanding how users
navigate through the website.
5. Conversion Tracking:
 Businesses can set up conversion goals to track specific actions users take on the site,
such as making a purchase, filling out a form, or signing up for a newsletter. E-
commerce tracking is also available for online stores.
6. Event Tracking:
 With event tracking, website owners can monitor specific interactions on a site that may
not be automatically tracked as a pageview, such as clicks on buttons, video views, or
downloads.
7. Custom Reports and Dashboards:
 Users can create custom reports and dashboards to focus on specific metrics and
visualizations that are relevant to their business goals. This allows for a more
personalized and efficient analysis.
8. Real-Time Reporting:
 Google Analytics offers real-time reporting, allowing users to see current site activity,
including active users, traffic sources, and popular content.
9. User Flow Analysis:
 The User Flow report visualizes how users move through a website, helping to identify
common paths and potential bottlenecks in the user journey.
10. Mobile Analytics: Google Analytics provides insights into how users interact with a website on
different devices, including mobile phones and tablets.

GOOGLE ADWORDS
 Google AdWords (http://www.google.com/AdWords) is a targeted ad service based on matching advertisers and their keywords to users and their search profiles.
 This service transformed Google from a competent search engine into an industry giant and is responsible for the majority of Google's revenue stream.
 AdWords' two largest competitors are Microsoft adCenter (http://adcenter.microsoft.com/) and Yahoo! Search Marketing (http://searchmarketing.yahoo.com/).

Google Ads is an online advertising platform where advertisers pay to display brief advertisements,
service offerings, product listings, video content, and generate mobile application installs within the
Google ad network to web users.
Here are some key features of Google Ads:
1. Keyword Targeting:
 Advertisers can choose specific keywords related to their products or services. Ads are
then shown to users who search for those keywords on Google.
2. Ad Formats:
 Google Ads supports various ad formats, including text ads, display ads, video ads, and
app promotion ads. The format you choose depends on your advertising goals.
3. Campaign Types:
 Google Ads offers different campaign types, such as Search Campaigns, Display
Campaigns, Video Campaigns, Shopping Campaigns, and App Campaigns. Each type
targets specific advertising goals and platforms.
4. Bidding Options:
 Advertisers can set bids for their ads, indicating the maximum amount they are willing
to pay for a click (Cost Per Click or CPC) or for a thousand impressions (Cost Per Mille or
CPM).
5. Ad Extensions:
 These are additional pieces of information that can be added to your ads to provide
more context and encourage users to engage. Examples include site link extensions,
callout extensions, and location extensions.
6. Quality Score:
 Google uses a Quality Score to determine the relevance and quality of your ads. The
higher your Quality Score, the better your ad position and the lower your cost per click.
7. Targeting Options:
 Advertisers can target their ads based on factors such as location, language, device,
demographics, and user behavior. This allows for precise targeting of the desired
audience.
8. Ad Auction:
 Google Ads operates on an auction system. When a user searches for a keyword, Google
runs an ad auction to determine which ads will be shown and in what order. The auction
considers factors like bid amount, Quality Score, and ad relevance.
9. Conversion Tracking:
 Advertisers can set up conversion tracking to measure the success of their ad
campaigns. This involves tracking actions such as form submissions, purchases, or phone
calls generated by the ads.
10. Budget Control:
 Advertisers can set daily or campaign-level budgets to control how much they spend on
their advertising.

Google Toolkit and Working with the Google App Engine.

Google has an extensive program that supports developers who want to leverage Google’s cloudbased
applications and services.

Google has a number of areas in which it offers development services, including the following:

 AJAX APIs (http://code.google.com/intl/en/apis/ajax/) are used to build widgets and other applets commonly found in places like iGoogle. AJAX provides access to dynamic information using JavaScript and HTML.

 Android (http://developer.android.com/index.html) is Google's phone operating system and its application development platform.
 Google App Engine (http://appengine.google.com/) is Google's Platform as a Service (PaaS) development and deployment system for cloud computing applications.
 Google Apps Marketplace (http://code.google.com/intl/en/googleapps/marketplace/) offers application development tools and a distribution channel for cloud-based applications.
 Google Web Toolkit (GWT; http://code.google.com/webtoolkit) is a set of development tools for browser-based applications. GWT is an open-source platform that has been used to create Google Wave and Google AdWords. GWT allows developers to create AJAX applications using Java or with the GWT compiler using JavaScript.

Note: Many more Google APIs and developer services are available; only a few are listed here.

Working with the Google App Engine.

Google App Engine (GAE) is a platform-as-a-service (PaaS) cloud computing platform for developing and
hosting web applications in Google's data centers. It allows developers to build and deploy applications
without dealing with the underlying infrastructure.

Key Concepts:

1. App Engine Standard vs. App Engine Flexible:

 App Engine offers two environments: Standard and Flexible. The Standard environment
is a fully managed platform with automatic scaling, while the Flexible environment
provides more flexibility but requires you to manage the underlying infrastructure.

2. Languages and Runtimes:

 App Engine supports multiple programming languages, including Python, Java, Node.js,
Go, and others. Each runtime has its own set of libraries and features.

GAE supports the following major features:

 Dynamic Web services based on common standards
 Automatic scaling and load balancing
 Authentication using Google’s Accounts API
 Persistent storage, with query access sorting and transaction management features
 Task queues and task scheduling
 A client-side development environment for simulating GAE on your local system
 One of either two runtime environments: Java or Python

To encourage developers to write applications using GAE, Google allows for free application development and deployment up to a certain level of resource consumption. Resource limits are described on Google's quota page at http://code.google.com/appengine/docs/quotas.html, and the quotas change from time to time.
Google uses the following pricing scheme:

 CPU time measured in CPU hours is $0.10 per hour.
 Stored data measured in GB per month is $0.15 per GB/month.
 Incoming bandwidth measured in GB is $0.10 per GB.
 Outgoing bandwidth measured in GB is $0.12 per GB.
 Recipients e-mailed is $0.0001 per recipient
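
A quick worked example, using the rates above with assumed monthly usage figures, shows how a bill would be estimated:

# Assumed monthly usage for a small GAE application, priced at the rates listed above.
cpu_hours   = 200        # CPU hours consumed
stored_gb   = 10         # GB stored for the month
incoming_gb = 5          # GB of incoming bandwidth
outgoing_gb = 20         # GB of outgoing bandwidth
recipients  = 10_000     # e-mail recipients

cost = (cpu_hours   * 0.10 +
        stored_gb   * 0.15 +
        incoming_gb * 0.10 +
        outgoing_gb * 0.12 +
        recipients  * 0.0001)
print(f"Estimated monthly bill: ${cost:.2f}")   # $25.40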

Steps to Work with Google App Engine:

1. Project Setup:

 Begin by creating a new project on the Google Cloud Console.

 Enable the App Engine API for your project.

2. Install the Google Cloud SDK:

 Download and install the Google Cloud SDK on your local machine. This SDK includes the
gcloud command-line tool for interacting with Google Cloud services.

3. App Configuration:

 Create an app.yaml configuration file in the root of your project. This file defines
settings for your App Engine application, including runtime, version, and scaling settings.

4. Develop Locally:

 Use the local development server provided by the SDK to test your application locally
before deploying it. This helps identify and fix issues before they reach the production
environment.

5. Deployment:

 Deploy your application to App Engine using the gcloud app deploy command. This
uploads your application code, configuration files, and dependencies to the App Engine
environment.

6. Scaling:

 App Engine automatically scales your application based on demand. You can configure
automatic scaling settings in your app.yaml file or use manual scaling if needed.

7. Monitoring and Logging:

 Use Google Cloud Monitoring and Logging to monitor the performance of your
application. You can set up alerts, view metrics, and access logs for debugging.
8. Services and Versions:

 App Engine allows you to deploy multiple services within the same project. Each service
can have multiple versions, making it easy to deploy updates without affecting the
entire application.
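
As a minimal sketch of steps 3 through 5, assuming the Python standard environment and the Flask microframework (declared in a requirements.txt), a deployable service can be as small as the following main.py; the route and message are placeholders.

# main.py -- a minimal App Engine (Python standard environment) service.
# Assumes an app.yaml alongside it containing a runtime declaration such as "runtime: python39"
# and a requirements.txt listing "Flask"; deploy with "gcloud app deploy".
from flask import Flask

app = Flask(__name__)   # App Engine looks for a WSGI object named "app" in main.py by default

@app.route("/")
def hello():
    return "Hello from App Engine!"

if __name__ == "__main__":
    # Local testing only; in production App Engine runs the app behind its own web server.
    app.run(host="127.0.0.1", port=8080, debug=True)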

Exploring Platform as a Service using Microsoft Cloud Services:

Exploring Microsoft Cloud Services

Microsoft Cloud Services, commonly referred to as Microsoft Azure, is a comprehensive suite of cloud
computing services offered by Microsoft. Azure provides a variety of tools and services for building,
deploying, and managing applications and services through Microsoft's global network of data centers.

Azure is a virtualized infrastructure to which a set of additional enterprise services has been layered on
top, including:

 A virtualization service called Azure AppFabric that creates an application hosting environment.
AppFabric (formerly .NET Services) is a cloud-enabled version of the .NET Framework.
 A high capacity non-relational storage facility called Storage.
 A set of virtual machine instances called Compute(VM).
 A cloud-enabled version of SQL Server called SQL Azure Database.
 A database marketplace based on SQL Azure Database
 An xRM (Anything Relationship Management) service called Dynamics CRM, based on Microsoft Dynamics.
 A document and collaboration service based on SharePoint, called SharePoint Services.
 Windows Live Services, a collection of services that runs on Windows Live, which can be used in
applications that run in the Azure cloud.
Microsoft Azure

Windows Azure is a virtualized Windows infrastructure run by Microsoft on a set of datacenters around the world.

Core Services and Features:

1. Compute Services:

 Virtual Machines (VMs): Deploy and manage virtual machines in the cloud, supporting
both Windows and Linux.

 Azure App Service: A platform for building, deploying, and scaling web apps.

2. Storage Services:

 Azure Blob Storage: Object storage solution for the cloud.

 Azure File Storage: Managed file shares for cloud or on-premises deployments.

 Azure Table Storage: NoSQL data store for semi-structured data.

3. Database Services:

 Azure SQL Database: Fully managed relational database service.

 Cosmos DB: Globally distributed, multi-model database service for NoSQL data.

4. Networking:

 Azure Virtual Network: Connect and isolate Azure resources.

 Azure Load Balancer: Balances incoming network traffic across multiple servers.

 Azure VPN Gateway: Establish secure connections between on-premises and Azure
resources.

5. Identity and Access Management:

 Azure Active Directory (AD): Identity and access management service for applications
and services.

6. Security and Compliance:

 Azure Security Center: Unified security management system.

 Azure Policy: Implement and enforce policies on resources.



Windows Azure AppFabric

Windows AppFabric was a set of integrated technologies in Windows Server and Azure designed to
make it easier to build, scale, and manage applications.
These steps are associated with Access Control:

1. The client requests authentication from Access Control.

2. Access Control creates a token based on the stored rules for server application.

3. A token is signed and returned to the client application.

4. The client presents the token to the service application.

5. The server application verifies the signature and uses the token to decide what the client application
is allowed to do.

Windows Azure AppFabric Access Control Service (ACS) was a component of the Windows Azure
platform that provided a way to integrate identity and access control into web applications and services.
It was designed to simplify the process of managing authentication and authorization, especially in
scenarios where applications needed to interact with users from various identity providers.

Azure Content Delivery Network

The Windows Azure Content Delivery Network (CDN) is a worldwide content caching and delivery
system for Windows Azure blob content.

Any storage account can be enabled for CDN. In order to share information stored in an Azure blob, you
need to place the blob in a public blob container that is accessible to anyone using an anonymous sign-
in.

 Windows Azure Blob service URL: http://<storage account>.blob.core.windows.net/<container>/<blobname>
 Windows Azure CDN URL: http://<identifier>.vo.msecnd.net/<container>/<blobname>
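
Because a public container allows anonymous reads, a blob can be fetched with plain HTTP from either URL form. The Python sketch below uses hypothetical account, container, and blob names; substitute your own.

import urllib.request

# Hypothetical names: a storage account "mystore", a public container "assets",
# and a blob "logo.png". With anonymous (public) read access enabled on the container,
# the blob can be fetched directly from either the Blob service or the CDN endpoint.
blob_url = "http://mystore.blob.core.windows.net/assets/logo.png"

with urllib.request.urlopen(blob_url) as response:
    data = response.read()
print(f"Downloaded {len(data)} bytes")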

SQL Azure

Azure SQL Database is a fully managed relational database service provided by Microsoft Azure. It is
based on the Microsoft SQL Server database engine and is designed to simplify database management
tasks, reduce administrative overhead, and enhance scalability and availability.
Key aspects of Azure SQL Database:

1. Serverless Managed Service

2. Deployment Options

3. Scalability

4. Security

5. High Availability

6. Built-in Intelligence

MICROSOFT AZURE PRICING


Microsoft Azure pricing is based on a pay-as-you-go model, allowing customers to pay only for the
services they use. Azure offers a variety of services, and the cost can vary based on factors such as
resource consumption, usage patterns, and the specific features and capabilities of each service.
1. Pricing Calculator
2. Free Tier and Trials
3. Pay-as-You-Go Pricing
4. Reserved Instances
5. Discounts for Startups and Nonprofits
6. Monitoring and Alerts
MICROSOFT LIVE SERVICES
Windows Live is a collection of cloud-based applications and services, some of which can be used inside
applications that run on Windows Azure Platform. Some Windows Live applications run as standalone
applications and are available to users directly through a browser

Messenger Connect was released as part of Windows Live Wave 4 at the end of June 2010, and it unites APIs such as Windows Live ID, Windows Live Contacts, and the Windows Live Messenger Web Toolkit into a single API. Messenger Connect works with ASP.NET, Windows Presentation Foundation (WPF), Java, Adobe Flash, PHP, and Microsoft's Silverlight graphics rendering technology through four different methods:

 Messenger Connect REST API Service
 Messenger Connect .NET and Silverlight Libraries
 Messenger Connect JavaScript Libraries and Controls
 Web activity feeds, either RSS 2.0 or ATOM

Live Essentials

Windows Live includes several popular cloud-based services. The two best known and most widely used
are Windows Live Hotmail and Windows Live Messenger, with more than 300 million users worldwide.
Windows Live is based around five core services:

 E-mail
 Instant Messaging
 Photos
 Social Networking
 Online Storage
You can access Windows Live services in one of the following ways:
 By navigating to the service using the commands on the navigation bar at the top of Windows
Live
 By directly entering the URL of the service
 By selecting the application from the Windows Live Essentials folder on the Start menu

Live Home

Windows Live Essentials currently includes the following:


 Family Safety
 Windows Live Messenger
 Photo Gallery
 Mail
 Movie Maker
UNIT-V
EXPLORING INFRASTRUCTURE
AS A SERVICE (IaaS)
T1: UNDERSTANDING AMAZON WEB SERVICES: Amazon Web Services (AWS) has a fascinating history
that traces its roots back to the early 2000s when Amazon.com was looking for ways to expand its business
and capitalize on its growing IT infrastructure. Here is a brief history of AWS:

1. Inception (Early 2000s):


 The idea for AWS began to take shape within Amazon.com when the company's engineers
realized that they could leverage their internal IT infrastructure, which was already
substantial due to the demands of the growing e-commerce giant.
2. Launch of AWS (March 14, 2006):
 AWS was officially launched on March 14, 2006, with the release of Amazon Simple Queue
Service (SQS) and Amazon Simple Storage Service (S3). These services marked the beginning
of AWS's cloud offerings.
3. Elastic Compute Cloud (EC2) (August 25, 2006):
 AWS introduced Amazon Elastic Compute Cloud (EC2), allowing customers to rent virtual
servers on-demand, which was a groundbreaking development in cloud computing. EC2
allowed businesses to scale their computing resources as needed.
4. Expansion and Innovation (Late 2000s to Early 2010s):
 AWS continued to expand its portfolio of services, adding Amazon RDS (Relational Database
Service), Amazon DynamoDB (NoSQL database service), and Amazon CloudFront (content
delivery network) among others.
 AWS also launched its first region outside of the United States in 2010 with the
establishment of the AWS EU (Ireland) region.
5. Mass Adoption (Mid-2010s):
 AWS experienced significant growth as more businesses and startups adopted its cloud
services. Companies of all sizes, from small startups to large enterprises, began using AWS to
power their applications and infrastructure.
6. Acquisitions and Expansions (2015-2017):
 AWS made strategic acquisitions, including Annapurna Labs, a semiconductor company, and
Cloud9 IDE, an integrated development environment. These acquisitions helped AWS further
expand its capabilities.
 In 2016, AWS launched its first AI and machine learning services, including Amazon Lex and
Amazon Polly.
7. Competition and Market Dominance (2010s):
 AWS faced increasing competition from other cloud providers like Microsoft Azure and
Google Cloud Platform (GCP). Despite the competition, AWS maintained its position as the
leading cloud provider.
8. Global Expansion (2010s):
 AWS expanded its global infrastructure with multiple regions and Availability Zones
worldwide. This expansion allowed customers to host their applications and data in regions
that were geographically closer to their users.
9. Innovation in AI and Machine Learning (2018-2019):
 AWS introduced services like Amazon SageMaker, a machine learning platform, and AWS
DeepRacer, a machine learning racing car, to make AI and ML more accessible to developers.
10. Ongoing Innovation (2020s):
 AWS continues to innovate and expand its offerings, with a focus on areas such as edge
computing, quantum computing, and sustainability. AWS also remains a leader in cloud
security and compliance.
------------------------------------------------------------------------------------------------------------------------------
T2: AMAZON WEB SERVICE COMPONENTS AND SERVICES

Amazon Web Services is comprised of the following components, listed roughly in their order of
importance:

 Amazon Elastic Compute Cloud


o Amazon Simple Queue Service
o Amazon Simple Notification Service
o Amazon CloudWatch
o Load Balancing
 Amazon Simple Storage System
 Amazon Elastic Block Store
 Amazon SimpleDB
 Amazon Relational Database Service
 Amazon Cloudfront

1.Amazon Elastic Compute Cloud (EC2; http://aws.amazon.com/ec2/) is the central application in the
AWS portfolio. It enables the creation, use, and management of virtual private servers running the Linux
or Windows operating system over a Xen hypervisor. Amazon Machine Instances are sized at various
levels and rented on a per-hour basis. Spread over data centers worldwide, EC2 applications may be
created that are highly scalable, redundant, and fault tolerant. EC2 is described more fully in the next
section. A number of tools are used to support EC2 services:

 Amazon Simple Queue Service (SQS; http://aws.amazon.com/sqs/) is a message queue or


transaction system for distributed Internet-based applications. See “Examining the Simple
Queue Service (SQS)” later in this chapter for a description of this AWS feature. In a loosely
coupled SOA system, a transaction manager is required to ensure that messages are not lost
when a component isn’t available.
 Amazon Simple Notification Service (SNS; http://aws.amazon.com/sns/) is a Web service that
can publish messages from an application and deliver them to other applications or to
subscribers. SNS provides a method for triggering actions, allowing clients or applications to
subscribe to information (much like RSS), to poll for new or changed information, or to perform
updates.
 EC2 can be monitored by Amazon CloudWatch (http://aws.amazon.com/cloudwatch/), which
provides a console or command-line view of resource utilization, site Key Performance Indicators
(performance metrics), and operational indicators for factors such as processor demand, disk
utilization, and network I/O. The metrics obtained by CloudWatch may be used to enable a
feature called Auto Scaling (http://aws.amazon.com/autoscaling/) that can automatically scale
an EC2 site based on a set of rules that you create. Auto Scaling is part of Amazon CloudWatch
and is available at no additional charge.
 Amazon Machine Instances (AMIs) in EC2 can be load balanced using the Elastic Load Balancing
(http://aws.amazon.com/elasticloadbalancing/) feature. The Load Balancing feature can detect
when an instance is failing and reroute traffic to a healthy instance, even an instance in other
AWS zones. The Amazon CloudWatch metrics request count and request latency that show up in
the AWS console are used to support Elastic Load Balancing.
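
To make the queue-based, loosely coupled model described above concrete, here is a minimal sketch using boto3 (the AWS SDK for Python). It is illustrative only: the queue name and region are assumptions, and AWS credentials are expected to be already configured on the machine running it.

    import boto3  # AWS SDK for Python

    sqs = boto3.client("sqs", region_name="us-east-1")  # region is an assumption

    # Create a queue (the name "demo-queue" is hypothetical) and send a message
    queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]
    sqs.send_message(QueueUrl=queue_url, MessageBody="order-42 placed")

    # A worker in a loosely coupled system polls the queue and deletes each
    # message only after it has been processed, so nothing is lost
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        print("received:", msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])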

2.Amazon Simple Storage System (S3; http://aws.amazon.com/s3/) is an online backup and storage
system, which is described in “Working with Amazon Simple Storage System (S3)” later in this chapter. A
high speed data transfer feature called AWS Import/Export (http://aws.amazon. com/importexport/)
can transfer data to and from AWS using Amazon’s own internal network to portable storage devices.

3.Amazon Elastic Block Store (EBS; http://aws.amazon.com/ebs/) is a system for creating virtual disks
(volumes), or block-level storage devices, that can be used by Amazon Machine Instances in EC2.

4.Amazon SimpleDB (http://aws.amazon.com/simpledb/) is a structured data store that supports


indexing and data queries to both EC2 and S3. SimpleDB isn’t a full database implementation, as you
learn in “Exploring SimpleDB” later in this chapter; it stores data in “domains” without requiring the
creation of a database schema. This design allows SimpleDB to scale easily. SimpleDB interoperates
with both Amazon EC2 and Amazon S3.

5.Amazon Relational Database Service (RDS; http://aws.amazon.com/rds/) allows you to create


instances of the MySQL database to support your Web sites and the many applications that rely on data-
driven services. MySQL is the “M” in the ubiquitous LAMP Web services platform (for Linux, Apache,
MySQL, and PHP/Perl), and the inclusion of this service allows developers to port applications, their source
code, and databases directly over to AWS, preserving their previous investment in these technologies.
RDS provides features such as automated software patching, database backups, and automated
database scaling via an API call

6.Amazon Cloudfront (http://aws.amazon.com/cloudfront/) is an edge-storage or content-delivery


system that caches data in different physical locations so that user access to data is enhanced through
faster data transfer speeds and lower latency. Cloudfront is similar to systems such as Akamai.com, but
is proprietary to Amazon.com and is set up to work with Amazon Simple Storage System (Amazon S3).
Cloudfront is currently in beta, but has been well received in the trade press. See “Defining Cloudfront”
later in this chapter for more details.

While the list above represents the most important of the AWS offerings, it is only a partial list—a list
that is continually growing and very dynamic. A number of services and utilities support Amazon
partners or the AWS infrastructure itself.

7.Alexa Web Information Service (http://aws.amazon.com/awis/) and Alexa Top Sites
(http://aws.amazon.com/alexatopsites/):-
Alexa Web Information Service and Alexa Top Sites are two services that collect and expose information
about the structure and traffic patterns of Web sites. This information can be used to build or structure
Web sites, access related sites, analyze historical patterns for growth and relationships, and perform data
analysis on site information. Alexa Top Sites can rank sites based on their usage, and this awareness of
site popularity can be built into the structure of the Web service you build.
8.Amazon Associates Web Services (A2S) is the machinery for interacting with Amazon’s vast product
data and eCommerce catalog function. This service, which was called Amazon E-Commerce Service
(ECS), is the means for vendors to add their products to the Amazon.com site and take orders and
payments.

9.Amazon DevPay (http://aws.amazon.com/devpay/) is a billing and account management service that


can be used by businesses that run applications on top of AWS. DevPay provides a developer API that
eliminates the need for application developers to build order pipelines, because Amazon does the billing
based on your prices and then uses Amazon Payments to collect the payments.

10.Amazon Elastic MapReduce (http://aws.amazon.com/elasticmapreduce/) is an interactive data


analysis tool for performing indexing, data mining, file analysis, log file analysis, machine learning,
financial analysis, and scientific and bioinformatics research. Elastic MapReduce is built on top of a
Hadoop framework using the Elastic Compute Cloud (EC2) and Simple Storage Service (S3).

11.Amazon Mechanical Turk (http://aws.amazon.com/mturk/) is a means for accessing human


researchers or consultants to help solve problems on a contractual or temporary basis. Problems solved
by this human workforce have included object identification, video or audio recording, data duplication,
and data research. Amazon.com calls this type of work Human Intelligence Tasks (HITs). The Mechanical
Turk is currently in beta.

12.AWS Multi-Factor Authentication (AWS MFA; http://aws.amazon.com/mfa/) is a special feature that


uses an authentication device in your possession to provide access to your AWS account settings. This
hardware key generates a pseudo-random six-digit number when you press a button, which you enter
into your logon. This gives you two layers of protection: your user ID and password (things you know)
and the code from your hardware key (something you have). This multifactor security feature can be
extended to Cloudfront and Amazon S3. Secure access to your EC2 AMIs is controlled by passwords,
Kerberos, and X.509 certificates.

13.Amazon Flexible Payments Service (FPS; http://aws.amazon.com/fps/) is a payments-transfer


infrastructure that provides access for developers to charge Amazon’s customers for their purchases.
Using FPS, goods, services, donations, money transfers, and recurring payments can be fulfilled. FPS is
exposed as an API that sorts transactions into packages called Quick Starts that make this service easy to
implement.

14.Amazon Fulfillment Web Services (FWS; http://aws.amazon.com/fws/) allows merchants to fill


orders through Amazon.com fulfillment service, with Amazon handling the physical delivery of items on
the merchant’s behalf. Merchant inventory is prepositioned in Amazon’s fulfillment centers, and
Amazon packs and ships the items. There is no charge for using Amazon FWS; fees for the Fulfillment by
Amazon (FBA; http:// www.amazon.com/gp/seller/fba/fulfillment-by-amazon.html) service apply.
Between FBA and FWS, you can create a nearly virtual store on Amazon.com.

15.Amazon Virtual Private Cloud (VPC; http://aws.amazon.com/vpc/) provides a bridge between a


company’s existing network and the AWS cloud. VPC connects your network resources to a set of AWS
systems over a Virtual Private Network (VPN) connection and extends security systems, firewalls, and
management systems to include their provisioned AWS servers. Amazon VPC is integrated with Amazon
EC2, but Amazon plans to extend the capabilities of VPC to integrate with other systems in the Amazon
cloud computing portfolio.

16.AWS Premium Support (http://aws.amazon.com/premiumsupport/) is Amazon’s technical support


and consulting business. Through AWS Premium Support, subscribers to AWS can get help building or
supporting applications that use EC2, S3, Cloudfront, VPC, SQS, SNS, SimpleDB, RDS, and the other
services listed above. Service plans are available on a per-incident, monthly, or unlimited basis at
different levels of service.

-----------------------------------------------------------------------------------------------------------------------------
T3: WORKING WITH THE ELASTIC COMPUTE CLOUD (EC2)
Amazon Elastic Compute Cloud (EC2) is a virtual server platform that allows users to create and run
virtual machines on Amazon’s server farm. With EC2, you can launch and run server instances called
Amazon Machine Images (AMIs) running different operating systems such as Red Hat Linux and
Windows on servers that have different performance profiles. You can add or subtract virtual servers
elastically as needed; cluster, replicate, and load balance servers; and locate your different servers in
different data centers or “zones” throughout the world to provide fault tolerance. The term elastic
refers to the ability to size your capacity quickly as needed.

Consider a situation where you want to create an Internet platform that provides the following:

 A high transaction level for a Web application


 A system that optimizes performance between servers in your system
 Data-driven information services
 Network security
 The ability to grow your service on demand

Implementing that type of service might require a rack of components that included the following:

 An application server with access to a large RAM allocation


 A load balancer, usually in the form of a hardware appliance such as F5’s BIG-IP
 A database server
 Firewalls and network switches
 Additional rack capacity at the ISP

Amazon Machine Images: AMIs are operating system images running on the Xen virtualization
hypervisor. Each virtual private server is accorded a size rating called its EC2 Compute Unit.

 Standard Instances: The standard instances are deemed to be suitable for standard server
applications.
 High Memory Instances: High memory instances are useful for large data throughput
applications such as SQL Server databases and data caching and retrieval.
 High CPU Instances: The high CPU instance category is best used for applications that are
processor- or compute-intensive. Applications of this type include rendering, encoding, data
analysis, and others.
Pricing models:- The pricing of these different AMI types depends on the operating system used,
which data center the AMI is located in (you can select its location), and the amount of time that the
AMI runs. Rates are quoted based on an hourly rate. Additional charges are applied for:

 The amount of data transferred


 Whether Elastic IP Addresses are assigned
 Your virtual private server’s use of Amazon Elastic Block Storage (EBS)
 Whether you use Elastic Load Balancing for two or more servers
 Other features

AMIs that have been saved and shut down incur a small one-time fee, but do not incur additional
hourly fees.
The three different pricing models for EC2 AMIs are as follows:
 On-Demand Instance: This is the hourly rate with no long-term commitment.
 Reserved Instances: This is a purchase of a contract for each instance you use with a
significantly lower hourly usage charge after you have paid for the reservation.
 Spot Instance: This is a method for bidding on unused EC2 capacity based on the current spot
price. This feature offers a significantly lower price, but it varies over time or may not be
available when there is no excess capacity
NOTE:- The AWS Simple Monthly Calculator helps you estimate your monthly charges:
http://calculator.s3.amazonaws.com/calc5.html
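
To make the hourly model concrete, a rough, purely illustrative estimate for one continuously running On-Demand instance might look like the sketch below; the hourly and data-transfer rates are assumed figures, not actual AWS prices, which vary by region and instance type.

    # Hypothetical figures for illustration only
    hourly_rate = 0.10          # USD per instance-hour (assumed)
    hours_per_month = 24 * 30   # one instance running all month
    data_transfer_gb = 100      # outbound data in GB (assumed)
    transfer_rate = 0.09        # USD per GB (assumed)

    monthly_cost = hourly_rate * hours_per_month + data_transfer_gb * transfer_rate
    print(f"Estimated monthly cost: ${monthly_cost:.2f}")   # 72.00 + 9.00 = 81.00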

System images and software: Choose & use a template AMI system image with the operating system
of your choice or create your own system image that contains your custom applications, code libraries,
settings, and data. Security can be set through passwords, Kerberos tickets, or certificates.
These operating systems are offered:
 Red Hat Enterprise Linux OS
 OpenSuse Linux OS
 Ubuntu Linux OS
 Sun OpenSolaris OS
 Fedora OS
 Gentoo Linux OS
 Oracle Enterprise Linux OS
 Windows Server 2003/2008 32-bit and 64-bit up to Data Center Edition OS
 Debian OS
Note:- When you create a virtual private server, you can use the Elastic IP Address feature to create
what amounts to a static IPv4 address to your server. This address can be mapped to any of your AMIs
and is associated with your AWS account.
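
As a small illustration of the Elastic IP feature just described, the boto3 sketch below allocates an address and associates it with a running instance; the instance ID is a placeholder, and configured AWS credentials are assumed.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # Allocate a static (Elastic) IPv4 address in the VPC scope
    allocation = ec2.allocate_address(Domain="vpc")

    # Associate it with an existing instance (placeholder instance ID)
    ec2.associate_address(InstanceId="i-0123456789abcdef0",
                          AllocationId=allocation["AllocationId"])
    print("Elastic IP:", allocation["PublicIp"])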

There are many different EC2 service zones or regions; examples include:


 Asia Pacific Mumbai( ap-south-1)
 Asia Pacific Singapore (ap-southeast-1)
 US East (Northern Virginia)
 US West (Northern California)

Creating an AWS account


1. Visit the AWS website: Go to the AWS official website at https://aws.amazon.com/.
2. Click on "Sign Up": Look for the "Create an AWS Account" or "Sign Up" button on the AWS
homepage and click on it.
3. Provide your email address: Enter your email address, and choose whether you're creating an
AWS account for personal or business use.
4. Fill in the required information: You'll need to provide your name, company (if applicable), and
contact information.
5. Choose an AWS support plan: AWS offers different support plans, including a free tier with
limited services. Select the plan that best suits your needs.
6. Enter payment information: You'll be asked to provide your payment details, including a credit
card number. AWS may charge a small verification fee, which will be refunded later.
7. Verify your identity: AWS may ask you to verify your identity through a phone call or text
message.
8. Accept the AWS Customer Agreement: Read and accept the AWS Customer Agreement and the
AWS Service Terms.
9. Complete the registration: Once you've provided all the necessary information and accepted the
terms, your AWS account will be created.
Creating an instance on EC2
1. Create a VPC with an IP range (e.g., 10.0.0.0/16).
2. Create subnets within the VPC's IP range (e.g., 10.0.1.0/24, 10.0.2.0/24).
3. Create an Internet Gateway (IGW) and attach it to the VPC.
4. Create a Route Table and associate it with the VPC.
5. Edit routes and add a new route with destination 0.0.0.0/0 and the IGW as the target.
6. Associate the subnet with this Route Table; subnets that route through the IGW become public.
7. Launch an EC2 instance in the public subnet.
8. Choose an Amazon Machine Image (AMI) for your instance, e.g., Amazon Linux 2.
9. Select an instance type sized for your workload's CPU and memory needs (for example, t2.micro or
t3.micro for small workloads).
10. Configure instance details, including selecting the VPC you created earlier and choosing the
public subnet.
11. Create a key pair (PEM or PPK); it is used to securely connect to your instance over SSH or RDP.
12. Configure security group (firewall) rules to allow HTTP (80), HTTPS (443), and SSH (22) traffic.
13. Review and launch the instance.
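
The launch steps above can also be scripted. The following boto3 sketch is a simplified approximation of steps 7-12, not a definitive recipe; the AMI ID, subnet ID, security group ID, and key pair name are placeholders that must be replaced with values from your own account and region.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # Launch one instance in the public subnet created earlier
    response = ec2.run_instances(
        ImageId="ami-0abcdef1234567890",        # placeholder Amazon Linux 2 AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        KeyName="my-keypair",                   # key pair created in step 11
        NetworkInterfaces=[{
            "DeviceIndex": 0,
            "SubnetId": "subnet-0123456789abcdef0",   # public subnet from step 2
            "AssociatePublicIpAddress": True,
            "Groups": ["sg-0123456789abcdef0"],       # security group from step 12
        }],
    )
    print("Launched:", response["Instances"][0]["InstanceId"])
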
-------------------------------------------------------------------------------------------------------------------------------
T3: WORKING WITH AMAZON STORAGE SYSTEMS

When you create an Amazon Machine Instance, you provision it with a certain amount of storage. That
storage is temporary; it exists only for as long as your instance is running. All of the data contained in that
storage is lost when the instance is suspended or terminated, as the storage is reassigned to the pool for
other AWS users to use. For this and other reasons you need access to persistent storage, such as an S3 bucket.

1.Amazon Simple Storage System (S3): Amazon S3’s cloud-based storage system allows you to store
data objects ranging in size from 1 byte up to 5GB in a flat namespace. In S3, storage containers are
referred to as buckets, and buckets serve the function of a directory, although there is no object
hierarchy to a bucket, and you save objects and not files to it. It is important that you do not associate
the concept of a file system with S3, because files are not supported; only objects are stored.
Additionally, you do not need to “mount” a bucket as you do a file system.

You can do the following with S3 buckets through the APIs:

 Create, edit, or delete existing buckets


 Upload new objects to a bucket and download them
 Search for and find objects and buckets
 Find metadata associated with objects and buckets
 Specify where a bucket should be stored
 Make buckets and objects available for public access
Amazon Simple Storage Service (S3) provides secure, durable, and highly scalable object storage. To
upload data such as photos, videos, and static documents, you must first create a logical storage bucket
in one of the AWS regions. Then you can upload any number of objects to it. Buckets and objects are
resources, and Amazon S3 provides both APIs and a web console to manage them.

Amazon S3 can be used alone or together with other AWS services such as Amazon EC2, Amazon Elastic
Block Store (Amazon EBS), and Amazon Glacier, as well as third-party storage repositories and gateways.
Amazon S3 provides cost-effective object storage for a wide variety of use cases including web
applications, content distribution, backup and archiving, disaster recovery, and big data analytics.
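
As a simple illustration of the bucket and object operations listed above, the boto3 sketch below creates a bucket and uploads an object to it; the bucket name, region, and file name are assumptions (bucket names must be globally unique).

    import boto3

    s3 = boto3.client("s3", region_name="ap-south-1")  # region is an assumption

    # Bucket names are global, so "my-demo-bucket-2024" is a placeholder
    s3.create_bucket(Bucket="my-demo-bucket-2024",
                     CreateBucketConfiguration={"LocationConstraint": "ap-south-1"})

    # Upload a local file as an object, then list the bucket's contents
    s3.upload_file("report.pdf", "my-demo-bucket-2024", "backups/report.pdf")
    for obj in s3.list_objects_v2(Bucket="my-demo-bucket-2024").get("Contents", []):
        print(obj["Key"], obj["Size"])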

Creating a backup process in Amazon S3 involves a few key steps to ensure your data is securely backed
up and can be easily restored when needed. Here's a general outline of the process:

1. Set Up an Amazon S3 Bucket:


 Log in to your AWS Management Console.
 Create the bucket.
2. Configure Data Backup:
 Decide what data you want to back up to your S3 bucket. This could include files, databases,
server logs, and more.
 Depending on your data source, you might use AWS services like AWS Backup, AWS DataSync, or
write custom scripts to transfer data to your S3 bucket.
3. Data Transfer to S3:
 Use the AWS CLI, SDKs, or other transfer methods to upload your data to the S3 bucket.
 Ensure that you organize your data within the bucket with appropriate folder structures to make
it easy to locate and restore.
4. Enable Versioning (Optional):
 Enabling versioning in your S3 bucket allows you to store multiple versions of an object. This can
be beneficial for accidental deletions or changes.
 To enable versioning, go to your bucket's properties in the AWS Management Console, and under
the "Versioning" tab, click "Enable versioning."
5. Data Encryption:
 Consider enabling server-side encryption (SSE) for your S3 objects to protect your data at rest.
SSE can use AWS-managed keys (SSE-S3) or AWS Key Management Service (KMS) keys (SSE-KMS)
for encryption.
 You can also implement client-side encryption if you want to encrypt data before uploading it to
S3.
6. Access Control:
 Define access control policies using bucket policies, access control lists (ACLs), and IAM (Identity
and Access Management) to restrict who can access and manage your data in the S3 bucket.
 Ensure that you maintain proper permissions for backup and restore processes.
7. Backup Frequency and Retention:
 Determine your backup frequency. This could be daily, weekly, or according to a specific
schedule.
 Set retention policies to define how long you want to keep backups. AWS Backup, if used, can
help manage retention policies.
8. Monitoring and Alerts:
 Configure monitoring and alerts using AWS CloudWatch to track the health and status of your
backups.
 Set up notifications to alert you in case of backup failures or other issues.
9. Testing Backup and Restore:
 Periodically test your backup and restore processes to ensure that they work as expected. This
will help you verify the integrity of your backups.
10. Disaster Recovery Plan:
 Develop a disaster recovery plan that outlines the steps to follow in case of data loss or other
disasters.
 Test your disaster recovery plan to ensure that you can successfully recover data from your
backups.
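
Steps 4 and 5 above (versioning and encryption at rest) can be switched on with a short boto3 sketch such as the one below; the bucket name is a placeholder, and SSE-S3 (AES-256) is chosen here simply as the default, KMS-free option.

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-demo-bucket-2024"   # placeholder bucket name

    # Step 4: keep every version of an object to guard against accidental deletes
    s3.put_bucket_versioning(Bucket=bucket,
                             VersioningConfiguration={"Status": "Enabled"})

    # Step 5: encrypt new objects at rest with S3-managed keys (SSE-S3)
    s3.put_bucket_encryption(
        Bucket=bucket,
        ServerSideEncryptionConfiguration={
            "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
        },
    )
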
NOTE:- Although Amazon S3 is highly reliable, it is not highly available. You can definitely get your data
back from S3 at some point with guaranteed 100% fidelity, but the service is not always connected and
experiences service outages. By comparison, an EBS volume is offered with an annual failure rate of 0.1%
to 0.5%, about a factor of 10 better than the typical disk drives you use in your own physical servers.

2.Amazon Elastic Block Store (EBS): Amazon Elastic Block Store (Amazon EBS) is a block storage service
provided by Amazon Web Services (AWS) that allows you to create and attach persistent block storage
volumes to your Amazon EC2 (Elastic Compute Cloud) instances. EBS volumes are designed for high
availability and durability and provide scalable and reliable block-level storage for your EC2 instances.

Here are some key features and concepts associated with Amazon EBS:

1. Volume Types:
 Amazon EBS offers different volume types optimized for various workloads:
 General Purpose (SSD): Provides a balance of price and performance. Suitable
for a wide range of workloads.
 Provisioned IOPS (SSD): Designed for I/O-intensive applications, allowing you to
provision a specific number of IOPS (input/output operations per second).
 Cold HDD: Offers low-cost storage for infrequently accessed data.
 Throughput Optimized HDD: Designed for big data and data warehousing
workloads that require high, sustained throughput. (Workloads that need
consistently high IOPS are better served by Provisioned IOPS SSD volumes.)
 You can choose the most appropriate volume type based on your application's
performance and cost requirements.
2. Volume Size and Attach/Detach:
 EBS volumes can range in size from 1 GB to 16 TB, depending on the volume type.
 You can attach and detach EBS volumes from EC2 instances, allowing you to move data
between instances or resize volumes as needed.
3. Snapshots:
 EBS snapshots are point-in-time copies of your EBS volumes.
 You can use snapshots to back up your data, create new volumes, and migrate data to
other AWS regions.
 Snapshots are incremental, meaning that only changed data is stored, which helps in
reducing storage costs.
4. Encryption:
 EBS volumes support encryption at rest using AWS Key Management Service (KMS) keys.
 You can encrypt both the root volume of an EC2 instance and additional data volumes.
5. Availability and Durability:
 EBS volumes are designed for high availability and durability. They are replicated within
an Availability Zone (AZ) to protect against component failures.
 You can also create EBS snapshots and copy them to different regions for added data
resilience.
6. Performance Scaling:
 For performance-intensive workloads, you can dynamically resize and scale EBS volumes
to meet the performance requirements of your applications.
 Provisioned IOPS volumes allow you to provision a specific level of performance.
7. Multi-Attach (Beta):
 Some EBS volume types support multi-attach, allowing you to attach a single volume to
multiple EC2 instances simultaneously.
 This can be useful for shared storage scenarios.
8. Lifecycle Management:
 EBS offers features like EBS Lifecycle Manager to automate the creation, retention, and
deletion of snapshots based on policies.
9. Use Cases:
 Amazon EBS is commonly used for various use cases, including database storage, file
storage, boot volumes for EC2 instances, and application data storage.
10. Pricing:
 EBS pricing is based on the volume type, size, and region. You pay for the provisioned
storage capacity and the volume type's performance characteristics.
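
A few of the operations above (creating, attaching, and snapshotting a volume) can be sketched with boto3 as follows; the Availability Zone, instance ID, and device name are assumptions, and the gp3 volume type is used only as an example of a General Purpose SSD.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # Create a 100 GiB General Purpose SSD volume in the same AZ as the instance
    volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp3")
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    # Attach it to an existing instance (placeholder ID), then snapshot it for backup
    ec2.attach_volume(VolumeId=volume["VolumeId"],
                      InstanceId="i-0123456789abcdef0",
                      Device="/dev/sdf")
    ec2.create_snapshot(VolumeId=volume["VolumeId"], Description="nightly backup")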

NOTE:- Amazon EBS plays a critical role in providing scalable and persistent storage for AWS EC2
instances, making it an essential component for running various workloads in the AWS cloud.

3.Amazon CloudFront

Amazon CloudFront is a content delivery network (CDN) service provided by Amazon Web Services
(AWS). It is designed to distribute content, including web pages, media files, and application data, to
users worldwide with low-latency and high data transfer speeds. CloudFront uses a global network of
edge locations to cache and deliver content to users from the nearest location, reducing latency and
improving the overall user experience.
Key features and concepts associated with Amazon CloudFront include:

1. Content Delivery: CloudFront accelerates the delivery of your content by caching it at edge
locations around the world. When a user requests content, CloudFront serves it from the
nearest edge location, reducing the round-trip time and improving load times.
2. Edge Locations: CloudFront has a network of edge locations strategically located in multiple
regions worldwide. These edge locations are where your cached content is stored and served
from. AWS continuously adds new edge locations to expand its global reach.
3. Distribution: To use CloudFront, you create a distribution, which is a collection of settings and
configuration information related to how CloudFront should cache and serve your content.
There are two types of distributions:
 Web Distribution: Used for websites and web applications.
 RTMP (Real-Time Messaging Protocol) Distribution: Used for streaming media over
Adobe Flash Media Server.
4. Origin: An origin is the source of your content. It can be an Amazon S3 bucket, an EC2 instance,
a load balancer, or even a custom HTTP server. CloudFront retrieves content from the origin and
caches it at edge locations.
5. Cache Behavior: You can define cache behaviors to specify how CloudFront should handle
requests for different types of content. For example, you can configure different TTLs (Time to
Live) for various file types.
6. HTTPS Support: CloudFront supports HTTPS to secure the transmission of data between your
users and the edge locations. You can use AWS Certificate Manager (ACM) to provision free
SSL/TLS certificates.
7. Logging and Monitoring: CloudFront provides access logs that can be sent to Amazon S3 or
Amazon CloudWatch for monitoring and analysis. You can track viewer activity and performance
metrics.
8. Customization: You can customize the behavior of CloudFront using features like
Lambda@Edge, which allows you to run serverless functions at the edge locations to modify
content or responses dynamically.
9. Security: You can use AWS Identity and Access Management (IAM) to control access to your
CloudFront distributions. You can also use AWS Web Application Firewall (WAF) to protect
against web application attacks.
10. Geo-Restriction: CloudFront allows you to restrict access to your content based on geographic
locations, helping you comply with content distribution regulations.
11. Cost Management: CloudFront pricing is based on data transfer and the number of requests.
You can use AWS Cost Explorer to monitor and manage your CloudFront costs.
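
As one small, hedged example of working with a distribution, the boto3 sketch below invalidates a cached object so that edge locations fetch a fresh copy from the origin; the distribution ID is a placeholder.

    import time
    import boto3

    cloudfront = boto3.client("cloudfront")

    # Remove /index.html from edge caches so the next request goes to the origin
    cloudfront.create_invalidation(
        DistributionId="E1234567890ABC",      # placeholder distribution ID
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/index.html"]},
            "CallerReference": str(time.time()),   # must be unique per request
        },
    )
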
[Figure: Working model of Amazon CloudFront]

NOTE:- CloudFront is a highly scalable and globally distributed CDN service that can significantly improve
the performance, availability, and security of your web applications and content delivery. It is widely
used by websites, mobile apps, and streaming platforms to deliver content efficiently to users
worldwide.

T4: UNDERSTANDING AMAZON DATABASE SERVICES:

Amazon offers two different types of database services:


 Amazon SimpleDB, (Non-relational)
 Amazon Relational Database Service (Amazon RDS)
Dynamic data access is a central element of Web services, particularly “Web 2.0” services. Although
AMIs support running several of the major databases directly on EC2, Amazon also provides the
managed database services described below.

1.Amazon SimpleDB
Amazon SimpleDB, also known as Amazon Simple Database, was a fully managed NoSQL database
service offered by Amazon Web Services (AWS). AWS announced that they were retiring Amazon
SimpleDB, and they were no longer accepting new sign-ups for the service.

Here are some key characteristics and features of Amazon SimpleDB as it existed before its retirement:
1. Schema-less: Amazon SimpleDB was a schema-less database, meaning you could store data
without predefining a fixed schema. This made it flexible for handling various types of data.
2. Data Attributes: Instead of tables, SimpleDB used domains to store data. Each domain could
have multiple data attributes, which were key-value pairs.
3. Automatic Scaling: SimpleDB automatically scaled to handle increasing workloads by
distributing data across multiple servers.
4. High Availability: It provided high availability with data replication across multiple Availability
Zones within a region.
5. Query Language: SimpleDB used a SQL-like query language that allowed for querying and
filtering data based on attribute values.
6. Consistency Model: It offered eventual consistency for read operations, which means that data
might not immediately reflect updates but would eventually converge to a consistent state.
7. Limited Indexing: SimpleDB supported indexing of attributes, which allowed for efficient
querying of data.
8. Usage-Based Pricing: Billing was based on actual usage, including the amount of data stored,
the number of requests, and data transfer.

2.Amazon Relational Database Service (RDS)


Amazon Relational Database Service (Amazon RDS) is a fully managed relational database service
provided by Amazon Web Services (AWS). It makes it easier to set up, operate, and scale a relational
database in the cloud. Amazon RDS supports various database engines, including:

1. MySQL: A popular open-source relational database management system.


2. PostgreSQL: Another powerful open-source relational database system known for its robustness
and extensibility.
3. MariaDB: A community-developed fork of MySQL, designed for high performance and reliability.
4. Oracle: A commercial relational database management system known for its scalability and
advanced features.
5. Microsoft SQL Server: A commercial database management system from Microsoft with robust
enterprise-level capabilities.
6. Amazon Aurora: Amazon's own relational database engine compatible with MySQL and
PostgreSQL, offering high performance, availability, and scalability.

Amazon RDS simplifies database management tasks such as provisioning, patching, backup, recovery,
and scaling, allowing developers and database administrators to focus on application development
rather than infrastructure management.
Key features of Amazon RDS include:

1. Automated Backups: Amazon RDS automatically takes daily backups of your database and
allows you to retain backups for a specified period, making data recovery easier.
2. High Availability: Amazon RDS provides options for high availability, including Multi-AZ
deployments, which replicate your database across multiple availability zones for failover
protection.
3. Scalability: You can easily scale your database instance vertically by changing its instance type or
horizontally by adding read replicas to offload read traffic.
4. Security: Amazon RDS offers security features like network isolation, encryption at rest and in
transit, IAM database authentication, and automated software patching to enhance database
security.
5. Monitoring and Metrics: You can use Amazon CloudWatch to monitor database performance
and set up alarms to be notified of any issues.
6. Database Engine Compatibility: Amazon RDS provides options to select the database engine
that best fits your application's needs, and it manages the underlying infrastructure for you.
7. Ease of Maintenance: Routine database maintenance tasks such as software patching, hardware
scaling, and backups are automated, reducing the administrative overhead.
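
Provisioning can also be scripted. The boto3 sketch below creates a small managed MySQL instance as an illustration; the identifier, credentials, instance class, and storage size are placeholder values rather than recommended settings.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")  # region is an assumption

    # Create a small managed MySQL instance (all values are illustrative)
    rds.create_db_instance(
        DBInstanceIdentifier="demo-mysql-db",
        Engine="mysql",
        DBInstanceClass="db.t3.micro",
        AllocatedStorage=20,                 # GiB
        MasterUsername="admin",
        MasterUserPassword="ChangeMe123!",   # store real credentials in a secrets manager
        MultiAZ=False,
        BackupRetentionPeriod=7,             # days of automated backups
    )
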
3.Choosing a database for AWS.
In choosing a database solution for your AWS solutions, consider the following factors in making your
selection:

 Choose SimpleDB when index and query functions do not require relational database
support.
 Use SimpleDB for the lowest administrative overhead.
 Select SimpleDB if you want a solution that autoscales on demand.
 Choose SimpleDB for a solution that has a very high availability.
 Use RDS when you have an existing MySQL database that could be ported and you want
to minimize the amount of infrastructure and administrative management required.
 Use RDS when your database queries require relationships between data objects.
 Choose RDS when you want a database that scales based on an API call and has a pay-as-you-use-it
pricing model.
 Select Amazon EC2/Relational Database AMI when you want access to an enterprise relational
database or have an existing investment in that particular application.
 Use Amazon EC2/Relational Database AMI to retain complete administrative control over
your database server.
