CC - Unitwise Material - Compressed
Technology Innovations
Distributive Computing
Grid Computing
Cluster Computing
Utility Computing
Cloud: A cloud refers to a distinct IT environment that is designed for the purpose of remotely
provisioning scalable and measured IT resources.
• The IT resources shown within the boundary of a given cloud symbol usually do not represent
all of the available IT resources hosted by that cloud. Subsets of IT resources are generally
highlighted to demonstrate a particular topic.
• Focusing on the relevant aspects of a topic requires many of these diagrams to intentionally
provide abstracted views of the underlying technology architectures. This means that only a
portion of the actual technical details are shown.
Note:- Physical servers are sometimes referred to as physical hosts (or just hosts), in
reference to the fact that they are responsible for hosting virtual servers.
Cloud Consumers and Cloud Providers: The party that provides cloud-based IT resources is
the cloud provider. The party that uses cloud-based IT resources is the cloud consumer. These
terms represent roles usually assumed by organizations in relation to clouds and corresponding
cloud provisioning contracts.
Scaling: Scaling, from an IT resource perspective, represents the ability of the IT resource to
handle increased or decreased usage demands.
The following are types of scaling:
• Horizontal Scaling – scaling out and scaling in
• Vertical Scaling – scaling up and scaling down
Horizontal Scaling: The allocating or releasing of IT resources of the same type is referred to
as horizontal scaling. The horizontal allocation of resources is referred to as scaling out, and
the horizontal releasing of resources is referred to as scaling in. Horizontal scaling is a
common form of scaling within cloud environments.
An IT resource (Virtual Server A) is scaled out by adding more of the same IT resources (Virtual
Servers B and C).
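The scale-out/scale-in behavior described above can be sketched in a few lines. This is an illustrative model only; the `ServerPool` class, its method names, and the naming scheme are assumptions, not any real cloud API.

```python
# Minimal sketch of horizontal scaling: a pool grows ("scaling out")
# or shrinks ("scaling in") by adding or removing IT resources of the
# same type. All names here are illustrative.

class ServerPool:
    def __init__(self, size=1):
        self.servers = ["vs-%d" % i for i in range(size)]

    def scale_out(self, count):
        # allocate more identical virtual servers
        start = len(self.servers)
        self.servers += ["vs-%d" % i for i in range(start, start + count)]

    def scale_in(self, count):
        # release identical virtual servers
        del self.servers[-count:]

pool = ServerPool(size=1)   # Virtual Server A
pool.scale_out(2)           # add Virtual Servers B and C
assert len(pool.servers) == 3
pool.scale_in(1)            # release one server again
assert len(pool.servers) == 2
```

Note that the pool only ever contains resources of one type; that uniformity is what distinguishes horizontal from vertical scaling.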
Vertical Scaling: When an IT resource is replaced by another with higher or lower capacity,
vertical scaling is considered to have occurred. Specifically, replacing an IT resource with
another that has a higher capacity is referred to as scaling up, and replacing an IT resource
with another that has a lower capacity is referred to as scaling down. Vertical scaling is less
common in cloud environments due to the downtime required while the replacement is
taking place.
An IT resource (a virtual server with two CPUs) is scaled up by replacing it with a more powerful
IT resource with increased processing capacity (a physical server with four CPUs).
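The replacement step above, and the downtime it implies, can be sketched as follows. The dictionary fields and the `scale_up` helper are illustrative assumptions.

```python
# Illustrative sketch of vertical scaling: the resource is *replaced*
# by one of higher capacity (scale up), which is why downtime is
# usually involved. Field names are made up for illustration.

def scale_up(server, new_cpus):
    # the original server is taken offline and replaced wholesale
    replacement = {"name": server["name"], "cpus": new_cpus, "online": True}
    server["online"] = False   # downtime for the original resource
    return replacement

old = {"name": "srv-1", "cpus": 2, "online": True}
new = scale_up(old, new_cpus=4)   # two-CPU server replaced by a four-CPU one
assert new["cpus"] == 4
assert old["online"] is False     # the old resource went down during the swap
```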
A brief overview of common pros and cons associated with horizontal and vertical scaling.
Cloud Service: A cloud service is any IT resource that is made remotely accessible via a cloud.
A cloud service can exist as a simple Web-based software program with a technical interface
invoked via the use of a messaging protocol, or as a remote access point for administrative
tools and other IT resources.
A cloud service that exists as a virtual server can also be accessed from outside of the cloud’s
boundary, for example by a human user who has remotely logged on to the virtual server.
NOTE:- Cloud service usage conditions are typically expressed in a service-level agreement
(SLA), the part of a service contract between a cloud provider and cloud consumer that
describes the Quality of Service features, behaviors, and limitations of a cloud-based service,
along with other provisions.
3.3. GOALS AND BENEFITS: Common measurable benefits to cloud consumers include:
• The perception of having unlimited computing resources that are available on demand,
thereby reducing the need to prepare for provisioning.
• The ability to add or remove IT resources at a fine-grained level, such as modifying available
storage disk space by single gigabyte increments.
• Abstraction of the infrastructure so applications are not locked into devices or locations and
can be easily moved if needed.
Example: Sizable project tasks can complete as quickly as their application software can scale.
Using 100 servers for one hour costs the same as using one server for 100 hours.
This “elasticity” of IT resources, achieved without requiring steep initial investments to create a
large-scale computing infrastructure, can be extremely compelling.
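The "100 servers for one hour" claim above is simple arithmetic, checked here with an assumed hourly rate (the price is illustrative, not a real tariff):

```python
# Elasticity arithmetic: total cost is servers * hours * rate, so
# parallelizing across many servers does not change the bill.

rate_per_server_hour = 0.50   # assumed illustrative price

cost_parallel = 100 * 1 * rate_per_server_hour   # 100 servers for 1 hour
cost_serial = 1 * 100 * rate_per_server_hour     # 1 server for 100 hours
assert cost_parallel == cost_serial == 50.0
```

The practical consequence is that, under pure pay-per-use pricing, finishing 100x faster costs nothing extra.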
The decision to proceed with a cloud computing adoption strategy will involve much more than
a simple comparison between the cost of leasing and the cost of purchasing.
Example: The financial benefits of dynamic scaling and the risk transference of both over-
provisioning (under-utilization) and under-provisioning (over-utilization) must also be
accounted for.
Increased Scalability: By providing pools of IT resources, along with tools and technologies
designed to leverage them collectively, clouds can instantly and dynamically allocate IT
resources to cloud consumers, on-demand or via the cloud consumer’s direct configuration.
This empowers cloud consumers to scale up & scale down (automatically or manually) their
cloud-based IT resources to accommodate processing fluctuations.
The ability of IT resources to always meet and fulfill unpredictable usage demands avoids
potential loss of business that can occur when usage thresholds are met.
NOTE: When associating the benefit of Increased Scalability with the capacity planning
strategies introduced earlier in the Business Drivers section, the Lag and Match Strategies are
generally more applicable due to a cloud’s ability to scale IT resources on-demand.
The availability and reliability of IT resources are directly associated with business benefits.
Outages limit the time an IT resource can be “open for business” for its customers.
Specifically:
• An IT resource with increased availability is accessible for longer periods of time (for example,
22 hours out of a 24 hour day). Cloud providers generally offer “resilient” IT resources for which
they are able to guarantee high levels of availability.
• An IT resource with increased reliability is able to better avoid and recover from exception
conditions. The modular architecture of cloud environments provides extensive failover support
that increases reliability.
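The "22 hours out of a 24 hour day" example above corresponds to roughly 91.7% availability; the calculation generalizes to the uptime percentages quoted in SLAs:

```python
# Availability as the fraction of time a resource is reachable.

def availability(uptime_hours, total_hours):
    return uptime_hours / total_hours

# The example above: 22 reachable hours out of a 24-hour day.
assert round(availability(22, 24) * 100, 1) == 91.7

# A "three nines" (99.9%) yearly SLA permits about 8.76 hours of downtime:
downtime_hours_per_year = (1 - 0.999) * 24 * 365
assert round(downtime_hours_per_year, 2) == 8.76
```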
It is important that organizations carefully examine the SLAs offered by cloud providers when
considering the leasing of cloud-based services and IT resources. Although many cloud
environments are capable of offering remarkably high levels of availability and reliability, it
comes down to the guarantees made in the SLA that typically represent their actual contractual
obligations.
• Cloud environments are comprised of highly extensive infrastructure that offers pools of IT
resources that can be leased using a pay-for-use model whereby only the actual usage of the IT
resources is billable. When compared to equivalent on-premise environments, clouds provide
the potential for reduced initial investments and operational costs proportional to measured
usage.
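The pay-for-use model described above can be sketched as a simple usage meter; the `UsageMeter` class, its rate, and units are illustrative assumptions, not any provider's billing API.

```python
# Sketch of pay-for-use billing: only measured usage is billable,
# in contrast to an up-front on-premise investment.

class UsageMeter:
    def __init__(self, rate_per_gb_hour):
        self.rate = rate_per_gb_hour
        self.gb_hours = 0.0

    def record(self, gb, hours):
        # accumulate measured usage
        self.gb_hours += gb * hours

    def bill(self):
        # the invoice reflects only what was actually consumed
        return self.gb_hours * self.rate

meter = UsageMeter(rate_per_gb_hour=0.01)   # assumed rate
meter.record(gb=50, hours=10)               # 500 GB-hours of storage
meter.record(gb=20, hours=5)                # 100 more GB-hours
assert round(meter.bill(), 2) == 6.0        # 600 GB-hours at $0.01
```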
Several of the most critical cloud computing challenges pertaining mostly to cloud consumers
that use IT resources located in public clouds are presented and examined.
1. Data Breaches: Cloud storage and databases may become targets for cybercriminals
aiming to steal sensitive information. Weak authentication, misconfigured permissions,
and unpatched vulnerabilities can expose data to unauthorized access.
2. Insecure APIs: Cloud services rely on Application Programming Interfaces (APIs) to
interact with applications and data. If these APIs have security flaws or are not properly
protected, attackers can exploit them to gain unauthorized access or manipulate data.
3. Denial-of-Service (DoS) Attacks: Cloud resources can be targeted with DoS attacks,
overwhelming the service and causing disruptions for legitimate users. Service
unavailability can have severe consequences for businesses that rely on cloud
applications.
4. Data Loss: Despite robust backup systems, data loss can still occur due to technical
failures, accidental deletions, or cyberattacks. Organizations need to implement proper
data backup and disaster recovery strategies.
5. Shared Infrastructure Risks: Cloud providers use shared infrastructure to host multiple
customers' data and applications. If one customer's environment is compromised, there
is a risk of cross-tenant data breaches.
6. Compliance and Regulatory Risks: Storing data in the cloud may raise compliance
challenges, especially when data crosses international borders. Organizations must
ensure their cloud providers adhere to relevant regulations and standards.
7. Inadequate Identity and Access Management (IAM): Weak or misconfigured IAM
practices can lead to unauthorized access, privilege escalation, and data exposure.
Properly managing user identities and access permissions is crucial.
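Challenge 7 above (inadequate IAM) comes down to enforcing least privilege with a deny-by-default policy. The roles and permission names below are illustrative only:

```python
# Miniature deny-by-default permission check: unknown roles or
# actions are refused, blocking the unauthorized access and
# privilege escalation described above.

PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "viewer": {"read"},
}

def is_allowed(role, action):
    # deny by default: anything not explicitly granted is refused
    return action in PERMISSIONS.get(role, set())

assert is_allowed("viewer", "read")
assert not is_allowed("viewer", "delete")   # privilege escalation blocked
assert not is_allowed("intruder", "read")   # unknown identity denied
```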
Data security becomes shared with the cloud provider. The remote usage of IT resources
requires an expansion of trust boundaries by the cloud consumer to include the external cloud.
Another consequence of overlapping trust boundaries relates to the cloud provider’s privileged
access to cloud consumer data.
The overlapping of trust boundaries and the increased exposure of data can provide malicious
cloud consumers with greater opportunities to attack IT resources and steal or damage
business data.
Two organizations accessing the same cloud service are required to extend their respective
trust boundaries to the cloud, resulting in overlapping trust boundaries. It can be challenging
for the cloud provider to offer security mechanisms that accommodate the security
requirements of both cloud service consumers.
Reduced Operational Governance Control: Cloud consumers are usually allotted a level of
governance control that is lower than that over on-premise IT resources. This can introduce
risks associated with how the cloud provider operates its cloud, as well as the external
connections that are required for communication between the cloud and the cloud consumer.
• An unreliable cloud provider may not maintain the guarantees it makes in the SLAs that were
published for its cloud services. This can jeopardize the quality of the cloud consumer solutions
that rely on these cloud services.
• Longer geographic distances between the cloud consumer and cloud provider can require
additional network hops that introduce fluctuating latency and potential bandwidth
constraints.
Limited Portability Between Cloud Providers: Due to a lack of standards within the cloud
computing industry, public clouds are commonly proprietary to various extents. For cloud
consumers that have custom-built solutions with dependencies on these proprietary
environments, it can be challenging to move from one cloud provider to another.
A cloud consumer’s application has a decreased level of portability when assessing a potential
migration from Cloud A to Cloud B, because the cloud provider of Cloud B does not support the
same security technologies as Cloud A.
Multi-Regional Compliance and Legal Issues: Multi-regional compliance and legal issues are
significant concerns for organizations using cloud computing services, especially when dealing
with data that crosses international borders. Different countries and regions have varying data
protection and privacy laws, which can create complexities for cloud users.
To manage these issues, organizations should consider:
• Conducting thorough research on the data protection and privacy laws of the regions
where they operate and where their cloud provider's data centers are located.
• Choosing cloud providers that are compliant with relevant international and regional
standards and regulations.
• Ensuring proper data classification and control measures to manage data based on its
sensitivity and regulatory requirements.
• Implementing appropriate data encryption and access controls to protect data privacy
and security.
• Drafting clear and comprehensive contracts with cloud providers that outline
responsibilities, data handling processes, and compliance requirements.
• Regularly monitoring changes in laws and regulations in different regions and adjusting
cloud strategies accordingly.
• Cloud environments can introduce distinct security challenges, some of which pertain to
overlapping trust boundaries imposed by a cloud provider sharing IT resources with multiple
cloud consumers.
• A cloud consumer’s operational governance can be limited within cloud environments due to
the control exercised by a cloud provider over its platforms.
• The geographical location of data and IT resources can be out of a cloud consumer’s control
when hosted by a third-party cloud provider. This can introduce various legal and regulatory
compliance concerns.
ROLES & RESPONSIBILITIES OF CLOUD COMPUTING
In cloud computing, roles and boundaries define the responsibilities and limitations of various
entities involved in the cloud environment, including cloud service providers (CSPs) and cloud
customers (organizations or individuals using the cloud services).
Cloud computing exhibits several key characteristics that differentiate it from traditional on-
premises computing models.
Note:- These essential characteristics define the core principles of cloud computing and form
the basis for the benefits that cloud services offer, such as flexibility, cost-effectiveness,
scalability, and reduced IT management burden. Understanding these characteristics is crucial
for organizations considering adopting cloud computing to make informed decisions about the
most suitable cloud deployment models and service offerings for their specific needs.
3.1 Virtualization allows the creation of a secure, customizable, and isolated execution
environment for running applications. For example, we can run Windows OS on top of a virtual
machine, which itself is running on Linux OS.
Virtualization is a technique that allows a single physical instance of a resource or an
application to be shared among multiple customers and organizations. It does this by assigning
a logical name to a physical resource and providing a pointer to that physical resource on
demand.
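The logical-name indirection just described can be sketched with two mappings; the names (`disk-07`, `customer-a-volume`) are illustrative:

```python
# Sketch of virtualization's indirection: consumers use a logical
# name, which resolves through a "pointer" to a physical resource.
# Remapping the pointer is invisible to the consumer.

physical_storage = {"disk-07": b"...raw blocks..."}

logical_to_physical = {"customer-a-volume": "disk-07"}   # the pointer

def resolve(logical_name):
    return physical_storage[logical_to_physical[logical_name]]

assert resolve("customer-a-volume") == b"...raw blocks..."

# Migrating the data to a new physical disk only updates the pointer:
physical_storage["disk-12"] = physical_storage.pop("disk-07")
logical_to_physical["customer-a-volume"] = "disk-12"
assert resolve("customer-a-volume") == b"...raw blocks..."
```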
Virtualization technologies have gained renewed interest recently due to the confluence of
several phenomena:
• Increased performance and computing capacity – virtual machines now offer high
computing power.
• Underutilized hardware and software resources – the increased performance and
computing capacity is only partially used.
• Lack of space – a continuous need for additional capacity.
• Greening initiatives – reducing the number of servers reduces power consumption and
the carbon footprint.
• Rise of administrative costs – power and cooling costs can be higher than those of the IT
equipment itself.
3.2 CHARACTERISTICS OF VIRTUALIZED ENVIRONMENTS
Virtualization is a broad concept that refers to the creation of a virtual version of something,
whether hardware, a software environment, storage, or a network.
In a virtualized environment there are three major components: Guest, Host, and Virtualization
layer.
• The Guest represents the system component that interacts with the virtualization layer
rather than with the host, as would normally happen.
• The Host represents the original environment where the guest is supposed to be
managed.
• The Virtualization layer is responsible for recreating the same or a different
environment where the guest will operate.
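The relationship between the three components can be modeled in miniature; the class and method names below are illustrative only:

```python
# Toy model of the Guest / Host / Virtualization-layer split: the
# guest talks only to the virtualization layer, which mediates
# access to the host on its behalf.

class Host:
    def execute(self, op):
        return f"host ran {op}"

class VirtualizationLayer:
    def __init__(self, host):
        self.host = host

    def handle(self, op):
        # recreate the environment: intercept the guest's request
        # and forward it to the real host
        return self.host.execute(op)

class Guest:
    def __init__(self, layer):
        self.layer = layer   # the guest never holds the host directly

    def run(self, op):
        return self.layer.handle(op)

guest = Guest(VirtualizationLayer(Host()))
assert guest.run("read-disk") == "host ran read-disk"
```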
Increased Security: The ability to control the execution of a guest in a completely transparent
manner opens new possibilities for delivering a secure, controlled execution environment:
- The execution of a guest can be controlled, since the guest runs in an emulated
environment.
- The Virtual Machine Manager controls and filters the activity of the guest.
- Resources can be hidden from the guest without affecting other users' or guests'
environments.
Managed execution: Virtualization of the execution environment not only allows increased
security; a wider range of features can also be implemented. In particular, sharing,
aggregation, emulation, and isolation are the most relevant features.
Portability in virtualization and cloud computing refers to the ability to move applications,
workloads, and data seamlessly between different virtualized environments or cloud platforms.
It allows organizations to avoid vendor lock-in and provides the flexibility to adapt to changing
business requirements or leverage the strengths of various cloud providers. Portability is an
important consideration for cloud users who want to avoid dependency on a single cloud
provider and maintain control over their applications and data.
3.3 TAXONOMY OF VIRTUALIZATION TECHNIQUES
Virtualization covers a wide range of emulation techniques that are applied to different areas of
computing.
Virtualization is mainly used to emulate execution environments, storage, and networks.
Execution-environment virtualization provides support for the execution of programs, e.g., an
operating system or an application, and is classified into two categories:
• Process-level techniques are implemented on top of an existing operating system,
which has full control of the hardware.
• System-level techniques are implemented directly on hardware and do not require, or
require only minimal support from, an existing operating system.
Storage: Storage virtualization is a system administration practice that allows decoupling the
physical organization of the hardware from its logical representation.
Networks: Network virtualization combines hardware appliances and specific software for the
creation and management of a virtual network.
Among these categories, execution virtualization constitutes the oldest, most popular, and
most developed area. Therefore, it deserves major investigation and a further categorization.
We can divide these execution virtualization techniques into two major categories by
considering the type of host they require.
3.3.1.1 Machine reference model
• The Application Binary Interface (ABI) separates the OS layer from the applications and
libraries, which are managed by the OS. System calls are defined at this level, allowing
portability of applications and libraries across OSs.
• The API interfaces applications to libraries and/or the underlying OS.
• Nonprivileged instructions can be used without interfering with other tasks because
they do not access shared resources, e.g., arithmetic and floating- and fixed-point
instructions.
• Privileged instructions are executed under specific restrictions and are mostly used for
sensitive operations, which expose (behavior-sensitive) or modify (control-sensitive) the
privileged state:
- Behavior-sensitive instructions operate on the I/O.
- Control-sensitive instructions alter the state of the CPU registers.
Privileged Hierarchy:
"Security Ring" in virtualization refers to a layered security approach that aims to protect and
isolate different components within a virtualized environment. Each security ring represents a
level of trust, with inner rings having higher levels of privilege and access, while outer rings
have lower privilege and more restricted access.
This model helps establish security boundaries and prevent unauthorized access to critical
components. The security ring concept is commonly used in operating systems and
virtualization platforms to ensure the integrity and isolation of virtual machines (VMs) and
other components.
Ring 1 contains the components that interact closely with the hypervisor and have
higher privileges than the guest operating systems.
Some virtualization technologies may use a second-level hypervisor or "hypervisor
helper" components in Ring 1 to enhance VM performance or security.
Ring 2 is rarely used in practice; in the classic x86 privilege model it was intended,
together with Ring 1, for operating system services such as device drivers.
Ring 3 known as the "User Mode," Ring 3 is the least privileged level in the security ring
hierarchy.
Guest operating systems and applications run in Ring 3.
The hypervisor enforces isolation between VMs and restricts direct access to hardware
resources from Ring 3, providing security and preventing interference between VMs.
3.3.1.2 Hardware-level virtualization
The migrate phase is where the actual process of moving data, applications, and other
workloads to the cloud occurs. This phase can involve a variety of techniques, including lift-
and-shift (moving an application to the cloud without modification), refactoring (modifying
an application to take advantage of cloud-native features), or even completely rebuilding
applications.
There are several types of virtualization migrations, each serving different purposes and
scenarios:
1. Live Migration: Live migration, also known as VM migration or VMotion (in the case of
VMware), allows VMs to be moved from one physical host to another without any
disruption to the running applications or services. Live migration ensures continuous
availability and seamless resource management during server maintenance, load
balancing, or hardware upgrades.
2. Storage Migration: Storage migration involves moving virtualized data, such as VM disks
or container images, from one storage system to another. This type of migration can be
useful for load balancing storage resources, migrating to higher-performance storage, or
consolidating data onto a more efficient storage solution.
3. Cross-Hypervisor Migration: Cross-hypervisor migration involves moving VMs between
different virtualization platforms or hypervisors. It allows organizations to switch to a
different virtualization technology without the need to reconfigure or rebuild their VMs.
4. P2V (Physical to Virtual) Migration: P2V migration is the process of converting physical
machines into virtual machines. It involves creating VMs that replicate the configuration
and contents of the physical servers, allowing organizations to consolidate physical
servers onto virtual infrastructure.
5. V2V (Virtual to Virtual) Migration: V2V migration refers to the movement of VMs from
one virtualization platform to another. It may be necessary when changing virtualization
providers or consolidating VMs onto a single virtualization platform.
6. Application Container Migration: For containerized applications, container migration
involves moving containers from one host to another within a container orchestration
platform, such as Kubernetes. Container migration ensures application availability and
resource optimization within the container cluster.
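Live migration (item 1 above) is commonly implemented with an iterative "pre-copy" of memory pages: pages are copied while the VM keeps running, pages dirtied in the meantime are re-copied, and only a small final set is transferred during a brief pause. The sketch below is a simplified simulation of that loop; all names and parameters are illustrative.

```python
# Hedged sketch of pre-copy live migration: iterate until the set of
# dirty pages is small enough for a short stop-and-copy, or a round
# limit is hit.

def live_migrate(pages, dirtied_per_round, max_rounds=10, stop_threshold=2):
    to_copy = set(pages)          # round 0: copy all memory pages
    rounds = 0
    while len(to_copy) > stop_threshold and rounds < max_rounds:
        # copy the current set; meanwhile the running VM dirties pages,
        # which become the next round's working set
        to_copy = set(dirtied_per_round[rounds % len(dirtied_per_round)])
        rounds += 1
    # final brief pause: stop-and-copy the small remaining dirty set
    return rounds, to_copy

rounds, final_set = live_migrate(
    pages=range(100),
    dirtied_per_round=[{1, 5, 9, 20}, {5, 9}, {9}],
)
assert final_set == {5, 9}   # the VM pauses only to copy two pages
```

The key property is that the pause time depends on the final dirty set, not on total memory size, which is what makes the migration appear seamless.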
Benefits of Migrating to the Cloud
• Scalability
• Cost
• Performance
• Digital experience
Hypervisor-based malware illustrates one risk of hardware virtualization; examples of this
kind of malware are BluePill and SubVirt. BluePill, malware targeting the AMD processor
family, moves the execution of the installed OS into a virtual machine.
Xen has been extended to be compatible with full virtualization by using hardware-assisted
virtualization, which enables high-performance execution of guest operating systems. This is
achieved by removing the performance loss incurred while executing instructions that require
significant handling, and by modifying the portions of the guest operating system executed by
Xen with respect to the execution of such instructions. In particular, this supports x86, the
most widely used architecture on commodity machines and servers.
The Xen architecture maps onto the classic x86 privilege model. A Xen-based system is
handled by the Xen hypervisor, which executes in the most privileged mode and mediates the
access of guest operating systems to the underlying hardware.
Guest operating systems run within domains, which represent virtual machine instances.
In addition, particular control software, which has privileged access to the host and handles all
other guest OSs, runs in a special domain called Domain 0. This is the only domain loaded once
the virtual machine manager has fully booted; it hosts an HTTP server that serves requests for
virtual machine creation, configuration, and termination. This component constitutes the
primary version of a shared virtual machine manager (VMM), which is an essential part of a
cloud computing system delivering Infrastructure-as-a-Service (IaaS) solutions.
Various x86 implementations support four distinct security levels, termed rings: Ring 0,
Ring 1, Ring 2, and Ring 3.
Ring 0 is the most privileged level and Ring 3 the least privileged. Almost all frequently used
operating systems, except OS/2, use only two levels: Ring 0 for kernel code and Ring 3 for user
applications and non-privileged OS programs.
This gives Xen the opportunity to implement paravirtualization, which lets Xen leave the
Application Binary Interface (ABI) unchanged, thus allowing a simple shift to Xen-virtualized
solutions from an application perspective.
Due to the structure of the x86 instruction set, some instructions allow code executing in
Ring 3 to switch to Ring 0 (kernel mode). Such an operation is performed at the hardware
level, and within a virtualized environment it leads to a TRAP or a silent fault, preventing the
normal operation of the guest OS, which now runs in Ring 1.
This condition is triggered by a subset of the system calls. To eliminate it, the operating
system implementation must be modified, and all sensitive system calls must be
re-implemented with hypercalls. Hypercalls are the special calls exposed by the virtual
machine (VM) interface of Xen; by means of them, the Xen hypervisor catches the execution
of all sensitive instructions, manages them, and returns control to the guest OS through a
supplied handler.
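The trap-and-hypercall flow just described can be sketched in very simplified form. The handler registration and hypercall names below are illustrative assumptions, not Xen's actual interface:

```python
# Sketch of the hypercall mechanism: a sensitive operation in the
# guest is re-implemented as an explicit call into the hypervisor,
# which handles it and returns control to the guest.

class Hypervisor:
    def __init__(self):
        self.handlers = {}

    def register(self, name, handler):
        self.handlers[name] = handler

    def hypercall(self, name, *args):
        # catch the sensitive operation, manage it, return the result
        if name not in self.handlers:
            raise RuntimeError("trap: unhandled sensitive instruction")
        return self.handlers[name](*args)

hv = Hypervisor()
hv.register("update_page_table", lambda addr: f"mapped {addr:#x}")

# A paravirtualized guest issues the hypercall instead of executing
# the raw privileged instruction directly:
assert hv.hypercall("update_page_table", 0x1000) == "mapped 0x1000"
```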
Paravirtualization demands that the OS codebase be modified, and hence not all operating
systems can be used as guest OSs in a Xen-based environment. This limitation holds where
hardware-assisted virtualization is not available, which would otherwise enable running the
hypervisor in Ring -1 and the guest OS in Ring 0. Hence, Xen shows some limitations in terms
of legacy hardware and legacy operating systems.
In fact, these cannot be modified to run safely in Ring 1, since their codebase is not
accessible, and at the same time the underlying hardware has no support for executing them
in a mode more privileged than Ring 0. Open-source OSs such as Linux can easily be modified,
as their code is openly available, and Xen delivers full support for their virtualization, while
components of Windows are basically not compatible with Xen unless hardware-assisted
virtualization is available. As new releases of OSs are designed to be virtualized and new
hardware supports x86 virtualization, the problem is being resolved.
Advantages
• XenServer is developed on top of the open-source Xen hypervisor and uses a
combination of hardware-assisted virtualization and paravirtualization. This tightly
coupled collaboration between the operating system and the virtualized platform makes
it possible to build a lighter, more flexible hypervisor that delivers its functionality in an
optimized manner.
• Xen supports efficient balancing of large workloads across CPU, memory, disk
input-output, and network input-output. It offers two modes to handle such workloads:
one for performance enhancement and one for handling data density.
• It also comes equipped with a special storage feature called Citrix StorageLink, which
allows a system administrator to use the array features of vendors such as HP, NetApp,
and Dell EqualLogic.
• It also supports multiple processors, live migration from one machine to another,
physical-server-to-virtual-machine and virtual-server-to-virtual-machine conversion
tools, centralized multiserver management, and real-time performance monitoring on
Windows and Linux.
Disadvantages
• Xen is more reliable on Linux than on Windows.
• Xen relies on third-party components to manage resources such as drivers, storage,
backup, recovery, and fault tolerance.
• A Xen deployment can become burdensome on your Linux kernel system as time passes.
• Xen may sometimes increase the load on your resources through high input-output
rates and may cause starvation of other VMs.
The VMware Cloud Solution stack includes several products and services that help
organizations build, manage, and optimize their cloud infrastructure.
A pool of virtualized servers is tied together and remotely managed as a whole by VMware
vSphere.
1. VMware vSphere: This is the foundation of VMware's cloud infrastructure suite. vSphere
is a virtualization platform that enables the creation and management of virtual
machines (VMs) on physical servers, providing robust resource management and
scalability.
2. VMware vCenter: This management platform allows administrators to centrally manage
and monitor VMware vSphere environments, including VMs, hosts, and data centers.
VMware ESXi is a bare-metal hypervisor and the foundation of VMware's virtualization
technology. It allows you to run multiple virtual machines (VMs) on a single physical server.
Here's an overview of the architecture of VMware ESXi:
Virtual Machines: ESXi allows the creation and execution of multiple virtual machines. Each VM
is an isolated software container that runs its own operating system and applications.
Virtual Networking: ESXi provides virtual networking capabilities, enabling the creation of
virtual switches, port groups, and network adapters for VMs. It also supports VLANs, NIC
teaming, and other advanced network features.
Virtual Storage: ESXi enables the creation of virtual storage using various storage technologies
such as VMFS (Virtual Machine File System) or NFS (Network File System). Virtual disks are
stored as files on the underlying physical storage but are presented to VMs as local disks.
3.6.3 Hyper-V :Hyper-V provides Windows users the ability to start their own virtual machine.
In this virtual machine, a complete hardware infrastructure with RAM, hard disk space,
processor power, and other components can be virtualized. A separate operating system runs
on this basis, which does not necessarily have to be Windows. It is very popular, for example, to
run an open-source distribution of Linux in a virtual machine.
The physical host system can be mapped to multiple virtual guest systems (child partitions)
that share the host hardware through the parent partition. Microsoft has created its own
hypervisor for this purpose.
Microsoft Hyper-V’s Architecture
Hyper-V allows x64 versions of Windows to host one or more virtual machines, which in turn
contain a fully configured operating system. These “child” systems are treated as partitions. The
term is otherwise known from hard disk partitioning - and Hyper-V virtualization works in a
similar way. Each virtual machine is an isolated unit next to the “parent” partition, the actual
operating system.
The individual partitions are orchestrated by the hypervisor. The subordinate partitions can be
created and managed via an interface (Hypercall API) in the parent system. However, the
isolation is always maintained. Child systems are assigned virtual hardware resources but can
never access the physical hardware of the parent.
To request hardware resources, child partitions use VMBus. This is a channel that
enables communication between partitions. Child systems can request resources from the
parent, but theoretically they can also communicate with each other.
The partitions run services that handle the requests and responses that run over the VMBus.
The host system runs the Virtualization Service Provider (VSP), the subordinate partitions run
the Virtualization Service Clients (VSC).
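The VSP/VSC request-response pattern above can be modeled in miniature. The class names mirror the terminology, but the channel (standing in for VMBus) and the message shapes are illustrative assumptions:

```python
# Toy model of Hyper-V's VSP/VSC pattern: a child partition's
# Virtualization Service Client sends a resource request over a
# channel (standing in for VMBus), and the parent partition's
# Virtualization Service Provider answers it.

from queue import Queue

class VSP:                      # runs in the parent (root) partition
    def serve(self, request):
        # the parent owns the physical hardware and grants access
        return {"request": request, "status": "granted"}

class VSC:                      # runs in a child partition
    def __init__(self, bus, provider):
        self.bus = bus
        self.provider = provider

    def request(self, resource):
        self.bus.put(resource)                    # message onto the "VMBus"
        return self.provider.serve(self.bus.get())

bus = Queue()
vsc = VSC(bus, VSP())
reply = vsc.request("disk-io")
assert reply == {"request": "disk-io", "status": "granted"}
```

The point of the indirection is that the child never touches the physical device; every request is mediated by the parent, preserving isolation.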
Working:
The root partition owns and has direct access to the physical I/O devices. The virtualization
stack in the root partition provides a memory manager for VMs, management APIs, and
virtualized I/O devices. It also implements emulated devices, such as the integrated device
electronics (IDE) disk controller and PS/2 input device port. And it supports Hyper-V-specific
synthetic devices for increased performance and reduced overhead.
The Hyper-V-specific I/O architecture consists of virtualization service providers (VSPs) in the
root partition and virtualization service clients (VSCs) in the child partition. Each service is
exposed as a device over VM Bus, which acts as an I/O bus and enables high-performance
communication between VMs that use mechanisms such as shared memory. The guest
operating system's Plug and Play manager enumerates these devices, including VM Bus, and
loads the appropriate device drivers (virtualization service clients). Services other than I/O are
also exposed through this architecture.
The hypervisor is the component that directly manages the underlying hardware (processors
and memory). It is logically defined by the following components:
• Hypercalls interface. This is the entry point for all the partitions for the execution of
sensitive instructions. This is an implementation of the paravirtualization approach already
discussed with Xen. This interface is used by drivers in the partitioned operating system to
contact the hypervisor using the standard Windows calling convention. The parent partition
also uses this interface to create child partitions.
• Memory service routines (MSRs). These are the set of functionalities that control the
memory and its access from partitions. By leveraging hardware-assisted virtualization, the
hypervisor uses the Input/Output Memory Management Unit (I/O MMU or IOMMU) to
fast-track access to devices from partitions by translating virtual memory addresses.
• Advanced programmable interrupt controller (APIC). This component represents the
interrupt controller, which manages the signals coming from the underlying hardware when
some event occurs (timer expired, I/O ready, exceptions and traps).
• Scheduler. This component schedules the virtual processors to run on available physical
processors. The scheduling is controlled by policies that are set by the parent partition.
• Address manager. This component is used to manage the virtual network addresses that
are allocated to each guest operating system.
• Partition manager. This component is in charge of performing partition creation,
finalization, destruction, enumeration, and configurations. Its services are available through
the hypercalls interface API previously discussed.
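As a rough conceptual model, the partition manager reached through the hypercall interface might be pictured as follows. Everything here is illustrative: real hypercalls are privileged CPU-level operations, and the operation names are invented for this sketch. The one grounded rule it encodes is from the text above: only the parent partition may create child partitions.

```python
# Toy model of the partition manager behind the hypercall interface.
# Illustrative only; not a real hypervisor API.
class Hypervisor:
    def __init__(self):
        self.partitions = {"root": {"id": 0, "children": []}}
        self._next_id = 1

    def hypercall(self, caller, op, **args):
        if op == "create_partition":
            # Per the architecture, only the parent (root) partition
            # may create child partitions.
            if caller != "root":
                raise PermissionError("only the parent partition may create children")
            name = args["name"]
            self.partitions[name] = {"id": self._next_id, "children": []}
            self.partitions["root"]["children"].append(name)
            self._next_id += 1
            return self.partitions[name]["id"]
        if op == "enumerate":
            # Partition enumeration is another partition-manager service.
            return list(self.partitions)
        raise ValueError(f"unknown hypercall: {op}")

hv = Hypervisor()
hv.hypercall("root", "create_partition", name="child-1")
print(hv.hypercall("root", "enumerate"))   # ['root', 'child-1']
```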
Child partition: Any virtual machine that is created by the root partition.
Parent partition: The partition that executes the host operating system and implements the
virtualization stack that complements the activity of the hypervisor in running guest operating
systems.
Guest: Software that is running in a partition. It can be a full-featured operating system or a
small, special-purpose kernel. The hypervisor is guest-agnostic.
Hypervisor: A layer of software that sits above the hardware and below one or more operating
systems. Its primary job is to provide isolated execution environments called partitions. Each
partition has its own set of virtualized hardware resources (central processing unit or CPU,
memory, and devices). The hypervisor controls and arbitrates access to the underlying
hardware.
Root partition: The partition that is created first and owns all the resources that the hypervisor
does not, including most devices and system memory. The root partition hosts the
virtualization stack and creates and manages the child partitions.
Hyper-V-specific device: A virtualized device with no physical hardware analog, so guests may
need a driver (virtualization service client) for that Hyper-V-specific device. The driver can use
the virtual machine bus (VMBus) to communicate with the virtualized device software in the
root partition.
Virtual machine: A virtual computer that was created by software emulation and has the same
characteristics as a real computer.
Virtual network switch: (Also referred to as a virtual switch.) A virtual version of a physical
network switch. A virtual network can be configured to provide access to local or external
network resources for one or more virtual machines.
Virtual processor: A virtual abstraction of a processor that is scheduled to run on a logical
processor. A virtual machine can have one or more virtual processors.
Virtualization service client (VSC): A software module that a guest loads to consume a resource
or service. For I/O devices, the virtualization service client can be a device driver that the
operating system kernel loads.
Virtualization service provider (VSP): A provider exposed by the virtualization stack in the root
partition that provides resources or services such as I/O to a child partition.
Virtualization stack: A collection of software components in the root partition that work
together to support virtual machines. The virtualization stack works with and sits above the
hypervisor. It also provides management capabilities.
VMBus: A channel-based communication mechanism used for inter-partition communication
and device enumeration on systems with multiple active virtualized partitions. The VMBus is
installed with Hyper-V Integration Services.
UNIT II - UNDERSTANDING CLOUD MODELS AND ARCHITECTURES
Essential Characteristics:
On-demand self-service. A consumer can unilaterally provision computing capabilities, such as
server time and network storage, as needed automatically without requiring human interaction
with each service provider.
Broad network access. Capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g.,
mobile phones, tablets, laptops, and workstations).
Resource pooling. The provider’s computing resources are pooled to serve multiple consumers
using a multi-tenant model, with different physical and virtual resources dynamically assigned
and reassigned according to consumer demand. There is a sense of location independence in
that the customer generally has no control or knowledge over the exact location of the
provided resources but may be able to specify location at a higher level of abstraction (e.g.,
country, state, or datacenter). Examples of resources include storage, processing, memory, and
network bandwidth.
Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand. To the
consumer, the capabilities available for provisioning often appear to be unlimited and can be
appropriated in any quantity at any time.
Measured service. Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some level of abstraction appropriate to the type of service
(e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be
monitored, controlled, and reported, providing transparency for both the provider and
consumer of the utilized service.
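The metering idea behind measured service can be sketched as a simple pay-per-use billing computation: usage is recorded per resource and multiplied by a per-unit rate. The rates and usage figures below are made up for illustration, not any provider's actual pricing.

```python
# Sketch of "measured service": metered usage times per-unit rates.
# RATES are invented for this example, not real provider prices.
RATES = {"storage_gb_hours": 0.0001, "cpu_hours": 0.05, "gb_transferred": 0.09}

def bill(usage: dict) -> float:
    """Sum metered usage times the per-unit rate for each resource."""
    return round(sum(RATES[resource] * amount for resource, amount in usage.items()), 2)

monthly_usage = {"storage_gb_hours": 720 * 50,   # 50 GB stored for a 720-hour month
                 "cpu_hours": 200,
                 "gb_transferred": 100}
print(bill(monthly_usage))   # 22.6
```

Because every resource is metered separately, the same mechanism supports the monitoring and reporting transparency described above.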
Cloud Cube model
The four dimensions of the Cloud Cube Model are shown in Figure 1.2 and listed here:
Physical location of the data: Internal (I) / External (E) determines your organization’s
boundaries.
Ownership: Proprietary (P) / Open (O) is a measure of not only the technology ownership, but
of interoperability, ease of data transfer, and degree of vendor application lock-in.
Security boundary: Perimeterised (Per) / De-perimeterised (D-p) is a measure of whether the
operation is inside or outside the security boundary or network firewall.
Sourcing: Insourced or Outsourced means whether the service is provided by the customer or
the service provider.
Deployment Models:
1. Private cloud
2. Public cloud
3. Community cloud
4. Hybrid cloud
Software as a Service (SaaS). The capability provided to the consumer is to use the provider’s
applications running on a cloud infrastructure . The applications are accessible from various
client devices through either a thin client interface, such as a web browser (e.g., web-based
email), or a program interface. The consumer does not manage or control the underlying cloud
infrastructure including network, servers, operating systems, storage, or even individual
application capabilities, with the possible exception of limited user-specific application
configuration settings.
Example: BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk, Cisco WebEx, Slack, and
GoToMeeting.
Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the
cloud infrastructure consumer-created or acquired applications created using programming
languages, libraries, services, and tools supported by the provider. The consumer does not
manage or control the underlying cloud infrastructure including network, servers, operating
systems, or storage, but has control over the deployed applications and possibly configuration
settings for the application-hosting environment.
Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine,
Apache Stratos, Magento Commerce Cloud, and OpenShift.
Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision
processing, storage, networks, and other fundamental computing resources where the
consumer is able to deploy and run arbitrary software, which can include operating systems
and applications. The consumer does not manage or control the underlying cloud infrastructure
but has control over operating systems, storage, and deployed applications; and possibly
limited control of select networking components (e.g., host firewalls).
Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google Compute
Engine (GCE), Rackspace, and Cisco Metacloud.
Application: The application may be any software or platform that a client wants to
access.
Service: The cloud service component manages which type of service you access, according to
the client’s requirement.
Runtime Cloud: Runtime Cloud provides the execution and runtime environment to the
virtual machines.
Storage: Storage is one of the most important components of cloud computing. It
provides a huge amount of storage capacity in the cloud to store and manage data.
Infrastructure: It provides services on the host level, application level, and network level.
Cloud infrastructure includes hardware and software components such as servers,
storage, network devices, virtualization software, and other storage resources that are
needed to support the cloud computing model.
Management: Management is used to manage components such as application, service,
runtime cloud, storage, infrastructure, and other security issues in the backend and
establish coordination between them.
Security: Security is an in-built back end component of cloud computing. It implements
a security mechanism in the back end.
Composability:
A composable system uses components to assemble services that can be tailored for a specific
purpose using standard parts. A composable component must be:
Modular: It is a self-contained and independent unit that is cooperative, reusable, and
replaceable.
Stateless: A transaction is executed without regard to other transactions or requests.
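The stateless property can be illustrated with a tiny request handler: because the handler derives its result entirely from the request and touches no shared state, any interchangeable replica can serve any request. The request shape below is invented for this sketch.

```python
# Sketch of the "stateless" composability property: the handler reads and
# writes no module-level or instance state, only its input.
def handle(request: dict) -> dict:
    """Compute an order total purely from the request contents."""
    subtotal = sum(item["price"] * item["qty"] for item in request["items"])
    return {"order_id": request["order_id"], "total": subtotal}

req = {"order_id": 7, "items": [{"price": 2.5, "qty": 4}, {"price": 1.0, "qty": 3}]}
# Two independent calls (standing in for two replicas) give the same answer,
# because no state carries over between transactions.
print(handle(req) == handle(req))   # True
```

This is exactly what makes such a component replaceable and horizontally scalable: a load balancer can route each transaction to any copy.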
Infrastructure
Infrastructure as a Service (IaaS) providers rely on virtual machine technology to deliver servers
that can run applications. VM instances have characteristics that often can be described in
terms of real servers delivering a certain number of microprocessor (CPU) cycles, memory
access, and network bandwidth to customers.
The software that runs in the virtual machines is what defines the utility of the cloud computing
system.
Platforms: Platform as a Service (PaaS) providers offer services meant to provide developers
with different capabilities:
• Salesforce.com’s Force.com Platform
• Windows Azure Platform
• Google Apps and the Google AppEngine
These three services offer all the hosted hardware and software needed to build and deploy
Web applications or services that are custom built by the developer
A platform in the cloud is a software layer that is used to create higher levels of service.
Platforms often come replete with tools and utilities to aid in application design and
deployment. Most often, we find developer tools for team collaboration, testing tools,
instrumentation for measuring program performance and attributes, versioning, database and
Web service integration, and storage tools.
Virtual Appliance: Applications such as a Web server or database server that can run on a
virtual machine image are referred to as virtual appliances.
Just as a virtual appliance may expose itself to users through an API, an application built in the
cloud using a platform service would encapsulate the service through its own API. Many
platforms offer user interface development tools based on HTML, JavaScript, or some other
technology. As the Web becomes more media-oriented, many developers have chosen to work
with rich Internet environments such as Adobe Flash, Flex, or AIR, or alternatives such as
Microsoft Silverlight.
VirtualBox: a virtual machine technology, now owned by Oracle, that can run various
operating systems and serves as a host for a variety of virtual appliances.
Vmachines: a site with desktop, server, and security-related operating systems that
run on VMware.
Communication Protocols:- The cloud uses services available over the Internet, communicating
via the standard Internet protocol suite underpinned by the HTTP and HTTPS transfer
protocols.
Building on interprocess communication (IPC), many client/server protocols have been applied
to distributed networking over the years. Various forms of RPC (Remote Procedure Call)
implementations (including DCOM, Java RMI, and CORBA) attempt to solve the problem of
engaging remote services.
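One concrete instance of the RPC pattern can be shown with Python's built-in XML-RPC modules: a server registers a function, and the client invokes it over HTTP as if it were a local call. This is a minimal sketch, not any of the specific RPC systems (DCOM, RMI, CORBA) named above.

```python
# Minimal RPC round trip using Python's stdlib XML-RPC support.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    return a + b

# Bind the server to an ephemeral local port; logRequests=False keeps it quiet.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client calls the remote procedure as if it were local; the marshalling
# and HTTP transport are hidden, which is the whole point of RPC.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)
print(result)   # 5
server.shutdown()
```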
Applications:- Most cloud applications are Web sites or distributed APIs built with Web
technologies and designed to work on the Web. Web applications vary widely, but the common
idea is to host an application publicly on the Internet via the Web.
Data in transit can be secured in several ways:
• Use a secure protocol to transfer data, such as SSL (HTTPS), FTPS, or IPsec, or connect
using a secure shell such as SSH to connect a client to the cloud.
• Create a virtual connection using a virtual private network (VPN), or use a remote data
transfer protocol such as Microsoft RDP or Citrix ICA, where the data is protected by a
tunneling mechanism.
• Encrypt the data so that even if the data is intercepted or sniffed, it will not be
meaningful.
THE JOLICLOUD NETBOOK OS
Joli OS, developed by Jolicloud, provides file sharing and access to Web applications (apps) and
desktops from the cloud. Based on the Ubuntu Linux kernel, Joli OS was designed to give
netbook and low-end processors the ability to utilize Web app and basic computing services
without hardware upgrades.
Joli OS is installed as a thin client on a host desktop and provisions a variety of Web apps
from the cloud, including standard Web browsers, Gmail, Dropbox, Google Docs and
Flickr.
Joli OS hosts a number of apps that may be accessed and easily added to the cloud
desktop via the default launcher. Joli OS also provides social bookmarking capabilities
for user sharing of popular apps and services.
Jolicloud concentrates on building a social platform with automatic software updates and
installs. The application launcher is built in HTML 5 and comes preinstalled with Gmail, Skype,
Twitter, Firefox, and other applications.
Any HTML 5 browser can be used to work with the Jolicloud interface. Jolicloud maintains a
library or App Directory of over 700 applications as part of an app store. When you click to
select an application, the company both installs and updates the application going forward,
just as the iPhone manages applications on that device.
IaaS workloads
Consider a transactional eCommerce (Web site) system, for which a typical stack contains
the following components:
Web server
Application server
File server
Database
Transaction engine
This Web site system has several different workloads operating: queries against the
database, processing of business logic, and serving up clients’ Web pages.
IMP NOTE:- Amazon Web Services offers a classic Service Oriented Architecture (SOA)
approach to IaaS, where the SOA approach is used to build distributed applications.
Infrastructure as a Service (IaaS) is a versatile cloud computing model that can support a
wide range of workloads across different industries and use cases. Here are some common
IaaS workloads:
1. Web Hosting: IaaS is often used to host websites and web applications. Users can
create virtual machines, configure web servers, and scale resources based on traffic
demands.
4. Big Data and Analytics: IaaS is well-suited for big data processing and analytics
workloads. Users can deploy clusters of virtual machines to analyze large datasets
and run data processing frameworks like Hadoop or Spark.
6. Virtual Desktop Infrastructure (VDI): Organizations can use IaaS to deploy virtual
desktops for remote or distributed teams, reducing the need for physical hardware
and providing secure access to desktop environments.
These are just some examples of the diverse workloads that can be supported by
Infrastructure as a Service. The flexibility and scalability of IaaS make it a valuable option
for organizations looking to optimize their IT infrastructure and meet specific computing
needs.
Pods, aggregation, and silos
Pods, aggregation, and silos are concepts often used in different contexts, including
technology, business, and organizational structures. Here's an explanation of each term:
1. Pods:
2. Aggregation:
3. Silos:
Platform as a Service (PaaS) is a cloud computing service model that provides a platform
and environment for developers to build, deploy, and manage customized applications
without having to manage the underlying infrastructure.
PaaS systems must offer a way to create user interfaces, and thus support standards such
as HTML, JavaScript, or other rich media technologies. In a PaaS model, customers may
interact with the software to enter and retrieve data, perform actions, get results, and, to
the degree that the vendor allows it, customize the platform involved. The customer takes
no responsibility for maintaining the hardware, the software, or the development of the
applications and is responsible only for his interaction with the platform. The vendor is
responsible for all the operational aspects of the service, for maintenance, and for
managing the product(s) lifecycle.
PaaS abstracts the complexities of infrastructure management, allowing developers to
focus on coding and application development. Here are key characteristics and components
of Platform as a Service:
2. Middleware and Services: PaaS often includes middleware services like databases
(DBaaS), messaging systems, caching, and identity management. These services are
pre-configured and readily available for developers, reducing the time and effort
required to set up and manage these components.
4. Deployment and Management: PaaS platforms provide tools and services for
deploying applications to the cloud. Developers can easily manage application
lifecycles, update code, and roll back changes as needed.
8. Cost-Efficiency: PaaS often follows a pay-as-you-go pricing model, where users are
billed based on the resources and services they consume. This can result in cost
savings compared to managing on-premises infrastructure.
10. Vendor Lock-In: Adopting a specific PaaS platform may tie developers to that
provider's ecosystem and APIs. Careful consideration is needed to assess the
potential vendor lock-in and the portability of applications.
Software as a Service (SaaS): SaaS characteristics, Open SaaS and SOA, Salesforce.com
and CRM SaaS;
Software as a Service (SaaS) is a cloud computing model that delivers software applications
over the internet on a subscription basis. In this model, software is hosted and maintained
by a third-party provider, making it accessible to users from any device with an internet
connection.
Microsoft 365 (formerly Office 365): Includes software like Word, Excel,
PowerPoint, and cloud-based collaboration tools.
Google Workspace (formerly G Suite): Offers applications like Google Docs,
Sheets, and Gmail for productivity and communication.
Salesforce: A popular CRM platform that helps businesses manage sales, customer
interactions, and marketing.
WordPress.com: A popular platform for website creation and content
management.
Google Analytics: Provides web analytics and reporting on website and app
performance.
SAP Business ByDesign: A cloud-based ERP solution for small and medium-sized
enterprises.
Zoom: A widely used video conferencing and communication platform.
SaaS characteristics
All Software as a Service (SaaS) applications share the following characteristics:
1. The software is available over the Internet globally through a browser on demand.
3. The software and the service are monitored and maintained by the vendor, regardless of
where all the different software components are running. There may be executable client-
side code, but the user isn’t responsible for maintaining that code or its interaction with the
service.
4. Reduced distribution and maintenance costs and minimal end-user system costs
generally make SaaS applications cheaper to use than their shrink-wrapped versions.
5. Such applications feature automated upgrades, updates, and patch management and
much faster rollout of changes.
6. SaaS applications often have a much lower barrier to entry than their locally installed
competitors, a known recurring cost, and they scale on demand (a property of cloud
computing in general).
7. All users have the same version of the software so each user’s software is compatible
with another’s.
8. SaaS supports multiple users and provides a shared data model through a
single-instance, multi-tenancy model.
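The single-instance, multi-tenant data model can be sketched with one shared table in which every row carries a tenant identifier and every query is scoped to one tenant. The schema and tenant names below are invented for illustration.

```python
# Sketch of single-instance multi-tenancy: one shared table, rows tagged
# with a tenant_id, and all access filtered by tenant.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE records (tenant_id TEXT, key TEXT, value TEXT)")
rows = [("acme", "plan", "pro"), ("acme", "seats", "50"), ("globex", "plan", "basic")]
db.executemany("INSERT INTO records VALUES (?, ?, ?)", rows)

def tenant_records(tenant_id):
    """Every query is scoped by tenant, isolating tenants that share one instance."""
    cur = db.execute("SELECT key, value FROM records WHERE tenant_id = ?", (tenant_id,))
    return dict(cur.fetchall())

print(tenant_records("acme"))     # {'plan': 'pro', 'seats': '50'}
print(tenant_records("globex"))   # {'plan': 'basic'}
```

Because all tenants run against the same single instance and schema, every user is on the same software version, which is exactly characteristic 7 above.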
The SaaS ecosystem offers advantages such as reduced upfront costs, ease of deployment, and
accessibility. It is widely used by businesses of all sizes and has transformed the way
software is delivered and consumed.
Open SaaS (Open Software as a Service): Open SaaS refers to a specific approach within the
Software as a Service (SaaS) model that emphasizes flexibility, customization, and
openness. Unlike traditional SaaS solutions that offer fixed, closed, and often proprietary
software, Open SaaS provides a more open and extensible platform. This allows users to
tailor the software to their specific needs and integrate it with other applications or
services.
1. Customization: Open SaaS platforms allow users to customize and configure the
software to meet their unique requirements. This might include adjusting
workflows, adding new features, or modifying existing ones.
4. Flexibility: Users have the flexibility to adapt the software to evolving business
needs, which is beneficial for industries and organizations with specialized
requirements.
Service-Oriented Architecture (SOA): Service-Oriented Architecture (SOA) is an
architectural style for designing and building software systems. It focuses on organizing
software components as services, which are independent, self-contained units of
functionality. These services can communicate with each other over a network, and they
are designed to be reusable and interoperable. SOA principles are not limited to SaaS; they
can be applied in various software development contexts, including on-premises systems.
A considerable amount of SaaS software is based on open source software. When open
source software is used in a SaaS, you may hear it referred to as Open SaaS.
The advantages of using open source software are that systems are much cheaper to deploy
because you don’t have to purchase the operating system or software, there is less vendor
lock-in, and applications are more portable.
The popularity of open source software, from Linux to APACHE, MySQL, and Perl (the
LAMP platform) on the Internet, and the number of people who are trained in open source
software make Open SaaS an attractive proposition.
The impact of Open SaaS will likely translate into better profitability for the companies that
deploy open source software in the cloud, resulting in lower development costs and more
robust solutions.
Three essential components:
1. CRM Solutions: Salesforce offers a suite of CRM solutions that cover sales,
marketing, customer service, and analytics. These solutions are designed to help
businesses manage and analyze customer interactions and data.
2. Cloud-Based Delivery: Salesforce CRM is delivered as a cloud service, allowing
users to access it from anywhere with an internet connection. This cloud-based
approach eliminates the need for businesses to set up and maintain on-premises
CRM software and infrastructure.
Salesforce CRM offers several editions tailored to different business needs and sizes,
including small businesses, mid-sized enterprises, and large corporations.
Identity as a Service (IDaaS) is a cloud-based service that provides identity and access
management solutions as a service. IDaaS is designed to help organizations manage and
secure user identities and control access to their systems and resources. It offers a range of
features and tools for identity verification, authentication, authorization, and user
provisioning, all delivered via the cloud. Here are the key components and aspects of IDaaS:
3. Single Sign-On (SSO): SSO allows users to access multiple applications and services
with a single set of login credentials. With IDaaS, users can authenticate once and
gain access to multiple resources without the need to re-enter their credentials.
5. Security and Compliance: IDaaS solutions offer security features like encryption,
threat detection, and real-time monitoring to protect user identities and data. They
also help organizations comply with data privacy and regulatory requirements.
IDaaS is particularly valuable for businesses and organizations looking to enhance security,
streamline user management, and provide a better user experience for both employees and
customers.
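The single sign-on idea described above can be sketched as: the user authenticates once with an identity provider, receives a signed token, and each participating service verifies the token instead of asking for credentials again. The signing scheme, secret, and account below are deliberately simplified inventions of this example; real IDaaS products use standards such as SAML or OpenID Connect.

```python
# SSO sketch: authenticate once at the IdP, reuse the signed token everywhere.
# The HMAC scheme and fixed credentials are illustrative only.
import hmac
import hashlib

IDP_SECRET = b"shared-with-trusted-services"   # invented shared secret

def idp_sign_in(username, password):
    """Identity provider: verify credentials once and issue a signed token."""
    if (username, password) != ("alice", "s3cret"):   # stand-in user store
        raise PermissionError("bad credentials")
    signature = hmac.new(IDP_SECRET, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}:{signature}"

def service_accepts(token):
    """Any participating service: verify the token, never see the password."""
    username, signature = token.split(":")
    expected = hmac.new(IDP_SECRET, username.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

token = idp_sign_in("alice", "s3cret")                  # authenticate once...
print(service_accepts(token), service_accepts(token))   # ...access many services: True True
```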
What is an identity?
An identity refers to the digital representation of a user, service, or entity that is interacting
with cloud resources and services. Identity management in the cloud is crucial for
controlling access, ensuring security, and managing permissions within cloud
environments.
1. User Identity: User identities are associated with individual users or employees
who need access to cloud resources. User identities are typically linked to user
accounts, which are used to authenticate and authorize access.
2. Single Sign-On (SSO): SSO is a mechanism that allows users to access multiple
cloud services and applications with a single set of login credentials. It simplifies the
authentication process and enhances security by reducing the need for users to
remember multiple passwords.
3. Access Control: Identity and access management (IAM) is a critical aspect of cloud
security. It involves defining policies and rules that specify what each identity (user
or service) is allowed to do within the cloud environment. These permissions are
typically defined using roles, groups, and policies.
4. Multi-Factor Authentication (MFA): MFA adds an additional layer of security to
identity verification by requiring users to provide multiple forms of authentication,
such as something they know (password) and something they have (a mobile app or
hardware token).
6. Role-Based Access Control (RBAC): RBAC is a method for controlling access based
on roles and permissions. Users or services are assigned roles, and these roles
determine what actions they can perform within the cloud environment.
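An RBAC access check can be sketched in a few lines: users are assigned roles, roles carry permissions, and the check consults the role rather than the individual user. The role names and permission strings are invented for this sketch.

```python
# Minimal RBAC sketch: permissions attach to roles, users receive roles,
# and the access check never grants permissions to users directly.
ROLE_PERMISSIONS = {
    "viewer": {"vm:read"},
    "operator": {"vm:read", "vm:start", "vm:stop"},
    "admin": {"vm:read", "vm:start", "vm:stop", "vm:delete"},
}
USER_ROLES = {"dana": ["viewer"], "lee": ["operator", "admin"]}

def is_allowed(user, permission):
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, []))

print(is_allowed("dana", "vm:read"))    # True
print(is_allowed("dana", "vm:delete"))  # False
print(is_allowed("lee", "vm:delete"))   # True
```

The design benefit is administrative: revoking a role removes a whole bundle of permissions in one step, instead of editing per-user permission lists.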
Identity system codes of conduct are ethical guidelines and principles that organizations,
service providers, and individuals involved in identity management should follow. These
codes of conduct help ensure the responsible and ethical use of identity information and
systems, as well as protect the privacy, security, and rights of individuals.
In working with IDaaS software, evaluate IDaaS applications on the following basis:
User control for consent: Users control their identity and must consent to the use
of their information.
Minimal Disclosure: The minimal amount of information should be disclosed for an
intended use.
Justifiable access: Only parties who have a justified use of the information
contained in a digital identity and have a trusted identity relationship with the
owner of the information may be given access to that information.
Directional Exposure: An ID system must support bidirectional identification for a
public entity so that it is discoverable and a unidirectional identifier for private
entities, thus protecting the private ID.
Interoperability: A cloud computing ID system must interoperate with other
identity services from other identity providers.
Unambiguous human identification: An IDaaS application must provide an
unambiguous mechanism for allowing a human to interact with a system while
protecting that user against an identity attack.
Consistency of Service: An IDaaS service must be simple to use, consistent across
all its uses, and able to operate in different contexts using different technologies.
IDaaS interoperability
User authentication
Authorization markup languages
OpenID is a developing industry standard for authenticating “end users” by storing their digital identity
in a common format.
Any software application that complies with the standard accepts an OpenID that is authenticated by a
trusted provider. A very impressive group of cloud computing vendors serve as identity providers (or
OpenID providers), including Facebook, Google, and others.
Authorization markup languages are used to define and manage access control policies within various
systems and applications. These markup languages provide a standardized way to specify permissions
and access rights for users or entities within a given system. Here are some of the commonly used
authorization markup languages:
1. XACML (eXtensible Access Control Markup Language): XACML is an OASIS standard that
provides a flexible and extensible framework for access control policies. It allows administrators
to define policies for authorization, including rules for granting or denying access based on
various attributes and conditions.
2. SAML (Security Assertion Markup Language): SAML is an XML-based standard for exchanging
authentication and authorization data between parties, particularly between an identity
provider (IdP) and a service provider (SP). While SAML is primarily focused on authentication, it
includes authorization-related assertions as well.
3. ABAC (Attribute-Based Access Control): ABAC is a model for access control where access
decisions are based on attributes associated with the user, the resource, and the environment.
While not a specific markup language, ABAC policies can be expressed using languages like
XACML.
4. ALFA (Abbreviated Language for Authorization): ALFA is a specialized language designed for
writing access control policies for XACML. It simplifies the process of defining policies by
providing a more human-readable and concise syntax.
5. REL (Request and Evaluation Language): REL is used in the context of XACML and is a language
for specifying the authorization requests and decision evaluation logic. It allows for specifying
the conditions under which a request should be granted or denied.
6. NGAC (Next Generation Access Control) Policy Language: NGAC is a policy language used to
define access control policies based on attributes and relationships. It provides a framework for
defining and enforcing fine-grained access control policies.
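The common idea behind these languages, evaluating rules over attributes of the subject, resource, and environment to reach a Permit or Deny decision, can be shown with a small attribute-based sketch. This is conceptual Python, not actual XACML or ALFA syntax (XACML policies are XML documents), and the attribute names and rules are invented for the example.

```python
# XACML-flavored, attribute-based policy evaluation in miniature.
# Rules are (condition over request attributes, effect) pairs, combined
# "first-applicable"; the default decision is Deny.
POLICY = [
    (lambda r: r["subject.role"] == "doctor" and r["resource.type"] == "record",
     "Permit"),
    (lambda r: r["environment.hour"] >= 22,    # no access late at night
     "Deny"),
]

def evaluate(request, default="Deny"):
    """Return the effect of the first rule whose condition matches."""
    for condition, effect in POLICY:
        if condition(request):
            return effect
    return default

req = {"subject.role": "doctor", "resource.type": "record", "environment.hour": 14}
print(evaluate(req))   # Permit
```

Real XACML adds a full request/response protocol, obligations, and several rule-combining algorithms, but the attribute-plus-condition core is the same.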
Compliance as a Service (CaaS) is a cloud-based service model that focuses on helping
organizations manage and maintain compliance with relevant regulatory, industry-specific,
and internal requirements. CaaS leverages cloud technology and services to streamline and
automate compliance processes, making it more efficient and cost-effective for businesses.
Here are key aspects and features of Compliance as a Service:
In order to implement CaaS, some companies are organizing what might be referred to as
“vertical clouds,” clouds that specialize in a vertical market. Examples of vertical clouds
that advertise CaaS capabilities include the following:
A baseline represents the reference point or starting level for measuring performance,
utilization, or any other relevant metric related to an IT system or infrastructure
As an example, consider developers who create cloud-based applications and Web sites on a
LAMP (Linux, Apache, MySQL, PHP) solution stack. LAMP is a good example because it offers a
system with two server applications (Apache and MySQL) that can be combined or run
separately on servers.
Baseline Measurements:
Assume that a capacity planner is working with a system whose Web site runs on Apache,
and that the site processes database transactions using MySQL.
There are two important overall workload metrics in this LAMP system:
1. Page views or hits on the Web site, as measured in hits per second
2. Transactions completed on the database server, as measured by transactions per
second or perhaps by queries per second
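Either workload metric is computed the same way: sample a monotonically increasing counter at the start and end of an interval and divide by the interval length. A small illustrative sketch (the counter values are invented):

```python
# Baseline sketch: convert raw counters sampled over an interval into the two
# LAMP workload metrics (hits/second for Apache, transactions/second for MySQL).

def rate(count_start, count_end, seconds):
    """Average rate of a monotonically increasing counter over an interval."""
    return (count_end - count_start) / seconds

# Hypothetical counters sampled 60 seconds apart:
apache_hits = rate(120_000, 126_000, 60)   # hits/sec
mysql_txns = rate(45_000, 46_800, 60)      # transactions/sec

print(f"Baseline: {apache_hits:.1f} hits/sec, {mysql_txns:.1f} tx/sec")
```

Repeating this sampling over days or weeks establishes the baseline against which future growth is measured.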
System Metrics: System metrics are quantitative measures that assess the performance
and resource utilization of a system. Common system metrics include CPU utilization,
memory usage, disk I/O, network bandwidth, and response time.
1. CPU
2. Memory (RAM)
3. Disk
4. Network connectivity
Load Testing: Load testing involves simulating user or application traffic to evaluate how a
system performs under different levels of load. It helps determine how well a system can
handle increased workloads.
HP LoadRunner (https://h10078.www1.hp.com/cda/hpms/display/main/
hpms_content.jsp?zn=bto&cp=1-11-126-17^8_4000_100__)
IBM Rational Performance Tester (http://www-01.ibm.com/software/
awdtools/tester/performance/)
JMeter (http://jakarta.apache.org/jmeter)
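The tools above drive real HTTP traffic against a system. The core idea can be sketched with Python's standard library; the network call is replaced here by a stub (in practice it would be something like urllib.request.urlopen) so the example stays self-contained:

```python
# Minimal load-test sketch: fire N concurrent "requests" and report latency.
# hit() is a stand-in for a real HTTP request against the system under test.
import time
from concurrent.futures import ThreadPoolExecutor

def hit(_):
    start = time.perf_counter()
    time.sleep(0.01)          # stub for real request/response time
    return time.perf_counter() - start

def load_test(requests=50, concurrency=10):
    """Run `requests` calls at the given concurrency; return latency stats."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(hit, range(requests)))
    return {
        "requests": requests,
        "avg_ms": 1000 * sum(latencies) / len(latencies),
        "p95_ms": 1000 * latencies[int(0.95 * len(latencies)) - 1],
    }

if __name__ == "__main__":
    print(load_test())
```

Real load-testing tools add ramp-up schedules, scripted user journeys, and pass/fail thresholds on top of this basic pattern.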
Resource Ceilings: Resource ceilings are predefined limits set for various system resources
(e.g., CPU, memory, disk space) to prevent resource exhaustion and maintain system
stability.
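Enforcing a ceiling reduces to comparing current utilization against the predefined limits. A minimal sketch, with invented ceiling values:

```python
# Resource-ceiling sketch: flag any resource whose utilization exceeds its
# predefined ceiling. The ceiling values here are invented for illustration.

CEILINGS = {"cpu_pct": 80, "memory_pct": 75, "disk_pct": 90}

def over_ceiling(usage):
    """Return the resources whose current usage breaches their ceiling."""
    return [name for name, limit in CEILINGS.items()
            if usage.get(name, 0) > limit]

print(over_ceiling({"cpu_pct": 91, "memory_pct": 60, "disk_pct": 95}))
```

A monitoring system would run such a check on each sampling interval and trigger scaling or alerting when the returned list is non-empty.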
Server and Instance Types: Server and instance types refer to the specifications of the
hardware or virtual machines (VMs) used to host applications and services. These
specifications include CPU, memory, storage, and network capacity.
Micro Instance: 613 MB memory, up to 2 EC2 Compute Units (1 virtual core, using 2 ECUs for short
periodic bursts), with either a 32-bit or 64-bit platform, and API name: t1.micro
Small Instance (Default): 1.7GB memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2
Compute Unit), 160GB instance storage (150GB plus 10GB root partition), 32-bit platform, I/O
Performance: Moderate, and API name: m1.small
High-Memory Quadruple Extra Large Instance: 68.4GB of memory, 26 EC2 Compute Units (8
virtual cores with 3.25 EC2 Compute Units each), 1,690GB of instance storage, 64-bit platform,
I/O Performance: High, and API name: m2.4xlarge
High-CPU Extra Large Instance: 7GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5
EC2 Compute Units each), 1,690GB of instance storage, 64-bit platform, I/O Performance: High,
API name: c1.xlarge
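Choosing among such instance types is a matter of matching workload requirements to the smallest (and therefore cheapest) type that satisfies them. A sketch using the historical specifications listed above:

```python
# Instance-selection sketch using the (historical) EC2 types listed above,
# ordered smallest to largest: (API name, memory in GB, total EC2 Compute Units).
INSTANCE_TYPES = [
    ("m1.small", 1.7, 1),
    ("c1.xlarge", 7.0, 20),
    ("m2.4xlarge", 68.4, 26),
]

def smallest_fit(mem_gb_needed, ecu_needed):
    """Return the first (smallest) instance type meeting both requirements."""
    for name, mem, ecu in INSTANCE_TYPES:
        if mem >= mem_gb_needed and ecu >= ecu_needed:
            return name
    return None  # no listed type is large enough

print(smallest_fit(4, 8))    # a CPU-heavy workload
print(smallest_fit(32, 8))   # a memory-heavy workload
```

The same selection logic applies to any provider's catalog; only the table of types changes.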
Network Capacity and Scaling: Network capacity refers to the ability of a network
infrastructure to handle data traffic, including bandwidth, latency, and packet processing
capacity. Monitoring network metrics is essential for capacity planning.
If any cloud-computing system resource is difficult to plan for, it is network capacity. There
are three aspects to assessing network capacity:
1. Network traffic to and from the network interface at the server, be it a physical or
virtual interface or server
2. Network traffic from the cloud to the network interface
3. Network traffic from the cloud through your ISP to your local network interface
(your computer)
A cloud's network performance is a measurement of WAN traffic. A WAN's capacity is a function of
many factors, including overall system traffic (competing services).
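A useful first-order capacity check is simple arithmetic: how long does moving a given data volume take at a given effective bandwidth? The numbers below are illustrative, and the calculation ignores latency, protocol overhead, and competing traffic:

```python
# Back-of-the-envelope network capacity check: time to move a data volume
# at a given effective bandwidth.

def transfer_hours(data_gb, bandwidth_mbps):
    """Hours to transfer data_gb (decimal GB) at bandwidth_mbps (megabits/s)."""
    bits = data_gb * 8 * 1000**3              # decimal GB -> bits
    seconds = bits / (bandwidth_mbps * 1000**2)
    return seconds / 3600

# Example: moving 500 GB over a 100 Mbps link takes about 11 hours.
print(f"{transfer_hours(500, 100):.1f} hours")
```

Estimates like this are why bulk-transfer services (shipping physical storage devices rather than using the network) exist for very large data sets.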
Effective capacity planning requires continuous monitoring of system metrics, load testing
under various conditions, and adjusting resources and infrastructure as needed to ensure
optimal performance and scalability. It's an ongoing process that helps organizations avoid
performance issues, downtime, and resource bottlenecks as their systems grow and evolve.
UNIT IV – EXPLORING PLATFORM AS A SERVICE (PaaS)
PaaS Application Frameworks: Drupal, Eccentex AppBase 3.0, LongJump, Squarespace, WaveMaker
and Wolf Frameworks.
Platform as a Service (PaaS) application frameworks are cloud-based platforms that provide developers
with the tools, libraries, and infrastructure needed to build, deploy, and manage applications.
Application frameworks provide a means for creating SaaS hosted applications using a unified
development environment or an integrated development environment (IDE).
These frameworks provide much of the underlying infrastructure and allow developers to focus on
writing code and building applications rather than worrying about server management or hardware
provisioning.
Example: Web sites are now based on the notion of information management, and such sites are
referred to as content management systems (CMS). A Web site built as a CMS adds a number of
special features to the concept, including rich user interaction, multiple data sources, and
extensive customization and extensibility.
Drupal
Drupal (http://drupal.org/) is a content management system (CMS) that is used as the backend to a
large number of Web sites worldwide.
The software is an open-source project created in the PHP programming language. Drupal is
really a programming environment for managing content, and it has elements of blogging and
collaboration software as part of its distribution. Drupal appears in this section because it is a
highly extensible way to create Web sites with rich features. Drupal has a large developer
community that has created nearly 6,000 third-party add-ons called contrib modules.
Technology: -
Drupal applications run on any Web server that can run PHP 4.4.0 or later. The most
common deployments are on Apache, although Drupal also runs on Microsoft IIS and other
Unix Web servers.
Drupal must be used with a database. Because Apache XAMPP/LAMP installations are a standard
Web deployment platform, the database most often used is MySQL; other SQL databases work
equally well.
The Drupal core by itself contains a number of modules that provide for the following:
Auto-updates
Blogs, forums, polls, and RSS feeds
Multiple site management
OpenID authentication
Performance optimization through caching and throttling
Search
User interface creation tools
User-level access controls and profiles
Themes
Traffic management
Workflow control with events and triggers
Analytics and Reporting
The Drupal CMS was chosen as an example of this type of PaaS because it is so extensively used,
has broad industry impact, and is a full-strength developer tool.
Eccentex released AppBase 3.0, a platform designed for building and deploying
business process management (BPM) and case management applications. AppBase is known for its
low-code/no-code capabilities, which allow organizations to create and customize applications
without extensive coding.
Here are some key features and capabilities of Eccentex's AppBase 3.0:
3. Integration Capabilities: It supports integration with various data sources, systems, and third-
party services. This enables organizations to connect AppBase with their existing IT
infrastructure and other software solutions.
4. User Interface Customization: AppBase provides tools for customizing the user interface,
allowing organizations to create applications with tailored interfaces that match their branding
and user experience requirements.
5. Analytics and Reporting: The platform includes features for tracking and analyzing the
performance of processes and cases. Users can generate reports and dashboards to gain insights
into their operations.
6. Security and Compliance: Security is a critical aspect of AppBase. It offers role-based access
control and helps organizations maintain compliance with industry regulations and data
protection standards.
7. Cloud and On-Premises Deployment: Organizations can choose to deploy AppBase in the cloud
or on-premises, providing flexibility in how they host and manage their applications.
8. Configurable Workflows: Users can define and configure workflows to match their specific
business processes, making it possible to adapt to changing requirements and new challenges.
AppBase includes a set of different tools for building these applications, including the following:
Business Objects Build: This object database has the ability to create rich data objects and
create relationships between them.
Presentation Builder: This user interface (UI) builder allows you to drag and drop visual controls
for creating Web forms and data entry screens and to include the logic necessary to automate
what the user sees.
Business Process Designer: This tool is used to create business logic for your application. With it,
you can manage workflow, integrate modules, create rules, and validate data.
Dashboard Designer: This instrumentation tool displays the real-time parameters of your
application in a visual form.
Report Builder: This output design tool lets you sort, aggregate, display, and format report
information based on the data in your application.
Security Roles Management: This allows you to assign access rights to different objects in the
system, to data sets, fields, desktop tabs, and reports. Security roles can be assigned in groups
without users, and users can be added later as the application is deployed.
Applications that you create are deployed with the AppBase Application Revision Management console.
The applications you create in AppBase, according to the company, may be integrated with Amazon S3
Web Services (storage), Google AppEngine (PaaS), Microsoft Windows Azure (PaaS), Facebook, and
Twitter.
LongJump
LongJump was a cloud-based Platform as a Service (PaaS) and application platform that allowed
organizations to build and deploy web-based applications without the need for extensive coding. It was
known for its rapid application development and customization capabilities.
Squarespace is a popular Software as a Service (SaaS) platform that allows individuals and businesses to
create and maintain websites. SaaS refers to a software distribution model where applications are
hosted by a third-party provider and made available to customers over the internet.
The applications are built using visual tools and deployed on hosted infrastructure.
Squarespace presents as:
A blogging tool
A social media integration tool
A photo gallery
A form builder and data collector
An item list manager
A traffic and site management and analysis tool
The platform has more than 20 core modules that you can add to your Web site. Squarespace
sites can be managed on the company's iPhone app.
Note: With Squarespace, users have created some visually beautiful sites, including personal
Web sites, portfolios, and business brand-identity sites. Squarespace positions itself as a
competitor to other sites of this type because it is a full content management system with a
variety of useful features.
WaveMaker
WaveMaker is a WYSIWYG (What You See is What You Get) drag-and-drop environment that runs inside
a browser.
WaveMaker is a framework that creates applications that can interoperate with other Java frameworks
and LDAP systems.
The visual builder tool is called Visual Ajax Studio, and the development server is called the WaveMaker
Rapid Deployment Server for Java applications. When you develop within the Visual Ajax Studio,
a feature called LiveLayout allows you to create applications while viewing live data.
The data schema is prepared within a part of the tool called LiveForms.
Mashups can be created using the Mashup Tool, which integrates applications using Java
Services, SOAP, REST, and RSS to access databases.
Applications developed in WaveMaker run on standard Java servers such as Tomcat and use
standard Java technologies including the Dojo Toolkit, Spring, and Hibernate.
NOTE:- A new version of WaveMaker also runs on Amazon EC2, and the development environment can
be loaded on an EC2 instance as one of its machine images.
Many application frameworks like Google AppEngine and the Windows Azure Platform are tied to the
platform on which they run. So we can’t build an AppEngine application and port it to Windows Azure
without completely rewriting the application.
There isn’t any particular necessity to build an application framework in this way, but it suits the
purpose of these particular vendors:
Google to have a universe of Google applications that build on the Google infrastructure,
Microsoft to provide another platform on which to extend .NET Framework applications
If you are building an application on top of an IaaS vendor such as AWS, GoGrid, or Rackspace,
what you as a developer really want are application development frameworks that are open,
standards-based, and portable.
Wolf Frameworks is an example of a PaaS vendor offering a platform on which you can build a SaaS
solution that is open and cross-platform.
Wolf Frameworks is based on the three core Windows SOA standard technologies of cloud computing.
Wolf Essentials:
Wolf Frameworks uses a C# engine and supports both the Microsoft SQL Server and MySQL
databases.
Applications that you build in Wolf are browser-based applications
Wolf applications can be built without the need to write technical code.
Wolf allows application data to be written to the client’s database server and data can be
imported or exported from a variety of data formats.
In Wolf, the design of a software application is captured in XML. Wolf supports forms,
search, business logic and rules, charts, reports, dashboards, and both custom and external Web
pages.
Security:-
Connections to the datacenter are over 128-bit encrypted SSL, with authentication, access
control, and a transaction history and audit trail.
Security to multiple modules can be made available through a Single Sign-On (SSO) mechanism.
The Wolf platform architecture enables Wolf developers to create classic multitenant SOA
applications without the need for high-level developer skills. These applications are interoperable,
portable from one Windows virtual machine to another, and support embedded business applications.
You can store your Wolf applications on a private server or in the Wolf cloud.
Exploring Google applications involves understanding and using the suite of services, tools, and
resources that Google provides to facilitate application development, deployment, and management.
1. Google Search: The most well-known Google application, the search engine helps users find
information on the internet.
2. Gmail: Google's email service, offering features like threaded conversations, powerful search
capabilities, and integration with other Google services.
3. Google Drive: A cloud storage service that allows users to store and share files. It includes
Google Docs, Sheets, and Slides for document creation and collaboration.
4. Google Docs: An online word processing application that enables collaborative editing and
sharing of documents.
5. Google Sheets: A cloud-based spreadsheet application for creating, editing, and sharing
spreadsheets.
6. Google Slides: An online presentation tool for creating and sharing slideshows.
7. Google Calendar: A web-based calendar application that allows users to schedule events, set
reminders, and share calendars with others.
8. Google Photos: A cloud-based service for storing, organizing, and sharing photos and videos.
9. Google Maps: A mapping service that provides directions, local business information, and
street-level views.
10. Google Chrome: A popular web browser developed by Google known for its speed, simplicity,
and synchronization features.
11. Google Meet: A video conferencing service that allows users to host virtual meetings, webinars,
and collaborative sessions.
12. Google Classroom: An online platform designed for educational purposes, enabling teachers to
create and manage classes, assignments, and communication with students.
13. Google Analytics: A web analytics service that tracks and reports website traffic, providing
insights into user behavior and website performance.
14. Google Translate: A language translation service that supports text, speech, and image
translation across multiple languages.
Google has a diverse and extensive application portfolio that spans various categories, including
productivity, communication, collaboration, and entertainment.
These services run worldwide on Google's one-million-plus servers in nearly 30 datacenters. Roughly
17 of the 48 services listed leverage Google's search engine in some specific way. Some of these
search-related sites search through selected content such as Books, Images, Scholar, Trends, and
more. Other sites, such as Blog Search, Finance, and News, take the search results and format them
into an aggregation.
INDEXED SEARCH
Google Search consists of two key components: indexing and ranking. Understanding how these
processes work can provide insights into how Google retrieves and presents search results.
1. Crawling: Google uses automated programs called crawlers or spiders to browse the web and
discover new and updated pages.
2. Indexing: Once a page is discovered, Google's crawler analyzes the content, including text,
images, and other elements.The information is then added to Google's index, a massive
database containing information about all the pages the crawler has visited.
3. Ranking Signals: Google's algorithms analyze various factors to determine the relevance and
quality of a page. These factors are known as ranking signals.
Examples of ranking signals include keywords, content quality, page speed, mobile-
friendliness, backlinks, and user experience.
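At toy scale, the crawl-index-rank pipeline above can be sketched as an inverted index with a simple term-frequency score. This is a drastic simplification; real ranking combines the many signals just listed, and the pages here are invented:

```python
# Toy indexed search: build an inverted index over "crawled" pages and rank
# matches by how often the query terms appear in each page.
from collections import defaultdict

PAGES = {
    "p1": "cloud computing scales cloud resources on demand",
    "p2": "grid computing predates utility computing",
    "p3": "cooking recipes and kitchen tips",
}

# Indexing: map each term to the set of pages containing it.
index = defaultdict(set)
for pid, text in PAGES.items():
    for term in text.split():
        index[term].add(pid)

def search(query):
    """Return ids of pages matching any query term, best score first."""
    terms = query.split()
    candidates = set().union(*(index.get(t, set()) for t in terms))
    score = lambda pid: sum(PAGES[pid].split().count(t) for t in terms)
    return sorted(candidates, key=score, reverse=True)

print(search("cloud computing"))  # p1 mentions the terms more often than p2
```

Google's index works on the same principle but at the scale of billions of pages, with hundreds of ranking signals replacing the term-frequency score.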
Online content that isn't indexed by search engines belongs to what has come to be called the "Deep
Web."
Aggregation involves the collection and organization of information from various sources into a
centralized platform or service. Google, as a search engine and a suite of services, is an excellent
example of an aggregation platform.
1. Google Search:
Google aggregates information from billions of web pages and presents relevant results
based on user queries.
2. Google News:
Aggregates news articles from various sources, categorizes them, and presents them in
one place.
3. Google Maps:
Aggregates geographical data, business information, and user-generated content to
provide a comprehensive mapping service.
4. Google Shopping:
Aggregates product information and prices from various online retailers to help users
compare and shop.
Google Translate can be accessed directly at http://translate.google.com/ , where you can select the
language pair to be translated. You can do the following:
Enter text directly into the text box, and click the Translate button to have the text translated. If
you select the Detect Language option, Translate tries to determine the language automatically
and translate it into English.
Enter a URL for a Web page to have Google display a copy of the translated Web page.
Enter a phonetic equivalent for script languages.
Upload a document to the page to have it translated.
Features of G-Translate
1. Language Translation:
Google Translate supports the translation of text between numerous languages. It
covers a wide range of languages, including major world languages and many regional or
less common languages.
2. Text Translation:
Users can enter text in one language, and Google Translate will provide the
corresponding translation in the selected target language. It supports both written and
typed input.
3. Document Translation:
Google Translate allows users to upload documents for translation. It supports various
file formats, including Word documents, PDFs, and more.
4. Website Translation:
Users can enter the URL of a website, and Google Translate will attempt to translate the
entire webpage into the selected language. This feature is useful for getting a general
understanding of the content on a foreign-language website.
5. Speech Translation:
Google Translate can translate spoken words and phrases. Users can speak into their
device's microphone, and the service will provide the translated text and, in some cases,
an option to listen to the translation.
6. Handwriting Input:
For some languages, Google Translate allows users to draw characters on a touchscreen
device, and it will attempt to recognize and translate the handwritten text.
GOOGLE ANALYTICS
Google Analytics is a web analytics service provided by Google that allows website owners, marketers,
and analysts to track and report on website traffic. It provides valuable insights into how users interact
with a website, enabling businesses to make informed decisions to improve their online presence and
marketing efforts.
1. Website Traffic Analysis:
Google Analytics tracks the number of visitors, page views, and sessions on a website. It
provides an overview of overall traffic and user engagement.
2. Audience Insights:
The platform provides detailed information about the website's audience, including
demographics, interests, geographic location, and the devices used to access the site.
3. Acquisition Reports:
This section shows how users found the website. It breaks down traffic sources into
channels such as organic search, direct, referral, and paid search. It also tracks campaign
performance.
4. Behavior Reports:
Google Analytics tracks user behavior on the site, showing popular pages, average time
spent on the site, and the sequence of pages visited. It helps in understanding how users
navigate through the website.
5. Conversion Tracking:
Businesses can set up conversion goals to track specific actions users take on the site,
such as making a purchase, filling out a form, or signing up for a newsletter. E-
commerce tracking is also available for online stores.
6. Event Tracking:
With event tracking, website owners can monitor specific interactions on a site that may
not be automatically tracked as a pageview, such as clicks on buttons, video views, or
downloads.
7. Custom Reports and Dashboards:
Users can create custom reports and dashboards to focus on specific metrics and
visualizations that are relevant to their business goals. This allows for a more
personalized and efficient analysis.
8. Real-Time Reporting:
Google Analytics offers real-time reporting, allowing users to see current site activity,
including active users, traffic sources, and popular content.
9. User Flow Analysis:
The User Flow report visualizes how users move through a website, helping to identify
common paths and potential bottlenecks in the user journey.
10. Mobile Analytics: Google Analytics provides insights into how users interact with a website on
different devices, including mobile phones and tablets.
GOOGLE ADWORDS
Google AdWords (http://www.google.com/AdWords) is a targeted ad service based on
matching advertisers and their keywords to users and their search profiles.
This service transformed Google from a competent search engine into an industry giant and is
responsible for the majority of Google’s revenue stream.
In general AdWords’ two largest competitors are Microsoft adcenter (http://
adcenter.microsoft.com/) and Yahoo! Search Marketing (http://searchmarketing. yahoo.com/).
Google Ads is an online advertising platform where advertisers pay to display brief advertisements,
service offerings, product listings, video content, and generate mobile application installs within the
Google ad network to web users.
Here are some key features of Google Ads:
1. Keyword Targeting:
Advertisers can choose specific keywords related to their products or services. Ads are
then shown to users who search for those keywords on Google.
2. Ad Formats:
Google Ads supports various ad formats, including text ads, display ads, video ads, and
app promotion ads. The format you choose depends on your advertising goals.
3. Campaign Types:
Google Ads offers different campaign types, such as Search Campaigns, Display
Campaigns, Video Campaigns, Shopping Campaigns, and App Campaigns. Each type
targets specific advertising goals and platforms.
4. Bidding Options:
Advertisers can set bids for their ads, indicating the maximum amount they are willing
to pay for a click (Cost Per Click or CPC) or for a thousand impressions (Cost Per Mille or
CPM).
5. Ad Extensions:
These are additional pieces of information that can be added to your ads to provide
more context and encourage users to engage. Examples include site link extensions,
callout extensions, and location extensions.
6. Quality Score:
Google uses a Quality Score to determine the relevance and quality of your ads. The
higher your Quality Score, the better your ad position and the lower your cost per click.
7. Targeting Options:
Advertisers can target their ads based on factors such as location, language, device,
demographics, and user behavior. This allows for precise targeting of the desired
audience.
8. Ad Auction:
Google Ads operates on an auction system. When a user searches for a keyword, Google
runs an ad auction to determine which ads will be shown and in what order. The auction
considers factors like bid amount, Quality Score, and ad relevance.
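The auction just described is commonly summarized as Ad Rank = bid × Quality Score. The sketch below uses that simplification with invented advertisers; it is not Google's actual algorithm, which weighs additional factors such as ad relevance and extensions:

```python
# Simplified ad auction: rank ads by Ad Rank = max CPC bid x Quality Score.
# Advertisers and numbers are invented for illustration.

ads = [
    {"advertiser": "A", "bid": 2.00, "quality": 4},
    {"advertiser": "B", "bid": 1.50, "quality": 8},
    {"advertiser": "C", "bid": 3.00, "quality": 2},
]

def run_auction(ads):
    """Return advertisers in the order their ads would be shown."""
    ranked = sorted(ads, key=lambda a: a["bid"] * a["quality"], reverse=True)
    return [a["advertiser"] for a in ranked]

print(run_auction(ads))
```

Note how advertiser B wins the top position despite the lowest bid: a high Quality Score compensates for a lower bid, which is exactly the incentive the Quality Score mechanism creates.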
9. Conversion Tracking:
Advertisers can set up conversion tracking to measure the success of their ad
campaigns. This involves tracking actions such as form submissions, purchases, or phone
calls generated by the ads.
10. Budget Control:
Advertisers can set daily or campaign-level budgets to control how much they spend on
their advertising.
Google has an extensive program that supports developers who want to leverage Google's cloud-based
applications and services.
Google has a number of areas in which it offers development services, including the following:
AJAX APIs (http://code.google.com/intl/en/apis/ajax/) are used to build widgets and other applets
commonly found in places like iGoogle. AJAX provides access to dynamic information using JavaScript
and HTML.
Note: There are many Google APIs available; a few are listed below.
Google App Engine (GAE) is a platform-as-a-service (PaaS) cloud computing platform for developing and
hosting web applications in Google's data centers. It allows developers to build and deploy applications
without dealing with the underlying infrastructure.
Key Concepts:
App Engine offers two environments: Standard and Flexible. The Standard environment
is a fully managed platform with automatic scaling, while the Flexible environment
provides more flexibility but requires you to manage the underlying infrastructure.
App Engine supports multiple programming languages, including Python, Java, Node.js,
Go, and others. Each runtime has its own set of libraries and features.
To encourage developers to write applications using GAE, Google allows for free application
development and deployment up to a certain level of resource consumption. Resource limits are
described on Google’s quota page at http://code.google.com/appengine/docs/ quotas.html, and the
quota changes from time to time.
Beyond the free quotas, Google uses a metered pricing scheme. Getting an application running on GAE typically involves the following steps:
1. Project Setup:
Download and install the Google Cloud SDK on your local machine. This SDK includes the
gcloud command-line tool for interacting with Google Cloud services.
3. App Configuration:
Create an app.yaml configuration file in the root of your project. This file defines
settings for your App Engine application, including runtime, version, and scaling settings.
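A minimal app.yaml might look like the following. The runtime and handler values are illustrative examples; check the current App Engine documentation for supported runtimes:

```yaml
# Illustrative app.yaml for a Python application on the Standard environment.
runtime: python39        # language runtime (example value)

handlers:
  - url: /static
    static_dir: static   # serve files in ./static directly
  - url: /.*
    script: auto         # route everything else to the application
```

Scaling settings (automatic, basic, or manual) can also be declared in this file, as noted in the scaling step below.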
4. Develop Locally:
Use the local development server provided by the SDK to test your application locally
before deploying it. This helps identify and fix issues before they reach the production
environment.
5. Deployment:
Deploy your application to App Engine using the gcloud app deploy command. This
uploads your application code, configuration files, and dependencies to the App Engine
environment.
6. Scaling:
App Engine automatically scales your application based on demand. You can configure
automatic scaling settings in your app.yaml file or use manual scaling if needed.
7. Monitoring and Logging:
Use Google Cloud Monitoring and Logging to monitor the performance of your
application. You can set up alerts, view metrics, and access logs for debugging.
8. Services and Versions:
App Engine allows you to deploy multiple services within the same project. Each service
can have multiple versions, making it easy to deploy updates without affecting the
entire application.
Microsoft Cloud Services, commonly referred to as Microsoft Azure, is a comprehensive suite of cloud
computing services offered by Microsoft. Azure provides a variety of tools and services for building,
deploying, and managing applications and services through Microsoft's global network of data centers.
Azure is a virtualized infrastructure to which a set of additional enterprise services has been layered on
top, including:
A virtualization service called Azure AppFabric that creates an application hosting environment.
AppFabric (formerly .NET Services) is a cloud-enabled version of the .NET Framework.
A high capacity non-relational storage facility called Storage.
A set of virtual machine instances called Compute(VM).
A cloud-enabled version of SQL Server called SQL Azure Database.
A database marketplace based on SQL Azure Database
An xRM (Anything Relationship Management) service called Dynamics CRM based on Microsoft
Dynamics.
A document and collaboration service based on SharePoint called SharePoint Services.
Windows Live Services, a collection of services that runs on Windows Live, which can be used in
applications that run in the Azure cloud.
Microsoft Azure-
Windows Azure is a virtualized Windows infrastructure run by Microsoft on a set of datacenters around
the world.
1. Compute Services:
Virtual Machines (VMs): Deploy and manage virtual machines in the cloud, supporting
both Windows and Linux.
Azure App Service: A platform for building, deploying, and scaling web apps.
2. Storage Services:
Azure File Storage: Managed file shares for cloud or on-premises deployments.
3. Database Services:
Cosmos DB: Globally distributed, multi-model database service for NoSQL data.
4. Networking:
Azure Load Balancer: Balances incoming network traffic across multiple servers.
Azure VPN Gateway: Establish secure connections between on-premises and Azure
resources.
Azure Active Directory (AD): Identity and access management service for applications
and services.
Windows AppFabric was a set of integrated technologies in Windows Server and Azure designed to
make it easier to build, scale, and manage applications.
These steps are associated with Access Control:
2. Access Control creates a token based on the stored rules for server application.
5. The server application verifies the signature and uses the token to decide what the client application
is allowed to do.
Windows Azure AppFabric Access Control Service (ACS) was a component of the Windows Azure
platform that provided a way to integrate identity and access control into web applications and services.
It was designed to simplify the process of managing authentication and authorization, especially in
scenarios where applications needed to interact with users from various identity providers.
The Windows Azure Content Delivery Network (CDN) is a worldwide content caching and delivery
system for Windows Azure blob content.
Any storage account can be enabled for CDN. In order to share information stored in an Azure blob, you
need to place the blob in a public blob container that is accessible to anyone using an anonymous sign-
in.
SQL Azure
Azure SQL Database is a fully managed relational database service provided by Microsoft Azure. It is
based on the Microsoft SQL Server database engine and is designed to simplify database management
tasks, reduce administrative overhead, and enhance scalability and availability.
key aspects of Azure SQL Database:
2. Deployment Options
3. Scalability
4. Security
5. High Availability
6. Built-in Intelligence
Messenger Connect was released as part of Windows Live Wave 4 at the end of June 2010, and it
unites APIs such as Windows Live ID, Windows Live Contacts, and the Windows Live Messenger Web
Toolkit into a single API. Messenger Connect works with ASP.NET, Windows Presentation Foundation
(WPF), Java, Adobe Flash, PHP, and Microsoft’s Silverlight graphics rendering technology through four
different methods:
Live Essentials
Windows Live includes several popular cloud-based services. The two best known and most widely used
are Windows Live Hotmail and Windows Live Messenger, with more than 300 million users worldwide.
Windows Live is based around five core services:
E-mail
Instant Messaging
Photos
Social Networking
Online Storage
You can access Windows Live services in one of the following ways:
By navigating to the service from the navigation bar at the top of Windows Live
By directly entering the URL of the service
By selecting the application from the Windows Live Essentials folder on the Start menu
Live Home
Amazon Web Services comprises the following components, listed roughly in order of
importance:
1.Amazon Elastic Compute Cloud (EC2; http://aws.amazon.com/ec2/) is the central application in the
AWS portfolio. It enables the creation, use, and management of virtual private servers running the Linux
or Windows operating system over a Xen hypervisor. Amazon Machine Instances are sized at various
levels and rented on a compute-hour basis. Because EC2 is spread over data centers worldwide,
applications may be created that are highly scalable, redundant, and fault tolerant. EC2 is described
more fully in the next section. A number of tools are used to support EC2 services:
2.Amazon Simple Storage System (S3; http://aws.amazon.com/s3/) is an online backup and storage
system, which is described in “Working with Amazon Simple Storage System (S3)” later in this chapter. A
high-speed data transfer feature called AWS Import/Export (http://aws.amazon.com/importexport/)
can transfer data to and from AWS using portable storage devices, loaded over Amazon’s own internal network.
3.Amazon Elastic Block Store (EBS; http://aws.amazon.com/ebs/) is a system for creating virtual disks
(volumes) or block-level storage devices that can be used by Amazon Machine Instances in EC2.
While the list above represents the most important of the AWS offerings, it is only a partial list—a list
that is continually growing and very dynamic. A number of services and utilities support Amazon
partners or the AWS infrastructure itself.
-----------------------------------------------------------------------------------------------------------------------------
T3: WORKING WITH THE ELASTIC COMPUTE CLOUD (EC2)
Amazon Elastic Compute Cloud (EC2) is a virtual server platform that allows users to create and run
virtual machines on Amazon’s server farm. With EC2, you can launch and run server instances created
from Amazon Machine Images (AMIs), running different operating systems such as Red Hat Linux and
Windows on servers that have different performance profiles. You can add or subtract virtual servers
elastically as needed; cluster, replicate, and load balance servers; and locate your different servers in
different data centers or “zones” throughout the world to provide fault tolerance. The term elastic
refers to the ability to resize your capacity quickly as needed.
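This elasticity can be sketched as a simple scaling rule: add an instance when load is high, release one when load is low. The thresholds and fleet limits below are illustrative assumptions, not EC2 or Auto Scaling defaults.

```python
def desired_instance_count(current, cpu_utilization, scale_out_at=70.0,
                           scale_in_at=30.0, minimum=1, maximum=20):
    """Toy autoscaling rule in the spirit of EC2's elasticity.
    All thresholds and limits here are made-up example values."""
    if cpu_utilization > scale_out_at and current < maximum:
        return current + 1  # scale out: add one more instance
    if cpu_utilization < scale_in_at and current > minimum:
        return current - 1  # scale in: release one instance
    return current          # load is in the comfortable band

# Demand spikes, the fleet grows; demand falls, it shrinks.
fleet = 2
fleet = desired_instance_count(fleet, 85.0)   # grows to 3
fleet = desired_instance_count(fleet, 20.0)   # shrinks back to 2
```

In a real deployment this decision is driven by monitoring metrics and launches or terminates actual server instances; the sketch only captures the sizing logic.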
Consider a situation where you want to create an Internet platform that provides the following:
Implementing that type of service might require a rack of components that included the following:
Amazon Machine Images: AMIs are operating system images that run on the Xen virtualization
hypervisor. Each virtual private server is accorded a size rating called its EC2 Compute Unit.
Standard Instances: The standard instances are deemed to be suitable for standard server
applications.
High Memory Instances: High memory instances are useful for large data throughput
applications such as SQL Server databases and data caching and retrieval.
High CPU Instances: The high CPU instance category is best used for applications that are
processor- or compute-intensive. Applications of this type include rendering, encoding, data
analysis, and others.
Pricing models:- The pricing of these different AMI types depends on the operating system used,
which data center the AMI is located in (you can select its location), and the amount of time that the
AMI runs. Rates are quoted on an hourly basis. Additional charges are applied for:
AMIs that have been saved and shut down incur a small one-time fee, but do not incur additional
hourly fees.
The three different pricing models for EC2 AMIs are as follows:
On-Demand Instance: This is the hourly rate with no long-term commitment.
Reserved Instances: This is a purchase of a contract for each instance you use with a
significantly lower hourly usage charge after you have paid for the reservation.
Spot Instance: This is a method for bidding on unused EC2 capacity based on the current spot
price. This feature offers a significantly lower price, but it varies over time or may not be
available when there is no excess capacity
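The trade-off between the first two pricing models can be made concrete with a small cost comparison. The rates and reservation fee below are made-up illustrative numbers, not real EC2 prices, which vary by region and instance size.

```python
def on_demand_cost(hours, hourly_rate):
    """On-Demand: pay the full hourly rate, with no long-term commitment."""
    return hours * hourly_rate

def reserved_cost(hours, upfront_fee, discounted_rate):
    """Reserved: a one-time reservation fee plus a lower hourly rate."""
    return upfront_fee + hours * discounted_rate

# Illustrative rates only -- real EC2 prices differ.
hours_per_year = 24 * 365  # 8760 hours of always-on usage
od = on_demand_cost(hours_per_year, 0.10)
rsv = reserved_cost(hours_per_year, 200.0, 0.04)
# For an always-on workload the reserved instance is cheaper here,
# even after paying the upfront reservation fee.
```

Spot pricing is deliberately left out of the sketch: its rate varies over time and the instance can be reclaimed, so it cannot be reduced to a fixed formula like the two above.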
NOTE:- The AWS Simple Monthly Calculator helps you estimate your monthly charges:
http://calculator.s3.amazonaws.com/calc5.html
System images and software: Choose and use a template AMI system image with the operating system
of your choice, or create your own system image that contains your custom applications, code libraries,
settings, and data. Security can be set through passwords, Kerberos tickets, or certificates.
These operating systems are offered:
Red Hat Enterprise Linux OS
OpenSuse Linux OS
Ubuntu Linux OS
Sun OpenSolaris OS
Fedora OS
Gentoo Linux OS
Oracle Enterprise Linux OS
Windows Server 2003/2008 32-bit and 64-bit up to Data Center Edition OS
Debian OS
Note:- When you create a virtual private server, you can use the Elastic IP Address feature to create
what amounts to a static IPv4 address for your server. This address can be mapped to any of your AMIs
and is associated with your AWS account.
When you create an Amazon Machine Instance, it is provisioned with a certain amount of storage. That
storage is ephemeral: it exists only for as long as your instance is running. All of the data contained in
that storage is lost when the instance is suspended or terminated, as the storage is reassigned to the
pool for other AWS users to use. For this and other reasons you need to have access to persistent
storage (such as an S3 bucket).
1.Amazon Simple Storage System (S3): Amazon S3’s cloud-based storage system allows you to store
data objects ranging in size from 1 byte up to 5GB in a flat namespace. In S3, storage containers are
referred to as buckets, and buckets serve the function of a directory, although there is no object
hierarchy to a bucket, and you save objects and not files to it. It is important that you do not associate
the concept of a file system with S3, because files are not supported; only objects are stored.
Additionally, you do not need to “mount” a bucket as you do a file system.
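The flat-namespace point above can be illustrated with a toy model: a bucket is just a key-to-object mapping, and what looks like a folder path is only a key prefix. This `Bucket` class is a hypothetical sketch, not the S3 API.

```python
class Bucket:
    """Toy model of an S3 bucket: a flat key -> bytes mapping.
    There is no real directory tree -- '/' in a key is just a character."""
    def __init__(self, name):
        self.name = name
        self.objects = {}  # key -> object data

    def put(self, key, data):
        self.objects[key] = data  # store an object, not a file

    def list_keys(self, prefix=""):
        """Listing 'by folder' is really just listing by key prefix."""
        return sorted(k for k in self.objects if k.startswith(prefix))

b = Bucket("backups")
b.put("photos/2010/beach.jpg", b"...")
b.put("photos/2011/city.jpg", b"...")
b.put("readme.txt", b"notes")
b.list_keys("photos/")  # both photo keys, but no 'photos' folder exists
```

Tools that show S3 buckets as folder trees are simulating the hierarchy on the client side from exactly this kind of prefix query.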
Amazon S3 can be used alone or together with other AWS services such as Amazon EC2, Amazon Elastic
Block Store (Amazon EBS), and Amazon Glacier, as well as third-party storage repositories and gateways.
Amazon S3 provides cost-effective object storage for a wide variety of use cases including web
applications, content distribution, backup and archiving, disaster recovery, and big data analytics.
Creating a backup process in Amazon S3 involves a few key steps to ensure your data is securely backed
up and can be easily restored when needed. Here's a general outline of the process:
2.Amazon Elastic Block Store (EBS): Amazon Elastic Block Store (Amazon EBS) is a block storage service
provided by Amazon Web Services (AWS) that allows you to create and attach persistent block storage
volumes to your Amazon EC2 (Elastic Compute Cloud) instances. EBS volumes are designed for high
availability and durability and provide scalable and reliable block-level storage for your EC2 instances.
Here are some key features and concepts associated with Amazon EBS:
1. Volume Types:
Amazon EBS offers different volume types optimized for various workloads:
General Purpose (SSD): Provides a balance of price and performance. Suitable
for a wide range of workloads.
Provisioned IOPS (SSD): Designed for I/O-intensive applications, allowing you to
provision a specific number of IOPS (input/output operations per second).
Cold HDD: Offers low-cost storage for infrequently accessed data.
Throughput Optimized HDD: Designed for big data and data warehousing
workloads that require high throughput.
I/O Optimized HDD: Designed for big data and data warehousing workloads that
require high IOPS.
You can choose the most appropriate volume type based on your application's
performance and cost requirements.
2. Volume Size and Attach/Detach:
EBS volumes can range in size from 1 GB to 16 TB, depending on the volume type.
You can attach and detach EBS volumes from EC2 instances, allowing you to move data
between instances or resize volumes as needed.
3. Snapshots:
EBS snapshots are point-in-time copies of your EBS volumes.
You can use snapshots to back up your data, create new volumes, and migrate data to
other AWS regions.
Snapshots are incremental, meaning that only changed data is stored, which helps in
reducing storage costs.
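The incremental behavior described above can be modeled in a few lines: each snapshot stores only the blocks that differ from the state at the previous snapshot, and a restore replays the chain. This is a conceptual sketch of the idea, not how EBS implements snapshots internally.

```python
def take_snapshot(volume_blocks, previous_state=None):
    """Store only the blocks that changed since previous_state
    (the volume's contents as of the last snapshot, if any)."""
    previous = previous_state or {}
    return {i: data for i, data in volume_blocks.items()
            if previous.get(i) != data}

def restore(snapshot_chain):
    """Rebuild the volume by replaying the chain in order:
    later snapshots overwrite earlier versions of a block."""
    volume = {}
    for snap in snapshot_chain:
        volume.update(snap)
    return volume

vol = {0: "aaa", 1: "bbb", 2: "ccc"}
snap1 = take_snapshot(vol)             # first snapshot: all 3 blocks
vol[1] = "BBB"                         # one block changes on the volume
snap2 = take_snapshot(vol, snap1)      # incremental: stores 1 block only
```

This is why a chain of frequent snapshots stays cheap: unchanged blocks are never stored twice.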
4. Encryption:
EBS volumes support encryption at rest using AWS Key Management Service (KMS) keys.
You can encrypt both the root volume of an EC2 instance and additional data volumes.
5. Availability and Durability:
EBS volumes are designed for high availability and durability. They are replicated within
an Availability Zone (AZ) to protect against component failures.
You can also create EBS snapshots and copy them to different regions for added data
resilience.
6. Performance Scaling:
For performance-intensive workloads, you can dynamically resize and scale EBS volumes
to meet the performance requirements of your applications.
Provisioned IOPS volumes allow you to provision a specific level of performance.
7. Multi-Attach (Beta):
Some EBS volume types support multi-attach, allowing you to attach a single volume to
multiple EC2 instances simultaneously.
This can be useful for shared storage scenarios.
8. Lifecycle Management:
EBS offers features like EBS Lifecycle Manager to automate the creation, retention, and
deletion of snapshots based on policies.
9. Use Cases:
Amazon EBS is commonly used for various use cases, including database storage, file
storage, boot volumes for EC2 instances, and application data storage.
10. Pricing:
EBS pricing is based on the volume type, size, and region. You pay for the provisioned
storage capacity and the volume type's performance characteristics.
NOTE:- Amazon EBS plays a critical role in providing scalable and persistent storage for AWS EC2
instances, making it an essential component for running various workloads in the AWS cloud.
3.Amazon CloudFront
Amazon CloudFront is a content delivery network (CDN) service provided by Amazon Web Services
(AWS). It is designed to distribute content, including web pages, media files, and application data, to
users worldwide with low-latency and high data transfer speeds. CloudFront uses a global network of
edge locations to cache and deliver content to users from the nearest location, reducing latency and
improving the overall user experience.
Key features and concepts associated with Amazon CloudFront include:
1. Content Delivery: CloudFront accelerates the delivery of your content by caching it at edge
locations around the world. When a user requests content, CloudFront serves it from the
nearest edge location, reducing the round-trip time and improving load times.
2. Edge Locations: CloudFront has a network of edge locations strategically located in multiple
regions worldwide. These edge locations are where your cached content is stored and served
from. AWS continuously adds new edge locations to expand its global reach.
3. Distribution: To use CloudFront, you create a distribution, which is a collection of settings and
configuration information related to how CloudFront should cache and serve your content.
There are two types of distributions:
Web Distribution: Used for websites and web applications.
RTMP (Real-Time Messaging Protocol) Distribution: Used for streaming media over
Adobe Flash Media Server.
4. Origin: An origin is the source of your content. It can be an Amazon S3 bucket, an EC2 instance,
a load balancer, or even a custom HTTP server. CloudFront retrieves content from the origin and
caches it at edge locations.
5. Cache Behavior: You can define cache behaviors to specify how CloudFront should handle
requests for different types of content. For example, you can configure different TTLs (Time to
Live) for various file types.
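The edge-caching and TTL behavior in items 1–5 can be sketched as a small cache in front of an origin. The `EdgeCache` class and its parameters are illustrative assumptions, not CloudFront's actual implementation.

```python
import time

class EdgeCache:
    """Toy model of a CloudFront edge location with a per-object TTL."""
    def __init__(self, origin, ttl_seconds):
        self.origin = origin          # callable: key -> content
        self.ttl = ttl_seconds
        self.store = {}               # key -> (content, fetched_at)
        self.origin_fetches = 0

    def get(self, key, now=None):
        now = time.time() if now is None else now
        hit = self.store.get(key)
        if hit and now - hit[1] < self.ttl:
            return hit[0]             # cache hit: served from the edge
        content = self.origin(key)    # miss or expired: go to the origin
        self.origin_fetches += 1
        self.store[key] = (content, now)
        return content

cache = EdgeCache(origin=lambda k: "body-of-" + k, ttl_seconds=60)
cache.get("index.html", now=0)    # fetched from the origin
cache.get("index.html", now=30)   # within TTL: served from the cache
cache.get("index.html", now=120)  # TTL expired: re-fetched from origin
```

A longer TTL means fewer origin fetches but staler content, which is exactly the trade-off cache behaviors let you tune per content type.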
6. HTTPS Support: CloudFront supports HTTPS to secure the transmission of data between your
users and the edge locations. You can use AWS Certificate Manager (ACM) to provision free
SSL/TLS certificates.
7. Logging and Monitoring: CloudFront provides access logs that can be sent to Amazon S3 or
Amazon CloudWatch for monitoring and analysis. You can track viewer activity and performance
metrics.
8. Customization: You can customize the behavior of CloudFront using features like
Lambda@Edge, which allows you to run serverless functions at the edge locations to modify
content or responses dynamically.
9. Security: You can use AWS Identity and Access Management (IAM) to control access to your
CloudFront distributions. You can also use AWS Web Application Firewall (WAF) to protect
against web application attacks.
10. Geo-Restriction: CloudFront allows you to restrict access to your content based on geographic
locations, helping you comply with content distribution regulations.
11. Cost Management: CloudFront pricing is based on data transfer and the number of requests.
You can use AWS Cost Explorer to monitor and manage your CloudFront costs.
Working Model of Amazon CloudFront
NOTE:- CloudFront is a highly scalable and globally distributed CDN service that can significantly improve
the performance, availability, and security of your web applications and content delivery. It is widely
used by websites, mobile apps, and streaming platforms to deliver content efficiently to users
worldwide.
1.Amazon SimpleDB
Amazon SimpleDB, also known as Amazon Simple Database, was a fully managed NoSQL database
service offered by Amazon Web Services (AWS). AWS has since retired Amazon SimpleDB and no longer
accepts new sign-ups for the service.
Here are some key characteristics and features of Amazon SimpleDB as it existed before its retirement:
1. Schema-less: Amazon SimpleDB was a schema-less database, meaning you could store data
without predefining a fixed schema. This made it flexible for handling various types of data.
2. Data Attributes: Instead of tables, SimpleDB used domains to store data. Each domain could
have multiple data attributes, which were key-value pairs.
3. Automatic Scaling: SimpleDB automatically scaled to handle increasing workloads by
distributing data across multiple servers.
4. High Availability: It provided high availability with data replication across multiple Availability
Zones within a region.
5. Query Language: SimpleDB supported a SQL-like query language, which allowed for querying
and filtering data based on attribute values.
6. Consistency Model: It offered eventual consistency for read operations, which means that data
might not immediately reflect updates but would eventually converge to a consistent state.
7. Limited Indexing: SimpleDB supported indexing of attributes, which allowed for efficient
querying of data.
8. Usage-Based Pricing: Billing was based on actual usage, including the amount of data stored,
the number of requests, and data transfer.
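The schema-less domain/attribute model in items 1, 2, and 5 can be illustrated with a dictionary-based sketch. The `Domain` class and method names are hypothetical; they mimic the shape of SimpleDB's model, not its actual API.

```python
class Domain:
    """Toy model of a SimpleDB domain: items hold free-form attribute
    key/value pairs with no fixed, predefined schema."""
    def __init__(self, name):
        self.name = name
        self.items = {}  # item name -> {attribute: value}

    def put_attributes(self, item, attrs):
        # Any item may carry any attributes -- schema-less by design.
        self.items.setdefault(item, {}).update(attrs)

    def select(self, **where):
        """Return item names whose attributes match all the given
        key/value conditions, like a SimpleDB query's filter."""
        return sorted(name for name, attrs in self.items.items()
                      if all(attrs.get(k) == v for k, v in where.items()))

d = Domain("products")
d.put_attributes("item1", {"color": "red", "size": "M"})
d.put_attributes("item2", {"color": "red"})            # no 'size' -- fine
d.put_attributes("item3", {"color": "blue", "size": "M"})
d.select(color="red")  # matches item1 and item2
```

Note what the sketch does not model: SimpleDB's eventual consistency, meaning a `select` immediately after a `put_attributes` on the real service might not yet see the update.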
2.Amazon RDS
Amazon RDS (Relational Database Service) simplifies database management tasks such as provisioning,
patching, backup, recovery, and scaling, allowing developers and database administrators to focus on
application development rather than infrastructure management.
Key features of Amazon RDS include:
1. Automated Backups: Amazon RDS automatically takes daily backups of your database and
allows you to retain backups for a specified period, making data recovery easier.
2. High Availability: Amazon RDS provides options for high availability, including Multi-AZ
deployments, which replicate your database across multiple availability zones for failover
protection.
3. Scalability: You can easily scale your database instance vertically by changing its instance type or
horizontally by adding read replicas to offload read traffic.
4. Security: Amazon RDS offers security features like network isolation, encryption at rest and in
transit, IAM database authentication, and automated software patching to enhance database
security.
5. Monitoring and Metrics: You can use Amazon CloudWatch to monitor database performance
and set up alarms to be notified of any issues.
6. Database Engine Compatibility: Amazon RDS provides options to select the database engine
that best fits your application's needs, and it manages the underlying infrastructure for you.
7. Ease of Maintenance: Routine database maintenance tasks such as software patching, hardware
scaling, and backups are automated, reducing the administrative overhead.
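The horizontal-scaling idea in item 3, offloading read traffic to read replicas, can be sketched as a simple query router. The class and endpoint names are illustrative assumptions; real applications route via the RDS connection endpoints, not a class like this.

```python
import random

class RdsEndpointRouter:
    """Toy sketch of read scaling with replicas: writes go to the
    primary, reads are spread across read replicas."""
    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = replicas

    def route(self, sql):
        if sql.lstrip().lower().startswith("select"):
            # Offload read traffic to a replica when one exists.
            if self.replicas:
                return random.choice(self.replicas)
        return self.primary  # INSERT/UPDATE/DELETE always hit the primary

r = RdsEndpointRouter("primary-db", ["replica-1", "replica-2"])
r.route("UPDATE t SET x = 1")  # always the primary
r.route("SELECT * FROM t")     # one of the replicas
```

Because replication is asynchronous, a read routed to a replica may briefly lag behind the primary; reads that must see the latest write should be sent to the primary instead.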
3.Choosing a database for AWS.
In choosing a database solution for your AWS solutions, consider the following factors in making your
selection:
Choose SimpleDB when index and query functions do not require relational database
support.
Use SimpleDB for the lowest administrative overhead.
Select SimpleDB if you want a solution that autoscales on demand.
Choose SimpleDB for a solution that has very high availability.
Use RDS when you have an existing MySQL database that could be ported and you want
to minimize the amount of infrastructure and administrative management required.
Use RDS when your database queries require relation between data objects.
Choose RDS when you want a database that scales based on an API call and has a pay-as-you-use-it
pricing model.
Select Amazon EC2/Relational Database AMI when you want access to an enterprise relational
database or have an existing investment in that particular application.
Use Amazon EC2/Relational Database AMI to retain complete administrative control over
your database server.