Cloud Computing

The document provides a comprehensive overview of cloud computing, covering its evolution, deployment and service models, and architecture. It details key components such as IaaS, PaaS, and SaaS, along with the roles of various actors like cloud consumers, providers, brokers, auditors, and carriers. Additionally, it addresses cloud security issues and the techniques that enable cloud computing, such as virtualization and service-oriented architecture.
Cloud Computing

PARALA MAHARAJA
ENGINEERING COLLEGE

AMAR RANJAN DASH, Asst. Prof., Dept. of CSE


Contents
Module-1
 Evolution of Processing Units
 Deployment Models
 Service Models
 Cloud Computing Conceptual Reference Model
 Languages Used for SOA
 Data Center
Module-2
 Amazon Web Services (AWS)
 Amazon EC2 (Elastic Compute Cloud)
 Amazon Simple Storage Service (Amazon S3)
  Amazon S3 features
  Use cases
  How Amazon S3 works
  Amazon S3 storage classes
  Working with buckets
  Protecting your data
  Competitor services
 Amazon Simple Queue Service (SQS)
 VMware vCloud Suite
  vCloud Suite
  VMware Cloud Director
  vCloud Connector
 Google App Engine
Module-3
 What is Azure?
 Microsoft Azure
 Azure as PaaS (Platform as a Service)
 Azure as IaaS (Infrastructure as a Service)
 Windows Azure Platform
 Salesforce
  Salesforce Architecture
  Salesforce Database
   Data Modeling Components of the Salesforce Database
 Microsoft Office Online
 Microsoft 365 as a SaaS
 Microsoft OneDrive
 Comparison of Cloud Computing Platforms
Module-4
 Cloud Security
 Security Issues in Cloud Computing
 7 Privacy Challenges in Cloud Computing
 Infrastructure Security
  Infrastructure Security: The Network Level
  Infrastructure Security: The Host Level
  Infrastructure Security: The Application Level
 Data Security
 Identity and Access Management
 Audit and Compliance
  Governance, Risk, and Compliance (GRC)
Module-1
Evolution of Processing Units
Wire => logic gate => flip-flop => processor

Uniprocessor
A processor is basically made up of a set of transistors. A uniprocessor's performance can be improved:
 by increasing the number or the quality of its transistors, or
 by using pipelining, which keeps each subsection of the processor properly utilized.

MAR, MDR => fetch; ALU, CU, registers => execute

C = a + b;

Cycle    1       2       3        4
I1       Fetch   Decode  Execute  Write-back
I2               Fetch   Decode   Execute
I3                       Fetch    Decode
I4                                Fetch
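The overlap in the table above can be sketched with a few lines of Python. The stage names follow the diagram; the scheduler simply shifts each instruction one stage per cycle (a minimal illustration, not a real pipeline model):

```python
# Minimal sketch of a 4-stage instruction pipeline: a new instruction
# enters the pipeline every cycle, so n instructions need n + 3 cycles
# instead of 4 * n when run strictly one after another.
STAGES = ["Fetch", "Decode", "Execute", "Write-back"]

def pipeline_schedule(n_instructions):
    """Return, for each cycle, the list of (instruction, stage) pairs active."""
    total_cycles = n_instructions + len(STAGES) - 1
    schedule = []
    for cycle in range(total_cycles):
        active = [(i, STAGES[cycle - i])
                  for i in range(n_instructions)
                  if 0 <= cycle - i < len(STAGES)]
        schedule.append(active)
    return schedule

# 4 instructions complete in 4 + 4 - 1 = 7 cycles instead of 16.
print(len(pipeline_schedule(4)))   # 7
```

In cycle 4 of the table all four stages are busy at once, which is exactly where the speed-up comes from.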

Multiprocessor

 Tries to increase the processing capacity of the processor by creating multiple processing units (cores) within a single processor.
 Parallel vs Distributed
 UMA (Uniform Memory Access) vs NUMA (Non-Uniform Memory Access)
 Server => GRID => CLOUD
There are certain services and models working behind the scenes that make cloud computing feasible and accessible to end users. The following are the working models for cloud computing:

Deployment Models
Deployment models define the type of access to the cloud, i.e., how the cloud is located. A cloud can offer any of four types of access: Public, Private, Hybrid, and Community.

PUBLIC CLOUD: The public cloud allows systems and services to be easily accessible to the general public. A public cloud may be less secure because of its openness, e.g., e-mail.
PRIVATE CLOUD: The private cloud allows systems and services to be accessible within an organization. It offers increased security because of its private nature.
COMMUNITY CLOUD: The community cloud allows systems and services to be accessible by a group of organizations.
HYBRID CLOUD: The hybrid cloud is a mixture of public and private clouds. The critical activities are performed using the private cloud, while the non-critical activities are performed using the public cloud.
Service Models
Though service-oriented architecture advocates "Everything as a service" (with the acronym EaaS), cloud-computing providers offer their "services" according to different models, of which the three standard models per NIST are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). These models offer increasing abstraction; they are thus often portrayed as layers in a stack: infrastructure-, platform-, and software-as-a-service. But the layers need not be related: one can provide SaaS implemented on physical machines, without using underlying PaaS or IaaS layers, and conversely one can run a program on IaaS and access it directly, without wrapping it as SaaS.

IaaS (Infrastructure as a Service)

IaaS is the delivery of technology infrastructure as an on-demand, scalable service. IaaS provides access to fundamental resources such as physical machines, virtual machines, virtual storage, etc.
• Usually billed based on usage
• Usually a multi-tenant virtualized environment
• Can be coupled with managed services for OS and application support

PaaS (Platform as a Service)

PaaS provides the runtime environment for applications, along with development and deployment tools. PaaS provides all of the facilities required to support the complete life cycle of building and delivering web applications and services entirely from the Internet. Typically, applications must be developed with a particular platform in mind.
• Multi-tenant environments
• Highly scalable multi-tier architecture
SaaS (Software as a Service)

The SaaS model allows end users to consume software applications as a service. SaaS is a software delivery methodology that provides licensed multi-tenant access to software and its functions remotely as a Web-based service.
• Usually billed based on usage
• Usually a multi-tenant environment
• Highly scalable architecture
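One common way to contrast the three models is by who manages each layer of the stack. The layer list and the split below are a textbook-style illustration (an assumption for this sketch, not part of the NIST definitions):

```python
# Illustrative management split: which layers the provider runs under each
# service model. Everything not provider-managed is left to the consumer.
LAYERS = ["networking", "storage", "servers", "virtualization",
          "OS", "middleware", "runtime", "data", "application"]

PROVIDER_MANAGED = {
    "IaaS": set(LAYERS[:4]),   # consumer keeps the OS and everything above
    "PaaS": set(LAYERS[:7]),   # consumer keeps only data and application
    "SaaS": set(LAYERS),       # provider runs the whole stack
}

def consumer_managed(model):
    return [layer for layer in LAYERS if layer not in PROVIDER_MANAGED[model]]

print(consumer_managed("PaaS"))   # ['data', 'application']
print(consumer_managed("SaaS"))   # []
```

The increasing abstraction of the stack shows up directly: the consumer-managed list shrinks from IaaS to SaaS.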

Cloud computing architecture is a combination of service-oriented architecture and event-driven architecture. It is divided into the following two parts:

1. Front End
The front end is used by the client. It contains the client-side interfaces and applications that are required to access cloud computing platforms. The front end includes user agents (web browsers such as Chrome, Firefox, Internet Explorer, etc.), thin and fat clients, tablets, and mobile devices.
2. Back End
The back end is used by the service provider. It manages all the resources that are required to
provide cloud computing services. It includes a huge amount of data storage, security
mechanism, virtual machines, deploying models, servers, traffic control mechanisms, etc.
Components of Cloud Computing Architecture
1. Client Infrastructure
Client Infrastructure is a Front end component. It provides GUI (Graphical User Interface) to interact
with the cloud.
2. Application
The application may be any software or platform that a client wants to access.
3. Service
The service component manages which type of service you access, according to the client's requirement.

Cloud computing offers the following three types of services:

i. Software as a Service (SaaS) – It is also known as cloud application services. Most SaaS applications run directly in a web browser, so we do not need to download and install them. Some important examples of SaaS are given below:

Example: Google Apps, Salesforce, Dropbox, Slack, HubSpot, Cisco WebEx.

ii. Platform as a Service (PaaS) – It is also known as cloud platform services. It is quite similar to SaaS, but the difference is that PaaS provides a platform for software creation, whereas with SaaS we access software over the internet without the need for any platform.
Example: Windows Azure, Force.com, Magento Commerce Cloud, OpenShift.

iii. Infrastructure as a Service (IaaS) – It is also known as cloud infrastructure services. It provides the basic computing infrastructure (virtual machines, storage, and networks); the consumer remains responsible for managing application data, middleware, and runtime environments.

Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE), Cisco Metapod.

4. Runtime Cloud
Runtime Cloud provides the execution and runtime environment to the virtual machines.

5. Storage
Storage is one of the most important components of cloud computing. It provides a huge amount of
storage capacity in the cloud to store and manage data.

6. Infrastructure
It provides services on the host level, application level, and network level. Cloud infrastructure includes
hardware and software components such as servers, storage, network devices, virtualization software,
and other storage resources that are needed to support the cloud computing model.

7. Management
Management is used to manage components such as application, service, runtime cloud, storage, infrastructure, and other security concerns in the back end, and to establish coordination between them.

8. Security
Security is an in-built back end component of cloud computing. It implements a security mechanism in
the back end.
9. Internet
The Internet is the medium through which the front end and back end interact and communicate with each other.

Architecture for Elasticity

Vertical Scale-Up
Keep adding resources to a single unit to increase its computation power; the job is processed by one computation unit with large resources.
Horizontal Scale-Out
Keep adding discrete resources for computation and make them behave as one converged unit; the job is split across multiple discrete machines, the outputs are combined, and the database is distributed.
For HPC the second option is better than the first, because the complexity and cost of the first option are very high.
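The scale-out idea — split the job across discrete units and combine the outputs — can be sketched as follows. Threads stand in for the discrete machines, and the summing job is just a placeholder workload:

```python
# Sketch of horizontal scale-out: split the input into chunks, hand each
# chunk to a separate worker, then combine the partial results.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(chunk)          # the work done by one discrete unit

def scale_out_sum(data, workers=4):
    size = max(1, (len(data) + workers - 1) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(partial_sum, chunks)   # fan out to workers
    return sum(partials)                           # combine the outputs

print(scale_out_sum(list(range(1000))))   # same answer as sum(range(1000))
```

In a real cluster the chunks would travel over the network to separate machines; the split/compute/combine shape is the same.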
Cloud Computing Conceptual Reference Model:
Cloud high-level architecture.
The model identifies five major actors, with their roles, responsibilities, activities, and functions in cloud computing, and supports an understanding of the requirements, uses, characteristics, and standards of cloud computing.
1. Cloud Consumer
2. Cloud Provider
3. Cloud Broker
4. Cloud Auditor
5. Cloud Carrier

Actors in Cloud Computing


 Cloud Consumer: A person or organization that maintains a business relationship with, and uses
service from, Cloud Providers.
 Cloud Provider: A person, organization, or entity responsible for making a service available to
interested parties.
 Cloud Auditor: A party that can conduct independent assessment of cloud services, information
system operations, performance and security of the cloud implementation.
 Cloud Broker: An entity that manages the use, performance and delivery of cloud services, and
negotiates relationships between Cloud Providers and Cloud Consumers.
 Cloud Carrier: An intermediary that provides connectivity and transport of cloud services from
Cloud Providers to Cloud Consumers.

Scenarios in Cloud: 1
1. Cloud consumer interacts with the cloud broker instead of contacting a cloud provider directly.
2. The cloud broker may create a new service (mash up) by combining multiple services or by enhancing
an existing service.
3. Actual cloud providers are invisible to the cloud consumer.

Scenarios in Cloud: 2
1. Cloud carriers provide the connectivity and transport of cloud services from cloud providers to cloud
consumers.
2. Cloud provider participates in and arranges for two unique service level agreements (SLAs), one with a
cloud carrier (e.g. SLA2) and one with a cloud consumer (e.g. SLA1).
3. A cloud provider may request the cloud carrier to provide dedicated and encrypted connections to ensure that the cloud services meet the SLAs.
Scenarios in Cloud: 3
1. Cloud auditor conducts independent assessments for the operation and security of the cloud service.
2. The audit may involve interactions with both the Cloud Consumer and the Cloud Provider.

Cloud Consumer
The cloud consumer browses and uses the service, and sets up contracts with the cloud provider.
Cloud consumers need SLAs to specify the technical performance requirements that must be fulfilled by a cloud provider. SLAs cover the quality of service, security, and remedies for performance failures.
A cloud provider may list SLAs that limit and obligate cloud consumers and that must be accepted as-is; pricing policy and SLAs are typically non-negotiable. A cloud consumer can, however, freely choose a cloud provider with better pricing and more favorable conditions.
SaaS consumers
SaaS consumers can be organizations that provide their members with access to software applications,
end users who directly use software applications, or software application administrators who configure
applications for end users.
SaaS consumers can be billed based on the number of end users, the time of use, the network bandwidth
consumed, the amount of data stored or duration of stored data.

PaaS consumers
PaaS consumers can be application developers or administrators:
1. developers who design and implement application software
2. application testers who run and test applications
3. publishers who deploy applications into the cloud
4. administrators who configure and monitor application performance.
PaaS consumers can be billed according to the processing, database storage, and network resources consumed by the PaaS application, and the duration of the platform usage.
IaaS consumer
IaaS consumers can be system developers, system administrators, and IT managers who are interested in creating, installing, managing, and monitoring services for IT infrastructure operations.
IaaS consumers can be billed according to the amount or duration of the resources consumed, such as CPU hours used by virtual computers, the volume and duration of data stored, network bandwidth consumed, and the number of IP addresses used for certain intervals.
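A toy bill along exactly those dimensions might look like the sketch below. The rate card is invented for illustration and does not correspond to any real provider's pricing:

```python
# Hypothetical IaaS metering: CPU hours, GB-months stored, GB transferred,
# and IP-address days, each at a made-up rate.
RATES = {
    "cpu_hour": 0.05,     # per CPU hour used by virtual computers
    "gb_month": 0.02,     # per GB-month of data stored
    "gb_transfer": 0.09,  # per GB of network bandwidth consumed
    "ip_day": 0.005,      # per day an IP address is held
}

def iaas_bill(cpu_hours, gb_months, gb_transfer, ip_days):
    total = (cpu_hours * RATES["cpu_hour"]
             + gb_months * RATES["gb_month"]
             + gb_transfer * RATES["gb_transfer"]
             + ip_days * RATES["ip_day"])
    return round(total, 2)

# One VM running for a month (720 h), 100 GB stored, 50 GB out, 1 IP for 30 days.
print(iaas_bill(720, 100, 50, 30))
```

The point of the sketch is that IaaS billing is purely usage-metered: each dimension accrues independently and the bill is their sum.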
Cloud Provider
A Cloud Provider acquires and manages the computing infrastructure required for providing the services, runs the cloud software that provides the services, and makes arrangements to deliver the cloud services to Cloud Consumers through network access.
SaaS provider deploys, configures, maintains and updates the operation of the software applications on
a cloud infrastructure. SaaS provider maintains the expected service levels to cloud consumers.
PaaS Provider manages the computing infrastructure for the platform and components (runtime
software execution stack, databases, and other middleware).
An IaaS Cloud Provider provides the physical hardware and cloud software that make the provisioning of these infrastructure services possible: for example, the physical servers, network equipment, storage devices, host OS, and hypervisors for virtualization.

Cloud auditor
Audits are performed to verify conformance to standards.
Auditor evaluates the security controls, privacy impact, performance, etc.
Auditing is especially important for federal agencies.
A security audit can assess the security controls to determine the extent to which they are implemented correctly, operating as intended, and producing the desired outcome. This is done by verifying compliance with regulation and security policy.
A privacy audit helps Federal agencies comply with the applicable privacy laws and regulations governing an individual's privacy, and ensures the confidentiality, integrity, and availability of an individual's personal information at every stage of development and operation.

Cloud Broker
Integration of cloud services can be complex for consumers; hence a cloud broker is needed.
A broker manages the use, performance, and delivery of cloud services, and negotiates relationships between cloud providers and cloud consumers.
In general, a cloud broker can provide services in three categories:
 Service Intermediation: The broker enhances a service by improving some capability and providing value-added services to consumers. The improvement can be managing access to cloud services, identity management, performance reporting, enhanced security, etc.
 Service Aggregation: Broker combines and integrates multiple services into one or more new
services. The broker provides data integration and ensures the secure data movement.
 Service Arbitrage: It is similar to service aggregation, but with the flexibility to choose services from multiple agencies. For example, the broker can select the service with the best response time.
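Service arbitrage can be sketched as the broker timing each candidate service and routing to the fastest one. The provider names and latencies below are hypothetical, with `sleep()` standing in for a real service call:

```python
# Sketch of service arbitrage: measure each provider's response time and
# pick the best one.
import time

PROVIDERS = {
    "provider-a": lambda: time.sleep(0.02),   # slower hypothetical service
    "provider-b": lambda: time.sleep(0.005),  # faster hypothetical service
}

def response_time(call):
    start = time.perf_counter()
    call()
    return time.perf_counter() - start

def arbitrate(providers):
    """Return the name of the provider with the best measured response time."""
    return min(providers, key=lambda name: response_time(providers[name]))

print(arbitrate(PROVIDERS))   # provider-b
```

A production broker would of course use historical measurements and SLA terms rather than a single probe, but the selection logic has this shape.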

Cloud Carrier
Cloud carriers provide access to consumers through network, telecommunication and other access
devices.
For example, cloud consumers can obtain cloud services through network access devices, such as
computers, laptops, mobile phones, mobile internet devices (MIDs), etc.
The distribution of cloud services is normally provided by network and telecommunication carriers or a
transport agent, where a transport agent refers to a business organization that provides physical
transport of storage media such as high-capacity hard drives.
Cloud provider can set up SLAs with a cloud carrier to provide services consistent with the level of SLAs
offered to cloud consumers.

Scope of Control between Provider and Consumer


The application layer is used by SaaS consumers, and installed/managed/maintained by PaaS consumers, IaaS consumers, and SaaS providers.
Middleware is used by PaaS consumers, and installed/managed/maintained by IaaS consumers or PaaS providers. Middleware is hidden from SaaS consumers.
The IaaS layer is hidden from SaaS consumers and PaaS consumers.
IaaS consumers have the freedom to choose the OS to be hosted.
Cloud Computing Techniques:
 Virtualization: Allows a single physical instance of a resource to be shared among multiple users.

 SOA: Allows an application to be used as a service by other applications.

 Allows the exchange of data or messages between different applications (web services).

URI (Uniform Resource Identifier) = URN + URL
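In practice a URI can be taken apart with Python's standard library; the example address here is made up:

```python
# Splitting a URI into its components with urllib.
from urllib.parse import urlparse

parts = urlparse("https://example.com/service/orders?id=42")
print(parts.scheme)   # https
print(parts.netloc)   # example.com
print(parts.path)     # /service/orders
print(parts.query)    # id=42
```

The scheme and authority locate the service (the URL role), while the path names the resource (the URN role).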

 Services are provided to/from applications through the internet


o Service Reusability
o Easy maintenance of Service
o Reliability
o Scalability

 Utility Computing: a pay-per-use model

A Web Service is a mechanism of interaction between two applications over a network.
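A minimal sketch of that interaction, with two plain Python functions exchanging JSON messages in place of real networked endpoints (the order service and its message format are invented for illustration):

```python
# Two applications exchanging messages: the client serializes a request,
# the service parses it, does some work, and returns a serialized reply.
import json

def order_service(request_json):
    """The provider application: computes an order total."""
    request = json.loads(request_json)
    total = sum(item["qty"] * item["price"] for item in request["items"])
    return json.dumps({"total": total})

def client():
    """The consumer application: sends a request, reads the reply."""
    request = json.dumps({"items": [{"qty": 2, "price": 5.0}]})
    reply = json.loads(order_service(request))
    return reply["total"]

print(client())   # 10.0
```

In a real web service the serialized message would travel over HTTP (as SOAP/XML or JSON), but the contract — agreed message format, opaque implementation — is the same.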

Languages Used for SOA


An orchestration language represents the web service composition from the viewpoint of one of the parties involved. BPEL (Business Process Execution Language) is the most widely adopted language for web service orchestration.

On the other hand, the target of a choreography language is the coordination of long-running interactions between multiple distributed parties, where each party uses web services to offer its externally accessible operations. Choreography languages depict compositions from a global viewpoint, showing the interchange of messages between the involved parties. The languages used for web service choreography are:
1. WSCI:
It is an XML-based language to describe the interface of a web service that participates in a choreographed interaction with other services. This interface shows the flow of messages exchanged by the web service. The language was developed by companies such as Sun, SAP, BEA, and Intalio.

WSCI also describes how the choreography of these operations should expose relevant information such as message correlation, exception handling, transaction description, and dynamic participation capabilities. This behavior is expressed by means of temporal and logical dependencies in the flow of messages. For that purpose WSCI includes sequencing rules, correlation, exception handling, and transactions. The internal implementation of the web service, however, is not addressed by WSCI.

2. Web Service Choreography Description Language:

WS-CDL is an XML-based language to describe the peer-to-peer collaborations of web services taking part in a choreography. This description defines (from a global viewpoint) the common behavior of the services, and the ordered message interchanges that make reaching a common business goal possible. Choreography modeling with WS-CDL consists of the following elements:
a. Participant: groups all the parts of the collaboration that must be implemented by the same entity.
b. Role: the potential behavior of a participant.
c. Relationship: identifies the mutual obligations that must be fulfilled for a collaboration to succeed.
d. Type: the kind of information corresponding to a variable.
e. Variables: information about the common objects in a collaboration.
f. Token: an alias to the reference part of a variable.
g. Choreographies: a choreography defines collaboration between participants using the following means:
   a. Choreography Composition
   b. Choreography Lifeline
   c. Choreography Recovery
h. Channel: a point of collaboration between participants.
i. Activities: an activity is the lowest-level element of a choreography that performs some work.
j. Ordering Structures: include sequence, parallel, and choice.
k. Semantics: allows the creation of descriptions with semantic definitions.

3. OWL-S (Ontology Web Language for Services):

OWL-S was originally known as DAML (DARPA Agent Markup Language). The objective of the DAML program was the development of a language and tools that facilitate the concept of the semantic web. The aim of this ontology is to automate the discovery, invocation, composition, interoperation, and monitoring of web services. The ontology is based on providing three essential kinds of information about services:
 What does the service provide? This information is given by the Service Profile.
 How is the service used? This information is given by the Service Process Model.
 How is the service accessed? This information is provided by the Service Grounding.

In OWL-S each service is considered a set of atomic processes with associated inputs and outputs. When the mapping from this abstract definition to concrete utilization must be done, OWL-S is complemented with the use of WSDL for the concrete definition of services.

Cloud Service Characteristics:


 On demand self-service
 Broad network access
 Resource pooling
 Rapid elasticity
 Measured service
=> Data-structure (a structured way of storing multiple data items in RAM with proper classification)

=> File-structure
 multiple layers of classification, with permission for the user to decide the type of classification
 used for storing data on the HDD
 with security

=> Database
 Sequential storing of data in a well-structured manner
 Removing redundancy
 Maintaining the ACID properties
 With proper intermediate relationships
 With proper security

=> Data-warehouse (data-mining)

 Large, older chunks of data regarding some specific aspect
 Unstructured

=> big-data 5V
 Volume
 Value
 Velocity
 Variety
 Veracity

Data Center:
A data center is a physical facility that organizations use to house their critical applications and data. A data
center's design is based on a network of computing and storage resources that enable the delivery of shared
applications and data. The key components of a data center design include routers, switches, firewalls, storage
systems, servers, and application-delivery controllers.

What defines a modern data center?


Modern data centers are very different from what they were just a short time ago. Infrastructure has shifted from
traditional on-premises physical servers to virtual networks that support applications and workloads across pools
of physical infrastructure and into a multicloud environment. In this era, data exists and is connected across
multiple data centers, the edge, and public and private clouds. The data center must be able to communicate across
these multiple sites, both on-premises and in the cloud. Even the public cloud is a collection of data centers. When
applications are hosted in the cloud, they are using data center resources from the cloud provider.

Why are data centers important to business?


In the world of enterprise IT, data centers are designed to support business applications and activities that
include:
• Email and file sharing
• Productivity applications
• Customer relationship management (CRM)
• Enterprise resource planning (ERP) and databases
• Big data, artificial intelligence, and machine learning
• Virtual desktops, communications and collaboration services

What are the core components of a data center?


Data center design includes routers, switches, firewalls, storage systems, servers, and application delivery
controllers. Because these components store and manage business-critical data and applications, data center
security is critical in data center design. Together, they provide:
 Network infrastructure. This connects servers (physical and virtualized), data center services, storage,
and external connectivity to end-user locations.
 Storage infrastructure. Data is the fuel of the modern data center. Storage systems are used to hold this
valuable commodity.
 Computing resources. Applications are the engines of a data center. These servers provide the
processing, memory, local storage, and network connectivity that drive applications.

How do data centers operate?


Data center services are typically deployed to protect the performance and integrity of the core data center components.
 Network security appliances: these include firewalls and intrusion protection to safeguard the data center.
 Application delivery assurance: to maintain application performance, these mechanisms provide application resiliency and availability via automatic failover and load balancing.

What is in a data center facility?


Data center components require significant infrastructure to support the center's hardware and software. These
include power subsystems, uninterruptible power supplies (UPS), ventilation, cooling systems, fire suppression,
backup generators, and connections to external networks.

What are the standards for data center infrastructure?


The most widely adopted standard for data center design and data center infrastructure is ANSI/TIA-942. It
includes standards for ANSI/TIA-942-ready certification, which ensures compliance with one of four categories
of data center tiers rated for levels of redundancy and fault tolerance.
Tier 1: Basic site infrastructure. A Tier 1 data center offers limited protection against physical events. It has
single-capacity components and a single, nonredundant distribution path.
Tier 2: Redundant-capacity component site infrastructure. This data center offers improved protection against
physical events. It has redundant-capacity components and a single, nonredundant distribution path.
Tier 3: Concurrently maintainable site infrastructure. This data center protects against virtually all physical
events, providing redundant-capacity components and multiple independent distribution paths. Each component
can be removed or replaced without disrupting services to end users.
Tier 4: Fault-tolerant site infrastructure. This data center provides the highest levels of fault tolerance and
redundancy. Redundant-capacity components and multiple independent distribution paths enable concurrent
maintainability, and the site can tolerate one fault anywhere in the installation without downtime.
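The tiers above are often summarized by the availability targets commonly quoted alongside ANSI/TIA-942 (the percentages below are those widely cited figures, not part of the standard text above). Allowed downtime per year follows directly from availability:

```python
# Availability targets commonly associated with the four data center tiers.
# These percentages are assumed from widely cited TIA-942 commentary, not
# from the standard text itself. Allowed downtime per year is simply
# (1 - availability) * hours per year.

HOURS_PER_YEAR = 365 * 24  # 8760

tier_availability = {
    "Tier 1": 0.99671,
    "Tier 2": 0.99741,
    "Tier 3": 0.99982,
    "Tier 4": 0.99995,
}

def annual_downtime_hours(availability: float) -> float:
    """Maximum expected downtime per year for a given availability level."""
    return (1 - availability) * HOURS_PER_YEAR

for tier, avail in tier_availability.items():
    print(f"{tier}: {annual_downtime_hours(avail):.1f} hours of downtime/year")
```

Under these figures, a Tier 1 site may be down roughly 29 hours a year, while a Tier 4 site is limited to well under an hour.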

Types of data centers


Many types of data centers and service models are available. Their classification depends on whether they
are owned by one or many organizations, how they fit (if they fit) into the topology of other data centers,
what technologies they use for computing and storage, and even their energy efficiency. There are four
main types of data centers:
 Enterprise data centers: These are built, owned, and operated by companies and are optimized
for their end users. Most often they are housed on the corporate campus.
 Managed services data centers: These data centers are managed by a third party (or a managed
services provider) on behalf of a company. The company leases the equipment and infrastructure
instead of buying it.
 Colocation data centers: In colocation ("colo") data centers, a company rents space within a data
center owned by others and located off company premises. The colocation data center hosts the
infrastructure: building, cooling, bandwidth, security, etc., while the company provides and
manages the components, including servers, storage, and firewalls.
 Cloud data centers: In this off-premises form of data center, data and applications are hosted by a cloud
services provider such as Amazon Web Services (AWS), Microsoft (Azure), or IBM Cloud or other
public cloud provider.
Module-2
Amazon Web Services (AWS)
AWS (Amazon Web Services) is a comprehensive, evolving cloud computing platform provided by Amazon that
includes a mixture of infrastructure as a service (IaaS), platform as a service (PaaS) and packaged software as a
service (SaaS) offerings. AWS services can offer an organization tools such as compute power, database storage
and content delivery services.
AWS launched in 2006 from the internal infrastructure that Amazon.com built to handle its online retail
operations. AWS was one of the first companies to introduce a pay-as-you-go cloud computing model that scales
to provide users with compute, storage or throughput as needed.
AWS offers many different tools and solutions for enterprises and software developers, serving customers in up
to 190 countries. Groups such as government agencies, educational institutions, nonprofits and private
organizations can use AWS services.

How AWS works


AWS is separated into different services, each of which can be configured in different ways based on the user's
needs. Users can view configuration options and individual server maps for an AWS service. More than 100
services make up the Amazon Web Services portfolio, including those for compute, databases, infrastructure
management, application development and security. These services, by category, include:

Compute; Storage; Databases; Data management; Migration; Hybrid cloud; Networking; Development tools;
Management; Monitoring; Security; Governance; Big data management; Analytics; Artificial intelligence (AI);
Mobile development; and Messages and notification.

Availability
Amazon Web Services provides services from dozens of data centers spread across availability zones (AZs) in
regions across the world. An AZ is a location that contains multiple physical data centers. A region is a
collection of AZs in geographic proximity connected by low-latency network links.
A business will choose one or multiple availability zones for a variety of reasons, such as compliance and
proximity to end customers. For example, an AWS customer can spin up virtual machines (VMs) and replicate
data in different AZs to achieve a highly reliable infrastructure that is resistant to failures of individual servers or
an entire data center.

Compute
Amazon Elastic Compute Cloud (EC2) is a service that provides virtual servers (called EC2 instances) for
compute capacity. The EC2 service offers dozens of instance types with varying capacities and sizes, tailored to
specific workload types and applications, such as memory-intensive and accelerated-computing jobs. AWS also
provides an Auto Scaling tool to dynamically scale capacity to maintain instance health and performance.
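The scaling behavior described above can be sketched with a simple proportional rule: grow or shrink capacity so a tracked metric (such as average CPU utilization) stays near a target. This is a simplified model of target-tracking scaling, not the actual AWS algorithm, which also applies cooldowns, warm-up periods and health checks:

```python
import math

def desired_capacity(current: int, metric: float, target: float,
                     min_size: int = 1, max_size: int = 10) -> int:
    """Proportional scaling rule: keep a metric (e.g. average CPU %) near
    a target by resizing the instance group, clamped to group limits.

    A simplified sketch of target-tracking scaling; the real service also
    applies cooldowns, instance warm-up and health checks.
    """
    desired = math.ceil(current * metric / target)
    return max(min_size, min(max_size, desired))

# 4 instances averaging 90% CPU against a 50% target -> scale out to 8
print(desired_capacity(4, 90.0, 50.0))
```

Rounding up (`math.ceil`) biases the rule toward scaling out, which keeps the metric at or below the target rather than above it.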
Storage
Amazon Simple Storage Service (S3) provides scalable object storage for data backup, collection and
analytics. An IT professional stores data and files as S3 objects -- which can range up to 5 terabytes (TB) --
inside S3 buckets to keep them organized. A business can save money with S3 through its Infrequent Access
storage tier or by using Amazon Glacier for long-term cold storage.
Amazon Elastic Block Store provides block-level storage volumes for persistent data storage when using EC2
instances. Amazon Elastic File System offers managed cloud-based file storage.
A business can also migrate data to the cloud via storage transport devices, such as AWS Snowball and
Snowmobile, or use AWS Storage Gateway to enable on-premises apps to access cloud data.

Databases, data management


The Amazon Relational Database Service (which includes options for Oracle, SQL Server,
PostgreSQL, MySQL, MariaDB and a proprietary high-performance database called Amazon Aurora)
provides a relational database management system for AWS users. AWS also offers managed NoSQL
databases through Amazon DynamoDB.
An AWS customer can use Amazon ElastiCache and DynamoDB Accelerator as in-memory and real-
time data caches for applications. Amazon Redshift offers a data warehouse, which makes it easier for
data analysts to perform business intelligence (BI) tasks.
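The in-memory caching pattern mentioned above (ElastiCache or DynamoDB Accelerator in front of a database) is commonly implemented as a read-through cache: check the cache first, fall back to the data store on a miss, then populate the cache with a time-to-live. The sketch below uses a plain dictionary to stand in for both the cache and the database; it illustrates the pattern, not the ElastiCache API:

```python
import time

class ReadThroughCache:
    """Read-through caching pattern, as used with an in-memory cache in
    front of a database: serve hits from the cache, load misses from the
    backing store, and expire entries after a time-to-live (TTL)."""

    def __init__(self, backing_store: dict, ttl_seconds: float = 60.0):
        self.store = backing_store   # stands in for the database
        self.ttl = ttl_seconds
        self._cache: dict = {}       # stands in for Redis/Memcached

    def get(self, key):
        entry = self._cache.get(key)
        if entry is not None:
            value, expires = entry
            if time.monotonic() < expires:
                return value                     # cache hit
        value = self.store[key]                  # cache miss: hit the store
        self._cache[key] = (value, time.monotonic() + self.ttl)
        return value

db = {"user:1": {"name": "Ada"}}
cache = ReadThroughCache(db)
print(cache.get("user:1"))  # first read loads from the store and caches it
```

Within the TTL, repeated reads never touch the backing store, which is exactly the latency win these services provide for read-heavy applications.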
Migration, hybrid cloud
AWS includes various tools and services designed to help users migrate applications, databases,
servers and data onto its public cloud. The AWS Migration Hub provides a location to monitor and
manage migrations from on premises to the cloud. Once in the cloud, EC2 Systems Manager helps an
IT team configure on-premises servers and AWS instances.
Amazon also has partnerships with several technology vendors that ease hybrid cloud deployments.
VMware Cloud on AWS brings software-defined data center technology from VMware to the AWS
cloud. Red Hat Enterprise Linux for Amazon EC2 is the product of another partnership, extending
Red Hat's operating system to the AWS cloud.
Networking
An Amazon Virtual Private Cloud (Amazon VPC) gives an administrator control over a virtual
network to use an isolated section of the AWS cloud. AWS automatically provisions new resources
within a VPC for extra protection.
Admins can balance network traffic with the Elastic Load Balancing (ELB) service, which includes
the Application Load Balancer and Network Load Balancer. AWS also provides a domain name
system called Amazon Route 53 that routes end users to applications.
An IT professional can establish a dedicated connection from an on-premises data center to the AWS
cloud via AWS Direct Connect.
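The load-balancing behavior described above can be illustrated with a minimal round-robin model: healthy targets receive requests in turn, and targets that fail health checks are skipped. This is a conceptual sketch, not the ELB API:

```python
import itertools

class RoundRobinBalancer:
    """Minimal model of round-robin target selection as performed by a
    load balancer: healthy targets receive requests in turn; targets that
    fail health checks are skipped."""

    def __init__(self, targets):
        self.health = {t: True for t in targets}
        self._cycle = itertools.cycle(targets)

    def route(self) -> str:
        # Try each target at most once per request before giving up.
        for _ in range(len(self.health)):
            target = next(self._cycle)
            if self.health[target]:
                return target
        raise RuntimeError("no healthy targets")

lb = RoundRobinBalancer(["i-0a", "i-0b", "i-0c"])
lb.health["i-0b"] = False              # this instance failed a health check
print([lb.route() for _ in range(4)])  # 'i-0b' is skipped on every pass
```

Real load balancers layer connection draining, sticky sessions and per-target weights on top of this basic rotation.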

Developer tools
A developer can take advantage of AWS command-line tools and software development kits (SDKs)
to deploy and manage applications and services. This includes:
 The AWS Command Line Interface (AWS CLI), Amazon's command-line tool for managing its
services.
 AWS Tools for PowerShell, which developers can use to manage cloud services from Windows
environments.
 The AWS Serverless Application Model (SAM), which developers can use to simulate an AWS
environment locally and test Lambda functions.
AWS SDKs are available for a variety of platforms and programming languages, including Java, PHP,
Python, Node.js, Ruby, C++, Android and iOS.
Amazon API Gateway enables a development team to create, manage and monitor custom application
program interfaces (APIs) that let applications access data or functionality from back-end services. API
Gateway manages thousands of concurrent API calls at once.
AWS also provides a packaged media transcoding service (like Amazon Elastic Transcoder) and a
service that visualizes workflows for microservices-based applications (AWS Step Functions).
A development team can also create continuous integration and continuous delivery pipelines with
services like:
 AWS CodePipeline
 AWS CodeBuild
 AWS CodeDeploy
 AWS CodeStar
A developer can also store code in Git repositories with AWS CodeCommit and evaluate the
performance of microservices-based applications with AWS X-Ray.
Management and monitoring
An admin can manage and track cloud resource configuration via AWS Config and AWS Config
Rules. Those tools, along with AWS Trusted Advisor, can help an IT team avoid improperly
configured and needlessly expensive cloud resource deployments.
AWS provides several automation tools in its portfolio. An admin can automate infrastructure
provisioning via AWS CloudFormation templates, and also use AWS OpsWorks and Chef to
automate infrastructure and system configurations.
An AWS customer can monitor resource and application health with Amazon CloudWatch and the
AWS Personal Health Dashboard, as well as use AWS CloudTrail to retain user activity and API
calls for auditing.

Security and governance


AWS provides a range of services for cloud security, including AWS Identity and Access Management, which
allows admins to define and manage user access to resources. An admin can also create a user directory with
Amazon Cloud Directory, or connect cloud resources to an existing Microsoft Active Directory with the AWS
Directory Service. Additionally, the AWS Organizations service enables a business to establish and manage
policies for multiple AWS accounts.
Amazon Web Services has also introduced tools that automatically assess potential security risks. Amazon
Inspector analyzes an AWS environment for vulnerabilities that might impact security and compliance. Amazon
Macie uses machine learning (ML) technology to protect sensitive cloud data.
AWS also includes tools and services that provide software- and hardware-based encryption, protect against
DDoS attacks, provision Secure Sockets Layer (SSL) and Transport Layer Security (TLS) certificates and filter
potentially harmful traffic to web applications.
The AWS Management Console is a browser-based graphical user interface (GUI) for AWS. The Management
Console can be used to manage resources in cloud computing, cloud storage and security credentials. The AWS
Console interfaces with all AWS resources.

Big data management and analytics


AWS includes a variety of big data analytics and application services. This includes:
 Amazon Elastic MapReduce, which offers a Hadoop framework to process large amounts of data.
 Amazon Kinesis, which provides several tools to process and analyze streaming data.
 AWS Glue, which is a service that handles extract, transform and load jobs.
 Amazon Elasticsearch Service, which enables a team to perform application monitoring, log analysis and
other tasks with the open source Elasticsearch tool.
 Amazon Athena for S3, which allows analysts to query data.
 Amazon QuickSight, which helps analysts visualize data.

Artificial intelligence
AWS offers a range of AI model development and delivery platforms, as well as packaged AI-based applications.
The Amazon AI suite of tools includes:
 Amazon Lex for voice and text chatbot technology;
 Amazon Polly for text-to-speech translation; and
 Amazon Rekognition for image and facial analysis.

AWS also provides technology for developers to build smart apps that rely on machine learning technology and
complex algorithms. With AWS Deep Learning Amazon Machine Images (AMIs), developers can create and
train custom AI models with clusters of graphics processing units (GPUs) or compute-optimized instances. AWS
also supports deep learning development frameworks such as MXNet and TensorFlow.

On the consumer side, AWS technologies power the Alexa Voice Services, and a developer can use the Alexa
Skills Kit to build voice-based apps for Echo devices.

Mobile development
The AWS Mobile Hub offers a collection of tools and services for mobile app developers, including the AWS
Mobile SDK, which provides code samples and libraries. A mobile app developer can also use Amazon Cognito
to manage user access to mobile apps, as well as Amazon Pinpoint to send push notifications to application end
users and then analyze the effectiveness of those communications.

Messages and notifications


AWS messaging services provide core communication for users and applications. Amazon Simple Queue
Service (SQS) is a managed message queue that sends, stores and receives messages between components of
distributed applications to ensure that the parts of an application work as intended.

Amazon Simple Notification Service (SNS) enables a business to send publish/subscribe messages to endpoints,
such as end users or services. SNS includes a mobile messaging feature that enables push messaging to mobile
devices. Amazon Simple Email Service (SES) provides a platform for IT professionals and marketers to send
and receive emails.

AR & VR (Augmented reality and virtual reality)


AWS offers augmented reality (AR) and virtual reality (VR) development tools through the Amazon Sumerian
service. Amazon Sumerian allows users to create AR and VR applications without needing to know programming
or create 3D graphics. The service also enables users to test and publish applications in-browser. Amazon
Sumerian can be used in:
 3D web applications
 E-commerce & sales applications
 Marketing
 Online education
 Manufacturing
 Training simulations
 Gaming

Game development
AWS can also be used for game development. Large game development companies, such as Ubisoft, use
AWS services for their games, like For Honor. AWS can provide services for each part of a game's lifecycle.
For example, AWS provides developers with back-end services, analytics and developer tools. Developer tools
help developers make their games, while back-end services help with building, deploying or scaling a
developer's platform. Analytics helps developers better understand their customers and how they play the
game. Developers can also store or host game data on AWS servers.
Internet of Things
AWS also has a variety of services that enable the internet of things (IoT) deployments. The AWS IoT service
provides a back-end platform to manage IoT devices and data ingestion to other AWS storage and database
services. The AWS IoT Button provides hardware for limited IoT functionality and AWS Greengrass brings
AWS compute capabilities to IoT devices.

Other services
Amazon Web Services has a range of business productivity SaaS options, including:
 The Amazon Chime service enables online video meetings, calls and text-based chats across devices.
 Amazon WorkDocs, which is a file storage and sharing service.
 Amazon WorkMail, which is a business email service with calendaring features.
Desktop and streaming application services include Amazon WorkSpaces, a remote desktop-as-a-service
platform (DaaS), and Amazon AppStream, a service that lets a developer stream a desktop application from AWS
to an end user's web browser.

AWS pricing models and competition


AWS offers a pay-as-you-go model for its cloud services, either on a per-hour or per-second basis. There is also
an option to reserve a set amount of compute capacity at a discounted price for customers who prepay in whole,
or who sign up for one- or three-year usage commitments.
For customers who want to evaluate AWS without paying for usage, the AWS Free Tier is another avenue for
using AWS services. The Free Tier allows users to gain first-hand experience with AWS services for free; they
can access up to 60 products and start building on the AWS platform. The Free Tier is offered in three options:
always free, 12 months free and short-term trials.
AWS competes primarily with Microsoft Azure, Google and IBM in the public IaaS market.

History
The AWS platform was originally launched in 2002 with only a few services. In 2003, AWS was re-envisioned
to make Amazon's compute infrastructure standardized, automated and web service focused. This re-
envisioning included the thought of selling access to virtual servers as a service platform. One year later, in
2004, the first publicly available AWS service (Amazon SQS) was launched.

In 2006, AWS was relaunched to include three services -- including Amazon S3 cloud storage, SQS, and EC2 --
officially making AWS a suite of online core services. In 2009, S3 and EC2 were launched in Europe, and the
Elastic Block Store and Amazon CloudFront were released and added to AWS. In 2013, AWS started to offer
a certification process in AWS services, and 2018 saw the release of an autoscaling service.

Over time, AWS has added plenty of services that helped make it a low-cost infrastructure platform that is
highly available and scalable. AWS now has a focus on the cloud, with data centers placed around the world, in
places such as the United States, Australia, Europe, Japan and Brazil.

Acquisitions
Over time, AWS has acquired multiple organizations, increasing its focus on technologies it wants to
incorporate further. Recently, AWS' acquisitions haven't concentrated on large, well-established companies, but
instead on organizations that can bolster and improve the cloud vendor's existing offerings. These acquisitions
don't add new lines of business to AWS; rather, they enhance its core services. For example, AWS has acquired
TSO Logic, Sqrrl and CloudEndure.

TSO Logic was a cloud migration company that provides analytics, enabling customers to view the state of their
current data center and model a migration to the cloud. Sqrrl was a security startup that collects data from points
such as gateways, servers and routers, and then puts those findings inside a security dashboard. CloudEndure is
a company that focuses on workload migrations to the public cloud, disaster recovery and backup.

These acquisitions shouldn't majorly change AWS; they will position it better. For example, the acquisition of
CloudEndure should accelerate movement of on-premises workloads to the AWS cloud.

Amazon EC2 (Elastic Compute Cloud)


Amazon Elastic Compute Cloud (Amazon EC2) is a web-based service that allows businesses to run
application programs in the Amazon Web Services (AWS) public cloud. Amazon EC2 allows a developer
to spin up virtual machines (VMs), which provide compute capacity for IT projects and cloud workloads
that run within global AWS data centers.
An AWS user can increase or decrease instance capacity as needed within minutes using the Amazon EC2
web interface or an application programming interface (API). A developer can code an application to scale
instances automatically with AWS Auto Scaling. A developer can also define an auto-scaling policy and
group to manage multiple instances at once.

EC2 history
EC2 was the idea of engineer Chris Pinkham who conceived it as a way to scale Amazon's internal
infrastructure. Pinkham and engineer Benjamin Black presented a paper on their ideas to Amazon CEO Jeff
Bezos, who liked what he read and requested details on virtual cloud servers. EC2 was then developed by
a team in Cape Town, South Africa. Pinkham provided the initial architecture guidance for EC2, gathered
a development team and led the project along with Willem van Biljon.
In 2006, Amazon announced a limited public beta test of EC2, and in 2007 added two new instance types -
- Large and Extra-Large. Amazon announced the addition of static IP addresses, availability zones, and
user selectable kernels in spring 2008, followed by the release of the Elastic Block Store (EBS) in August.
Amazon EC2 went into full production on October 23, 2008. Amazon also released a service level
agreement (SLA) for EC2 that day, along with Microsoft Windows and SQL Server in beta form on EC2.
Amazon added the AWS Management Console, load balancing, autoscaling, and cloud monitoring services
in 2009.

How EC2 works


To begin using EC2, developers sign up for an account at Amazon's AWS website. They can then use the
AWS Management Console, the AWS Command Line Tools (CLI), or AWS Software Developer Kits
(SDKs) to manage EC2.
A developer then chooses EC2 from the AWS Services dashboard and 'launch instance' in the EC2 console.
At this point, they select either an Amazon Machine Image (AMI) template or create an AMI containing an
operating system, application programs, and configuration settings. The AMI is then uploaded to
Amazon S3 and registered with Amazon EC2, creating an AMI identifier. Once this has been done, the
subscriber can requisition virtual machines on an as-needed basis.
Data only remains on an EC2 instance while it is running, but a developer can use an Amazon Elastic Block
Store volume for an extra level of durability and Amazon S3 for EC2 data backup.
VM Import/Export allows a developer to import on-premises virtual machine images to Amazon EC2,
where they are turned into instances.
EC2 also offers Amazon CloudWatch which monitors Amazon cloud applications and resources, allowing
users to set alarms, view graphs, and get statistics for AWS data; and AWS Marketplace, an online store
where users can buy and sell software that runs on AWS.

Amazon EC2 instance types


Instances allow developers to expand computing capabilities by 'renting' virtual machines rather than
purchasing hardware. An EC2 instance is used to run applications on the Amazon Web Services
infrastructure.
Amazon EC2 provides different instance types, sizes and pricing structures designed for different
computing and budgetary needs. In addition to general purpose instances, Amazon EC2 offers an instance
type for compute, memory, accelerated computing, and storage-optimized workloads. AWS limits how
many instances a user can run in a region at a time, depending on the type of instance. Each instance type
comes with different size options corresponding to the CPU, memory and storage needs of each enterprise.

Cost
On-Demand instances allow a developer to create resources as needed and to pay for them by the hour.
Reserved instances (RIs) provide a price discount in exchange for one- or three-year contract commitments
-- a developer can also opt for a convertible RI, which allows for the flexibility to change the instance type,
operating system or tenancy.
There's also an option to purchase a second-hand RI from the Amazon EC2 reserved instances marketplace.
A developer can also submit a bid for spare Amazon EC2 capacity, called Spot instances, for a workload
that has a flexible start and end time.
If a business needs dedicated physical server space, a developer can opt for EC2 dedicated hosts, which
charge hourly and let the business use existing server-bound software licenses, including Windows Server
and SQL Server.
A breakdown of Amazon EC2 instances and their associated prices.

Benefits

Getting started with EC2 is easy, and because EC2 is controlled by APIs, developers can commission any
number of server instances at the same time to quickly increase or decrease capacity. EC2 allows complete
control of instances, which makes operation as simple as if the machine were in-house.

The flexibility of multiple instance types, operating systems, and software packages and the fact that EC2 is
integrated with most AWS Services -- S3, Relational Database Service (RDS), Virtual Private Cloud (VPC) --
makes it a secure solution for computing, query processing, and cloud storage.

Challenges

Resource utilization -- developers must manage the number of instances they have to avoid costly large, long-
running instances.
Security -- developers must make sure that public facing instances are running securely.
Deploying at scale -- running a multitude of instances can result in cluttered environments that are difficult to
manage.
Management of AMI lifecycle -- developers often begin by using default Amazon Machine Images. As
computing needs change, custom configurations will likely be required.
Ongoing maintenance -- Amazon EC2 instances are virtual machines that run in Amazon's cloud. However,
they ultimately run on physical hardware which can fail. AWS alerts developers when an instance must be
moved due to hardware maintenance. This requires ongoing monitoring.

EC2 vs. S3

Both Amazon EC2 and Amazon S3 are important services that allow developers to maximize use of the AWS
cloud. The main difference between Amazon EC2 and S3 is that EC2 is a computing service that allows
companies to run servers in the cloud, while S3 is an object storage service used to store and retrieve data from
AWS through the internet. S3 is like a giant hard drive in the cloud, while EC2 offers CPU and RAM in
addition to storage. Many developers use both services for their cloud computing needs.
Amazon Simple Storage Service (Amazon S3)
Amazon Simple Storage Service (Amazon S3) is a scalable, high-speed, web-based cloud storage service.
The service is designed for online backup and archiving of data and applications on Amazon Web Services
(AWS). Amazon S3 was designed with a minimal feature set and created to make web-scale computing
easier for developers.

Amazon S3 features

S3 provides 99.999999999% durability for objects stored in the service and supports multiple security and
compliance certifications. An administrator can also link S3 to other AWS security and monitoring services,
including CloudTrail, CloudWatch and Macie. There's also an extensive partner network of vendors that
link their services directly to S3.
Data can be transferred to S3 over the public internet via access to S3 application programming interfaces
(APIs). There's also Amazon S3 Transfer Acceleration for faster movement over long distances, as well as
AWS Direct Connect for a private, consistent connection between S3 and an enterprise's own data center.
An administrator can also use AWS Snowball, a physical transfer device, to ship large amounts of data
from an enterprise data center directly to AWS, which will then upload it to S3.
In addition, users can integrate other AWS services with S3. For example, an analyst can query data directly
on S3 either with Amazon Athena for ad hoc queries or with Amazon Redshift Spectrum for more complex
analyses.

Use cases
Amazon S3 can be used by organizations ranging in size from small businesses to large enterprises. S3's
scalability, availability, security and performance capabilities make it suitable for a variety of data storage
use cases. Common use cases for S3 include the following:

 data storage;
 data archiving;
 application hosting for deployment, installation and management of web apps;
 software delivery;
 data backup;
 disaster recovery (DR);
 running big data analytics tools on stored data;
 data lakes;
 mobile applications;
 internet of things (IoT) devices;
 media hosting for images, videos and music files; and
 website hosting -- particularly well suited to work with Amazon CloudFront for content delivery.

How Amazon S3 works


Amazon S3 is an object storage service, which differs from other types of cloud computing storage,
such as block and file storage. Each object is stored as a file with its metadata included, and each object is
given an ID number. Applications use this ID number to access an object through a REpresentational State
Transfer (REST) API, unlike file and block storage, where data is accessed through a file system path or
block address.
The S3 object storage cloud service gives a subscriber access to the same systems that Amazon uses to run
its own websites. S3 enables customers to upload, store and download practically any file or object that is
up to 5 TB in size (with the largest single upload capped at 5 GB).

Amazon S3 storage classes


Amazon S3 comes in seven storage classes:
1. S3 Standard is suitable for frequently accessed data that needs to be delivered with low latency and
high throughput. S3 Standard targets applications, dynamic websites, content distribution and big
data workloads.
2. S3 Intelligent-Tiering is most suitable for data with access needs that are either changing or
unknown. S3 Intelligent-Tiering has four different access tiers: Frequent Access, Infrequent Access
(IA), Archive and Deep Archive. Data is automatically moved to the most inexpensive storage tier
according to customer access patterns.
3. S3 Standard-IA offers a lower storage price for data that is needed less often but that must be quickly
accessible. This tier can be used for backups, DR and long-term data storage.
4. S3 One Zone-IA is designed for data that is used infrequently but requires rapid access when it is
needed. It is suitable for infrequently accessed data without high resilience or availability needs, data
that can be re-created, and backups of on-premises data.
5. S3 Glacier is the least expensive storage option in S3, but it is strictly designed for archival storage
because it takes longer to access the data. Glacier offers variable retrieval rates that range from
minutes to hours.
6. S3 Glacier Deep Archive has the lowest price option for S3 storage. S3 Glacier Deep Archive is
designed to retain data that only needs to be accessed once or twice a year.
7. S3 Outposts adds S3 object storage features and APIs to an on-premises AWS Outposts environment.
S3 Outposts is best used when performance needs call for data to be stored near on-premises
applications or to satisfy specific data residency requirements.

A user can also implement life cycle management policies to curate data and move it to the most appropriate
tier over time.
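The life cycle policies mentioned above can be sketched as a simple age-based rule: as an object ages past configured thresholds, it is transitioned to a cheaper storage class. The thresholds below are illustrative choices, not S3 defaults:

```python
from datetime import date, timedelta

# A sketch of an age-based life cycle policy like the ones S3 supports:
# transition objects to cheaper classes as they age. The day thresholds
# are illustrative choices, not S3 defaults.

RULES = [  # (minimum age in days, storage class), checked oldest-first
    (365, "S3 Glacier Deep Archive"),
    (90,  "S3 Glacier"),
    (30,  "S3 Standard-IA"),
    (0,   "S3 Standard"),
]

def storage_class_for(created: date, today: date) -> str:
    """Return the storage class an object of this age should occupy."""
    age_days = (today - created).days
    for min_age, storage_class in RULES:
        if age_days >= min_age:
            return storage_class
    return "S3 Standard"

today = date(2024, 1, 1)
print(storage_class_for(today - timedelta(days=5), today))    # S3 Standard
print(storage_class_for(today - timedelta(days=400), today))  # Deep Archive
```

A real S3 life cycle configuration expresses the same rules declaratively (as JSON transitions on a bucket) rather than in application code.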

Working with buckets


Amazon does not impose a limit on the number of items that a subscriber can store; however, there are
limits to Amazon S3 bucket quantities. Each AWS account allows up to 100 buckets to be created; limits
can be increased to 1,000 with service limit increases. An Amazon S3 bucket exists within a particular
region of the cloud. An AWS customer can use an Amazon S3 API to upload objects to a particular bucket.
Customers can configure and manage S3 buckets.

Protecting your data


User data is stored on redundant servers in multiple data centers. S3 provides a simple web-based interface (the
Amazon S3 console), user authentication, and encryption for stored data.
S3 buckets are kept private from public access by default, but an administrator can choose to make them
publicly accessible. A user can also encrypt data prior to storage. Rights may be specified for individual
users, who will then need approved AWS credentials to download or access a file in S3.
When a user stores data in S3, Amazon tracks the usage for billing purposes, but it does not otherwise
access the data unless required to do so by law.

Competitor services
Competitor services to Amazon S3 include other object storage software tool services. Comparable object
storage services are offered by other major cloud service providers (CSPs), such as Google, Microsoft,
IBM and Alibaba. Main competitor services to Amazon S3 include the following:

 Google Cloud Storage


 Azure Blob storage
 IBM Cloud Object Storage
 DigitalOcean Spaces
 Alibaba Cloud Object Storage Service (OSS)
 Cloudian
 Zadara Storage
 Oracle Cloud Infrastructure Object Storage

Amazon Simple Queue Service (SQS)


Amazon Simple Queue Service (Amazon SQS) is a pay-per-use web service for storing messages in transit
between computers. Developers use SQS to build distributed applications with decoupled components without
having to deal with the overhead of creating and maintaining message queues.

Amazon Simple Queue Service supports tasks that process asynchronously. Instead of one application having to
invoke another application directly, the service enables an application to submit a message to a queue, which
another application can then pick up at a later time.

An SQS queue can be FIFO (first-in, first-out) or standard. A FIFO queue maintains the exact order in which
messages are sent and received. Standard queues attempt to preserve the order of messages, but can deliver
them out of order when processing demands require it. FIFO queues provide exactly-once delivery, while
standard queues provide at-least-once delivery.

SQS is compatible with other Amazon Web Services, including Amazon Relational Database Service, Amazon
Elastic Compute Cloud and Amazon Simple Storage Service.
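The decoupling idea can be illustrated locally with Python's standard-library `queue.Queue`, which, like an SQS FIFO queue, lets a producer submit messages that a consumer picks up later, in order. This is only an analogy: a real application would call SQS through an SDK such as boto3 (`send_message`/`receive_message`).

```python
import queue
import threading

# A local stand-in for an SQS FIFO queue: the producer submits
# messages without invoking the consumer directly, and the consumer
# picks them up later, in order. Illustrative only.
q = queue.Queue()

def producer():
    for i in range(5):
        q.put(f"order-{i}")        # analogous to sqs.send_message(...)

def consumer(received):
    for _ in range(5):
        received.append(q.get())   # analogous to sqs.receive_message(...)
        q.task_done()

received = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(received,))
t1.start(); t2.start()
t1.join(); t2.join()

print(received)
# ['order-0', 'order-1', 'order-2', 'order-3', 'order-4']
```

Because neither thread calls the other directly, either side could be scaled, restarted, or replaced independently, which is exactly the benefit SQS brings to distributed applications.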

VMware vCloud Suite


VMware vCloud Suite is an integrated collection of VMware software products for building a private
cloud infrastructure. VMware vCloud Suite features components for cloud services provisioning, cloud
services monitoring and cloud services chargeback or showback.
vCloud Suite, which features a self-service portal, IT service catalog and policy engine, comes in
Standard, Advanced and Enterprise versions. vCloud Suite 5.5 is composed of:

 vSphere: provides a virtualization platform.


 vCenter Site Recovery Manager: provides automated disaster recovery.
 vCloud Networking and Security: includes firewall, VPN, DHCP, NAT and other network
functions for a virtualized compute environment.
 vCloud Automation Center: facilitates self-service cloud service provisioning.
 vCenter Operations Management Suite: provides a visual representation of the infrastructure's
health, security risk and efficiency.
 vCloud Director: manages infrastructure as a service (IaaS) architectures by monitoring and
controlling various cloud-computing components, such as security, virtual machine (VM)
provisioning, billing and self-service access.

VMware first introduced the vCloud brand at the 2008 VMworld conference in Las Vegas. In the early days,
there were many iterations, from the vCloud Pavilion through vCloud Hybrid Service to vCloud Air, the
latter providing public Infrastructure-as-a-Service (IaaS) running VMware vSphere, which was eventually
acquired in 2017 by the French cloud computing company OVH.

Over the last few years, VMware has shifted its focus towards cloud-agnostic software, and the integration
of its products with leading cloud providers from Amazon, Microsoft, Google, IBM, and Oracle.

Furthermore, VMware aims to bring the benefits of cloud computing to customers’ existing data centers
through private and hybrid cloud deployments, as well as to provide platforms for cloud-native
application development.

Although VMware still partners with OVH on go-to-market solutions and customer support for vCloud Air,
the acquisition suggested a move away from VMware itself being a cloud provider, and more towards
engineering the building blocks for deployment and management of multi-cloud platforms.

VMware now classifies vCloud Suite as a cloud infrastructure management solution, and VMware Cloud
Director (VCD) a cloud-service delivery platform for Cloud Providers.

According to VMware’s Public Cloud Solution Service Definition, VMware Cloud Providers are a global
network of ‘service providers who have built their cloud and hosting services on VMware software.’

 VMware powered private clouds, service provider-managed or unmanaged, use VMware vSphere
with the vRealize Suite, which forms VMware vCloud Suite.
 VMware powered public clouds use VMware vSphere, with VMware Cloud Director, and
generally with vCloud Application Programming Interfaces (APIs) exposed to its tenants.

The original vCloud Air is available through OVH as a hosted private cloud with enterprise support
including vSphere, vCenter, and NSX.
vCloud Suite
VMware vCloud Suite is the combination of enterprise-proven virtualization platform vSphere, and multi-
cloud management solution vRealize. VMware vSphere includes the hypervisor ESXi, providing server
virtualization, and vCenter Server, which centralizes the management of physical ESXi hosts and Virtual
Machines, as well as enabling some of the enterprise features like High Availability.

Included with vSphere in the vCloud Suite is vRealize, delivering automation, orchestration, and intelligent
IT operations for multi-cloud management and modern applications. The vRealize Suite contains the
following products:
 vRealize Automation: for self-service provisioning, service catalog, governance, and policy
enforcement, with aligned orchestration to automate runbooks and workload deployments.
 vRealize Operations: offers Machine Learning (ML) powered and self-driving operational
capabilities, monitoring, automated remediation, performance optimization, capacity management
and planning, usage metering, service pricing, and chargeback.
 vRealize Log Insight: enables centralized log management and intelligent log analytics for
operational visibility, troubleshooting, and compliance.
 vRealize Suite Lifecycle Manager: provides a comprehensive application lifecycle management
solution for vCloud Suite.
Additionally, vCloud Suite fully supports vSphere with Kubernetes and integrates seamlessly with other
Software-Defined Data Center components such as NSX and vSAN.
With multi-tenancy, each vRealize Automation tenant can have its own branding, services, and fine-grained
permissions. [Screenshot: tenant branding on the login page]

[Screenshot: the vRealize Automation design canvas, where administrators drag and drop the relevant
components for automated builds with corresponding catalog items]

[Screenshot: the vRealize Automation self-service catalog]


A VMware powered hybrid cloud can be formed by connecting the private cloud with either a public VMware
cloud offering or another public cloud service. With vCloud Suite, infrastructure administrators can
integrate private and public clouds to deliver and manage modern infrastructure across many environments.
Developers can consume infrastructure services through APIs, the Command Line Interface (CLI), or the
service catalog Graphical User Interface (GUI).

VMware Cloud Director

VMware Cloud Director is VMware’s flagship cloud services platform, empowering
cloud providers with an API-driven cloud infrastructure control plane for managing
global VMware Cloud estates. Available through the VMware Cloud Provider Program
(VCPP), VMware Cloud Director allows cloud service providers to automate the
provisioning and management of compute resources and services.

As the portfolio of Software-as-a-Service (SaaS) offerings in the VMware Cloud
brochure continues to grow, the formerly named vCloud Director became VMware
Cloud Director in v10.1 to align with VMware’s branding direction.

The key features VMware Cloud Director delivers are as follows:

 Resource pooling of compute into virtual data centers, providing Software-Defined
Data Center operations with a range of tenancy options.
 Cloud-native development of modern applications with enterprise-grade
Kubernetes and lifecycle management.
 Automation of service-ready cloud stacks as code with the VMware Cloud
Director Terraform provider.
 Policy-driven approach to cloud resource management, tenancy, security,
compliance, and independent role-based access control.
 A centralized suite of services for integrating with leading storage, network,
security, data protection, and other software vendors, or custom applications.
 Single pane of glass management and monitoring for enterprise-scale multi-
SDDC environments, with deep visibility and predictive remediation.

These features allow cloud providers to upscale from IaaS hosting to a profitable
portfolio of cloud-based services, providing the following key benefits:

 VCPP Cloud Providers:


o Operational efficiency of deploying and maintaining cloud infrastructure for
tenants across multi-cloud environments.
o A unified management plane for the entire service portfolio.
o Reduced time-to-market for new and expanding services.
o Additional revenue streams from publishing custom service suites and
integration with Independent Software Vendors (ISVs).
o VCD is one of the main steps towards becoming Cloud Verified, providing an
industry-standard mark of recognition.
 VMware Cloud Customers:
o VMware Cloud-as-a-Service consumption model of the full VMware
Software-Defined Data Center, as a managed service or with a complete set
of self-service controls.
o Ease of provisioning and scaling cloud services and partner services from a
single web interface or set of APIs.
o The fastest available path to hybrid cloud services and workload migration,
whether that be for portability between cloud platforms, or backup and
evacuation of existing data centers.
o Leverage Infrastructure-as-Code (IaC) capabilities across various cloud
platforms with native container services and Platform-as-a-Service (PaaS)
for Kubernetes and Bitnami.

Many of the benefits above work in turn for both parties, alongside taking advantage of
economies of scale to facilitate business growth with minimal operational overhead.

You can try both vCloud Suite (vSphere with vRealize) and VMware Cloud Director
using VMware Hands on Labs. At the time of writing the Cloud Director lab is still
running v9.7, so is still branded vCloud:
vCloud Connector

Accompanying VMware Cloud Director, vCloud Air customers can make use of vCloud
Connector, a vSphere plugin that connects up to 10 private and public clouds. Using
vCloud Connector, customers can harness the full power of hybrid cloud from a single
interface to help with private data center extension and migration to a public cloud, or
management of hybrid cloud setups.

One of the great features of managing distributed environments from the vCloud
Connector plugin is the content sync, creating a single content library across the
entire cloud environment for increased operational efficiency and simplified source
catalog management.

The vCloud Connector itself has been available as a free download since v2.6. Although
the latest version of the product is v2.8.2, updated in March 2016, it remains available to
support vCloud Air customers with multi-cloud management.

To summarise: with the modern vCloud Suite, we can standardize, automate, and
monitor distributed vSphere environments with vCenter Server and the vRealize Suite.

We observed that VMware Cloud Director, previously vCloud Director, remained a
staple of the vCloud brand, underpinning global cloud deployments for a community of
cloud service providers up to the present day. The VMware Cloud family continues to
grow across private and public clouds, with customers creating hybrid clouds, and
VMware Cloud Director enables the automation of these deployments at scale.

VMware’s cloud-agnostic slogan, Any App, Any Device, Anywhere, aims to keep the
company’s existing market-leading products, and recent acquisitions, relevant for
customers with cloud and multi-cloud strategies. By embedding further native PaaS
services for developers building modern applications, and a wide range of additional
SaaS offerings, both vCloud Suite and VMware Cloud Director are crucial elements of
this vision.

In my post Cloud Adoption for SMBs and End Users – Easy and Affordable, I talked
about how it makes perfect sense that SMBs move to the cloud. vCloud
Express, offered by a number of providers, is an ideal service for SMBs (and
enterprises alike) because it's quick, easy, and pay-as-you-go on a credit card.

It had been some time since I tried out vCloud Express so I was thankful when recently I
had the opportunity to try out vCloud Express from Terremark. Quickly, I found out that
vCloud Express had grown up a lot since I last saw it. Before I show you how to get
started with vCloud Express, here are a few things that you should know:

 vCloud Express is no-commitment & pay as you go with a credit card.


 vCloud Express is designed to be easy to use (which you'll see below).
 Unlike Amazon EC2, Terremark vCloud Express is VMware-based, supports more than
450 guest operating systems, supports up to 8-way 16GB VMs, supports Windows
2008 and SQL 2008, offers hardware load balancing and fiber-attached persistent
storage.
 Prices start at 3.6 cents per computing hour.
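At the quoted starting rate, a quick back-of-the-envelope calculation shows what pay-as-you-go pricing implies for a VM left running around the clock:

```python
# Rough monthly cost of a single VM at the quoted starting rate of
# 3.6 cents per computing hour (pay-as-you-go, no commitment).
# The 30-day month is a simplifying assumption.
rate_per_hour = 0.036          # USD per computing hour
hours_per_month = 24 * 30      # simple 30-day month

monthly_cost = rate_per_hour * hours_per_month
print(f"${monthly_cost:.2f} per month")  # $25.92 per month
```

Note that this covers only compute hours; storage, bandwidth, and any licensed software (such as SQL database servers) would be billed on top.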

Here is the step-by-step process to get started with vCloud Express:

1. Go to the vCloud Express from Terremark page and click Order Now to go to the signup page.
2. Fill out the New User Signup & activate your account.
3. At this point, you'll need to provide a credit card to Terremark to bill your per hour usage on.
4. When you Sign In, you'll be brought to the Resources page so click on Servers to get started creating
your first server.
5. At this point, you have a number of options. You can create Rows and Groups to help organize
servers if you'll have more than a couple of servers. However, minimally, if you're just going to
create one server like I am then you can select either Create Server or Create Blank Server. The
difference between these two is that "Create Server" creates a new server from pre-built templates
where "Create Blank Server" does what it says and creates an empty VM where you would install
your own OS. In my case, I want to demonstrate a VM that has a pre-built OS (a template) so we'll
choose Create Server. (note that we could even create a server with an OS and a SQL database).
6. This brings up the Create Server Wizard that will guide us through the process. First
we need to specify the type of VM (OS, OS & Database, or Cohesive FT). I specified
OS only then set my OS to Windows 2008 Standard R2 64-bit. The only servers that I
saw with additional monthly fees were the SQL database servers.

7. Next, I had to specify the number of virtual processors (VPU) and the amount of RAM
that I wanted this server to have. Notice how as the CPU and RAM rises, so does the
cost per hour of this VM (also add in the cost for the virtual hard drive).
8. From here, I specified the server name, admin password, and IP

settings.
9. Next, I had to specify what row and group this server should be contained in (I
created new rows and groups then named them whatever I wanted).

10. Finally, I reviewed what we were about to deploy (including the associated costs),
opted to power on the server, and accepted the license agreement.

At this point, I was told that the new server could take up to 45 minutes to be created;
however, after just 5 minutes my new Windows server in the cloud was ready to be
used.
11. Next, select the server and click Connect. Likely you will have to install the VMware
MKS plugin, as I did, to use the console. I did have some trouble connecting to the
server console; however, I was successful when using Firefox, installing the MKS plug-
in as directed, and connecting to the VPN with VPN Connect (an SSL VPN that required
me to install the Cisco AnyConnect VPN Client). Here's what my web console looked
like:

12. From the server console, I updated the VMware Tools by mounting the provided
ISO, installing, and rebooting.
Note that you aren't recommended to use this web-based server console for daily
administration, only to get the server up and running to the point that you can connect to it
via RDP.

After only about 15 minutes of using vCloud Express, I have a working Windows 2008
R2 server with VMware Tools installed, but what remains?

 Configure Outbound and Inbound Internet Access
 Install Applications
I will cover these in a separate vCloud blog post so look for part 2.

In summary, think about this: never before could you have a new Windows or Linux
server up and running on the Internet in under 15 minutes, paying only a few cents
per hour for the resources that you use. vCloud Express is revolutionary in its
simplicity, affordability, and ease of use.

David Davis is a VMware Evangelist and vSphere Video Training Author for Train
Signal. He has achieved CCIE, VCP,CISSP, and vExpert level status over his 15+ years
in the IT industry. David has authored hundreds of articles on the Internet and nine
different video training courses for TrainSignal.com including the popular vSphere video
training package. Learn more about David at his blog or on Twitter and check out a
sample of his VMware vSphere video training course from TrainSignal.com.

Google AppEngine:
A scalable runtime environment, Google App Engine is mostly used to run Web applications.
These applications scale dynamically as demand changes over time, thanks to Google’s vast
computing infrastructure. Because it offers a secure execution environment in addition to a
number of services, App Engine makes it easier to develop scalable and high-performance Web
apps. Google’s applications will scale up and down in response to shifting demand. Cron tasks,
communications, scalable data stores, work queues, and in-memory caching are some of these
services.
The App Engine SDK facilitates the testing and deployment of applications by emulating the
production runtime environment and allowing developers to design and test applications on
their own PCs. When an application is finished, developers can quickly migrate it to App Engine,
put quotas in place to control the costs generated, and make the application available to everyone.
Python, Java, and Go are among the languages that are currently supported.
The development and hosting platform Google App Engine, which powers anything from web
programming for huge enterprises to mobile apps, uses the same infrastructure as Google’s large-
scale internet services. It is a fully managed PaaS (platform as a service) cloud computing platform
that uses in-built services to run your apps. You can start creating almost immediately after
receiving the software development kit (SDK). You may immediately access the Google app
developer’s manual once you’ve chosen the language you wish to use to build your app.
AppEngine is the Google’s Platform to Build Web Application on Cloud. It is the dynamic
Web server with full support for common web technologies. It supports Automatic Scaling &
Load balancing concept. It also has Transactional Datastore model.
Google App Engine (often referred to as GAE or simply App Engine) is a platform as a service
(PaaS) cloud computing platform that provides Web app developers and enterprises with access
to Google's scalable hosting and tier 1 Internet service (for developing and hosting web
applications in Google-managed data centers). The App Engine requires that apps be written in
Java or Python, store data in Google BigTable and use the Google query language. Non-
compliant applications require modification to use App Engine.
Applications are sandboxed and run across multiple servers. App Engine offers automatic scaling
for web applications—as the number of requests increases for an application, AppEngine
automatically allocates more resources for the web application to handle the additional demand.
Google App Engine is free up to a certain level of consumed resources. Fees are charged for
additional storage, bandwidth, or instance hours required by the application. It was first released
as a preview version in April 2008, and came out of preview in September 2011.
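Concretely, App Engine's Python runtime serves ordinary WSGI applications, so the smallest possible app is a single WSGI callable (a real deployment would additionally need an app.yaml file naming the runtime, not shown here). The sketch below exercises such an app locally using only the standard library:

```python
from wsgiref.util import setup_testing_defaults

# App Engine's Python runtime serves standard WSGI applications, so
# the minimal app is just a WSGI callable. The handler body here is
# an illustrative example, not App Engine-specific code.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from App Engine"]

# Exercise the app locally with a synthetic WSGI environment.
environ = {}
setup_testing_defaults(environ)
status_holder = {}

def start_response(status, headers):
    status_holder["status"] = status

body = b"".join(app(environ, start_response))
print(status_holder["status"], body.decode())
# 200 OK Hello from App Engine
```

Because the app is a plain WSGI callable, the same code can be tested on a developer's PC with the SDK and then deployed unchanged, which is the workflow the section above describes.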

Features of App Engine


Runtimes and Languages
To create an application for App Engine, you can use Go, Java, PHP, or Python. You can develop
and test an app locally using the SDK’s deployment toolkit. Each language’s SDK and runtime
are unique. Your program runs in one of the following:
 Java Runtime Environment version 7
 Python Runtime Environment version 2.7
 PHP runtime’s PHP 5.4 environment
 Go runtime 1.2 environment

Generally Usable Features


These are protected by the service-level agreement and deprecation policy of the app engine. The
implementation of such a feature is often stable, and any changes made to it are backward-
compatible. These include communications, process management, computing, data storage,
retrieval, and search, as well as app configuration and management. Features like the HRD
migration tool, Google Cloud SQL, logs, datastore, dedicated Memcached, blob store,
Memcached, and search are included in the categories of data storage, retrieval, and search.
Features in Preview
In a later iteration of the app engine, these functions will undoubtedly be made broadly accessible.
However, because they are in the preview, their implementation may change in ways that are
backward-incompatible. Sockets, MapReduce, and the Google Cloud Storage Client Library are a
few of them.
Experimental Features
These might or might not be made broadly accessible in future app engine updates, and they
might be changed in backward-incompatible ways. The “trusted tester” features, however, are
only accessible to a limited user base and require registration in order to use them. The
experimental features include Prospective Search, Page Speed, OpenID, Datastore
Admin/Backup/Restore, Task Queue Tagging, MapReduce, the Task Queue REST API, OAuth,
and app metrics analytics.
Third-Party Services
As Google provides documentation and helper libraries to expand the capabilities of the app
engine platform, your app can perform tasks that are not built into the core product you are
familiar with as app engine. To do this, Google collaborates with other organizations. Along with
the helper libraries, the partners frequently provide exclusive deals to app engine users.

Advantages of Google App Engine


The Google App Engine has a lot of benefits that can help you advance your app ideas. This
comprises:

1. Infrastructure for Security: The Internet infrastructure that Google uses is arguably the safest
in the entire world. Since the application data and code are hosted on extremely secure servers,
there has rarely been any kind of illegal access to date.
2. Faster Time to Market: For every organization, getting a product or service to market quickly
is crucial. When it comes to quickly releasing the product, encouraging the development and
maintenance of an app is essential. A firm can grow swiftly with Google Cloud App Engine’s
assistance.
3. Quick to Start: You don’t need to spend a lot of time prototyping or deploying the app to users
because there is no hardware or product to buy and maintain.
4. Easy to Use: The tools that you need to create, test, launch, and update the applications are
included in Google App Engine (GAE).
5. Rich set of APIs & Services: A number of built-in APIs and services in Google App Engine
enable developers to create strong, feature-rich apps.
6. Scalability: This is one of the deciding variables for the success of any software. When using
the Google app engine to construct apps, you may access technologies like GFS, Big Table,
and others that Google uses to build its own apps.
7. Performance and Reliability: Among international brands, Google ranks among the top ones.
Therefore, you must bear that in mind while talking about performance and reliability.
8. Cost Savings: To administer your servers, you don’t need to employ engineers or even do it
yourself. The money you save might be put toward developing other areas of your company.
9. Platform Independence: Since the app engine platform only has a few dependencies, you can
easily relocate all of your data to another environment.

Advantages of Google AppEngine:


 Lower total cost of ownership
 Infrastructure for security
 Rich set of APIs
 Scalability
 Fully featured SDK for local development
 Performance and reliability
 Ease of deployment
 Platform independence

Components of AppEngine:
1. SDK
a. APIs
b. Easy deployment software
c. Locally run software
2. Runtime languages
a. Python
b. Java
3. Scalable infrastructure
4. Web-based admin console


[Figure: Google Data Store Architecture]

Google AppEngine vs. Amazon Web Services:

Feature                  Google AppEngine                Amazon Web Services
Cloud services           PaaS                            IaaS, SaaS, PaaS
Platforms supported      Linux, Windows Server           Linux, Solaris, Windows Server
Virtualization used      Application container           OS level, running on a hypervisor
Storage                  BigTable and MegaStore          S3 (Simple Storage Service)
Control interface        API                             API, command line
Languages supported      Java, Python, PHP, .NET, Ruby   Java, PHP, Python, Ruby
Load balancing           Automatic                       Round robin
Data after termination   Google will not take any        Amazon will not take any action
                         action for 90 days after the    for 30 days after the effective
                         effective date of termination   date of termination

Disadvantages of Google AppEngine:


 Violation of Policies
 We are at Google’s Mercy
Module-3
What is Azure?
Azure is Microsoft’s cloud platform, just like Google has its Google Cloud and Amazon has its
Amazon Web Services (AWS). Generally, it is a platform through which we can use Microsoft’s
resources. For example, setting up a huge server would require huge investment, effort, physical
space, and so on. In such situations, Microsoft Azure comes to our rescue. It provides us with
virtual machines, fast processing of data, analytical and monitoring tools, and so on, to make our
work simpler. The pricing of Azure is also simple and cost-effective, popularly termed “Pay As
You Go”, which means you pay only for what you use.

Azure History
Microsoft unveiled Windows Azure in early October 2008, but it went live in February
2010. Later, in 2014, Microsoft changed its name from Windows Azure to Microsoft Azure.
Azure provided a service platform for .NET services, SQL services, and many live services.
Many people were still very skeptical about “the cloud”. As an industry, we were entering a
brave new world with many possibilities. Microsoft Azure is getting bigger and better in the
coming days, with more tools and more functionalities being added. It has had two releases so
far: Microsoft Azure v1 and, later, Microsoft Azure v2. Microsoft Azure v1 was more JSON-
script-driven than the new version, v2, which has an interactive UI for simplification and easy
learning. Microsoft Azure v2 is still in the preview version.

How Azure can help in business?


Azure can help our business in the following ways-
 Capital less: We don’t have to worry about the capital as Azure cuts out the high cost of
hardware. You simply pay as you go and enjoy a subscription-based model that’s kind to
your cash flow. Also, setting up an Azure account is very easy. You simply register in Azure
Portal and select your required subscription and get going.
 Less Operational Cost: Azure has a low operational cost because it runs on its servers
whose only job is to make the cloud functional and bug-free, it’s usually a whole lot more
reliable than your own, on-location server.
 Cost Effective: If we set up a server on our own, we need to hire a tech support team to
monitor it and make sure things are working fine. Also, there might be situations where
the tech support team takes too much time to solve an issue incurred in the server. In this
regard, Azure is far more pocket-friendly.
 Easy Back-Up and Recovery options: Azure keeps backups of all your valuable data. In
disaster situations, you can recover all your data in a single click without your business
getting affected. Cloud-based backup and recovery solutions save time, avoid large up-front
investments and roll up third-party expertise as part of the deal.
 Easy to implement: It is very easy to implement your business models in Azure. With a
couple of on-click activities, you are good to go. Even there are several tutorials to make you
learn and deploy faster.
 Better Security: Azure provides more security than local servers. Be carefree about your
critical data and business applications. As it stays safe in the Azure Cloud. Even, in natural
disasters, where the resources can be harmed, Azure is a rescue. The cloud is always on.
 Work from anywhere: Azure gives you the freedom to work from anywhere and
everywhere. It just requires a network connection and credentials. And with most serious
Azure cloud services offering mobile apps, you’re not restricted to which device you’ve got
to hand.
 Increased collaboration: With Azure, teams can access, edit and share documents anytime,
from anywhere. They can work and achieve future goals hand in hand. Another advantage of
Azure is that it preserves records of activity and data. Timestamps are one example of
Azure’s record-keeping. Timestamps improve team collaboration by establishing
transparency and increasing accountability.

Microsoft Azure Services


Following are some of the services Microsoft Azure offers:
1. Compute: Includes Virtual Machines, Virtual Machine Scale Sets, Functions for serverless
computing, Batch for containerized batch workloads, Service Fabric for microservices and
container orchestration, and Cloud Services for building cloud-based apps and APIs.
2. Networking: With Azure, you can use a variety of networking tools, like the Virtual
Network, which can connect to on-premise data centers; Load Balancer; Application
Gateway; VPN Gateway; Azure DNS for domain hosting, Content Delivery Network,
Traffic Manager, ExpressRoute dedicated private network fiber connections; and Network
Watcher monitoring and diagnostics
3. Storage: Includes Blob, Queue, File, and Disk Storage, as well as a Data Lake Store,
Backup, and Site Recovery, among others.
4. Web + Mobile: Creating Web + Mobile applications is very easy as it includes several
services for building and deploying applications.
5. Containers: Azure has a property that includes Container Service, which supports
Kubernetes, DC/OS or Docker Swarm, and Container Registry, as well as tools for
microservices.
6. Databases: Azure also included several SQL-based databases and related tools.
7. Data + Analytics: Azure has some big data tools like HDInsight for Hadoop Spark, R
Server, HBase, and Storm clusters
8. AI + Cognitive Services: Azure supports developing applications with artificial intelligence
capabilities through services like the Computer Vision API, Face API, Bing Web Search,
Video Indexer, and Language Understanding Intelligent Service (LUIS).
9. Internet of Things: Includes IoT Hub and IoT Edge services that can be combined with a
variety of machine learning, analytics, and communications services.
10. Security + Identity: Includes Security Center, Azure Active Directory, Key Vault, and
Multi-Factor Authentication Services.
11. Developer Tools: Includes cloud development services like Visual Studio Team Services,
Azure DevTest Labs, HockeyApp mobile app deployment and monitoring, Xamarin cross-
platform mobile development, and more.
Microsoft Azure

Microsoft Azure, formerly known as Windows Azure, is Microsoft's public cloud computing
platform. It provides a broad range of cloud services, including compute, analytics, storage and
networking. Users can pick and choose from these services to develop and scale new applications
or run existing applications in the public cloud.
The Azure platform aims to help businesses manage challenges and meet their organizational
goals. It offers tools that support all industries -- including e-commerce, finance and a variety of
Fortune 500 companies -- and is compatible with open source technologies. This gives users the
flexibility to use their preferred tools and technologies. In addition, Azure offers four different
forms of cloud computing: infrastructure as a service (IaaS), platform as a service (PaaS), software
as a service (SaaS) and serverless functions.
Microsoft charges for Azure on a pay-as-you-go (PAYG) basis, meaning subscribers receive a
bill each month that only charges them for the specific resources and services they have used.
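As a rough sketch of how pay-as-you-go metering adds up, the following computes a monthly bill from per-meter usage. The meter names, rates, and quantities are invented for illustration and do not reflect actual Azure pricing.

```python
# Hypothetical pay-as-you-go metering: each meter records units consumed,
# and the monthly bill is the sum of units * unit rate.
# Rates and usage below are invented for the example.
USAGE = [
    {"meter": "VM hours (B2s)",     "units": 300.0, "rate_usd": 0.0416},
    {"meter": "Blob storage (GB)",  "units": 50.0,  "rate_usd": 0.0184},
    {"meter": "Outbound data (GB)", "units": 12.0,  "rate_usd": 0.087},
]

def monthly_bill(usage):
    """Return (line items, total) for one pay-as-you-go billing period."""
    lines = [(u["meter"], round(u["units"] * u["rate_usd"], 2)) for u in usage]
    total = round(sum(cost for _, cost in lines), 2)
    return lines, total

lines, total = monthly_bill(USAGE)
for meter, cost in lines:
    print(f"{meter:20s} ${cost:.2f}")
print(f"{'Total':20s} ${total:.2f}")
```

The key property of the model is visible in the code: the subscriber is billed only for what the meters recorded, with no fixed server cost.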

How does Microsoft Azure work?


Once customers subscribe to Azure, they have access to all the services included in the Azure
portal. Subscribers can use these services to create cloud-based resources, such as VMs and
databases. Azure resources and services can then be assembled into running environments used to
host workloads and store data.
In addition to the services that Microsoft offers through the Azure portal, a number of third-party
vendors also make software directly available through Azure. The cost billed for third-party
applications varies widely but may involve paying a subscription fee for the application, plus a
usage fee for the infrastructure used to host the application. Microsoft provides the following five
different customer support options for Azure:
 Basic
 Developer
 Standard
 Professional Direct (Premier)
 Enterprise

What is Microsoft Azure used for?


Because Microsoft Azure consists of widely varied resource and service offerings, its use cases
are extremely diverse. Running virtual machines or containers in the cloud is one of the most
popular uses for Microsoft Azure. These compute resources can host infrastructure components,
such as domain name system (DNS) servers; Windows Server services, such as Internet
Information Services (IIS); networking services such as firewalls; or third-party applications.
Microsoft also supports the use of third-party operating systems, such as Linux. Azure is also
commonly used as a platform for hosting databases in the cloud. Microsoft offers serverless
relational databases such as Azure SQL, as well as non-relational (NoSQL) databases such as
Azure Cosmos DB.
In addition, the platform is frequently used for backup and disaster recovery. Many organizations
use Azure for archival storage in order to meet their long-term data retention or disaster recovery
(DR) requirements.
Azure products and services
Microsoft sorts Azure cloud services into nearly two dozen categories. Each category can include
numerous specific instance or service types. The most popular service categories include the
following:
Compute. These services enable a user to deploy and manage VMs, containers and batch jobs,
as well as support remote application access. Compute resources created within the Azure cloud
can be configured with either public IP addresses or private IP addresses, depending on whether
the resource needs to be accessible to the outside world.
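The public/private distinction above can be checked programmatically. As a small illustration (not an Azure API), Python's standard `ipaddress` module reports whether an address falls in a private range:

```python
import ipaddress

def address_scope(addr: str) -> str:
    """Classify an IP address as 'private' (reachable only inside the
    virtual network) or 'public' (reachable from the outside world)."""
    ip = ipaddress.ip_address(addr)
    return "private" if ip.is_private else "public"

# A VM with only a private address, and one that also has a public one:
print(address_scope("10.0.1.4"))     # private - typical virtual-network range
print(address_scope("20.42.35.10"))  # public - internet-routable
```

Resources that must be reachable from the internet get an address in the public space; everything else can stay on private addressing behind the virtual network.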
Mobile. These products help developers build cloud applications for mobile devices, providing
notification services, support for back-end tasks, tools for building application program
interfaces (APIs) and the ability to couple geospatial context with data.
Web. These services support the development and deployment of web applications. They also
offer features for search, content delivery, API management, notification and reporting.
Storage. This category of services provides scalable cloud storage for structured and unstructured
data. It also supports big data projects, persistent storage and archival storage.
Analytics. These services provide distributed analytics and storage, as well as features for real-
time analytics, big data analytics, data lakes, machine learning, business intelligence, internet of
things (IoT) data streams and data warehousing.
Networking. This group includes virtual networks, dedicated connections and gateways, as well
as services for traffic management and diagnostics, load balancing, DNS hosting and network
protection against distributed denial-of-service (DDoS) attacks.
Media and content delivery network (CDN). These CDN services include on-demand streaming,
digital rights protection, encoding, and media playback and indexing.
Integration. These are services for server backup, site recovery and connecting private and public
clouds.
Identity. These offerings ensure only authorized users can access Azure services and help protect
encryption keys and other sensitive information in the cloud. Services include support for Azure
Active Directory and multifactor authentication.
IoT. These services help users capture, monitor and analyze IoT data from sensors and other
devices. Services include notifications, analytics, monitoring and support for coding and
execution.
DevOps. This group provides project and collaboration tools, such as Azure DevOps -- formerly
Visual Studio Team Services -- that facilitate DevOps software development processes. It also
offers features for application diagnostics, DevOps tool integrations and test labs for build tests
and experimentation.
Development. These services help application developers share code, test applications and track
potential issues. Azure supports a range of application programming languages, including
JavaScript, Python, .NET and Node.js. Tools in this category also include support for Azure
DevOps, software development kits (SDKs) and blockchain.
Security. These products provide capabilities to identify and respond to cloud security threats, as
well as manage encryption keys and other sensitive assets.
AI and machine learning. This is a wide range of services that a developer can use to infuse AI,
machine learning and cognitive computing capabilities into applications and data sets.
Containers. These services help an enterprise create, register, orchestrate and manage huge
volumes of containers in the Azure cloud, using common container platforms such as Docker
and orchestration platforms including Kubernetes.
Databases. This category includes database as a service (DBaaS) offerings for SQL and NoSQL,
as well as other database instances -- such as Azure Cosmos DB and Azure Database for
PostgreSQL. It also includes Azure SQL Data Warehouse support, caching, and hybrid database
integration and migration features. Azure SQL is the platform's flagship database service. It is a
relational database that provides SQL functionality without the need for deploying a SQL server.
Migration. This suite of tools helps an organization estimate workload Migration costs and
perform the actual migration of workloads from local data centers to the Azure cloud.
Management and governance. These services provide a range of backup, recovery, compliance,
automation, scheduling and monitoring tools that can help a cloud administrator manage an
Azure deployment.
Mixed reality. These services are designed to help developers create content for the Windows
Mixed Reality environment.
Blockchain. The Azure Blockchain Service lets you join a blockchain consortium or create your
own.
Intune. Microsoft Intune can be used to enroll user devices, thereby making it possible to push
security policies and mobile apps to those devices. Mobile apps can be deployed either to groups
of users or to a collection of devices. Intune also provides tools for tracking which apps are being
used. A remote wipe feature allows the organization's data to be securely removed from devices
without removing a user's mobile apps in the process.

Azure as PaaS (Platform as a Service)


As the name suggests, a platform is provided to clients to develop and deploy software. Clients
can focus on application development rather than having to worry about hardware and
infrastructure. Azure also takes care of most operating system, server and networking issues.
Pros
 The overall cost is low as the resources are allocated on demand and servers are
automatically updated.
 It is less vulnerable, as servers are automatically updated and checked for all known
security issues. The update process is handled by the platform, invisible to the developer,
and thus does not pose a risk of data breach.
 Since new versions of development tools are tested by the Azure team, it becomes easy for
developers to move to new tools. This also helps developers meet customer demand by
quickly adapting to new versions.
Cons
 There are portability issues with using PaaS. The environment on Azure can differ from
the original environment, so the application might have to be adapted accordingly.

Azure as IaaS (Infrastructure as a Service)


It is a managed compute service that gives application developers complete control of the
operating system and the application platform stack. It lets users access, manage and monitor
the data centers by themselves.
Pros
 This is ideal for applications where complete control is required. The virtual machine can
be completely adapted to the requirements of the organization or business.
 IaaS facilitates very efficient design-time portability. This means an application can be
migrated to Windows Azure without rework; all application dependencies, such as the
database, can also be migrated to Azure.
 IaaS allows quick transition of services to clouds, which helps the vendors to offer services
to their clients easily. This also helps the vendors to expand their business by selling the
existing software or services in new markets.
Cons
 Since users are given complete control, they are tempted to stick to a particular version of
their applications' dependencies. It might then become difficult for them to migrate the
application to future versions.
 There are many factors that increase the cost of operation, for example, higher server
maintenance for patching and upgrading software.
 There are many security risks from unpatched servers. Some companies have well-defined
processes for testing and updating on-premises servers for security vulnerabilities. These
processes need to be extended to the cloud-hosted IaaS VMs to mitigate hacking risks.
 Unpatched servers pose a great security risk. Unlike PaaS, there is no provision for
automatic server patching in IaaS. An unpatched server holding sensitive information can
be very vulnerable, affecting the entire business of an organization.
 It is difficult to maintain legacy apps in IaaS. They can become stuck on older versions of
operating systems and application stacks, resulting in applications that are difficult to
maintain and extend with new functionality over time.
 It is necessary to understand the pros and cons of both services in order to choose the right
one for your requirements. In conclusion, PaaS has definite economic advantages over IaaS
for operating commodity applications: in IaaS, the cost of operations can break the business
model. IaaS, on the other hand, gives complete control of the OS and application platform
stack.
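The trade-offs above can be condensed into a toy decision rule. The criteria and their weighting here are a simplification for illustration, not official guidance:

```python
def recommend_service_model(needs_os_control: bool,
                            legacy_dependencies: bool,
                            ops_team_available: bool) -> str:
    """Toy decision rule distilled from the pros and cons above:
    IaaS when OS-level control is required or legacy stacks must run
    (budgeting for the patching burden if no ops team exists);
    otherwise PaaS, which offloads server updates to the platform."""
    if needs_os_control or legacy_dependencies:
        return "IaaS" if ops_team_available else "IaaS (budget for patching/ops)"
    return "PaaS"

print(recommend_service_model(False, False, False))  # PaaS
print(recommend_service_model(True, False, True))    # IaaS
```

A real decision would also weigh cost projections, compliance requirements, and team skills; the sketch only captures the control-versus-operations trade-off discussed above.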

Windows Azure Platform


The Windows Azure platform provides the foundation for running applications and keeping data
in the cloud. It consists of compute services, storage services and the fabric. Windows Azure
affords a wide range of capabilities: compute services to run applications, storage services, and
a framework that supports several applications as well as host services and manages them all
centrally.

The Azure platform is a group of three cloud technologies as shown below:

Windows Azure

Windows Azure provides a virtual Windows runtime for executing applications and storing data
on computers in Microsoft data center which includes computational services, basic storage,
queues, web servers, management services, and load-balancers. This also offers a local
development fabric for building and testing services before they are deployed to Windows Azure
in the cloud. Applications developed for Windows Azure scale better, are more reliable and
require less administration than those developed through the traditional Windows programming
model. Users simply pay for the computing and storage they consume, instead of maintaining an
enormous set of servers.

AppFabric (.NET Services)

The major backbone of the Windows Azure platform is AppFabric, a cloud-based infrastructure
service for applications running in the cloud that allows the creation of combined access and
distributed messaging across clouds and enterprises. The goal of AppFabric is to bring together
massive distributed processing power in a unified manner. AppFabric is a middleware component
that consists of services such as Access Control, Workflow Service and Service Bus.

SQL Azure

The core RDBMS is offered by SQL Azure as a service in the cloud environment. Developers can
access it using a tabular data stream, the typical way to access on-premises SQL Server instances.
Developers can create tables, indexes and views, use stored procedures and define triggers, just
as with SQL Server. Application software can access SQL Azure data using Entity Framework,
ADO.NET and other Windows data access interfaces. A significant benefit of SQL Azure is that
management requirements are significantly reduced: users need not worry about operational
chores such as monitoring disk usage and service log files.
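Since SQL Azure exposes the familiar SQL Server feature set, the same object types can be illustrated locally. The sketch below uses Python's built-in sqlite3 as a stand-in to show tables, indexes, views and triggers working together; the actual T-SQL syntax on SQL Azure differs slightly.

```python
import sqlite3

# Local sqlite3 stand-in for the kinds of objects SQL Azure supports
# (tables, indexes, views, triggers). This only illustrates the concepts;
# it is not SQL Azure itself.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, audited INTEGER DEFAULT 0);
    CREATE INDEX idx_orders_amount ON orders(amount);
    CREATE VIEW big_orders AS SELECT * FROM orders WHERE amount > 100;
    CREATE TRIGGER mark_audited AFTER INSERT ON orders
        BEGIN UPDATE orders SET audited = 1 WHERE id = NEW.id; END;
""")
cur.execute("INSERT INTO orders (amount) VALUES (?)", (250.0,))
cur.execute("INSERT INTO orders (amount) VALUES (?)", (40.0,))
conn.commit()

# The view filters rows, and the trigger fired automatically on insert:
print(cur.execute("SELECT COUNT(*) FROM big_orders").fetchone()[0])              # 1
print(cur.execute("SELECT audited FROM orders WHERE amount = 250.0").fetchone()[0])  # 1
```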

Azure Marketplace

The Windows Azure Marketplace contains data and various other application market segments,
including data and web services from leading commercial data providers and authorized public
data sources. The Windows Azure Marketplace is divided into the following two categories:

1. App Market: It exposes the applications or services built by developers to potential
customers, so that they can easily choose from them to meet their needs.
2. Data Market: Today, many organizations express their readiness to sell many kinds of
data, including demographic information, financial information, legal information, and
much more. Hence, Data Market offers a chance to expose their offerings to more
customers using Microsoft’s cloud platform. In simple words, Data Market provides a
single place to find, buy, and access a variety of commercial datasets.

Azure Development Life Cycle

1. Create a Windows Azure account and Login using Microsoft Live ID.
2. Prepare the development fabric to build an application in the local cloud platform.
3. Test the application in the development fabric.
4. Package the application for cloud deployment.
5. Test the application on Windows Azure in the cloud.
6. Deploy the application in the production farm.
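The six steps above can be sketched as an ordered pipeline in which each stage is gated on the previous one. The bookkeeping below is purely illustrative, not an Azure API:

```python
# Sketch of the lifecycle above as an ordered pipeline; completing a step
# is only allowed once every earlier step is done (hypothetical bookkeeping).
STEPS = [
    "create account and log in",
    "prepare development fabric",
    "test in development fabric",
    "package for cloud deployment",
    "test on Windows Azure",
    "deploy to production farm",
]

def complete(done: list, step: str) -> list:
    """Record a finished step, refusing out-of-order completion."""
    idx = STEPS.index(step)
    if done != STEPS[:idx]:
        raise ValueError(f"cannot run {step!r} before its prerequisites")
    return done + [step]

done = []
for s in STEPS[:3]:
    done = complete(done, s)
print(done[-1])  # test in development fabric
```

Attempting to package before local testing, for example, raises an error, mirroring the intent of the ordered checklist.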
Salesforce
Salesforce, Inc. is a cloud computing and social enterprise software-as-a-service (SaaS) provider
based in San Francisco. Founded in March 1999 by former Oracle executive Marc Benioff, Parker
Harris, Dave Moellenhoff and Frank Dominguez, the company started off as a customer
relationship management (CRM) platform vendor. Salesforce has transformed into a SaaS
powerhouse over time, offering multiple cloud platforms that serve specialized purposes. In
August 2022, Salesforce announced it had revenue of $7.72 billion, growing 22% year over year.

The main premise behind Salesforce is to deliver affordable CRM software as an online service.
Before Salesforce, most companies hosted CRM software on their servers or used local resources,
which required a great deal of time and financial investment.

Salesforce offers a pay-as-you-go subscription model and houses all the data in the cloud, which
makes it easily accessible from any internet-connected device. Contact Salesforce for pricing
information.

What does Salesforce do?

Salesforce offers a diverse infrastructure of software products designed to help teams from
different industries -- including marketing, sales, IT, commerce and customer service -- connect
with their customers. For example, by accessing the Salesforce Customer 360 app, teams across
an entire organization can connect and share a single view of customer data on an integrated
platform.

Salesforce CRM provides helpful insights into customer behavior and needs through customer
data analysis. By bridging the gaps between data silos from different departments, Salesforce
provides a holistic view of every customer interaction with a brand.

Why is Salesforce used?

Salesforce enables organizations of every size and industry to better understand and connect with
their customers at a deeper level and grow their customer base. Businesses typically integrate
Salesforce into their ecosystem so employees can share customer views from any device,
regardless of their department or location.

Salesforce provides a 360-degree view of the customer lifecycle with streamlined workflows,
centralized cloud-based data management and real-time tracking of customer analytics. According
to Salesforce, more than 150,000 companies -- from small businesses to Fortune 500 companies -
- use its secure and scalable cloud platform.

For example, Pardot, which was renamed Marketing Cloud Account Engagement in April 2022,
is a Salesforce business-to-business (B2B) marketing automation tool that's designed to help
organizations accelerate their sales with better sales intelligence, generate high-quality leads with
powerful marketing tools, automate lead qualification and nurturing, and track campaign
performance.
Salesforce cloud services

Salesforce offers a diverse portfolio of products and services -- from CRM software and marketing
and sales management options to advanced analytics. Of its cloud platforms and applications, the
company is best known for its Salesforce CRM product, which is composed of the Sales Cloud,
Marketing Cloud, Service Cloud, Experience Cloud, Commerce Cloud and the Analytics Cloud.

Other Salesforce cloud offerings that address specific applications and industries include the App
Cloud, IoT Cloud, Financial Services Cloud, Health Cloud, Integration Cloud, Manufacturing
Cloud, Education Cloud, Nonprofit Cloud and the Vaccine Cloud.

The following list examines Salesforce cloud services and their prominent features.

1. Salesforce Sales Cloud enables sales teams to focus on the sales components of CRM in
addition to customer support. The main features of the Sales Cloud include the following:

 helps track customer information and interactions in one place;
 automates complex business processes;
 provides pipeline and forecast management;
 helps keep all information up to date;
 nurtures leads; and
 helps monitor the effectiveness of marketing campaigns.

2. Salesforce Marketing Cloud combines all marketing channels in one place and automates the
marketing processes. The main features of the Marketing Cloud include the following:

 personalizes email marketing at scale;
 engages with mobile messaging and mobile apps;
 connects social to marketing, sales and service;
 helps manage ad campaigns to increase customer acquisition;
 delivers personalized and efficient web content;
 automates the export of marketing data into databases; and
 creates one-to-one customer journeys across channels.

3. Salesforce Service Cloud provides a fast, artificial intelligence (AI)-driven customer service
and support experience to customers and enables businesses to scale their operations efficiently.
The main features of the Salesforce Service Cloud include the following:

 enables service teams to communicate in real time with customers through the Live Agent
tool;
 offers seamless collaboration with customers and faster query resolutions with the
integration of Slack-First Customer 360;
 enables customers to reach across multiple digital channels, including mobile messaging,
AI-powered live chat, social media and email;
 helps set up a self-service center for customers that includes communities and convenient
options for booking appointments, paying bills or checking account balances;
 uses omnichannel routing to automatically deliver cases and leads to certain employees
based on their skill sets or availability;
 helps turn insights into actions with the Salesforce Wave analytics app; and
 provides a comprehensive view of workforce management -- including order placement,
delivery, scheduling, installation and tracking -- through the Salesforce Field Service
option.
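The omnichannel-routing idea, matching a case to an agent by skill and availability, can be sketched as follows. The agent data and matching rule are hypothetical and far simpler than Service Cloud's real routing engine:

```python
# Hypothetical sketch of omnichannel routing: pick the first available
# agent whose skills cover the case topic. Real Service Cloud routing
# also considers queues, capacity, and presence states.
AGENTS = [
    {"name": "Asha",  "skills": {"billing", "returns"},   "available": True},
    {"name": "Ben",   "skills": {"technical"},            "available": False},
    {"name": "Chloe", "skills": {"technical", "billing"}, "available": True},
]

def route_case(topic: str, agents=AGENTS):
    """Return the name of the first available, skilled agent, or None."""
    for agent in agents:
        if agent["available"] and topic in agent["skills"]:
            return agent["name"]
    return None  # no match: fall back to a general queue

print(route_case("technical"))  # Chloe (Ben has the skill but is away)
print(route_case("billing"))    # Asha
```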

4. Salesforce Experience Cloud enables organizations to create connected digital experiences so
they can expand their reach across multiple digital channels while maintaining their brand identity.
It was formerly known as the Community Cloud, which only enabled users to build communities,
whereas the Experience Cloud enables users to create mobile apps, landing pages, portals and help
centers. The main features of the Experience Cloud include the following:

 helps deliver personalized content based on a customer's expertise, interests or other
demographics;
 enables the infusion of data from various digital experiences and third-party platforms
under one roof;
 provides endless customization options for creating experiences. For example, the
Lightning App Builder enables a quick setup of mobile responsive forms for gathering
customer information and feedback; and
 provides options to recognize and reward active members of the community by
dispatching scores, ranks and recognition badges.

5. Salesforce Commerce Cloud unifies the way businesses engage with customers over any
channel. It offers a suite of apps and software services that focus on the e-commerce business. The
main features of the Salesforce Commerce Cloud include the following:

 offers a personalized and engaging shopping experience on websites;
 provides highly relevant product recommendations to customers through the Salesforce
Einstein AI predictive intelligence platform;
 offers real-time reports and dashboards for analytics;
 enables businesses to manage digital commerce with integrated options for commerce,
point of sale and order management;
 helps launch new sites and create new customer experiences; and
 brings stores online and integrates partner technologies.

6. Salesforce Analytics Cloud, or Salesforce Wave Analytics, is a business intelligence platform
that enables organizations to instantly get answers and start making data-driven decisions. Powered
by Einstein Analytics and Tableau tools, this platform enables medium and large-sized
organizations to extract and analyze huge amounts of data efficiently and quickly. The main
features of the Analytics Cloud include the following:
 enables users to act on data instantly;
 connects easily to Sales and Service Cloud data;
 enables users to view, filter, group, measure and share data from their mobile devices
through the mobile-first analytics app;
 analyzes data for better insights and uses analytics apps for every function, including
sales, service, marketing, human resources and IT;
 provides a dynamic visualization engine for data builders and business users; and
 offers multiple dashboards to accept both structured and unstructured data from tools
including enterprise resource planning, CRM, RFID sensors, websites and social media.

7. Salesforce App Cloud is a collection of development tools that enable developers to quickly
and intuitively create applications that run on the Salesforce platform without writing code. App
Cloud provides native integration, eliminating the need for IT. It enables users to build apps that
integrate customer data for more engaging customer experiences. It helps automate business
processes and extend powerful APIs for added security. Tools in the App Cloud include the
following:

 Salesforce Platform, formerly known as Force.com, is a platform as a service (PaaS), which
enables admins and developers to create websites and applications with Apex that integrate
into the main Salesforce.com application.
 AppExchange is a custom application building and sharing platform for third-party
applications that run on the Force.com platform.
 Heroku Enterprise gives developers the flexibility to create apps using preferred languages
and tools.
 Salesforce Shield protects enterprises with data encryption tools that enhance transparency
and compliance across all apps.
 Salesforce DX enables users to manage and develop Salesforce apps across the entire
lifecycle.
 Salesforce Identity provides a single, trusted identity for employees, partners and customers
that enables users to manage apps and data.
 Salesforce Trailhead is a series of online tutorials that teaches various level developers how
to code for the Salesforce platform.
 Salesforce Sandbox enables developers to test ideas in a safe and isolated development
environment.
 Salesforce Connect enables users to connect and access data from other Salesforce
organizations and external sources without leaving the native Salesforce environment.

8. Salesforce IoT Cloud uses the power of IoT to turn data generated by customers, smart
devices, partners and sensors into useful customer data. The main features of the IoT Cloud
include the following:
 enables users to process massive quantities of data received from different processes,
locations and network devices;
 builds orchestration rules with intuitive tools to provide a low-code approach;
 engages with customers in real time;
 uses Einstein Analytics to provide advanced analytics gathered through a variety of
sources, including sensors, hardware components and portals; and
 records and evaluates previous activities and actions through the customer context tool to
make real-time decisions.
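The orchestration-rule idea, turning raw device readings into actions, can be sketched as a small threshold-rule evaluator. The metrics, limits, and action names below are invented for illustration:

```python
# Hypothetical orchestration rules in the IoT Cloud spirit: evaluate
# incoming sensor readings against threshold rules and emit actions.
# Thresholds and action names are invented for the example.
RULES = [
    {"metric": "temperature_c", "op": ">", "limit": 75.0, "action": "open_service_case"},
    {"metric": "battery_pct",   "op": "<", "limit": 15.0, "action": "notify_customer"},
]

def evaluate(reading: dict) -> list:
    """Return the list of actions triggered by one device reading."""
    actions = []
    for rule in RULES:
        value = reading.get(rule["metric"])
        if value is None:
            continue  # this reading doesn't report the rule's metric
        hit = value > rule["limit"] if rule["op"] == ">" else value < rule["limit"]
        if hit:
            actions.append(rule["action"])
    return actions

print(evaluate({"device": "thermostat-7", "temperature_c": 82.0}))  # ['open_service_case']
print(evaluate({"device": "sensor-3", "battery_pct": 40.0}))        # []
```

A low-code rule builder essentially lets users assemble such metric/operator/limit/action tuples without writing the evaluator themselves.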

9. Salesforce Financial Services Cloud is powered by Lightning and is a combination of Sales
and Service Clouds plus a managed package that's useful for the financial services industry. The
main features of the Financial Services Cloud include the following:

 provides real-time access to critical data;
 offers visibility into unique customer journeys with helpful insights throughout the
customer lifecycle;
 helps deliver experiences that drive client loyalty through personalized tools;
 provides more visibility into existing household opportunities and the ability to track
referrals;
 enables instant access to all client data in one central location; and
 addresses regulatory compliance.

10. Salesforce Health Cloud 2.0 enables businesses and government agencies to offer better
safety and health for their employees, communities and customers. Its mission is to improve
patient care during each step of the healthcare process -- from the first point of contact to
medical billing. The main features of the Health Cloud 2.0 include the following:

 creates a profile for each member that includes demographics, communications and other
pertinent information in one location;
 monitors cases and prioritizes tasks based on levels of importance;
 enhances electronic health record systems by unlocking them and incorporating apps in a
secure and flexible platform;
 enables patients to track progress toward health goals, care plans and post-acute care; and
 helps track patient itineraries and detect system loopholes.

11. Salesforce Integration Cloud provides a single view of customer data for large businesses
and enterprises. This cloud helps users connect large amounts of data spread across the various
cloud platforms. The main features of the Integration Cloud include the following:

 provides the Lightning Flow feature, which enables the creation of personalized customer
experience across all units including sales, service and marketing;
 enables customer service reps to transform service interactions into cross-selling and
upselling opportunities, without ever leaving their console through the Lightning app builder
feature;
 provides easy integration with third-party apps to optimize business and development
processes; and
 helps with smart decisions and data optimization as data is pulled from all sources.

12. Salesforce Manufacturing Cloud is geared toward manufacturing companies and enables
them to view and collaborate between the operations and sales departments. Workers can also
access customer information through sales agreements and account-based forecasting. The main
features of the Manufacturing Cloud include the following:

 provides a sales agreement feature that offers visibility into all customer negotiations and
contract terms;
 offers native lead management that can be modified for any business needs;
 enables manufacturers to view the current business as well as identify future
opportunities for improvements through the account-based forecasting feature; and
 enables account managers to customize permissions and different settings for each
position on the team and for each workflow.

13. Salesforce Education Cloud combines Salesforce Lightning with the Education Data
Architecture to provide student management and engagement, academics, admissions and other
support functions. It delivers the technology required to manage the entire student lifecycle, from
kindergarten to graduation. The main features of the Salesforce Education Cloud include the
following:

 maps student journeys and provides budget tracking, campaign management, social
marketing and personalized messaging through the marketing automation feature;
 provides a sales automation feature that offers easy enrollment from the pre-lead stage to
final enrollment;
 automates the grant concepts, funding, budget tracking, sponsor updates and loan
applications for both internal and third-party vendors. This cuts down on the communication
and follow-up involved in student loan processing;
 offers easy recruitment and outreach to prospective students by consolidating data in one
single place; and
 provides a collaborative experience for students across the campus by connecting multiple
departments.

14. Salesforce Nonprofit Cloud helps nonprofit organizations, such as fundraising organizations
and educational institutions, expand their reach digitally and enhance their connections with
people. It aligns fundraising, marketing, program management and technology teams and offers a
consolidated view across all activities and operations. The main features of the Nonprofit Cloud
include the following:

 comes with a fundraising feature that provides a holistic view of partners and donors by
streamlining communications between both entities;
 connects with potential donors, regardless of their geographic location, through the
digital-first fundraising strategy to help establish viable donor relationships across
various channels; and
 offers built-in templates that enable companies to engage with their constituents,
supporters and partners through personalized messages and email marketing.

15. Salesforce Vaccine Cloud was introduced in early 2021 to help healthcare organizations,
nonprofits and schools operate safely by building and managing vaccine programs at scale quickly
and efficiently. The main features of the Vaccine Cloud include the following:

 consolidates all data sources into a single view for easy accessibility;
 provides vaccine inventory management that helps organizations maintain adequate
vaccine doses, syringes and personal protective equipment stock levels as well as provides
a forecast of demand;
 helps screen vaccine registrants and gather digital consent;
 helps with analysis of communitywide vaccine results; and
 provides contactless visits through quick response codes, on-demand appointment
scheduling and self-service options.

Salesforce technologies

Salesforce offers several innovative technologies that help connect customers, companies,
developers and business partners. Apex is an object-oriented programming language that enables
developers to execute flow and transaction control statements on the Salesforce platform. Apex is
integrated, easy to use, data-focused, hosted, multi-tenant aware, automatically upgradeable, easy
to test and versioned.
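Apex itself runs only inside the Salesforce platform, but external clients commonly reach the same data through Salesforce's REST API, whose query endpoint takes a URL-encoded SOQL string. A minimal sketch of building that request URL (the org instance URL and API version below are hypothetical):

```python
from urllib.parse import quote

def soql_query_url(instance_url: str, api_version: str, soql: str) -> str:
    """Build the REST endpoint Salesforce exposes for SOQL queries:
    /services/data/vXX.X/query?q=<url-encoded SOQL>."""
    return f"{instance_url}/services/data/v{api_version}/query?q={quote(soql)}"

# Hypothetical org URL and version; the SOQL syntax itself is standard.
url = soql_query_url("https://example.my.salesforce.com", "58.0",
                     "SELECT Id, Name FROM Account LIMIT 5")
print(url)
```

An actual request would also need an OAuth bearer token in the Authorization header; the sketch only shows how the query string is shaped.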

Visualforce is a framework that enables developers to create dynamic, reusable interfaces that
can be hosted natively on Salesforce. They can create entire custom pages inside a Salesforce
organization or associate their logic with a controller class written in Apex. Developers can use
Visualforce pages to override standard buttons and tab overview pages, define custom tabs,
embed components in detail page layouts, create dashboard components, customize sidebars in
the Salesforce Console and add menu items.

Lightning is an improved version of Salesforce Classic. Its component-based framework enables
developers to build responsive SaaS applications for any device. It enables business users with
minimal or no coding skills to build third-party user applications on top of Salesforce apps.

Salesforce Einstein is a comprehensive AI technology for CRM developed for the Salesforce
Customer Success Platform. Einstein is designed to give sales and marketing departments more
complete and up-to-date views of customers and potential clients. It's designed to make
Salesforce Customer 360 more intelligent and to bring AI to trailblazers everywhere.

Benefits of Salesforce
Salesforce products are designed to help organizations meet customer expectations and enhance
customer satisfaction. The following are popular benefits of using Salesforce:
1. Time management. Salesforce offers comprehensive customer information and planning
resources in one centralized location. Organizations can save time, as they don't have to
search through logs and other important files. The built-in calendar tool helps them
visualize daily, weekly, monthly or yearly schedules, which helps with setting
meetings, planning projects and staying on top of leads so they can be quickly
converted into customers.
2. Increased revenue. Salesforce helps organizations sort through vast amounts of data,
which if done manually can take a lot of time and effort. By incorporating Salesforce,
organizations can spend less time on administrative tasks and more time building
successful customer relationships.
3. Easy accessibility. Salesforce enables organizations to safely access important files and
client updates anywhere with an internet connection. The Salesforce app is supported on
various mobile platforms and devices, including Apple iOS and Android OS. This
provides great flexibility for business owners who are always on the go or travel
frequently.
4. Enhanced collaboration. Salesforce Chatter provides swift and easy communication
between team members. It enables team members to collaborate individually or within
groups regarding work-related tasks, and members from different teams can be added to
accounts or activities that require extra attention.
5. Business scalability. Salesforce's underlying architecture can rapidly scale to
accommodate the needs of businesses and their customers.
6. Seamless integration. Salesforce can be easily integrated with most third-party apps, such
as Gmail or accounting software. Some of the third-party apps that Salesforce integrates
with include Google Cloud, WhatsApp, QuickBooks, LinkedIn, Mailchimp, Dropbox and
Heroku.
7. Trustworthy reporting. Salesforce keeps track of pertinent business data from all business
channels -- social media, app information, website analytics, business software -- and
keeps it organized. This feature is designed to sort and analyze vast amounts of data with
accuracy.

Salesforce Architecture

In its basic form, Salesforce architecture is a multi-tenant architecture built up of a series of
interconnected layers. The important thing about this architecture is that it shares database
resources among all its tenants while storing each tenant's data securely. This architecture
offers an easy-to-use interface so users can operate Salesforce software effortlessly.

Of the many layers of the Salesforce architecture, the Salesforce platform layer serves as the
foundation of the architecture. This layer is powered by metadata and includes vital
components such as data services, AI services, and APIs. Here, metadata consists of custom
setups, scripts, and functions; it helps to access data from databases quickly, and the APIs
help to communicate with other systems seamlessly. Moreover, the top layer of the architecture
consists of the Salesforce apps such as Sales, Service, Marketing, and so on.

Salesforce Database

Essentially, the Salesforce database is a relational database. This is where you can store your
customer information in database objects, and the Salesforce database uses object record tables
for storing data. The data may include customer names, email addresses, contact numbers, sales
history, etc. The Salesforce database provides many excellent features, such as reliability,
security, and flexibility, to its users. Additionally, the functionality of the Salesforce database
remains unaffected even when variations occur in the scaling of applications; it remains
balanced regardless of changes in data storage and processing power.

Data Modeling Components of the Salesforce Database

Objects, Relationships, and Schema Builder are the three crucial data modeling
components of the Salesforce database. Let's learn about them in detail, one by one, below:

1. Objects

Objects are nothing but tables in the Salesforce database. Three types of objects are used
in the Salesforce database: Standard, Custom, and External. The first, standard objects,
are the prebuilt objects in the database, such as the Account, Contact, and Lead objects.
Custom objects are the ones you can create based on business needs. For example, if you
are running a retail business, you can create an object like 'Orders'. Custom objects provide
custom layouts that help build analytics and reporting on the objects. External objects, the
third type, support mapping data that lives outside the Salesforce database.

Know that every object in the Salesforce database consists of records and fields. The rows
of the tables are known as records, and the columns are known as fields.
2. Fields

When considering fields, both standard and custom objects come with prebuilt standard
fields, known as the identity, name, and system fields. The identity field is one of the
essential components of the Salesforce database: every record in an object has a field that
holds a unique identifier for that record. The name field is another standard field, holding
the record's name; sometimes it can be a number instead. The system field is a read-only
field that shows who last modified the record. Apart from these three standard fields, every
object can have fields such as checkboxes, dates, formulas, numbers, etc.

3. Records

The Salesforce database allows creating records on objects once you have finalized the
required fields for the objects. For example, suppose you need to insert a new customer
into the customer table in the database; in that case, you generate a new record for the
new customer on the customer table.
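The object/field/record model described above can be sketched in Python. This is a toy in-memory illustration of the concepts (object = table, record = row, field = column, plus the standard identity/name/system fields), not Salesforce's actual implementation; all class and field names here are hypothetical.

```python
import uuid
from datetime import datetime, timezone

class SObject:
    """Toy stand-in for a Salesforce object: a table whose rows are
    records and whose columns are fields."""

    def __init__(self, name, custom_fields):
        self.name = name                      # e.g. "Contact" or "Order__c"
        self.custom_fields = custom_fields    # columns beyond the standard fields
        self.records = {}                     # identity -> record (row)

    def create_record(self, modified_by, **values):
        record_id = str(uuid.uuid4())         # standard "identity" field: unique per record
        record = {
            "Id": record_id,
            "Name": values.pop("Name", record_id),         # standard "name" field
            "LastModifiedBy": modified_by,                 # read-only "system" field
            "LastModifiedDate": datetime.now(timezone.utc).isoformat(),
        }
        record.update({f: values.get(f) for f in self.custom_fields})
        self.records[record_id] = record
        return record

# Usage: insert a new customer into a custom "Customer" object.
customers = SObject("Customer__c", ["Email", "Phone"])
row = customers.create_record("admin@example.com", Name="Acme Corp",
                              Email="info@acme.example", Phone="555-0100")
print(row["Name"])  # Acme Corp
```

The dictionary keyed by `Id` mirrors how every record carries a unique identifier, while the `custom_fields` list plays the role of the extra columns you would define on a custom object.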

4. Relationships

As you know, Salesforce adopts a relational database structure, so you can link multiple
tables in a Salesforce database together and easily share information. Note that we need to
set up relationships for custom objects; this is not required for standard objects.

There are three types of relationships in the Salesforce database: Look-up, Master-detail,
and Hierarchical. Let's have a look at them below:

5. Look-up

This relationship represents a link between two objects in the Salesforce database, where
one object looks up another based on their relation. You can use a look-up relationship
when two tables are related only in certain respects. The relationship can be one-to-one
or one-to-many.

6. Master-detail

In this relationship, one object acts as the master and another acts as the detail. The master
object controls the behavior of the detail object; for instance, the master object decides who
can view the detail object's data. In a way, the master-detail relationship is a tightly coupled
one: if you delete the master object, the detail records are deleted along with it.

A simple but essential note: use a master-detail relationship when objects are always
related, and a look-up relationship when objects are related only sometimes.
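The difference between the two relationship types can be made concrete with a toy Python sketch (my own illustration, not Salesforce's actual engine): deleting a master cascades to its detail records, while a look-up child merely loses the reference and survives.

```python
class Org:
    """Toy sketch showing how a master-detail relationship cascades
    deletes while a look-up relationship does not."""

    def __init__(self):
        self.records = {}        # record_id -> data
        self.links = []          # (child_id, parent_id, kind); kind is "lookup" or "master_detail"

    def add(self, record_id, **data):
        self.records[record_id] = data

    def relate(self, child_id, parent_id, kind):
        self.links.append((child_id, parent_id, kind))

    def delete(self, record_id):
        self.records.pop(record_id, None)
        for child, parent, kind in list(self.links):
            if parent == record_id:
                self.links.remove((child, parent, kind))
                if kind == "master_detail":
                    self.delete(child)   # detail records die with their master
                # a look-up child merely loses the reference and survives

org = Org()
org.add("acct1", name="Acme")            # master record
org.add("inv1", amount=100)              # detail of acct1
org.add("note1", text="call back")       # loosely related via look-up
org.relate("inv1", "acct1", "master_detail")
org.relate("note1", "acct1", "lookup")

org.delete("acct1")
print(sorted(org.records))   # ['note1'] -- the detail record was cascaded away
```

The invoice (detail) disappears with its account (master), while the note (look-up child) remains, matching the rule stated above.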
7. Hierarchical

It is yet another type of relationship but a special one. You can use this relationship only for
user objects. This relationship helps to build management chains between users.

8. Schema Builder

It is a tool for visualizing, understanding, and editing data models; you can also create
fields and objects with it. With this tool, you can quickly brief team members on the
customizations you have made in the Salesforce software and clearly see how data flows
through the system.

Microsoft Office Online
Microsoft Office Online is a suite of online applications that lets you create Word documents,
Excel spreadsheets, and more. You can store the documents you create—plus any other files
you want—on Microsoft OneDrive, an online file storage service. Both of these tools are
accessible from anywhere with an Internet connection, and both are free. In this lesson, you'll
learn more about the features and advantages of Office Online and OneDrive. You'll also
get an idea of what to expect from the rest of this tutorial.

OneDrive was previously known as SkyDrive; Office Online was previously known as Office
Web Apps. You may occasionally see references to SkyDrive and Office Web Apps while
using these services.

What is Office Online?

Office Online is a free basic version of the most popular programs in the Microsoft Office
suite. It lets you create Word documents, Excel spreadsheets, and more without having to
buy or install software. There are four Office Online apps:

 Word: For creating text documents
 Excel: For working with spreadsheets
 PowerPoint: For creating presentations
 OneNote: For taking and organizing notes
You don't need to install anything on your computer to use Office Online. Instead, you work
with it online using a service called Microsoft OneDrive.

What is OneDrive?

OneDrive is a free online storage space you can use as your own
personal online hard drive. When you create a document with Office
Online, it will be saved to your OneDrive. You can store other files
there as well. This type of online storage is referred to as the cloud.
Because Office Online and OneDrive are based in the cloud, you can
access them from any device with an Internet connection at any time.
Review our lesson on Understanding the Cloud to learn more about the
basics of cloud computing.

Once you’ve used Office Online and OneDrive to store files in the cloud, you can edit and share them
without ever having to download them to your computer. You can also upload files from your
computer, including photos and music. You can even sync your computer and OneDrive so any
changes you make to your files are automatically copied between the cloud and your computer. As you
can see below, working with the cloud makes all of these things possible.

To use Office Online and OneDrive, you'll need a Microsoft account. Getting a Microsoft account
will also give you access to features like email and instant messaging. You'll learn how to create an
account in our lesson on Getting Started with OneDrive.
Visit our Microsoft Account tutorial to learn more about its features.
Why use Office Online and OneDrive?

OneDrive is one of the most popular cloud storage services available today, offering five
gigabytes (5GB) of free storage space. And because OneDrive allows you to share
and edit documents with Office Online, it's easy to collaborate with others.

Of course, Office Online and OneDrive aren't the only services that let you create and store files in
the cloud. Google Drive and Apple's iCloud provide similar features. However, Office Online offers
one major advantage over these other services: It is similar to the desktop versions of Microsoft
Office applications. If you already know how to use these applications, it will be easy for you to start
using Office Online. Also, Office Online and the regular Office applications use the same file types.
This means you can edit the same file in both Office Online and the desktop version.

Limitations of Office Online

While Office Online is a useful tool, it's not perfect. Office Online is a limited version of Microsoft
Office, which means it may be missing some of the features you like to use. You can still create
documents, spreadsheets, and presentations, but they may not look as polished without certain tools.

For example, here are the page layout tools in Word Online:

Here are the page layout tools in Word 2016:

As you can see, the desktop version includes several additional features. Still, if you can't afford to
purchase the full version of Microsoft Office, Office Online is a great (and free) alternative. Keep in
mind that you need to have access to the Internet to use Office Online and OneDrive. If your Internet
connection is unreliable, you may want to keep copies of important files on your computer as well.

Microsoft 365 as a SaaS
Flexibility is what people need to support their mobile activity while still maintaining sustained
productivity. Business productivity poses real challenges to many organizations: how to build
efficiency and effectiveness that also drive productiveness. Microsoft 365 can help you build
a productive modern workplace by empowering the mobile workforce with connected collaboration
in trusted clouds to maximize sales and productivity.
Microsoft 365 reinvents business productivity: creates modern workplaces and enables collaboration
through productive virtual meetings & conversations; secures file sharing, email collaboration, and
connection across your organization. It is a term for many of Microsoft’s cloud software services.
The SaaS (Software as a Service) model lets you subscribe to specific services without needing
to buy software and install it yourself. Microsoft 365 services give you access to your office
anytime you want: it doesn't matter whether you sign in from a tablet, a smartphone, a laptop, or
a desktop, you have access to any of the tools or information you need -- multiple devices and
multiple locations, but the same access. Microsoft 365 is a business productivity solution, a
technology that fits your business's needs.
Which are the main Microsoft 365 benefits?

1. Scalability is one of the most popular features of Microsoft 365. Unlike traditional IT, hampered
by hardware and software that can only be deployed on site, cloud services like Microsoft 365
are highly flexible. Microsoft 365 is supported by a scalable infrastructure that can be used in
different ways based on a company's business needs, and you only pay for the features you use.
Even in the early stages of your business, you don't have to worry about wasting money on
features you won't use, and as your business grows, you will not be forced to switch to other
business software to ensure that your growing needs are met -- you simply pay for more services
and data storage. By choosing Microsoft 365 from the beginning, you will save a lot of time and
trouble.
2. Unification of UI and updating: Another problem for companies is that they often need a lot of
software and apps to do business. Microsoft 365 instead allows a unified experience: Microsoft
provides a single management interface, so all business activities are easily managed by all
employees using just one piece of software. Furthermore, you can modify the main Microsoft page
according to your business needs; if you want to share an app with your employees, just add it to
the home page.
Another advantage of using Microsoft 365 is that you will have all the features up to date.
Since all the apps are developed and managed by Microsoft, these apps will be compatible with each
other and will be updated automatically by the provider. Not having to face compatibility and update
problems regularly will increase employee productivity and save you time.
3. Data Security: Your company data is what needs to be protected. As a result, most companies do
everything they can to protect their data and avoid loss. Microsoft 365 simplifies data loss prevention.
It offers numerous backup and data protection features that will allow you to feel comfortable.
4. Data Migration: If you are not yet using Microsoft software, you may be wondering what the data
migration process might be like. The answer offered by Microsoft is Microsoft 365!
In fact, it makes this process very simple regardless of the storage tools currently used by your
company. Furthermore, once you switch to Microsoft 365, you will never have to worry about
migrating your data again in the future, because Microsoft is continuously updating the system.
Microsoft will make updates to ensure maximum software efficiency and continue to meet all the
business needs.

Windows Live Mesh is a syncing and remote desktop access solution that allows users to
sync files and folders across different computers and Windows SkyDrive, and to access their
desktops via the Internet from anywhere.

Windows Live Mesh is an online and offline syncing solution that keeps selected documents, photos,
files and program setting preferences synced on supported operating systems, handling up to 100,000
files and 50 GB of cumulative data. Windows Live Mesh was formerly known as Live Sync and Windows
Live Folders.
Live Folders.
The Windows Live Mesh utility is primarily a file syncing and collaboration solution designed to keep
the selected content on all the synced devices identical and up to date. Windows Live Mesh provides
syncing between different workstations running the Live Mesh client application -- even if they are
not on the same network -- and automatically propagates changes made on any workstation to the
others when they are connected to the Internet.

Windows Live Mesh’s online and offline client application can be integrated with SkyDrive to back up
and sync files and folders on cloud storage. These folders are globally accessible over the Internet,
providing remote access to data, as well as remote program execution and complete access on the
remote workstation.

Microsoft OneDrive

OneDrive is an online cloud storage service from Microsoft. OneDrive integrates with
Windows 11 as a default location for saving documents, giving Microsoft account users five
gigabytes of free storage space before offering upgrade options.

OneDrive allows users to save files, photos and other documents across multiple devices. A
user can also save their files in OneDrive and have them automatically sync on other devices.
This means someone can access and work on the same document in multiple locations.
OneDrive provides relatively easy access to cloud storage space, with options to share
content with others.

How it works

OneDrive integrates with Microsoft Office so users can access Word, Excel and PowerPoint
documents from OneDrive. It doesn't require a download and should already be part of
Windows 11. A Microsoft account is required to use OneDrive, and users will need to sign in
before using it. To sign in, users go to onedrive.com and select "Sign in," which appears at
the top of the page.

The system allows users to simultaneously edit Office documents, edit documents in browsers,
and create and share folders. OneDrive also offers Facebook integration, automatic camera roll
backup and the ability for users to email slide shows. Users can also scan documents and store
them in OneDrive.
Users can choose where to save data -- in OneDrive or in File Explorer. Those who want to use
OneDrive as a data backup platform should save data in both locations; other users can choose
to store their files in either location.

OneDrive also lets users share files stored in OneDrive with anyone. In OneDrive, the user will
need to select the folder they want to share, go to the share button on the top toolbar and select to
invite people. Users then can enter the email address of those they want to share the file with. If
the recipient also has Office 365, then the user can select an option to allow the shared recipient
to edit the page. There are also additional options for choosing access privileges in the drop-
down menus. From this step, users can click the shared button. Users can also generate links to
share files by going to the same share option and choosing “Get a Link.” Additional options
include allowing the recipient to edit or not. Users then create a link, select it, and can copy and
paste it to whoever they may want to. OneDrive is also available on mobile platforms -- on Mac,
iPhone and Android.

Another feature, called Personal Vault, allows users to store important files with additional
protection. Personal Vault allows users to access stored files only with a strong authentication
method or an additional layer of identity verification, such as biometric authentication, a
PIN, or a code sent to the user via email or SMS.
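The "code sent via email or SMS" step can be sketched in Python as issuing a short-lived random code and comparing it in constant time. This is a hedged, hypothetical illustration of the general technique, not Microsoft's actual Personal Vault implementation; the class and parameter names are my own.

```python
import hmac
import secrets
import time

class OneTimeCodeVerifier:
    """Toy second-factor check: issue a random 6-digit code with a short
    lifetime, then verify it with a timing-safe comparison."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.pending = {}        # user -> (code, expiry timestamp)

    def issue(self, user):
        code = f"{secrets.randbelow(1_000_000):06d}"   # e.g. "042917"
        self.pending[user] = (code, time.time() + self.ttl)
        return code                                    # would be sent by email/SMS

    def verify(self, user, attempt):
        code, expiry = self.pending.get(user, (None, 0))
        if code is None or time.time() > expiry:
            return False
        ok = hmac.compare_digest(code, attempt)        # resists timing attacks
        if ok:
            del self.pending[user]                     # codes are single-use
        return ok

v = OneTimeCodeVerifier()
sent = v.issue("alice")
print(v.verify("alice", sent))      # True
print(v.verify("alice", sent))      # False -- already consumed
```

Note the use of `hmac.compare_digest` rather than `==`: a plain string comparison can leak information through timing differences, which matters for any authentication check.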

Comparison of Cloud Computing Platforms
Amazon Web Services (AWS)
Amazon Web Services, or AWS as we abbreviate it, is one of the leading cloud service providers
in the market. It was initiated in 2002; back then, it offered only a few tools and services. It
was in 2003 that Chris Pinkham and Benjamin Black presented a paper that helped automate and
revolutionize the AWS platform.
They believed that the retail platform, Amazon, could serve a bigger and better purpose. This is
when Amazon started looking at it from a larger business perspective, and services like cloud
storage and computation came into existence by the end of 2004. It was Christopher Brown and his
team who made this possible, and Amazon's services came to be cherished across the globe.
The popularity of AWS is unfathomable, and we will see what makes this provider of 170+ cloud
services work so well. Before that, let us go ahead and understand the Microsoft Azure cloud
platform.
Microsoft Azure
Microsoft Azure, as the name suggests, is Microsoft’s Cloud platform that lets you test, build,
deploy, and even manage applications that are placed in Microsoft Azure’s data centers or
Availability Zones. It has all three service model solutions just like AWS, which are infrastructure
as a Service, Platform as a Service, and Software as a Service. It lets you integrate with different
open source and Microsoft Stack of products/tools and programming languages.
It was announced in 2008 and released on February 1, 2010, as Windows Azure; it was later
renamed Microsoft Azure, as we know it today.
Azure is similar to AWS and offers a variety of products and solutions for app developers. The
Azure platform offers good processing and computing power and is capable of deploying and
managing virtual machines at scale. Azure can also run large-scale "parallel batch computing,"
a capability it shares with AWS but not with the Google Cloud Platform.
Google Cloud Platform (GCP)
Google Cloud Platform (GCP), also known as Google Cloud, announced its first public cloud
service, Google App Engine, in 2008; it became generally available in 2011. It was the first
Platform as a Service introduced by Google Cloud. After that, Google introduced various other
cloud services in the public domain. These services reside on the same cloud infrastructure that
hosts popular Google services like Google Search, YouTube, Gmail, etc.
Google is popularly known for its services in Machine Learning, Data Analytics, Compute,
Storage, etc.
I believe this is enough information about the Cloud Service providers we plan to compare. Let
us go ahead and understand how these compare with each other.
Module-4
A commonly agreed upon framework for describing cloud computing services goes by the
acronym “SPI.” This acronym stands for the three major services provided through the cloud:
software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS).
Figure 2-3 illustrates the relationship between services, uses, and types of clouds.

Cloud Security:
Cloud computing is one of the most in-demand technologies of the current time, and organizations
from small to large have started using cloud computing services. Different cloud deployment
models are available, and cloud services are provided as per requirement, so security must be
maintained both internally and externally to keep the cloud system safe.
Cloud computing security, or cloud security, is an important concern which refers to the act of
protecting cloud environments, data, information and applications against unauthorized access,
DDoS attacks, malware, hackers and other similar attacks. A community cloud, for example, allows
a limited set of organizations or employees to access a shared cloud computing service
environment.
Planning of Security in Cloud Computing:
As security is a major concern in cloud implementation, an organization has to plan for security
based on several factors. The three main factors on which the planning of cloud security
depends are:
 the resources that are to be moved to the cloud and their sensitivity to risk;
 the type of cloud to be used; and
 the service model, since the risk in a cloud deployment depends on both the cloud type
and the service model.
Types of Cloud Computing Security Controls:
There are four types of cloud computing security controls:
 Deterrent Controls : Deterrent controls are designed to block nefarious attacks on a cloud
system. These come in handy when there are insider attackers.
 Preventive Controls : Preventive controls make the system resilient to attacks by eliminating
vulnerabilities in it.
 Detective Controls : Detective controls identify and react to security threats. Some
examples of detective control software are intrusion detection systems and network
security monitoring tools.
 Corrective Controls : In the event of a security attack these controls are activated. They limit
the damage caused by the attack.
Importance of cloud security :
For the organizations making their transition to cloud, cloud security is an essential factor while
choosing a cloud provider. The attacks are getting stronger day by day and so the security needs
to keep up with it. For this purpose it is essential to pick a cloud provider who offers the best
security and is customized with the organization’s infrastructure. Cloud security has a lot of
benefits –
Centralized security : Cloud security centralizes protection. Managing all devices and
endpoints is not an easy task, and cloud security helps in doing so; this enhances traffic
analysis and web filtering and means fewer policy and software updates.
Reduced costs : Investing in cloud computing and cloud security results in less expenditure
on hardware and less manpower for administration.
Reduced administration : It makes the organization easier to administer, with no manual
security configuration or constant manual security updates.
Reliability : Cloud security services are very reliable, and the cloud can be accessed from
anywhere, with any device, given proper authorization.
Cloud security includes various types of protection: access control for authorized access,
network segmentation for keeping data isolated, encryption for encoded data transfer,
vulnerability checks for patching vulnerable areas, security monitoring for keeping an eye on
various security attacks, and disaster recovery for backup and recovery during data loss.
Different security techniques are implemented to make a cloud computing system more secure,
such as SSL (Secure Sockets Layer) encryption, multi-tenancy-based access control, intrusion
detection systems, firewalls, penetration testing, tokenization, VPNs (virtual private
networks), avoiding public internet connections, and many more.
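Of the techniques just listed, tokenization is easy to illustrate: sensitive values are replaced with opaque random tokens, and the real values live only in a protected vault. The following is a minimal, hypothetical sketch of the idea, not any vendor's actual tokenization product.

```python
import secrets

class TokenVault:
    """Toy tokenization vault: swaps sensitive values for opaque random
    tokens so downstream systems never see the real data."""

    def __init__(self):
        self._vault = {}                       # token -> original value

    def tokenize(self, sensitive_value):
        token = "tok_" + secrets.token_hex(8)  # random, carries no information
        self._vault[token] = sensitive_value
        return token

    def detokenize(self, token):
        return self._vault[token]              # only the vault can reverse it

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")      # e.g. a card number
print(token.startswith("tok_"))                    # True
print(vault.detokenize(token))                     # 4111-1111-1111-1111
```

Unlike encryption, a token has no mathematical relationship to the original value, so a stolen token is useless without access to the vault itself.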
But things are not as simple as they seem: even after implementing a number of security
techniques, security issues remain in a cloud system. Because a cloud system is managed and
accessed over the Internet, a lot of challenges arise in maintaining a secure cloud. Some
cloud security challenges are:

 Control over cloud data
 Misconfiguration
 Ever-changing workloads
 Access management
 Disaster recovery

Security Issues in Cloud Computing:
There is no doubt that cloud computing provides various advantages, but there are also some
security issues in cloud computing, as described below.
1. Data Loss : Data loss, also known as data leakage, is one of the issues faced in cloud
computing. Our sensitive data is in the hands of somebody else, and we don't have full
control over our database. So, if the security of the cloud service is breached by
hackers, they may get access to our sensitive data or personal files.
2. Interference of Hackers and Insecure APIs : When we talk about the cloud and its
services, we are talking about the Internet, and the easiest way to communicate with the
cloud is through an API. So it is important to protect the interfaces and APIs that are
used by external users. Moreover, a few cloud services are available in the public
domain, which is a vulnerable part of cloud computing because these services may be
accessed by third parties; with their help, hackers can more easily reach or harm our
data.
3. User Account Hijacking : Account Hijacking is the most serious security issue in Cloud
Computing. If somehow the Account of User or an Organization is hijacked by a hacker
then the hacker has full authority to perform Unauthorized Activities.
4. Changing Service Provider : Vendor lock-in is also an important security issue in cloud
computing. Many organizations face problems when shifting from one vendor to another.
For example, if an organization wants to move from AWS to Google Cloud, it faces
various problems, such as migrating all of its data; the two cloud services also use
different techniques and functions, and their charges differ, which causes further
difficulties.
5. Lack of Skill : Day-to-day work, shifting to another service provider, needing an extra
feature, or figuring out how to use a feature are all problems for an IT company that
doesn't have skilled employees. Working with cloud computing requires skilled people.
6. Denial of Service (DoS) Attack : This type of attack occurs when a system receives too
much traffic. DoS attacks mostly target large organizations, such as the banking and
government sectors. When a DoS attack occurs, services become unavailable and data may
be lost, and recovering requires a great amount of money as well as time.
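One common defence against such traffic floods is rate limiting, for example with a token bucket that refuses requests once a client exceeds its allowance. The sketch below is a generic illustration of the technique (my own toy implementation, not any cloud provider's actual DoS protection).

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: each client may make `capacity` burst
    requests, refilled at `rate` tokens per second; excess requests are refused."""

    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens in proportion to the time elapsed since the last call.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)          # 3-request burst, 1 req/s refill
results = [bucket.allow() for _ in range(5)]
print(results)   # [True, True, True, False, False] -- the flood is throttled
```

A legitimate client that paces its requests never notices the limiter, while a flood of rapid-fire requests is cut off after the burst allowance is spent.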

7 Privacy Challenges in Cloud Computing

The rapid development of the cloud has led to more flexibility, cost-cutting, and scalability of
products, but it also faces an enormous number of privacy and security challenges. Since the
cloud is a relatively new and still-evolving concept, undiscovered security issues creep up and
need to be taken care of as soon as they are discovered. Here we discuss the top seven privacy
challenges encountered in cloud computing:

1. Data Confidentiality Issues
Confidentiality of the user's data is an important issue to consider when externalizing and
outsourcing extremely delicate and sensitive data to a cloud service provider. Personal data
should be made unreachable to users who do not have proper authorization to access it, and one
way of ensuring confidentiality is the use of strict access control policies and regulations.
The lack of trust between users and cloud service providers (or cloud database service
providers) regarding data is a major security concern and holds a lot of people back from
using cloud services.

2. Data Loss Issues


Data loss or data theft is one of the major security challenges that cloud providers face.
If a cloud vendor has reported loss or theft of critical or sensitive data in the past,
more than sixty percent of users would decline to use the cloud services provided by that
vendor. Outages of cloud services are frequently visible even at firms such as Dropbox,
Microsoft, and Amazon, which in turn results in an absence of trust in these services
during critical times. It is also quite easy for an attacker to gain access to multiple
storage units once a single one is compromised.

3. Geographical Data Storage Issues


Since the cloud infrastructure is distributed across different geographical locations spread
throughout the world, it is often possible that the user’s data is stored in a location that is out of
the legal jurisdiction which leads to the user’s concerns about the legal accessibility of local law
enforcement and regulations on data that is stored out of their region. Moreover, users
fear that local laws may be violated, since the dynamic nature of the cloud makes it very
difficult to designate a specific server for trans-border data transmission.

4. Multi-Tenancy Security Issues


Multi-tenancy is a paradigm that follows the concept of sharing computational resources, data
storage, applications, and services among different tenants. This is then hosted by the same
logical or physical platform at the cloud service provider's premises. While this
approach lets the provider maximize profits, it puts customers at risk. Attackers can
take undue advantage of multi-residency and launch various attacks against their
co-tenants, which can result in several privacy challenges.

5. Transparency Issues
In cloud computing security, transparency means the willingness of a cloud service
provider to reveal details and characteristics of its security preparedness. These
details comprise policies and regulations on security, privacy, and service levels. In
addition to this willingness and disposition, when assessing transparency it is important
to notice how reachable the security readiness data and information actually are. No
matter how much security information about an organization is at hand, if it is not
presented in an organized and easily understandable way for cloud service users and
auditors, the organization's transparency must still be rated relatively low.

6. Hypervisor Related Issues


Virtualization means the logical abstraction of computing resources from physical restrictions
and constraints. But this poses new challenges for factors like user authentication, accounting,
and authorization. The hypervisor manages multiple Virtual Machines and therefore becomes the
target of adversaries. Different from the physical devices that are independent of one another,
Virtual Machines in the cloud usually reside in a single physical device that is managed by the
same hypervisor. The compromise of the hypervisor will hence put various virtual machines at
risk. Moreover, the newness of the hypervisor technology, which includes isolation, security
hardening, access control, etc. provides adversaries with new ways to exploit the system.

7. Managerial Issues
Cloud privacy challenges are not only technical but also non-technical and managerial.
Even a properly implemented technical solution or product, if not managed well, is
eventually bound to introduce vulnerabilities. Examples include lack of control, security
and privacy management for virtualization, developing comprehensive service level
agreements, and handling cloud service vendor and user negotiations.

Infrastructure Security
Here, we discuss the threats, challenges, and guidance associated with securing an organization’s
core IT infrastructure at the network, host, and application levels. Information security
practitioners commonly use this approach; therefore, it is readily familiar to them. We discuss this
infrastructure security in the context of SPI service delivery models (SaaS, PaaS, and IaaS). Non-
information security professionals are cautioned not to simply equate infrastructure security to
infrastructure-as-a-service (IaaS) security. Although infrastructure security is more highly relevant
to customers of IaaS, similar consideration should be given to providers’ platform-as-a-service
(PaaS) and software-as-a-service (SaaS) environments, since they have ramifications to your
customer threat, risk, and compliance management. Another dimension is the cloud business
model (public, private, and hybrid clouds), which is orthogonal to the SPI service delivery model;
what we highlight is the relevance of discussion points as they apply to public and private clouds.
When discussing public clouds the scope of infrastructure security is limited to the layers of
infrastructure that move beyond the organization’s control and into the hands of service providers
(i.e., when responsibility for a secure infrastructure is transferred to the cloud service provider or
CSP, based on the SPI delivery model). Information in this chapter is critical for customers in
gaining an understanding of what security a CSP provides and what security you, the customer,
are responsible for providing.

Infrastructure Security: The Network Level

When looking at the network level of infrastructure security, it is important to
distinguish between public clouds and private clouds. With private clouds, there are no
new attacks, vulnerabilities, or changes in risk specific to this topology that
information security personnel need to consider. Although your organization's IT
architecture may change with the implementation of a private cloud, your current network
topology will probably not change significantly. If you have a private extranet in place
(e.g., for premium customers or strategic partners), for practical purposes you probably
have the network topology for a private cloud in place already. The security
considerations you have today apply to a private cloud infrastructure, too, and the
security tools you have in place (or should have in place) are also necessary for a
private cloud and operate in the same way. Figure shows the topological similarities
between a secure extranet and a private cloud.
However, if you choose to use public cloud services, changing security requirements will
require changes to your network topology. You must address how your existing network
topology interacts with your cloud provider's network topology. There are four
significant risk factors in this use case:
 Ensuring the confidentiality and integrity of your organization's data-in-transit to
and from your public cloud provider
 Ensuring proper access control (authentication, authorization, and auditing) to
whatever resources you are using at your public cloud provider
 Ensuring the availability of the Internet-facing resources in a public cloud that are
being used by your organization, or have been assigned to your organization by your
public cloud provider
 Replacing the established model of network zones and tiers with domains
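The first risk factor, confidentiality and integrity of data-in-transit, is typically addressed with TLS. As a minimal Python sketch (the provider hostname is a hypothetical placeholder), a client can enforce both certificate validation and hostname verification, which `ssl.create_default_context()` enables by default:

```python
import socket
import ssl

# A TLS client context with certificate and hostname verification enabled
# (the defaults of create_default_context), so data-in-transit to the
# provider is both encrypted and authenticated.
context = ssl.create_default_context()

def open_secure_channel(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a verified TLS connection to a (hypothetical) provider endpoint.

    The handshake fails if the server's certificate chain does not
    validate or its name does not match `host`.
    """
    raw = socket.create_connection((host, port), timeout=10)
    return context.wrap_socket(raw, server_hostname=host)
```

Disabling either check (a temptation during integration testing) would silently remove the integrity and authenticity guarantees this risk factor is about.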

Infrastructure Security: The Host Level

When reviewing host security and assessing risks, you should consider the context of cloud
services delivery models (SaaS, PaaS, and IaaS) and deployment models (public, private, and
hybrid). Although there are no known new threats to hosts that are specific to cloud computing,
some virtualization security threats—such as VM escape, system configuration drift, and insider
threats by way of weak access control to the hypervisor—carry into the public cloud computing
environment. The dynamic nature (elasticity) of cloud computing can bring new operational
challenges from a security management perspective. The operational model motivates rapid
provisioning and fleeting instances of VMs. Managing vulnerabilities and patches is therefore
much harder than just running a scan, as the rate of change is much higher than in a traditional data
center.
In addition, the fact that the clouds harness the power of thousands of compute nodes, combined
with the homogeneity of the operating system employed by hosts, means the threats can be
amplified quickly and easily—call it the “velocity of attack” factor in the cloud. More importantly,
you should understand the trust boundary and the responsibilities that fall on your shoulders to
secure the host infrastructure that you manage. And you should compare the same with providers’
responsibilities in securing the part of the host infrastructure the CSP manages.

Infrastructure Security: The Application Level

Application or software security should be a critical element of your security program. Most
enterprises with information security programs have yet to institute an application security
program to address this realm. Designing and implementing applications targeted for deployment
on a cloud platform will require that existing application security programs reevaluate current
practices and standards. The application security spectrum ranges from standalone single-user
applications to sophisticated multiuser e-commerce applications used by millions of users. Web
applications such as content management systems (CMSs), wikis, portals, bulletin boards, and
discussion forums are used by small and large organizations. A large number of organizations also
develop and maintain custom-built web applications for their businesses using various web
frameworks (PHP, .NET, J2EE, Ruby on Rails, Python, etc.). According to SANS, until 2007
few criminals attacked vulnerable websites because other attack vectors were more likely to lead
to an advantage in unauthorized economic or information access. Increasingly, however, advances
in cross-site scripting (XSS) and other attacks have demonstrated that criminals looking for
financial gain can exploit vulnerabilities resulting from web programming errors as new ways to
penetrate important organizations. In this section, we will limit our discussion to web application
security: web applications in the cloud accessed by users with standard Internet browsers, such as
Firefox, Internet Explorer, or Safari, from any computer connected to the Internet.
Since the browser has emerged as the end user client for accessing in-cloud applications, it is
important for application security programs to include browser security into the scope of
application security. Together they determine the strength of end-to-end cloud security that helps
protect the confidentiality, integrity, and availability of the information processed by cloud
services.
Data Security
With regard to data-in-transit, the primary risk is in not using a vetted encryption algorithm.
Although this is obvious to information security professionals, it is not common for others to
understand this requirement when using a public cloud, regardless of whether it is IaaS, PaaS,or
SaaS. It is also important to ensure that a protocol provides confidentiality as well as integrity (e.g.,
FTP over SSL [FTPS], Hypertext Transfer Protocol Secure [HTTPS], and Secure Copy Program
[SCP])—particularly if the protocol is used for transferring data across the Internet.
Merely encrypting data and using a non-secured protocol (e.g., “vanilla” or “straight” FTP or
HTTP) can provide confidentiality, but does not ensure the integrity of the data (e.g., with the use
of symmetric streaming ciphers). Although using encryption to protect data-at-rest might seem
obvious, the reality is not that simple. If you are using an IaaS cloud service (public or private) for
simple storage (e.g., Amazon’s Simple Storage Service or S3), encrypting data-at-rest is
possible—and is strongly suggested. However, encrypting data-at-rest that a PaaS or SaaS cloud-
based application is using (e.g., Google Apps, Salesforce.com) as a compensating control is not
always feasible. Data-at-rest used by a cloud-based application is generally not encrypted, because
encryption would prevent indexing or searching of that data.
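The point that encryption alone provides confidentiality but not integrity can be demonstrated concretely. The following Python sketch uses a deliberately toy stream cipher (SHA-256 in counter mode; not a vetted algorithm, for illustration only) to show an attacker flipping ciphertext bits undetected, and an HMAC over the ciphertext catching the tampering:

```python
import hashlib
import hmac

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream from SHA-256 in counter mode (illustration only --
    in practice use a vetted authenticated cipher such as AES-GCM)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared-secret-key"
plaintext = b"PAY $100 TO ALICE"
ct = xor(plaintext, keystream(key, len(plaintext)))

# The attacker flips ciphertext bits without knowing the key ...
tampered = bytearray(ct)
tampered[12] ^= ord("A") ^ ord("M")   # changes the decrypted 'A' to 'M'
tampered = bytes(tampered)

# ... and decryption silently yields altered plaintext: no integrity.
recovered = xor(tampered, keystream(key, len(tampered)))

# An HMAC computed over the ciphertext detects the modification.
mac = hmac.new(key, ct, hashlib.sha256).digest()
valid = hmac.compare_digest(
    mac, hmac.new(key, tampered, hashlib.sha256).digest())
```

Here `recovered` reads "PAY $100 TO MLICE" and `valid` is False: the cipher hid the data but did nothing to stop its modification, which is exactly why protocols such as HTTPS, FTPS, and SCP bundle integrity protection with encryption.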

Provider Data and Its Security


In addition to the security of your own customer data, customers should also be concerned about
what data the provider collects and how the CSP protects that data. Specifically with regard to
your customer data, what metadata does the provider have about your data, how is it secured, and
what access do you, the customer, have to that metadata? As your volume of data with a particular
provider increases, so does the value of that metadata. Additionally, your provider collects and
must protect a huge amount of security-related data. For example, at the network level, your
provider should be collecting, monitoring, and protecting firewall, intrusion prevention system
(IPS), security incident and event management (SIEM), and router flow data. At the host level
your provider should be collecting system logfiles, and at the application level SaaS providers
should be collecting application log data, including authentication and authorization information.
What data your CSP collects and how it monitors and protects that data is important to the provider
for its own audit purposes (e.g., SAS 70, as discussed in Chapter 8). Additionally, this information
is important to both providers and customers in case it is needed for incident response and any
digital forensics required for incident analysis.
Storage
For data stored in the cloud (i.e., storage-as-a-service), we are referring to IaaS and not data
associated with an application running in the cloud on PaaS or SaaS. The same three information
security concerns are associated with this data stored in the cloud (e.g., Amazon’s S3) as with data
stored elsewhere: confidentiality, integrity, and availability.
Confidentiality
When it comes to the confidentiality of data stored in a public cloud, you have two potential
concerns. First, what access control exists to protect the data? Access control consists of both
authentication and authorization. CSPs generally use weak authentication mechanisms (e.g.,
username + password), and the authorization (“access”) controls available to users tend to be quite
coarse and not very granular. For large organizations, this coarse authorization presents significant
security concerns unto itself. Often, the only authorization levels cloud vendors provide are
administrator authorization (i.e., the owner of the account itself) and user authorization (i.e., all
other authorized users)—with no levels in between (e.g., business unit administrators, who are
authorized to approve access for their own business unit personnel). Again, these access control
issues are not unique to CSPs, and we discuss them in much greater detail in the following chapter.
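The missing intermediate authorization level described above can be sketched as a small role check. The user names, role names, and business units in this Python example are hypothetical:

```python
# Coarse cloud authorization offers only "account admin" and "user".
# A finer-grained model adds a business unit administrator who may
# approve access only within their own unit.
ROLES = {
    "alice": ("account_admin", None),     # owner of the account
    "bob":   ("bu_admin", "finance"),     # admin for one business unit only
    "carol": ("user", "finance"),         # ordinary authorized user
}

def can_approve_access(actor: str, target_unit: str) -> bool:
    role, unit = ROLES[actor]
    if role == "account_admin":
        return True                       # global authority
    if role == "bu_admin":
        return unit == target_unit        # authority scoped to one unit
    return False                          # ordinary users cannot approve
```

With only the two coarse levels, every approval would have to go through "alice"; the scoped `bu_admin` role is the kind of in-between authorization the text notes most cloud vendors do not provide.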
What is definitely relevant to this section, however, is the second potential concern: how is the
data that is stored in the cloud actually protected? For all practical purposes, protection of data
stored in the cloud involves the use of encryption.
So, is a customer’s data actually encrypted when it is stored in the cloud? And if so, with what
encryption algorithm, and with what key strength? It depends, and specifically, it depends on
which CSP you are using. For example, EMC’s MozyEnterprise does encrypt a customer’s data.
However, AWS S3 does not encrypt a customer’s data. Customers are able to encrypt their own
data themselves prior to uploading, but S3 does not provide encryption.
If a CSP does encrypt a customer’s data, the next consideration concerns what encryption
algorithm it uses. Not all encryption algorithms are created equal. Cryptographically, many
algorithms provide insufficient security. Only algorithms that have been publicly vetted by a
formal standards body (e.g., NIST) or at least informally by the cryptographic community should
be used. Any algorithm that is proprietary should absolutely be avoided. Note that we are talking
about symmetric encryption algorithms here. Symmetric encryption involves the use of a single
secret key for both the encryption and decryption of data. Only symmetric encryption has the speed
and computational efficiency to handle encryption of large volumes of data. It would be highly
unusual to use an asymmetric algorithm for this encryption use case.
Symmetric encryption uses a single key for both encryption and decryption. Asymmetric
encryption, by contrast, uses a key pair: a public key for encryption and a private key
for decryption.
Confidentiality ensures that only the intended receiver can read the data. Authenticity
ensures that the data really originates from the claimed sender. Integrity is the
accuracy and consistency of data as well as the completeness and reliability of systems.
Identity and Access Management
In a typical organization where applications are deployed within the organization’s perimeter the
“trust boundary” is mostly static and is monitored and controlled by the IT department. In that
traditional model, the trust boundary encompasses the network, systems, and applications hosted
in a private data center managed by the IT department (sometimes third-party providers under IT
supervision). And access to the network, systems, and applications is secured via network security
controls including virtual private networks (VPNs), intrusion detection systems (IDSs), intrusion
prevention systems (IPSs), and multifactor authentication.
With the adoption of cloud services, the organization’s trust boundary will become dynamic and
will move beyond the control of IT. With cloud computing, the network, system, and application
boundary of an organization will extend into the service provider domain. (This may already be
the case for most large enterprises engaged in e-commerce, supply chain management,
outsourcing, and collaboration with partners and communities.) This loss of control continues to
challenge the established trusted governance and control model (including the trusted source of
information for employees and contractors), and, if not managed properly, will impede cloud
service adoption within an organization.
To compensate for the loss of network control and to strengthen risk assurance, organizations will
be forced to rely on other higher-level software controls, such as application security and user
access controls. These controls manifest as strong authentication, authorization based on role or
claims, trusted sources with accurate attributes, identity federation, single sign-on (SSO), user
activity monitoring, and auditing. In particular, organizations need to pay attention to the identity
federation architecture and processes, as they can strengthen the controls and trust between
organizations and cloud service providers (CSPs).

IAM Challenges
One critical challenge of IAM concerns managing access for diverse user populations (employees,
contractors, partners, etc.) accessing internal and externally hosted services. IT is constantly
challenged to rapidly provision appropriate access to the users whose roles and responsibilities
often change for business reasons. Another issue is the turnover of users within the organization.
Turnover varies by industry and function—seasonal staffing fluctuations in finance departments,
for example—and can also arise from changes in the business, such as mergers and acquisitions,
new product and service releases, business process outsourcing, and changing responsibilities. As
a result, sustaining IAM processes can turn into a persistent challenge.
Access policies for information are seldom centrally and consistently applied. Organizations can
contain disparate directories, creating complex webs of user identities, access rights, and
procedures. This has led to inefficiencies in user and access management processes while exposing
these organizations to significant security, regulatory compliance, and reputation risks.
To address these challenges and risks, many companies have sought technology solutions to enable
centralized and automated user access management. Many of these initiatives are entered into with
high expectations, which is not surprising given that the problem is often large and complex. Most
often those initiatives to improve IAM can span several years and incur considerable cost. Hence,
organizations should approach their IAM strategy and architecture with both business and IT
drivers that address the core inefficiency issues while preserving the control’s efficacy (related to
access control). Only then will the organizations have a higher likelihood of success and return on
investment.
IAM Definitions
To start, we’ll present the basic concepts and definitions of IAM functions for any service:
Authentication
Authentication is the process of verifying the identity of a user or system (e.g., Lightweight
Directory Access Protocol [LDAP] verifying the credentials presented by the user, where the
identifier is the corporate user ID that is unique and assigned to an employee or contractor).
Authentication usually connotes a more robust form of identification. In some use cases, such as
service-to-service interaction, authentication involves verifying the network service requesting
access to information served by another service (e.g., a travel web service that is connecting to a
credit card gateway to verify the credit card on behalf of the user).
Authorization
Authorization is the process of determining the privileges the user or system is entitled to once the
identity is established. In the context of digital services, authorization usually follows the
authentication step and is used to determine whether the user or service has the necessary privileges
to perform certain operations—in other words, authorization is the process of enforcing policies.
Auditing
In the context of IAM, auditing entails the process of review and examination of authentication,
authorization records, and activities to determine the adequacy of IAM system controls, to verify
compliance with established security policies and procedures (e.g., separation of duties), to detect
breaches in security services (e.g., privilege escalation), and to recommend any changes that are
indicated for countermeasures.
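The three IAM functions defined above can be sketched in sequence. The user IDs, the policy table, and the audit record format in this Python example are illustrative assumptions:

```python
import hashlib
import hmac
import time

# Authentication data: user ID -> salted password digest (toy scheme;
# a real deployment would use a slow KDF and a per-user random salt).
USERS = {"u1001": hashlib.sha256(b"salt" + b"s3cret").hexdigest()}

# Authorization data: allowed (subject, action, resource) triples.
POLICY = {("u1001", "read", "reports")}

# Auditing data: every authN/authZ decision is recorded for later review.
AUDIT_LOG = []

def authenticate(user_id: str, password: str) -> bool:
    """Verify the identity claimed by the presented credentials."""
    digest = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    ok = hmac.compare_digest(USERS.get(user_id, ""), digest)
    AUDIT_LOG.append((time.time(), "authn", user_id, ok))
    return ok

def authorize(user_id: str, action: str, resource: str) -> bool:
    """After identity is established, enforce the access policy."""
    ok = (user_id, action, resource) in POLICY
    AUDIT_LOG.append((time.time(), "authz", user_id, f"{action}:{resource}", ok))
    return ok
```

An auditor reviewing `AUDIT_LOG` can check decisions against policy (e.g., spot privilege escalation), which is the review-and-examination role auditing plays in IAM.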
IAM Architecture and Practice
IAM is not a monolithic solution that can be easily deployed to gain capabilities immediately. It is
as much an aspect of architecture (see Figure 5-1) as it is a collection of technology components,
processes, and standard practices. Standard enterprise IAM architecture encompasses several
layers of technology, services, and processes. At the core of the deployment architecture is a
directory service (such as LDAP or Active Directory) that acts as a repository for the identity,
credential, and user attributes of the organization’s user pool. The directory interacts with IAM
technology components such as authentication, user management, provisioning, and federation
services that support the standard IAM practice and processes within the organization. It is not
uncommon for organizations to use several directories that were deployed for environment-
specific reasons (e.g., Windows systems using Active Directory, Unix systems using LDAP) or
that were integrated into the environment by way of business mergers and acquisitions. The IAM
processes to support the business can be broadly categorized as follows:
User management: Activities for the effective governance and management of identity life cycles
Authentication management: Activities for the effective governance and management of the
process for determining that an entity is who or what it claims to be
Authorization management: Activities for the effective governance and management of the
process for determining entitlement rights that decide what resources an entity is permitted to
access in accordance with the organization's policies
Access management: Enforcement of policies for access control in response to a request from an
entity (user, services) wanting to access an IT resource within the organization
Data management and provisioning: Propagation of identity and data for authorization to IT
resources via automated or manual processes
Monitoring and auditing: Monitoring, auditing, and reporting compliance by users regarding
access to resources within the organization based on the defined policies

IAM processes support the following operational activities:
Provisioning: This is the process of on-boarding users to systems and applications. These processes
provide users with necessary access to data and technology resources. The term typically is used
in reference to enterprise-level resource management. Provisioning can be thought of as a
combination of the duties of the human resources and IT departments, where users are given access
to data repositories or systems, applications, and databases based on a unique user identity.
Deprovisioning works in the opposite manner, resulting in the deletion or deactivation of an
identity or of privileges assigned to the user identity.
Credential and attribute management: These processes are designed to manage the life cycle of
credentials and user attributes— create, issue, manage, revoke—to minimize the business risk
associated with identity impersonation and inappropriate account use. Credentials are usually
bound to an individual and are verified during the authentication process. The processes include
provisioning of attributes, static (e.g., standard text password) and dynamic (e.g., one-time
password) credentials that comply with a password standard (e.g., passwords resistant to dictionary
attacks), handling password expiration, encryption management of credentials during transit and
at rest, and access policies of user attributes (privacy and handling of attributes for various
regulatory reasons).
Entitlement management: Entitlements are also referred to as authorization policies. The
processes in this domain address the provisioning and deprovisioning of privileges needed for the
user to access resources including systems, applications, and databases. Proper entitlement
management ensures that users are assigned only the required privileges (least privileges) that
match with their job functions. Entitlement management can be used to strengthen the security of
web services, web applications, legacy applications, documents and files, and physical security
systems.
Compliance management: This process implies that access rights and privileges are monitored
and tracked to ensure the security of an enterprise’s resources. The process also helps auditors
verify compliance to various internal access control policies, and standards that include practices
such as segregation of duties, access monitoring, periodic auditing, and reporting. An example is
a user certification process that allows application owners to certify that only authorized users have
the privileges necessary to access business-sensitive information.
Identity federation management: Federation is the process of managing the trust relationships
established beyond the internal network boundaries or administrative domain boundaries among
distinct organizations. A federation is an association of organizations that come together to
exchange information about their users and resources to enable collaborations and transactions
(e.g., sharing user information with the organizations’ benefits systems managed by a third-party
provider). Federation of identities to service providers will support SSO to cloud services.
Centralization of authentication (authN) and authorization (authZ)
A central authentication and authorization infrastructure alleviates the need for application
developers to build custom authentication and authorization features into their applications.
Furthermore, it promotes a loose coupling architecture where applications become agnostic to the
authentication methods and policies. This approach is also called an “externalization of authN and
authZ” from applications.
Audit and Compliance
Audit and compliance refers to the internal and external processes that an organization
implements to:
 Identify the requirements with which it must abide—whether those requirements are
driven by business objectives, laws and regulations, customer contracts, internal
corporate policies and standards, or other factors
 Put into practice policies, procedures, processes, and systems to satisfy such requirements
 Monitor or check whether such policies, procedures, and processes are consistently
followed
Audit and compliance functions have always played an important role in traditional outsourcing
relationships. However, these functions take on increased importance in the cloud given the
dynamic nature of software-as-a-service (SaaS), infrastructure-as-a-service (IaaS), and platform-
as-a-service (PaaS) environments. Cloud service providers (CSPs) are challenged to establish,
monitor, and demonstrate ongoing compliance with a set of controls that meets their customers’
business and regulatory requirements. Maintaining separate compliance efforts for different
regulations or standards is not sustainable. A practical approach to audit and compliance in the
cloud includes a coordinated combination of internal policy compliance, regulatory compliance,
and external auditing.
Internal Policy Compliance
CSPs, like other enterprises, need to establish processes, policies, and procedures for managing
their IT systems that are appropriate for the nature of the service offering, can be operationalized
in the culture of the organization, and satisfy relevant external requirements. In designing their
service offerings and supporting processes, CSPs need to:
 Address the requirements of their current and planned customer base
 Establish a strong control foundation that will substantially meet customer requirements,
thereby minimizing the need for infrastructure customization that could reduce
efficiencies and diminish the value proposition of the CSP’s services
 Set a standard that is high enough to address those requirements
 Define standardized processes to drive efficiencies
The Figure shows a life cycle approach for determining, implementing, operating, and
monitoring controls over a CSP.

Here is an explanation of each stage of the life cycle:
1. Define strategy
As a CSP undertakes to build out or take a fresh look at its service offerings, the CSP should clearly
define its business strategy and related risk management philosophy. What market segments or
industries does the CSP intend to serve?
This strategic decision will drive the decision of how high the CSP needs to “set the bar” for its
controls. This is an important decision, as setting it too low will make it difficult to meet the needs
of new customers and setting it too high will make it difficult for customers to implement and
difficult for the CSP to maintain in a cost-effective manner. A clear strategy will enable the CSP
to meet the baseline requirements of its customers in the short term and provide the flexibility to
incorporate necessary changes while resisting unnecessary or potentially unprofitable
customization.
2. Define requirements
Having defined its strategy and target client base, the CSP must define the requirements for
providing services to that client base. What specific regulatory or industry requirements are
applicable? Are there different levels of requirements for different sets of clients?
The CSP will need to determine the minimum set of requirements to serve its client base and the
incremental industry-specific requirements. For example, the CSP will need to determine whether
it supports all of those requirements as part of a base product offering or whether it offers
incremental product offerings with additional capabilities at a premium, now or in a future release.
3. Define architecture
Driven by its strategy and requirements, the CSP must now determine how to architect and
structure its services to address customer requirements and support planned growth. As part of the
design, for example, the CSP will need to determine which controls are implemented as part of the
service by default and which controls (e.g., configuration settings, selected platforms, or
workflows) are defined and managed by the customer.
4. Define policies
The CSP needs to translate its requirements into policies. In defining such policies, the CSP should
draw upon applicable industry standards as discussed in the sections that follow. The CSP will
also need to take a critical look at its staffing model and ensure alignment with policy requirements.
5. Define processes and procedures
The CSP then needs to translate its policy requirements into defined, repeatable processes and
procedures—again using applicable industry standards and leading practices guidance. Controls
should be automated to the greatest extent possible for scalability and to facilitate monitoring.
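As a small illustration of automating a control in code, the sketch below checks a system configuration against a documented policy. The policy values, field names, and thresholds are invented for the example; a real CSP would derive them from its own policy set.

```python
# Hypothetical sketch: an automated control check that verifies
# configuration settings against the CSP's documented policy.
# All names and thresholds here are illustrative assumptions.

POLICY = {"min_password_length": 12, "mfa_required": True}

def check_control(system_config: dict) -> list[str]:
    """Return a list of policy violations found in a system's config."""
    violations = []
    if system_config.get("min_password_length", 0) < POLICY["min_password_length"]:
        violations.append("password length below policy minimum")
    if POLICY["mfa_required"] and not system_config.get("mfa_enabled", False):
        violations.append("multi-factor authentication not enabled")
    return violations

# A compliant and a non-compliant configuration:
print(check_control({"min_password_length": 14, "mfa_enabled": True}))   # []
print(check_control({"min_password_length": 8, "mfa_enabled": False}))   # two violations
```

Checks of this kind can be scheduled to run continuously, which is what makes automated controls both scalable and easy to monitor.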
6. Ongoing operations
Having defined its processes and procedures, the CSP needs to implement and execute its defined
processes, again ensuring that its staffing model supports the business requirements.
7. Ongoing monitoring
The CSP should monitor the effectiveness of its key control activities on an ongoing basis with
instances of non-compliance reported and acted upon. Compliance with the relevant internal and
external requirements should be realized as a result of a robust monitoring program.
8. Continuous improvement
As issues and improvement opportunities are identified, the CSP should ensure that there is a
feedback loop to guarantee that processes and controls are continuously improved as the
organization matures and customer requirements evolve.
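The eight stages above can be sketched as an ordered pipeline with a feedback loop from continuous improvement back into the defined processes. This is an illustrative model only, not an implementation prescribed by the text:

```python
# Illustrative sketch: the eight lifecycle stages as an ordered pipeline,
# with "continuous improvement" feeding back into processes and procedures.
# Stage names follow the numbered list in the text.

STAGES = [
    "define strategy",
    "define requirements",
    "define architecture",
    "define policies",
    "define processes and procedures",
    "ongoing operations",
    "ongoing monitoring",
    "continuous improvement",
]

def next_stage(current: str) -> str:
    """Advance to the next stage; improvement feeds back into processes."""
    i = STAGES.index(current)
    if STAGES[i] == "continuous improvement":
        return "define processes and procedures"  # feedback loop
    return STAGES[i + 1]

print(next_stage("define strategy"))         # define requirements
print(next_stage("continuous improvement"))  # define processes and procedures
```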
Governance, Risk, and Compliance (GRC)
CSPs are typically challenged to meet the requirements of a diverse client base. To build a
sustainable model, it is essential that the CSP establish a strong foundation of controls that can be
applied to all of its clients. In that regard, the CSP can use the concept of GRC that has been
adopted by a number of leading traditional outsourced service providers and CSPs. GRC
recognizes that compliance is not a point-in-time activity, but rather is an ongoing process that
requires a formal compliance program. Figure 8-2 depicts such a programmatic approach to
compliance.
Key components of this approach include:
1. Risk assessment
This approach begins with an assessment of the risks that face the CSP and identification of the
specific compliance regimes/requirements that are applicable to the CSP’s services. The CSP
should address risks associated with key areas such as appropriate user authentication mechanisms
for accessing the cloud, encryption of sensitive data and associated key management controls,
logical separation of customers’ data, and CSP administrative access.
2. Key controls
Key controls are then identified and documented to address the identified risks and compliance
requirements. These key controls are captured in a unified control set that is designed to meet the
requirements of the CSP’s customers and other external requirements. The CSP drives compliance
activities based on its key controls rather than disparate sets of externally generated compliance
requirements.
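The idea of a unified control set can be made concrete with a simple mapping: each key control is defined once and linked to the external requirements it satisfies, so compliance activities are driven from the controls rather than from each regime separately. The control IDs and regime names below are invented for illustration:

```python
# Hypothetical illustration of a "unified control set": each key control
# maps to the external compliance regimes it supports.
# Control IDs and regime names are invented for this example.

UNIFIED_CONTROLS = {
    "AC-01 user authentication": ["ISO 27001", "SOC 2", "PCI DSS"],
    "CR-02 data encryption at rest": ["PCI DSS", "HIPAA"],
    "SG-03 tenant data segregation": ["ISO 27001", "SOC 2"],
}

def controls_for_regime(regime: str) -> list[str]:
    """List the key controls that support a given compliance regime."""
    return [c for c, regimes in UNIFIED_CONTROLS.items() if regime in regimes]

print(controls_for_regime("SOC 2"))
# ['AC-01 user authentication', 'SG-03 tenant data segregation']
```

Testing a control once then satisfies every regime it maps to, which is what lets the CSP avoid running disparate compliance efforts in parallel.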
3. Monitoring
Monitoring and testing processes are defined and executed on an ongoing basis for key controls.
Gaps requiring remediation are identified with remediation progress tracked. The results of
ongoing monitoring activities may also be used to support any required external audits. Refer to
“Auditing the Cloud for Compliance” on page 194 for a discussion of external audit approaches.
4. Reporting
Metrics and key performance indicators (KPIs) are defined and reported on an ongoing basis.
Reports of control effectiveness and trending are made available to CSP management and external
customers, as appropriate.
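A minimal sketch of one such KPI, assuming monitoring produces pass/fail results per control test: the control-effectiveness pass rate, the kind of figure that might be trended in management and customer reports.

```python
# A minimal sketch, assuming monitoring yields pass/fail test results:
# control effectiveness reported as the percentage of tests that passed.

def effectiveness_kpi(results: list[bool]) -> float:
    """Share of control tests that passed, as a percentage."""
    if not results:
        return 0.0
    return 100.0 * sum(results) / len(results)

monthly_results = [True, True, False, True, True, True, True, False]
print(f"{effectiveness_kpi(monthly_results):.1f}%")  # 75.0%
```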
5. Continuous improvement
Management improves its controls over time—acting swiftly to address any significant gaps
identified during the course of monitoring and taking advantage of opportunities to improve
processes and controls.
6. Risk assessment—new IT projects and systems
The CSP performs a risk assessment as new IT projects, systems, and services are developed to
identify new risks and requirements, to assess the impact on the CSP’s current controls, and to
determine whether additional or modified controls and monitoring processes are needed.
The CSP also performs an assessment when considering entry into a new industry or market or
taking on a major new client with unique control requirements.