Cloud Computing
PARALA MAHARAJA
ENGINEERING COLLEGE
Uniprocessor
A processor is basically made up of a set of transistors. Its performance can be improved
by either increasing the number of transistors or the quality of the transistors, or by
utilizing pipelining, so that each individual subsection of the processor is kept busy.
Multiprocessor
Tries to increase the processing capacity of the processor by creating multiple processing units (cores)
within a single processor.
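The limit on what extra cores can buy is the standard result known as Amdahl's law. The sketch below is not from the notes; it is a small illustration of that bound.

```python
# Hedged illustration (not from the notes): Amdahl's law bounds the
# speedup of a multiprocessor when a fraction of the program stays serial.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Speedup = 1 / ((1 - p) + p / n) for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
```

For example, a program that is 90% parallelizable tops out below a 10x speedup no matter how many cores are added.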
Parallel vs Distributed
UMA (Uniform Memory Access) vs NUMA (Non-Uniform Memory Access)
server => GRID => CLOUD
There are certain services and models working behind the scenes that make cloud computing feasible
and accessible to end users. The following are the working models for cloud computing:
Deployment Models
Deployment models define the type of access to the cloud, i.e., how the cloud is located. A cloud can
have any of four types of access: Public, Private, Hybrid, and Community.
PUBLIC CLOUD : The Public Cloud allows systems and services to be easily accessible to the general public.
Public cloud may be less secure because of its openness, e.g., e-mail.
PRIVATE CLOUD : The Private Cloud allows systems and services to be accessible within an organization.
It offers increased security because of its private nature.
COMMUNITY CLOUD : The Community Cloud allows systems and services to be accessible by a group of
organizations.
HYBRID CLOUD : The Hybrid Cloud is a mixture of public and private cloud. The critical activities
are performed using the private cloud, while the non-critical activities are performed using the public cloud.
Service Models
Though service-oriented architecture advocates "Everything as a
service" (with the acronym EaaS), cloud-computing providers
offer their "services" according to different models, of which the
three standard models per NIST are Infrastructure as a Service
(IaaS), Platform as a Service (PaaS), and Software as a Service
(SaaS). These models offer increasing abstraction; they are thus
often portrayed as layers in a stack: infrastructure-, platform- and
software-as-a-service, but these need not be related. For example,
one can provide SaaS implemented on physical machines, without
using underlying PaaS or IaaS layers, and conversely one can run a
program on IaaS and access it directly, without wrapping it as SaaS.
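The layering above is often summarized by who manages which part of the stack. The sketch below is an illustration, not from the notes; the layer names are simplified.

```python
# Illustrative sketch (not from the notes): which layers the provider
# manages under each NIST service model. Layer names are simplified.
LAYERS = ["networking", "storage", "servers", "virtualization",
          "os", "middleware", "runtime", "data", "application"]

PROVIDER_MANAGES = {
    "IaaS": set(LAYERS[:4]),   # provider manages up to virtualization
    "PaaS": set(LAYERS[:7]),   # provider also manages OS through runtime
    "SaaS": set(LAYERS),       # provider manages everything
}

def consumer_manages(model: str) -> list[str]:
    """Layers left for the cloud consumer under a given service model."""
    return [layer for layer in LAYERS if layer not in PROVIDER_MANAGES[model]]
```

Under SaaS the consumer manages nothing; under IaaS the consumer still owns the OS, middleware, runtime, data, and application.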
Cloud computing architecture is a combination of service-oriented architecture and event-driven
architecture. It is divided into the following two parts:
1. Front End
The front end is used by the client. It contains the client-side interfaces and applications that are
required to access cloud computing platforms. The front end includes user agents (web browsers
such as Chrome, Firefox, and Internet Explorer), thin and fat clients, tablets, and mobile devices.
2. Back End
The back end is used by the service provider. It manages all the resources required to provide
cloud computing services. It includes large-scale data storage, security mechanisms, virtual
machines, deployment models, servers, traffic control mechanisms, etc.
Components of Cloud Computing Architecture
1. Client Infrastructure
Client Infrastructure is a front-end component. It provides a GUI (Graphical User Interface) for
interacting with the cloud.
2. Application
The application may be any software or platform that a client wants to access.
3. Service
The service component determines which type of cloud service you access, according to the client's requirement.
i. Software as a Service (SaaS) – It is also known as cloud application services. Most SaaS applications
run directly in the web browser, meaning we do not need to download and install them.
Example: Google Workspace, Salesforce, Dropbox.
ii. Platform as a Service (PaaS) – It is also known as cloud platform services. It is quite similar to SaaS,
but the difference is that PaaS provides a platform for software creation, whereas SaaS lets us access
finished software over the internet without any underlying platform.
Example: Windows Azure, Force.com, Magento Commerce Cloud, OpenShift.
iii. Infrastructure as a Service (IaaS) – It is also known as cloud infrastructure services. It provides
fundamental computing resources (virtual machines, storage, and networks); the consumer manages
the application data, middleware, and runtime environment on top of them.
Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE), Cisco Metapod.
4. Runtime Cloud
Runtime Cloud provides the execution and runtime environment to the virtual machines.
5. Storage
Storage is one of the most important components of cloud computing. It provides a huge amount of
storage capacity in the cloud to store and manage data.
6. Infrastructure
It provides services on the host level, application level, and network level. Cloud infrastructure includes
hardware and software components such as servers, storage, network devices, virtualization software,
and other storage resources that are needed to support the cloud computing model.
7. Management
Management is used to manage components such as the application, service, runtime cloud, storage,
infrastructure, and other security issues in the back end, and to establish coordination between them.
8. Security
Security is an in-built back-end component of cloud computing. It implements security mechanisms in
the back end.
9. Internet
The Internet is the medium through which the front end and back end interact and communicate with
each other.
Scenarios in Cloud: 1
1. Cloud consumer interacts with the cloud broker instead of contacting a cloud provider directly.
2. The cloud broker may create a new service (a mashup) by combining multiple services or by enhancing
an existing service.
3. Actual cloud providers are invisible to the cloud consumer.
Scenarios in Cloud: 2
1. Cloud carriers provide the connectivity and transport of cloud services from cloud providers to cloud
consumers.
2. Cloud provider participates in and arranges for two unique service level agreements (SLAs), one with a
cloud carrier (e.g. SLA2) and one with a cloud consumer (e.g. SLA1).
3. A cloud provider may request the cloud carrier to provide dedicated and encrypted connections to
ensure the cloud services meet the SLAs.
Scenarios in Cloud: 3
1. Cloud auditor conducts independent assessments for the operation and security of the cloud service.
2. The audit may involve interactions with both the Cloud Consumer and the Cloud Provider.
Cloud Consumer
Cloud consumer browses & uses the service.
Cloud consumer sets up contracts with the cloud provider.
Cloud consumers need SLAs to specify the technical performance requirements that should be
fulfilled by a cloud provider.
SLAs cover the quality of service, security, and remedies for performance failures.
A cloud provider lists some SLAs that limit and obligate cloud consumers, which they must accept.
A cloud consumer can freely choose a cloud provider with better pricing and more favorable conditions;
however, a given provider's pricing policy and SLAs are typically non-negotiable.
SaaS consumers
SaaS consumers can be organizations that provide their members with access to software applications,
end users who directly use software applications, or software application administrators who configure
applications for end users.
SaaS consumers can be billed based on the number of end users, the time of use, the network bandwidth
consumed, the amount of data stored or duration of stored data.
PaaS consumers
PaaS consumers can be:
1. application developers who design and implement application software;
2. application testers who run and test applications;
3. application deployers who publish applications into the cloud;
4. application administrators who configure and monitor application performance.
PaaS consumers can be billed according to the processing, database storage, and network resources
consumed by the PaaS application, and the duration of platform usage.
IaaS consumer
IaaS consumers can be system developers, system administrators, and IT managers who are interested in
creating, installing, managing, and monitoring services for IT infrastructure operations.
IaaS consumers can be billed according to the amount or duration of the resources consumed, such as
CPU hours used by virtual machines, the volume and duration of data stored, network bandwidth
consumed, and the number of IP addresses used for certain intervals.
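The metering dimensions above can be sketched as a simple usage-based bill. The rates below are invented for illustration; they are not real provider prices.

```python
# Hypothetical sketch of usage-based IaaS billing. The rates are made up
# for illustration and are not real cloud-provider prices.
RATES = {
    "cpu_hours": 0.05,          # $ per CPU hour of virtual machines
    "storage_gb_month": 0.02,   # $ per GB-month of data stored
    "bandwidth_gb": 0.09,       # $ per GB of network transfer
    "ip_address_hours": 0.005,  # $ per hour an IP address is held
}

def monthly_bill(usage: dict[str, float]) -> float:
    """Sum rate * quantity over each metered dimension, rounded to cents."""
    return round(sum(RATES[k] * v for k, v in usage.items()), 2)
```

For example, 100 CPU hours plus 50 GB-months of storage comes to $6.00 under these illustrative rates.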
Cloud Provider
Cloud Provider acquires and manages the computing infrastructure required for providing the services,
runs the cloud software that provides the services, and makes arrangements to deliver the cloud services
to the Cloud Consumers through network access.
SaaS provider deploys, configures, maintains and updates the operation of the software applications on
a cloud infrastructure. SaaS provider maintains the expected service levels to cloud consumers.
PaaS Provider manages the computing infrastructure for the platform and components (runtime
software execution stack, databases, and other middleware).
IaaS Cloud Provider provides the physical hardware and cloud software that make the provisioning of these
infrastructure services possible: for example, the physical servers, network equipment, storage devices,
host OS, and hypervisors for virtualization.
Cloud auditor
Audits are performed to verify conformance to standards.
Auditor evaluates the security controls, privacy impact, performance, etc.
Auditing is especially important for federal agencies.
Security auditing makes an assessment of the security controls to determine the extent to which
they are implemented correctly, operating as intended, and producing the desired outcome.
This is done by verifying compliance with regulation and security policy.
A privacy audit helps Federal agencies comply with applicable privacy laws and regulations governing
an individual's privacy, and ensures confidentiality, integrity, and availability of an individual's personal
information at every stage of development and operation.
Cloud Broker
Integration of cloud services can be complex for consumers; hence, a cloud broker is needed.
Broker manages the use, performance and delivery of cloud services and negotiates relationships
between cloud providers and cloud consumers.
In general, a cloud broker can provide services in three categories:
Service Intermediation: Broker enhances a service by improving capability and providing value-
added services to consumers. The improvement can be managing access to cloud services,
identity management, performance reporting, enhanced security, etc.
Service Aggregation: Broker combines and integrates multiple services into one or more new
services. The broker provides data integration and ensures the secure data movement.
Service Arbitrage: It is similar to service aggregation with the flexibility to choose services from
multiple agencies. For example, broker can select service with the best response time.
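The arbitrage example above, where the broker selects the service with the best response time, can be sketched as follows. The provider names and timings are invented for illustration.

```python
# Sketch of service arbitrage (hedged: provider names and response times
# are invented). The broker picks the provider with the lowest measured
# response time.
def pick_provider(measurements: dict[str, float]) -> str:
    """Return the provider with the lowest measured response time (ms)."""
    return min(measurements, key=measurements.get)
```

For example, given {"providerA": 120.0, "providerB": 85.0, "providerC": 240.0}, the broker would route requests to providerB.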
Cloud Carrier
Cloud carriers provide access to consumers through network, telecommunication and other access
devices.
For example, cloud consumers can obtain cloud services through network access devices, such as
computers, laptops, mobile phones, mobile internet devices (MIDs), etc.
The distribution of cloud services is normally provided by network and telecommunication carriers or a
transport agent, where a transport agent refers to a business organization that provides physical
transport of storage media such as high-capacity hard drives.
Cloud provider can set up SLAs with a cloud carrier to provide services consistent with the level of SLAs
offered to cloud consumers.
On the other hand, the target of a choreography language is the coordination of long-running
interactions between multiple distributed parties, where each party uses web services to offer its
externally accessible operations. Choreography languages depict compositions from a global
viewpoint, showing the interchange of messages between the involved parties. The languages used
for web service choreography are:
1. WSCI:
It is an XML-based language to describe the interface of a web service that participates in a
choreographed interaction with other services. This interface shows the flow of messages exchanged by
the web services. The language was developed by companies including Sun, SAP, BEA, and Intalio.
WSCI also describes how the choreography of these operations should expose relevant information such
as message correlation, exception handling, transaction description and dynamic participation
capabilities. This behavior is expressed by means of temporal and logical dependencies in the flow of
messages. For that purpose, WSCI includes sequencing rules, correlation, exception handling, and
transactions. However, the internal implementation of the web service is not addressed by WSCI.
2. WS-CDL:
WS-CDL is an XML-based language to describe the peer-to-peer collaborations of web services taking
part in a choreography. This description defines (from a global viewpoint) the common behavior of the
services and the ordered message interchanges that make reaching a common business goal possible.
Choreography modeling with WS-CDL consists of the following elements:
a. Participant: groups all the parts of the collaboration that must be implemented by the same
entity.
b. Role: Potential behavior of the participant.
c. Relationship: identifies the mutual obligations that must be fulfilled for a collaboration to
succeed.
d. Type: Kind of information corresponding to a variable.
e. Variables: information about the common objects in collaboration.
f. Token: alias to the reference part of a variable.
g. Choreographies: A choreography defines collaboration between participants using the
following means:
a. Choreography Composition
b. Choreography Lifeline
c. Choreography Recovery
h. Channel: a point of collaboration between participants.
i. Activities: an activity is the lowest-level element of a choreography that performs some work.
j. Ordering Structures: include sequence, parallel, and choice.
k. Semantics: allows the creation of descriptions with semantic definitions.
In OWL-S, each service is considered a set of atomic processes with associated inputs and outputs.
When the mapping from abstract definition to concrete utilization must be done, OWL-S is
complemented with the use of WSDL for the concrete definition of services.
4.
=> file structure
Multiple layers of classification, with permission for the user to decide the type of
classification used for storing the data on the HDD.
With security
=> database
Sequential storing of data in a well-structured manner.
Removing the redundancy
Maintaining the ACID properties
With proper intermediate relationships
With proper security
=> big data: the 5 Vs
Volume
Value
Velocity
Variety
Veracity
Data Center:
A data center is a physical facility that organizations use to house their critical applications and data. A data
center's design is based on a network of computing and storage resources that enable the delivery of shared
applications and data. The key components of a data center design include routers, switches, firewalls, storage
systems, servers, and application-delivery controllers.
Availability
Amazon Web Services provides services from dozens of data centers spread across availability zones (AZs) in
regions across the world. An AZ is a location that contains multiple physical data centers. A region is a
collection of AZs in geographic proximity connected by low-latency network links.
A business will choose one or multiple availability zones for a variety of reasons, such as compliance and
proximity to end customers. For example, an AWS customer can spin up virtual machines (VMs) and replicate
data in different AZs to achieve a highly reliable infrastructure that is resistant to failures of individual servers or
an entire data center.
Amazon Elastic Compute Cloud (EC2) is a service that provides virtual servers (called EC2 instances) for
compute capacity. The EC2 service offers dozens of instance types with varying capacities and sizes, tailored to
specific workload types and applications, such as memory-intensive and accelerated-computing jobs. AWS also
provides an Auto Scaling tool to dynamically scale capacity to maintain instance health and performance.
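The kind of rule an autoscaler evaluates can be sketched as below. The thresholds and the policy itself are invented for illustration; this is not AWS's actual Auto Scaling algorithm.

```python
# Hedged sketch of a threshold-based autoscaling rule. Thresholds and
# policy are invented for illustration, not AWS's actual algorithm.
def desired_instances(current: int, avg_cpu: float,
                      low: float = 30.0, high: float = 70.0,
                      min_n: int = 1, max_n: int = 10) -> int:
    """Scale out above `high` % CPU, scale in below `low` %, clamped to limits."""
    if avg_cpu > high:
        return min(current + 1, max_n)
    if avg_cpu < low:
        return max(current - 1, min_n)
    return current
```

For example, a fleet of 2 instances at 80% average CPU would grow to 3, while a single instance at 10% CPU stays at the minimum of 1.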
Storage
Amazon Simple Storage Service (S3) provides scalable object storage for data backup, collection and
analytics. An IT professional stores data and files as S3 objects -- which can be up to 5 terabytes (TB) in size --
inside S3 buckets to keep them organized. A business can save money with S3 through its Infrequent Access
storage tier or by using Amazon Glacier for long-term cold storage.
Amazon Elastic Block Store provides block-level storage volumes for persistent data storage when using EC2
instances. Amazon Elastic File System offers managed cloud-based file storage.
A business can also migrate data to the cloud via storage transport devices, such as AWS Snowball and
Snowmobile, or use AWS Storage Gateway to enable on-premises apps to access cloud data.
Developer tools
A developer can take advantage of AWS command-line tools and software development kits (SDKs)
to deploy and manage applications and services. This includes:
The AWS Command Line Interface, which is Amazon's proprietary code interface.
AWS Tools for PowerShell, which lets a developer manage cloud services from Windows
environments.
The AWS Serverless Application Model, which simulates an AWS environment to
test Lambda functions.
AWS SDKs are available for a variety of platforms and programming languages, including Java, PHP,
Python, Node.js, Ruby, C++, Android and iOS.
Amazon API Gateway enables a development team to create, manage and monitor custom application
program interfaces (APIs) that let applications access data or functionality from back-end services. API
Gateway manages thousands of concurrent API calls at once.
AWS also provides a packaged media transcoding service (like Amazon Elastic Transcoder) and a
service that visualizes workflows for microservices-based applications (AWS Step Functions).
A development team can also create continuous integration and continuous delivery pipelines with
services like:
AWS CodePipeline
AWS CodeBuild
AWS CodeDeploy
AWS CodeStar
A developer can also store code in Git repositories with AWS CodeCommit and evaluate the
performance of microservices-based applications with AWS X-Ray.
Management and monitoring
An admin can manage and track cloud resource configuration via AWS Config and AWS Config
Rules. Those tools, along with AWS Trusted Advisor, can help an IT team avoid improperly
configured and needlessly expensive cloud resource deployments.
AWS provides several automation tools in its portfolio. An admin can automate infrastructure
provisioning via AWS CloudFormation templates, and also use AWS OpsWorks and Chef to
automate infrastructure and system configurations.
An AWS customer can monitor resource and application health with Amazon CloudWatch and the
AWS Personal Health Dashboard, as well as use AWS CloudTrail to retain user activity and API
calls for auditing.
Artificial intelligence
AWS offers a range of AI model development and delivery platforms, as well as packaged AI-based applications.
The Amazon AI suite of tools includes:
Amazon Lex for voice and text chatbot technology;
Amazon Polly for text-to-speech translation; and
Amazon Rekognition for image and facial analysis.
AWS also provides technology for developers to build smart apps that rely on machine learning technology and
complex algorithms. With AWS Deep Learning Amazon Machine Images (AMIs), developers can create and
train custom AI models with clusters of graphics processing units (GPUs) or compute-optimized instances. AWS
also includes deep learning development frameworks for MXNet and TensorFlow.
On the consumer side, AWS technologies power the Alexa Voice Services, and a developer can use the Alexa
Skills Kit to build voice-based apps for Echo devices.
Mobile development
The AWS Mobile Hub offers a collection of tools and services for mobile app developers, including the AWS
Mobile SDK, which provides code samples and libraries. A mobile app developer can also use Amazon Cognito
to manage user access to mobile apps, as well as Amazon Pinpoint to send push notifications to application end
users and then analyze the effectiveness of those communications.
Amazon Simple Notification Service (SNS) enables a business to send publish/subscribe messages to endpoints,
such as end users or services. SNS includes a mobile messaging feature that enables push messaging to mobile
devices. Amazon Simple Email Service (SES) provides a platform for IT professionals and marketers to send
and receive emails.
Game development
AWS can also be used for game development. Large game development companies, such as Ubisoft, use
AWS services for their games, like For Honor. AWS can provide services for each part of a game's lifecycle.
For example, AWS will provide a developer back-end services, analytics and developer tools. Developer tools
should help aid developers in making their game, while back-end services might be able to help with building,
deploying or scaling a developer's platform. Analytics might help developers better know their customers and
how they play the game. Developers can also store data, or host game data on AWS servers.
Internet of Things
AWS also has a variety of services that enable internet of things (IoT) deployments. The AWS IoT service
provides a back-end platform to manage IoT devices and data ingestion to other AWS storage and database
services. The AWS IoT Button provides hardware for limited IoT functionality and AWS Greengrass brings
AWS compute capabilities to IoT devices.
Other services
Amazon Web Services has a range of business productivity SaaS options, including:
Amazon Chime, which enables online video meetings, calls and text-based chats across devices.
Amazon WorkDocs, which is a file storage and sharing service.
Amazon WorkMail, which is a business email service with calendaring features.
Desktop and streaming application services include Amazon WorkSpaces, a remote desktop-as-a-service
platform (DaaS), and Amazon AppStream, a service that lets a developer stream a desktop application from AWS
to an end user's web browser.
History
The AWS platform was originally launched in 2002 with only a few services. In 2003, AWS was re-envisioned
to make Amazon's compute infrastructure standardized, automated and web service focused. This re-
envisioning included the thought of selling access to virtual servers as a service platform. One year later, in
2004, the first publicly available AWS service (Amazon SQS) was launched.
In 2006, AWS was relaunched to include three services -- including Amazon S3 cloud storage, SQS, and EC2 --
officially making AWS a suite of online core services. In 2009, S3 and EC2 were launched in Europe, and the
Elastic Block Store and Amazon CloudFront were released and added to AWS. In 2013, AWS started to offer
a certification process in AWS services, and 2018 saw the release of an autoscaling service.
Over time, AWS has added plenty of services that helped make it a low-cost infrastructure platform that is
highly available and scalable. AWS now has a focus on the cloud, with data centers placed around the world, in
places such as the United States, Australia, Europe, Japan and Brazil.
Acquisitions
Over time, AWS has acquired multiple organizations, increasing its focus on technologies it wants to further
incorporate. Recently AWS' acquisitions haven't concentrated on larger well-established companies, but instead
on organizations that could bolster and overall improve the cloud vendor's existing offerings. These acquisitions
don't add to AWS, but rather enhance its core services. For example, AWS has acquired TSO Logic, Sqrrl and
CloudEndure.
TSO Logic was a cloud migration company that provided analytics, enabling customers to view the state of their
current data center and model a migration to the cloud. Sqrrl was a security startup that collected data from points
such as gateways, servers and routers, and then put those findings inside a security dashboard. CloudEndure is
a company that focuses on workload migrations to the public cloud, disaster recovery and backup.
These acquisitions shouldn't majorly change AWS; they will position it better. For example, the acquisition of
CloudEndure should accelerate movement of on-premises workloads to the AWS cloud.
EC2 history
EC2 was the idea of engineer Chris Pinkham who conceived it as a way to scale Amazon's internal
infrastructure. Pinkham and engineer Benjamin Black presented a paper on their ideas to Amazon CEO Jeff
Bezos, who liked what he read and requested details on virtual cloud servers. EC2 was then developed by
a team in Cape Town, South Africa. Pinkham provided the initial architecture guidance for EC2, gathered
a development team and led the project along with Willem van Biljon.
In 2006, Amazon announced a limited public beta test of EC2, and in 2007 added two new instance types -
- Large and Extra-Large. Amazon announced the addition of static IP addresses, availability zones, and
user selectable kernels in spring 2008, followed by the release of the Elastic Block Store (EBS) in August.
Amazon EC2 went into full production on October 23, 2008. Amazon also released a service level
agreement (SLA) for EC2 that day, along with Microsoft Windows and SQL Server in beta form on EC2.
Amazon added the AWS Management Console, load balancing, autoscaling, and cloud monitoring services
in 2009.
Cost
On-Demand instances allow a developer to create resources as needed and to pay for them by the hour.
Reserved instances (RIs) provide a price discount in exchange for one- or three-year contract commitments
-- a developer can also opt for a convertible RI, which allows for the flexibility to change the instance type,
operating system or tenancy.
There's also an option to purchase a second-hand RI from the Amazon EC2 reserved instances marketplace.
A developer can also submit a bid for spare Amazon EC2 capacity, called Spot instances, for a workload
that has a flexible start and end time.
If a business needs dedicated physical server space, a developer can opt for EC2 dedicated hosts, which
charge hourly and let the business use existing server-bound software licenses, including Windows Server
and SQL Server.
A breakdown of Amazon EC2 instances and their associated prices.
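The on-demand vs reserved trade-off above can be sketched as a break-even check. All prices here are invented for illustration; they are not real EC2 rates.

```python
# Hypothetical sketch of the on-demand vs reserved-instance trade-off.
# All prices are invented for illustration, not real EC2 rates.
def reserved_is_cheaper(hours_used: int,
                        od_hourly: float = 0.10,   # on-demand $/hour
                        upfront: float = 200.0,    # RI upfront payment
                        ri_hourly: float = 0.04    # RI discounted $/hour
                        ) -> bool:
    """True when upfront + discounted hours beats the pure on-demand cost."""
    return upfront + ri_hourly * hours_used < od_hourly * hours_used
```

Under these illustrative rates, an instance running all year (8,760 hours) favors the reserved commitment, while a lightly used one (1,000 hours) does not.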
Benefits
Getting started with EC2 is easy, and because EC2 is controlled by APIs, developers can commission any
number of server instances at the same time to quickly increase or decrease capacity. EC2 allows for complete
control of instances, which makes operation as simple as if the machine were in-house.
The flexibility of multiple instance types, operating systems, and software packages and the fact that EC2 is
integrated with most AWS Services -- S3, Relational Database Service (RDS), Virtual Private Cloud (VPC) --
makes it a secure solution for computing, query processing, and cloud storage.
Challenges
Resource utilization -- developers must manage the number of instances they have to avoid costly large, long-
running instances.
Security -- developers must make sure that public facing instances are running securely.
Deploying at scale -- running a multitude of instances can result in cluttered environments that are difficult to
manage.
Management of AMI lifecycle -- developers often begin by using default Amazon Machine Images. As
computing needs change, custom configurations will likely be required.
Ongoing maintenance -- Amazon EC2 instances are virtual machines that run in Amazon's cloud. However,
they ultimately run on physical hardware which can fail. AWS alerts developers when an instance must be
moved due to hardware maintenance. This requires ongoing monitoring.
EC2 vs. S3
Both Amazon EC2 and Amazon S3 are important services that allow developers to maximize use of the AWS
cloud. The main difference between Amazon EC2 and S3 is that EC2 is a computing service that allows
companies to run servers in the cloud, while S3 is an object storage service used to store and retrieve data
from AWS through the Internet. S3 is like a giant hard drive in the cloud, while EC2 offers CPU and RAM in
addition to storage. Many developers use both services for their cloud computing needs.
Amazon Simple Storage Service (Amazon S3)
Amazon Simple Storage Service (Amazon S3) is a scalable, high-speed, web-based cloud storage service.
The service is designed for online backup and archiving of data and applications on Amazon Web Services
(AWS). Amazon S3 was designed with a minimal feature set and created to make web-scale computing
easier for developers.
Amazon S3 features
S3 provides 99.999999999% durability for objects stored in the service and supports multiple security and
compliance certifications. An administrator can also link S3 to other AWS security and monitoring services,
including CloudTrail, CloudWatch and Macie. There's also an extensive partner network of vendors that
link their services directly to S3.
Data can be transferred to S3 over the public internet via access to S3 application programming interfaces
(APIs). There's also Amazon S3 Transfer Acceleration for faster movement over long distances, as well as
AWS Direct Connect for a private, consistent connection between S3 and an enterprise's own data center.
An administrator can also use AWS Snowball, a physical transfer device, to ship large amounts of data
from an enterprise data center directly to AWS, which will then upload it to S3.
In addition, users can integrate other AWS services with S3. For example, an analyst can query data directly
on S3 either with Amazon Athena for ad hoc queries or with Amazon Redshift Spectrum for more complex
analyses.
Use cases
Amazon S3 can be used by organizations ranging in size from small businesses to large enterprises. S3's
scalability, availability, security and performance capabilities make it suitable for a variety of data storage
use cases. Common use cases for S3 include the following:
data storage;
data archiving;
application hosting for deployment, installation and management of web apps;
software delivery;
data backup;
disaster recovery (DR);
running big data analytics tools on stored data;
data lakes;
mobile applications;
internet of things (IoT) devices;
media hosting for images, videos and music files; and
website hosting -- particularly well suited to work with Amazon CloudFront for content delivery.
A user can also implement life cycle management policies to curate data and move it to the most appropriate
tier over time.
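A lifecycle policy like the one just described can be sketched as an age-based tiering rule. The tier names mirror S3's storage classes, but the day thresholds below are invented for illustration.

```python
# Hedged sketch of an age-based lifecycle rule. Tier names mirror S3's
# storage classes; the day thresholds are invented for illustration.
def tier_for_age(age_days: int) -> str:
    """Pick a storage tier for an object based on how old it is."""
    if age_days >= 365:
        return "glacier"            # long-term cold storage
    if age_days >= 30:
        return "infrequent_access"  # cheaper storage, higher access cost
    return "standard"               # frequently accessed data
```

A real policy would be attached to a bucket and evaluated by the storage service, not by application code.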
Competitor services
Competitor services to Amazon S3 include other object storage services. Comparable object
storage services are offered by other major cloud service providers (CSPs), such as Google, Microsoft,
IBM and Alibaba.
Amazon Simple Queue Service (SQS)
Amazon Simple Queue Service supports tasks that process asynchronously. Instead of one application having to
invoke another application directly, the service enables an application to submit a message to a queue, which
another application can then pick up at a later time.
An SQS queue can be FIFO (first-in, first-out) or standard. A FIFO queue maintains the exact order in which
messages are sent and received. Standard queues attempt to preserve the order of messages but can reorder
them when processing demands require it. FIFO queues provide exactly-once delivery, while standard
queues provide at-least-once delivery.
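The FIFO semantics described above can be illustrated with a small in-memory sketch. This is a simplification for intuition, not the SQS API: deduplication IDs give exactly-once behavior, and a deque preserves strict ordering.

```python
# Sketch of FIFO-queue semantics (a simplification, not the SQS API):
# strict ordering via a deque, exactly-once delivery via deduplication IDs.
from collections import deque

class FifoQueueSketch:
    def __init__(self):
        self._q = deque()
        self._seen = set()   # deduplication IDs already accepted

    def send(self, msg_id: str, body: str) -> None:
        """Accept a message once; duplicate sends with the same ID are dropped."""
        if msg_id not in self._seen:
            self._seen.add(msg_id)
            self._q.append(body)

    def receive(self):
        """Deliver messages in exactly the order they were sent."""
        return self._q.popleft() if self._q else None
```

A standard queue, by contrast, would be free to deliver "b" before "a" or deliver "a" twice.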
SQS is compatible with other Amazon Web Services, including Amazon Relational Database Service, Amazon
Elastic Compute Cloud and Amazon Simple Storage Service.
VMware first introduced the vCloud tag at the 2008 VMworld conference in Las Vegas. In the early days,
there were many iterations, from the vCloud Pavilion through vCloud Hybrid Service and vCloud Air. The
latter provided public Infrastructure-as-a-Service (IaaS) running VMware vSphere and was eventually
acquired in 2017 by French cloud computing company OVH.
Over the last few years, VMware has shifted its focus towards cloud-agnostic software and the
integration of its products with leading cloud providers including Amazon, Microsoft, Google, IBM,
and Oracle. Furthermore, VMware aims to bring the benefits of cloud computing to customers'
existing data centers through private and hybrid cloud deployments, as well as to provide
platforms for cloud-native application development.
Although VMware still partners with OVH on go-to-market solutions and customer support for vCloud Air,
the acquisition suggested a move away from VMware itself being a cloud provider, and more towards
engineering the building blocks for deployment and management of multi-cloud platforms.
VMware now classifies vCloud Suite as a cloud infrastructure management solution, and VMware Cloud
Director (VCD) as a cloud-service delivery platform for cloud providers.
According to VMware’s Public Cloud Solution Service Definition, VMware Cloud Providers are a global
network of ‘service providers who have built their cloud and hosting services on VMware software.’
VMware-powered private clouds, whether service provider-managed or unmanaged, use VMware vSphere
with the vRealize Suite, which together form VMware vCloud Suite.
VMware-powered public clouds use VMware vSphere with VMware Cloud Director, and
generally expose vCloud Application Programming Interfaces (APIs) to their tenants.
The original vCloud Air is available through OVH as a hosted private cloud with enterprise support
including vSphere, vCenter, and NSX.
vCloud Suite
VMware vCloud Suite is the combination of enterprise-proven virtualization platform vSphere, and multi-
cloud management solution vRealize. VMware vSphere includes the hypervisor ESXi, providing server
virtualization, and vCenter Server, which centralizes the management of physical ESXi hosts and Virtual
Machines, as well as enabling some of the enterprise features like High Availability.
Included with vSphere in the vCloud Suite is vRealize, delivering automation, orchestration, and intelligent
IT operations for multi-cloud management and modern applications. The vRealize Suite contains the
following products:
vRealize Automation: for self-service provisioning, service catalog, governance, and policy
enforcement, with aligned orchestration to automate runbooks and workload deployments.
vRealize Operations: offers Machine Learning (ML) powered and self-driving operational
capabilities, monitoring, automated remediation, performance optimization, capacity management
and planning, usage metering, service pricing, and chargeback.
vRealize Log Insight: enables centralized log management and intelligent log analytics for
operational visibility, troubleshooting, and compliance.
vRealize Suite Lifecycle Manager: provides a comprehensive application lifecycle management
solution for vCloud Suite.
Additionally, vCloud Suite fully supports vSphere with Kubernetes and integrates seamlessly with other
Software-Defined Data components such as NSX and vSAN.
With multi-tenancy, each vRealize Automation tenant can have its own branding, services, and
fine-grained permissions; for example, tenant-specific branding can be applied to the login page.
In the vRealize Automation design canvas, administrators drag and drop the relevant components
to build automated deployments with corresponding catalog items.
These features allow cloud providers to upscale from IaaS hosting to a profitable
portfolio of cloud-based services, providing the following key benefits:
o Operational efficiency of deploying and maintaining cloud infrastructure for
tenants across multi-cloud environments.
o A unified management plane for the entire service portfolio.
o Reduced time-to-market for new and expanding services.
o Additional revenue streams from publishing custom service suites and
integration with Independent Software Vendors (ISVs).
o VCD is one of the main steps towards becoming Cloud Verified, providing an
industry-standard mark of recognition.
VMware Cloud Customers:
o VMware Cloud-as-a-Service consumption model of the full VMware
Software-Defined Data Center, as a managed service or with a complete set
of self-service controls.
o Ease of provisioning and scaling cloud services and partner services from a
single web interface or set of APIs.
o The fastest available path to hybrid cloud services and workload migration,
whether that be for portability between cloud platforms, or backup and
evacuation of existing data centers.
o Leverage Infrastructure-as-Code (IaC) capabilities across various cloud
platforms with native container services and Platform-as-a-Service (PaaS)
for Kubernetes and Bitnami.
Many of the benefits above work in turn for both parties, alongside taking advantage of
economies of scale to facilitate business growth with minimal operational overhead.
You can try both vCloud Suite (vSphere with vRealize) and VMware Cloud Director
using VMware Hands-on Labs. At the time of writing, the Cloud Director lab is still
running v9.7 and so is still branded vCloud.
vCloud Connector
Accompanying VMware Cloud Director, vCloud Air customers can make use of vCloud
Connector, a vSphere plugin that connects up to 10 private and public clouds. Using
vCloud Connector, customers can harness the full power of hybrid cloud from a single
interface to help with private data center extension and migration to a public cloud, or
management of hybrid cloud setups.
One of the great features of managing distributed environments from the vCloud
Connector plugin is the content sync, creating a single content library across the
entire cloud environment for increased operational efficiency and simplified source
catalog management.
The vCloud Connector itself has been available as a free download since v2.6. Although
the latest version of the product is v2.8.2, updated in March 2016, it remains available to
support vCloud Air customers with multi-cloud management.
To summarise, with the modern vCloud Suite we can standardize, automate, and
monitor distributed vSphere environments with vCenter Server and the vRealize Suite.
VMware's cloud-agnostic slogan, Any App, Any Device, Anywhere, aims to keep the
company's existing market-leading products, and recent acquisitions, relevant for
customers with cloud and multi-cloud strategies. By embedding further native PaaS
services for developers building modern applications, and a wide range of additional
SaaS offerings, both vCloud Suite and VMware Cloud Director are crucial elements of
this vision.
In my post Cloud Adoption for SMBs and End Users – Easy and Affordable, I talked
about how it makes perfect sense that SMBs move to the cloud. vCloud
Express, offered by a number of providers, is an ideal service for SMBs (and
enterprises alike) because it's quick, easy, and pay-as-you-go on a credit card.
It had been some time since I tried out vCloud Express, so I was thankful when I recently
had the opportunity to try vCloud Express from Terremark. I quickly found that
vCloud Express had grown up a lot since I last saw it. Before I show you how to get
started with vCloud Express, here are a few things that you should know:
1. Go to the vCloud Express from Terremark page and click Order Now to go to the signup page.
2. Fill out the New User Signup & activate your account.
3. At this point, you'll need to provide a credit card to Terremark to bill your per hour usage on.
4. When you Sign In, you'll be brought to the Resources page so click on Servers to get started creating
your first server.
5. At this point, you have a number of options. You can create Rows and Groups to help organize
servers if you'll have more than a couple of servers. However, minimally, if you're just going to
create one server like I am, then you can select either Create Server or Create Blank Server. The
difference between the two is that "Create Server" creates a new server from pre-built templates,
whereas "Create Blank Server" does what it says and creates an empty VM where you would install
your own OS. In my case, I want to demonstrate a VM with a pre-built OS (a template), so we'll
choose Create Server. (Note that we could even create a server with an OS and a SQL database.)
6. This brings up the Create Server Wizard that will guide us through the process. First
we need to specify the type of VM (OS, OS & Database, or Cohesive FT). I specified
OS only then set my OS to Windows 2008 Standard R2 64-bit. The only servers that I
saw with additional monthly fees were the SQL database servers.
7. Next, I had to specify the number of virtual processors (VPU) and the amount of RAM
that I wanted this server to have. Notice how as the CPU and RAM rises, so does the
cost per hour of this VM (also add in the cost for the virtual hard drive).
8. From here, I specified the server name, admin password, and IP
settings.
9. Next, I had to specify what row and group this server should be contained in (I
created new rows and groups then named them whatever I wanted).
10. Finally, I reviewed what we were about to deploy (including the associated costs),
opted to power on the server, and accepted the license agreement.
At this point, I was told that the new server could take up to 45 minutes to be created;
however, after just 5 minutes my new Windows server in the cloud was ready to be
used.
11. Next, select the server and click Connect. Likely you will have to install the VMware
MKS plugin, as I did, to use the console. I did have some trouble connecting to the
server console; however, I was successful when using Firefox, installing the MKS plug-in
as directed, and connecting to the VPN with VPN Connect (an SSL VPN that required
me to install the Cisco AnyConnect VPN Client).
12. From the server console, I updated the VMware Tools by mounting the provided
ISO, installing, and rebooting.
Note that you aren't recommended to use this web-based server console for daily
administration, only to get the server up and running to the point that you can connect to it
via RDP.
After only about 15 minutes of using vCloud Express, I had a working Windows 2008
R2 server with VMware Tools installed.
In summary, think about this: never before could you have a new Windows or Linux
server up and running on the Internet in under 15 minutes, paying only a few cents
per hour for the resources that you use. vCloud Express is revolutionary in its
simplicity, affordability, and ease of use.
David Davis is a VMware Evangelist and vSphere Video Training Author for Train
Signal. He has achieved CCIE, VCP, CISSP, and vExpert status over his 15+ years
in the IT industry. David has authored hundreds of articles on the Internet and nine
different video training courses for TrainSignal.com including the popular vSphere video
training package. Learn more about David at his blog or on Twitter and check out a
sample of his VMware vSphere video training course from TrainSignal.com.
Google AppEngine:
A scalable runtime environment, Google App Engine is mostly used to run Web applications.
These applications scale dynamically as demand changes over time, thanks to Google's vast
computing infrastructure. Because it offers a secure execution environment in addition to a
number of services, App Engine makes it easier to develop scalable, high-performance Web apps.
Google's infrastructure scales applications up and down in response to shifting demand. Cron
tasks, communications, scalable data stores, work queues, and in-memory caching are some of
these services.
The App Engine SDK facilitates the testing and profiling of applications by emulating the
production runtime environment, allowing developers to design and test applications on their
own PCs. When an application is finished, developers can quickly migrate it to App Engine, put
quotas in place to control the cost that is generated, and make the application available to
everyone. Python, Java, and Go are among the languages that are currently supported.
The development and hosting platform Google App Engine, which powers anything from web
programming for huge enterprises to mobile apps, uses the same infrastructure as Google’s large-
scale internet services. It is a fully managed PaaS (platform as a service) cloud computing platform
that uses in-built services to run your apps. You can start creating almost immediately after
receiving the software development kit (SDK). You may immediately access the Google app
developer’s manual once you’ve chosen the language you wish to use to build your app.
AppEngine is the Google’s Platform to Build Web Application on Cloud. It is the dynamic
Web server with full support for common web technologies. It supports Automatic Scaling &
Load balancing concept. It also has Transactional Datastore model.
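As a sketch of what the Python runtime serves, here is a minimal WSGI application. The handler and response text are illustrative only, and a real App Engine deployment would also include an app.yaml file declaring the runtime and routing.

```python
# Minimal WSGI application of the kind App Engine's Python runtime can serve.
# Hypothetical handler for illustration; App Engine scales instances of an
# app like this automatically as request volume changes.
def app(environ, start_response):
    # environ carries the request; start_response sets status and headers.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from App Engine!"]
```

Locally, an app like this can be exercised with the standard library's wsgiref server before being deployed.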
Google App Engine (often referred to as GAE or simply App Engine) is a platform as a service
(PaaS) cloud computing platform that provides Web app developers and enterprises with access
to Google's scalable hosting and tier 1 Internet service (for developing and hosting web
applications in Google-managed data centers). The App Engine requires that apps be written in
Java or Python, store data in Google BigTable and use the Google query language. Non-
compliant applications require modification to use App Engine.
Applications are sandboxed and run across multiple servers. App Engine offers automatic scaling
for web applications—as the number of requests increases for an application, AppEngine
automatically allocates more resources for the web application to handle the additional demand.
Google App Engine is free up to a certain level of consumed resources. Fees are charged for
additional storage, bandwidth, or instance hours required by the application. It was first released
as a preview version in April 2008, and came out of preview in September 2011.
1. Infrastructure for Security: The Internet infrastructure that Google uses is arguably the safest
in the entire world. Since application data and code are hosted on extremely secure servers,
there has rarely been any kind of unauthorized access to date.
2. Faster Time to Market: For every organization, getting a product or service to market quickly
is crucial. When it comes to quickly releasing the product, encouraging the development and
maintenance of an app is essential. A firm can grow swiftly with Google Cloud App Engine’s
assistance.
3. Quick to Start: You don’t need to spend a lot of time prototyping or deploying the app to users
because there is no hardware or product to buy and maintain.
4. Easy to Use: The tools that you need to create, test, launch, and update the applications are
included in Google App Engine (GAE).
5. Rich set of APIs & Services: A number of built-in APIs and services in Google App Engine
enable developers to create strong, feature-rich apps.
6. Scalability: This is one of the deciding variables for the success of any software. When using
the Google app engine to construct apps, you may access technologies like GFS, Big Table,
and others that Google uses to build its own apps.
7. Performance and Reliability: Among international brands, Google ranks among the top ones.
Therefore, you must bear that in mind while talking about performance and reliability.
8. Cost Savings: To administer your servers, you don’t need to employ engineers or even do it
yourself. The money you save might be put toward developing other areas of your company.
9. Platform Independence: Since the app engine platform only has a few dependencies, you can
easily relocate all of your data to another environment.
Components of AppEngine:
1. SDK
a. APIs
b. Easy deployment software
c. Locally run software
2. Runtime Languages
a. Python
b. Java
3. Scalable Infrastructure
Azure History
Microsoft unveiled Windows Azure in early October 2008, but it did not go live until February
2010. Later, in 2014, Microsoft changed the name from Windows Azure to Microsoft Azure.
Azure provided a service platform for .NET services, SQL services, and many live services.
Many people were still very skeptical about "the cloud"; as an industry, we were entering a
brave new world with many possibilities. Microsoft Azure keeps getting bigger and better, with
more tools and more functionality being added. It has had two releases so far: the well-known
Microsoft Azure v1 and the later Microsoft Azure v2. Microsoft Azure v1 was more JSON-script-driven
than the new version v2, which has an interactive UI for simplification and easier
learning. Microsoft Azure v2 is still in preview.
Microsoft Azure, formerly known as Windows Azure, is Microsoft's public cloud computing
platform. It provides a broad range of cloud services, including compute, analytics, storage and
networking. Users can pick and choose from these services to develop and scale new applications
or run existing applications in the public cloud.
The Azure platform aims to help businesses manage challenges and meet their organizational
goals. It offers tools that support all industries -- including e-commerce, finance and a variety of
Fortune 500 companies -- and is compatible with open source technologies. This gives users the
flexibility to use their preferred tools and technologies. In addition, Azure offers four different
forms of cloud computing: infrastructure as a service (IaaS), platform as a service (PaaS), software
as a service (SaaS) and serverless functions.
Microsoft charges for Azure on a pay-as-you-go (PAYG) basis, meaning subscribers receive a
bill each month that only charges them for the specific resources and services they have used.
Windows Azure
Windows Azure provides a virtual Windows runtime for executing applications and storing data
on computers in Microsoft data centers, which includes computational services, basic storage,
queues, web servers, management services, and load balancers. It also offers a local
development fabric for building and testing services before they are deployed to Windows Azure
in the cloud. Applications developed for Windows Azure scale better, are more reliable, and
require less administration than those developed with the traditional Windows programming
model. Users pay only for the computing and storage they consume, instead of maintaining an
enormous set of servers.
SQL Azure
Azure Marketplace
The Windows Azure Marketplace contains data and various other application market segments,
including data and web services from leading commercial data providers and authorized public
data sources. The typical steps for building and deploying an application on Windows Azure are
as follows:
1. Create a Windows Azure account and Login using Microsoft Live ID.
2. Prepare the development fabric to build an application in the local cloud platform.
3. Test the application in the development fabric.
4. Package the application for cloud deployment.
5. Test the application on Windows Azure in the cloud.
6. Deploy the application in the production farm.
Salesforce
Salesforce, Inc. is a cloud computing and social enterprise software-as-a-service (SaaS) provider
based in San Francisco. Founded in March 1999 by former Oracle executive Marc Benioff, Parker
Harris, Dave Moellenhoff and Frank Dominguez, the company started off as a customer
relationship management (CRM) platform vendor. Salesforce has transformed into a SaaS
powerhouse over time, offering multiple cloud platforms that serve specialized purposes. In
August 2022, Salesforce announced it had revenue of $7.72 billion, growing 22% year over year.
The main premise behind Salesforce is to deliver affordable CRM software as an online service.
Before Salesforce, most companies hosted CRM software on their servers or used local resources,
which required a great deal of time and financial investment.
Salesforce offers a pay-as-you-go subscription model and houses all the data in the cloud, which
makes it easily accessible from any internet-connected device. Contact Salesforce for pricing
information.
Salesforce offers a diverse infrastructure of software products designed to help teams from
different industries -- including marketing, sales, IT, commerce and customer service -- connect
with their customers. For example, by accessing the Salesforce Customer 360 app, teams across
an entire organization can connect and share a single view of customer data on an integrated
platform.
Salesforce CRM provides helpful insights into customer behavior and needs through customer
data analysis. By bridging the gaps between data silos from different departments, Salesforce
provides a holistic view of every customer interaction with a brand.
Salesforce enables organizations of every size and industry to better understand and connect with
their customers at a deeper level and grow their customer base. Businesses typically integrate
Salesforce into their ecosystem so employees can share customer views from any device,
regardless of their department or location.
Salesforce provides a 360-degree view of the customer lifecycle with streamlined workflows,
centralized cloud-based data management and real-time tracking of customer analytics. According
to Salesforce, more than 150,000 companies -- from small businesses to Fortune 500 companies -
- use its secure and scalable cloud platform.
For example, Pardot, which was renamed Marketing Cloud Account Engagement in April 2022,
is a Salesforce business-to-business (B2B) marketing automation tool that's designed to help
organizations accelerate their sales with better sales intelligence, generate high-quality leads with
powerful marketing tools, automate lead qualification and nurturing, and track campaign
performance.
Salesforce cloud services
Salesforce offers a diverse portfolio of products and services -- from CRM software and marketing
and sales management options to advanced analytics. Of its cloud platforms and applications, the
company is best known for its Salesforce CRM product, which is composed of the Sales Cloud,
Marketing Cloud, Service Cloud, Experience Cloud, Commerce Cloud and the Analytics Cloud.
Other Salesforce cloud offerings that address specific applications and industries include the App
Cloud, IoT Cloud, Financial Services Cloud, Health Cloud, Integration Cloud, Manufacturing
Cloud, Education Cloud, Nonprofit Cloud and the Vaccine Cloud.
The following list examines Salesforce cloud services and their prominent features.
1. Salesforce Sales Cloud enables sales teams to focus on the sales components of CRM in
addition to customer support. The main features of the Sales Cloud include the following:
2. Salesforce Marketing Cloud combines all marketing channels in one place and automates the
marketing processes. The main features of the Marketing Cloud include the following:
3. Salesforce Service Cloud provides a fast, artificial intelligence (AI)-driven customer service
and support experience to customers and enables businesses to scale their operations efficiently.
The main features of the Salesforce Service Cloud include the following:
enables service teams to communicate in real time with customers through the Live Agent
tool;
offers seamless collaboration with customers and faster query resolutions with the
integration of Slack-First Customer 360;
enables customers to reach across multiple digital channels, including mobile messaging,
AI-powered live chat, social media and email;
helps set up a self-service center for customers that includes communities and convenient
options for booking appointments, paying bills or checking account balances;
uses omnichannel routing to automatically deliver cases and leads to certain employees
based on their skill sets or availability;
helps turn insights into actions with the Salesforce Wave analytics app; and
provides a comprehensive view of workforce management -- including order placement,
delivery, scheduling, installation and tracking -- through the Salesforce Field Service
option.
5. Salesforce Commerce Cloud unifies the way businesses engage with customers over any
channel. It offers a suite of apps and software services that focus on the e-commerce business. The
main features of the Salesforce Commerce Cloud include the following:
7. Salesforce App Cloud is a collection of development tools that enable developers to quickly
and intuitively create applications that run on the Salesforce platform without writing code. App
Cloud provides native integration, eliminating the need for IT. It enables users to build apps that
integrate customer data for more engaging customer experiences. It helps automate business
processes and extend powerful APIs for added security. Tools in the App Cloud include the
following:
8. Salesforce IoT Cloud uses the power of IoT to turn data generated by customers, smart
devices, partners and sensors into useful customer data. The main features of the IoT Cloud
include the following:
enables users to process massive quantities of data received from different processes,
locations and network devices;
builds orchestration rules with intuitive tools to provide a low-code approach;
engages with customers in real time;
uses Einstein Analytics to provide advanced analytics gathered through a variety of
sources, including sensors, hardware components and portals; and
records and evaluates previous activities and actions through the customer context tool to
make real-time decisions.
10. Salesforce Health Cloud 2.0 enables businesses and government agencies to offer better
safety and health for their employees, communities and customers. Its mission is to improve
patient care during each step of the healthcare process -- from the first point of contact to
medical billing. The main features of the Health Cloud 2.0 include the following:
creates a profile for each member that includes demographics, communications and other
pertinent information in one location;
monitors cases and prioritizes tasks based on levels of importance;
enhances electronic health record systems by unlocking them and incorporating apps in a
secure and flexible platform;
enables patients to track progress toward health goals, care plans and post-acute care; and
helps track patient itineraries and detect system loopholes.
11. Salesforce Integration Cloud provides a single view of customer data for large businesses
and enterprises. This cloud helps users connect large amounts of data spread across the various
cloud platforms. The main features of the Integration Cloud include the following:
provides the Lightning Flow feature, which enables the creation of personalized customer
experience across all units including sales, service and marketing;
enables customer service reps to transform service interactions into cross-selling and
upselling opportunities, without ever leaving their console through the Lightning app builder
feature;
provides easy integration with third-party apps to optimize business and development
processes; and
helps with smart decisions and data optimization as data is pulled from all sources.
12. Salesforce Manufacturing Cloud is geared toward manufacturing companies and enables
them to view and collaborate between the operations and sales departments. Workers can also
access customer information through sales agreements and account-based forecasting. The main
features of the Manufacturing Cloud include the following:
provides a sales agreement feature that offers visibility into all customer negotiations and
contract terms;
offers native lead management that can be modified for any business needs;
enables manufacturers to view the current business as well as identify future
opportunities for improvements through the account-based forecasting feature; and
enables account managers to customize permissions and different settings for each
position on the team and for each workflow.
13. Salesforce Education Cloud combines Salesforce Lightning with the Education Data
Architecture to provide student management and engagement, academics, admissions and other
support functions. It delivers the technology required to manage the entire student lifecycle, from
kindergarten to graduation. The main features of the Salesforce Education Cloud include the
following:
maps student journeys and provides budget tracking, campaign management, social
marketing and personalized messaging through the marketing automation feature;
provides a sales automation feature that offers easy enrollment from the pre-lead stage to
final enrollment;
automates the grant concepts, funding, budget tracking, sponsor updates and loan
applications for both internal and third-party vendors. This cuts down on the communication
and follow-up involved in student loan processing;
offers easy recruitment and outreach to prospective students by consolidating data in one
single place; and
provides a collaborative experience for students across the campus by connecting multiple
departments.
14. Salesforce Nonprofit Cloud helps nonprofit organizations, such as fundraising organizations
and educational institutions, expand their reach digitally and enhance their connections with
people. It aligns fundraising, marketing, program management and technology teams and offers a
consolidated view across all activities and operations. The main features of the Nonprofit Cloud
include the following:
comes with a fundraising feature that provides a holistic view of partners and donors by
streamlining communications between both entities;
connects with potential donors, regardless of their geographic location, through the
digital-first fundraising strategy to help establish viable donor relationships across
various channels; and
offers built-in templates that enable companies to engage with their constituents,
supporters and partners through personalized messages and email marketing.
15. Salesforce Vaccine Cloud was introduced in early 2021 to help healthcare organizations,
nonprofits and schools operate safely by building and managing vaccine programs at scale quickly
and efficiently. The main features of the Vaccine Cloud include the following:
consolidates all data sources into a single view for easy accessibility;
provides vaccine inventory management that helps organizations maintain adequate
vaccine doses, syringes and personal protective equipment stock levels as well as provides
a forecast of demand;
helps screen vaccine registrants and gather digital consent;
helps with analysis of communitywide vaccine results; and
provides contactless visits through quick response codes, on-demand appointment
scheduling and self-service options.
Salesforce technologies
Salesforce offers several innovative technologies that help connect customers, companies,
developers and business partners. Apex is an object-oriented programming language that enables
developers to execute flow and transaction control statements on the Salesforce platform. Apex is
integrated, easy to use, data-focused, hosted, multi-tenant aware, automatically upgradeable, easy
to test and versioned.
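Apex itself runs on the Salesforce platform; external programs more commonly reach Salesforce through its REST API, issuing SOQL queries over HTTP. Here is a hedged Python sketch of building such a query URL, where the instance URL and API version are placeholder assumptions and the OAuth bearer-token authentication needed for a real call is omitted.

```python
from urllib.parse import urlencode

def build_soql_request(instance_url, soql, api_version="v57.0"):
    """Return the URL for a SOQL query against the Salesforce REST API.

    instance_url and api_version are illustrative placeholders; a real
    request would also carry an Authorization: Bearer <token> header.
    """
    # The query resource lives under /services/data/<version>/query,
    # with the SOQL statement passed URL-encoded as the "q" parameter.
    query = urlencode({"q": soql})
    return f"{instance_url}/services/data/{api_version}/query?{query}"
```

For example, `build_soql_request("https://example.my.salesforce.com", "SELECT Id FROM Account")` yields the endpoint a client library would GET to retrieve Account records.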
Visualforce is a framework that enables developers to create dynamic, reusable interfaces that
can be hosted natively on Salesforce. They can create entire custom pages inside a Salesforce
organization or associate their logic with a controller class written in Apex. Developers can use
Visualforce pages to override standard buttons and tab overview pages, define custom tabs,
embed components in detail page layouts, create dashboard components, customize sidebars in
the Salesforce Console and add menu items.
Salesforce Einstein is a comprehensive AI technology for CRM developed for the Salesforce
Customer Success Platform. Einstein is designed to give sales and marketing departments more
complete and up-to-date views of customers and potential clients. It's designed to make
Salesforce Customer 360 more intelligent and to bring AI to trailblazers everywhere.
Benefits of Salesforce
Salesforce products are designed to help organizations meet customer expectations and enhance
customer satisfaction. The following are popular benefits of using Salesforce:
5. Time management. Salesforce offers comprehensive customer information and planning
resources in one centralized location. Organizations can save time, as they don't have to
search through logs and other important files. The built-in calendar tool helps them
visualize the daily, weekly, monthly or yearly schedules, which helps with setting
meetings, planning projects and staying on top of leads so they can be quickly
transformed into customers.
6. Increased revenue. Salesforce helps organizations sort through vast amounts of data,
which if done manually can take a lot of time and effort. By incorporating Salesforce,
organizations can spend less time on administrative tasks and more time building
successful customer relationships.
7. Easy accessibility. Salesforce enables organizations to safely access important files and
client updates anywhere with an internet connection. The Salesforce app is supported on
various mobile platforms and devices, including Apple iOS and Android OS. This
provides great flexibility for business owners who are always on the go or travel
frequently.
8. Enhanced collaboration. Salesforce Chatter provides swift and easy communication
between team members. It enables team members to collaborate individually or within
groups regarding work-related tasks, and members from different teams can be added to
accounts or activities that require extra attention.
9. Business scalability. Salesforce's underlying architecture can rapidly scale to
accommodate the needs of businesses and their customers.
10. Seamless integration. Salesforce can be easily integrated with most third-party apps, such
as Gmail or accounting software. Some of the third-party apps that Salesforce integrates
with include Google Cloud, WhatsApp, QuickBooks, LinkedIn, Mailchimp, Dropbox and
Heroku.
11. Trustworthy reporting. Salesforce keeps track of pertinent business data from all business
channels -- social media, app information, website analytics, business software -- and
keeps it organized. This feature is designed to sort and analyze vast amounts of data with
accuracy.
Salesforce
Architecture
In its basic form, Salesforce architecture is a multi-tenant architecture built up of a series of
interconnected layers. The important thing about this architecture is that it shares database
resources across all its tenants while storing each tenant's data securely. The architecture offers
an easy-to-use interface so users can operate Salesforce software effortlessly.
Of the many layers of the Salesforce architecture, the Salesforce platform layer serves as the
foundation. This layer is powered by metadata and includes vital components such as data
services, AI services, and APIs. Here, metadata consists of custom setups, scripts, and functions,
and helps to access data from databases quickly, while APIs enable seamless communication with
other systems. The top layer of the architecture consists of the Salesforce apps, such as sales,
service, marketing, and so on.
Salesforce Database
Essentially, the Salesforce database is a relational database. This is where you can store your
customer information in database objects, and the Salesforce database uses object record tables
for storing data. The data may include customer names, email addresses, contact numbers, sales
history, etc. The Salesforce database provides many excellent features, such as reliability,
security, and flexibility. Its functionality remains unaffected as applications scale up or down,
and it stays balanced regardless of changes in data storage and processing power.
1. Objects
Objects are essentially tables in the Salesforce database. Three types of objects are used
in the Salesforce database: Standard, Custom, and External. Standard objects are the prebuilt
objects in the database, such as the Account, Contact, and Lead objects. Custom objects are the
ones you can create based on business needs; for example, a retail business might create an
object like 'Orders'. Custom objects provide custom layouts that help build analytics and
reporting on the objects. External objects, the third type, support mapping data that lives
outside the Salesforce database.
Know that every object in the Salesforce database consists of records and fields. The rows
of a table are known as records, and the columns are known as fields.
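The object/record/field model above can be sketched in code. The following is a hypothetical illustration (the class and the "Order" object are invented for this example, not Salesforce's actual implementation), showing an object as a table whose rows are records, whose columns are fields, and where every record carries a unique identity field:

```python
# Hypothetical sketch of a Salesforce-style object: a table of records
# (rows) and fields (columns). Names here are illustrative only.
import uuid

class CustomObject:
    def __init__(self, name, fields):
        self.name = name          # object name, e.g. "Order"
        self.fields = fields      # the columns
        self.records = []         # the rows

    def create_record(self, **values):
        # Every record gets a unique identifier, mirroring the
        # standard "identity" field described above.
        record = {"Id": uuid.uuid4().hex,
                  **{f: values.get(f) for f in self.fields}}
        self.records.append(record)
        return record

orders = CustomObject("Order", ["Name", "Amount"])
rec = orders.create_record(Name="ORD-001", Amount=250)
```

Creating a record simply appends a new row with a generated identifier, which is why every record in an object can be located unambiguously.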
2. Fields
Both standard and custom objects come with three prebuilt standard fields: identity, name,
and system. Identity is one of the essential components of the Salesforce database: every record
in an object has a field holding a unique identifier for that record. The name field holds the
record's name, which can sometimes be a number. The system field is read-only and shows who
last modified the record. Apart from these three standard fields, every object can have fields
such as checkboxes, dates, formulas, numbers, etc.
3. Records
The Salesforce database lets you create records on an object once you have finalized the
required fields for that object. For example, suppose you need to insert a new customer
into the customer table in the database. In that case, you generate a new record for the
new customer on the customer table.
4. Relationships
Relationships link records in one object to records in another. The Salesforce database
supports three kinds: look-up, master-detail, and hierarchical.
5. Look-up
A look-up relationship links two objects in the Salesforce database: one object looks up
another based on how they are related. Use a look-up relationship when two tables are related
only in certain respects. The relationship can be either one-to-one or one-to-many.
6. Master-detail
In this relationship, one object acts as the master and another as the detail. The master
object controls the behavior of the detail object; for instance, it can decide who may view the
detail object's data. In that sense the master-detail relationship is a tight one: if you delete
the master object, its detail objects are deleted along with it.
A simple but essential rule of thumb: use a master-detail relationship when objects are
always related, and a look-up relationship when objects are only sometimes related.
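The practical difference between the two relationship types can be illustrated with a small sketch. This is a hypothetical model (the Database class, the invoice/line-item and account/case records are all invented for illustration), showing that deleting a master cascades to its details, while a look-up leaves the related record untouched:

```python
# Hypothetical sketch contrasting master-detail and look-up relationships.
# Deleting a master deletes its detail records; deleting a looked-up
# record leaves the referring record in place.

class Database:
    def __init__(self):
        self.records = {}          # record id -> data
        self.master_detail = {}    # detail id -> master id

    def insert(self, rec_id, data, master=None, lookup=None):
        self.records[rec_id] = {"data": data, "lookup": lookup}
        if master is not None:
            self.master_detail[rec_id] = master

    def delete(self, rec_id):
        self.records.pop(rec_id, None)
        # Cascade: remove every detail record owned by this master.
        for detail, master in list(self.master_detail.items()):
            if master == rec_id:
                self.master_detail.pop(detail)
                self.delete(detail)

db = Database()
db.insert("inv1", "Invoice")
db.insert("line1", "Line item", master="inv1")   # master-detail
db.insert("acct1", "Account")
db.insert("case1", "Case", lookup="acct1")       # look-up

db.delete("inv1")   # cascades: line1 is deleted too
db.delete("acct1")  # case1 survives; its look-up now points nowhere
```

This is why master-detail suits objects that are always related (a line item has no meaning without its invoice), whereas look-up suits looser associations.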
7. Hierarchical
It is yet another type of relationship but a special one. You can use this relationship only for
user objects. This relationship helps to build management chains between users.
8. Schema Builder
Schema Builder is a tool for visualising, understanding, and editing data models; you can also
create fields and objects with it. With this tool you can quickly brief team members on the
customisations you have made in the Salesforce software, and it gives a clear picture of how
data flows through the system.
OneDrive was previously known as SkyDrive; Office Online was previously known as Office
Web Apps. You may occasionally see references to SkyDrive and Office Web Apps while
using these services.
Office Online is a free basic version of the most popular programs in the Microsoft Office
suite. It lets you create Word documents, Excel spreadsheets, and more without having to
buy or install software. There are four Office Online apps: Word Online, Excel Online,
PowerPoint Online, and OneNote Online.
What is OneDrive?
OneDrive is a free online storage space you can use as your own
personal online hard drive. When you create a document with Office
Online, it will be saved to your OneDrive. You can store other files
there as well. This type of online storage is referred to as the cloud.
Because Office Online and OneDrive are based in the cloud, you can
access them from any device with an Internet connection at any time.
Review our lesson on Understanding the Cloud to learn more about the
basics of cloud computing.
Once you’ve used Office Online and OneDrive to store files in the cloud, you can edit and share them
without ever having to download them to your computer. You can also upload files from your
computer, including photos and music. You can even sync your computer and OneDrive so any
changes you make to your files are automatically copied between the cloud and your computer. As you
can see below, working with the cloud makes all of these things possible.
To use Office Online and OneDrive, you'll need a Microsoft account. Getting a Microsoft account
will also give you access to features like email and instant messaging. You'll learn how to create an
account in our lesson on Getting Started with OneDrive.
Visit our Microsoft Account tutorial to learn more about its features.
Why use Office Online and OneDrive?
OneDrive is one of the most popular cloud storage services available today, offering five
gigabytes (5GB) of free storage space. And because OneDrive allows you to share
and edit documents with Office Online, it's easy to collaborate with others.
Of course, Office Online and OneDrive aren't the only services that let you create and store files in
the cloud. Google Drive and Apple's iCloud provide similar features. However, Office Online offers
one major advantage over these other services: It is similar to the desktop versions of Microsoft
Office applications. If you already know how to use these applications, it will be easy for you to start
using Office Online. Also, Office Online and the regular Office applications use the same file types.
This means you can edit the same file in both Office Online and the desktop version.
While Office Online is a useful tool, it's not perfect. Office Online is a limited version of Microsoft
Office, which means it may be missing some of the features you like to use. You can still create
documents, spreadsheets, and presentations, but they may not look as polished without certain tools.
For example, here are the page layout tools in Word Online:
As you can see, the desktop version includes several additional features. Still, if you can't afford to
purchase the full version of Microsoft Office, Office Online is a great (and free) alternative. Keep in
mind that you need to have access to the Internet to use Office Online and OneDrive. If your Internet
connection is unreliable, you may want to keep copies of important files on your computer as well.
1. Scalability is one of the most popular features of Microsoft 365. Unlike traditional IT,
hampered by hardware and software that can only be deployed on-site, cloud services like
Microsoft 365 are highly flexible. Microsoft 365 is supported by a scalable infrastructure that
can be used in different ways based on a company's business needs, and you only pay for the
features you use. Even in the early stages of your business, you don't have to worry about
wasting money on features you won't use, and as your business grows you will not be forced to
switch to other business software to meet your growing needs. Instead, you simply pay for more
services and data storage. By choosing Microsoft 365 from the beginning, you will save yourself
a lot of time and trouble.
2. Unification of UI and updates: Another problem for companies is that they often need many
separate software packages and apps to do business. Microsoft 365 instead provides a unified
experience with a single management interface, so all business activities can be managed by every
employee from one piece of software. Furthermore, you can modify the main Microsoft page according
to your business needs. If you want to share an app with your employees, just add it to the home page.
Another advantage of using Microsoft 365 is that you will have all the features up to date.
Since all the apps are developed and managed by Microsoft, these apps will be compatible with each
other and will be updated automatically by the provider. Not having to face compatibility and update
problems regularly will increase employee productivity and save you time.
3. Data Security: Your company data is what needs to be protected. As a result, most companies do
everything they can to protect their data and avoid loss. Microsoft 365 simplifies data loss prevention.
It offers numerous backup and data protection features that will allow you to feel comfortable.
4. Data Migration: If you are not yet using Microsoft software, you may be wondering what the data
migration process might be like. The answer offered by Microsoft is Microsoft 365!
In fact, it makes this process very simple regardless of the storage tools currently used by your
company. Furthermore, once you switch to Microsoft 365, you will never have to worry about
migrating your data again in the future, because Microsoft is continuously updating the system.
Microsoft will make updates to ensure maximum software efficiency and continue to meet all the
business needs.
Windows Live Mesh is a syncing and remote desktop access solution that allows users to
sync files and folders across different computers and Windows SkyDrive, and access their desktops via
Internet from anywhere.
Windows Live Mesh is an online and offline syncing solution that keeps selected documents, photos,
files, and program setting preferences synced across supported operating systems, for up to 100,000
files and 50 GB of cumulative data. Windows Live Mesh was formerly known as Live Sync and Windows
Live Folders.
The Windows Live Mesh utility is primarily a file syncing and collaboration solution designed to keep
the selected content identical and up to date across all synced devices. Windows Live Mesh syncs
between workstations running the Live Mesh client application, even if they are not on the same
network, and changes made on any workstation are automatically propagated to the others when they
are connected to the Internet.
Windows Live Mesh’s online and offline client application can be integrated with SkyDrive to back up
and sync files and folders on cloud storage. These folders are globally accessible over the Internet,
providing remote access to data, as well as remote program execution and complete access on the
remote workstation.
Microsoft OneDrive
OneDrive is an online cloud storage service from
Microsoft. OneDrive integrates with Windows 11 as
a default location for saving documents, giving
Microsoft account users five gigabytes of free
storage space before giving upgrade options.
How it works
OneDrive integrates with Microsoft Office so users can access Word, Excel and PowerPoint
documents from OneDrive. It doesn’t require a download and should already be a part of
Windows 11. A Microsoft account is required to use OneDrive, and users will need to sign in
before using it. To sign in, users go to onedrive.com and select “Sign in” at the top of the
page.
The system allows users to simultaneously edit Office documents, edit documents in browsers,
and create and share folders. OneDrive also offers Facebook integration, automatic camera roll
backup and the ability for users to email slide shows. Users can also scan documents and store
them in OneDrive.
Users can choose where to save data: in OneDrive or in File Explorer. Those who want to use
OneDrive as a data backup platform should save their data in both locations; other users can
store their files in either one.
OneDrive also lets users share files stored in OneDrive with anyone. In OneDrive, the user will
need to select the folder they want to share, go to the share button on the top toolbar and select to
invite people. Users then can enter the email address of those they want to share the file with. If
the recipient also has Office 365, then the user can select an option to allow the shared recipient
to edit the page. There are also additional options for choosing access privileges in the drop-
down menus. From there, users click the Share button. Users can also generate links to
share files by going to the same Share option and choosing “Get a Link,” with additional options
to allow or disallow editing. Users then create a link, select it, and can copy and paste it to
anyone they want. OneDrive is also available on Mac and on mobile platforms such as iPhone
and Android.
Another feature, called Personal Vault, allows users to store important files with additional
protection. Personal Vault allows users to access stored files only with a strong authentication
method or an additional layer of identity verification, such as biometric authentication, a PIN,
or a code sent to the user via email or SMS.
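One widely used strong-authentication method of the kind described above is the time-based one-time password (TOTP, standardized in RFC 6238 on top of the HOTP construction of RFC 4226). The sketch below implements the standard algorithm; the shared secret shown is illustrative, and real deployments use a randomly generated secret enrolled in an authenticator app:

```python
# TOTP sketch per RFC 6238: both sides derive the same short-lived code
# from a shared secret and the current time, using HMAC-SHA1.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    # Counter = number of completed time steps since the Unix epoch.
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): offset taken from the low nibble
    # of the last byte, then 31 bits are extracted and reduced mod 10^digits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and client compute the same code within each 30-second window.
secret = b"shared-secret"   # illustrative; use a random enrolled secret
code = totp(secret, int(time.time()))
```

Because the code changes every 30 seconds and is derived from a secret never sent over the wire, a stolen code is useless almost immediately, which is what makes it a second, stronger factor alongside a password.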
Cloud Security:
Cloud computing is one of the most in-demand technologies of the current time; organizations from
small to large have started using cloud computing services. Different cloud deployment models are
available and cloud services are provided as required, while security is maintained both internally
and externally to keep the cloud system safe.
Cloud computing security, or cloud security, is an important concern that refers to the act of
protecting cloud environments, data, information, and applications against unauthorized access,
DDoS attacks, malware, hackers, and other similar attacks.
Planning of security in Cloud Computing :
As security is a major concern in cloud implementation, an organization has to plan its security
around several factors. Below are the three main factors on which planning of cloud security
depends:
Pick the resources that can be moved to the cloud and assess their sensitivity and risk.
Consider the type of cloud to be used.
Understand that the risk in a cloud deployment depends on the cloud type and the service
model.
Types of Cloud Computing Security Controls :
There are 4 types of cloud computing security controls i.e.
Deterrent Controls : Deterrent controls are designed to discourage attacks on a cloud
system, for example by warning would-be attackers of the consequences. These come in handy
against insider attackers.
Preventive Controls : Preventive controls make the system resilient to attacks by eliminating
vulnerabilities in it.
Detective Controls : Detective controls identify and react to security threats and incidents.
Examples of detective control software are intrusion detection systems and network security
monitoring tools.
Corrective Controls : In the event of a security attack these controls are activated. They limit
the damage caused by the attack.
Importance of cloud security :
For organizations making their transition to the cloud, cloud security is an essential factor
when choosing a cloud provider. Attacks are getting stronger day by day, and security needs
to keep up. For this purpose it is essential to pick a cloud provider that offers strong
security which can be tailored to the organization's infrastructure. Cloud security has a lot of
benefits:
Centralized security : Cloud security centralizes protection. Since managing all devices and
endpoints is not an easy task, cloud security helps do so, enhancing traffic analysis and web
filtering while requiring fewer policy and software updates.
Reduced costs : Investing in cloud computing and cloud security results in less expenditure in
hardware and also less manpower in administration
Reduced Administration : It makes the organization easier to administer, with no manual
security configuration or constant security updates.
Reliability : Cloud security services are very reliable, and the cloud can be accessed from
anywhere, on any device, with proper authorization.
When we think about cloud security, it includes various types of protection: access control
for authorized access, network segmentation for isolating data, encryption for encoded data
transfer, vulnerability checks for patching vulnerable areas, security monitoring for keeping
an eye on security attacks, and disaster recovery for backup and recovery after data loss.
There are different security techniques implemented to make the cloud computing system more
secure, such as SSL (Secure Sockets Layer) encryption, multi-tenancy-based access control,
intrusion detection systems, firewalls, penetration testing, tokenization, VPNs (Virtual
Private Networks), avoiding public internet connections, and many more.
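Of the techniques just listed, tokenization is simple enough to sketch in a few lines. The class and values below are invented for illustration; production tokenization systems keep the vault in hardened, audited storage:

```python
# Hypothetical tokenization sketch: a sensitive value is replaced by a
# random token, and only a secure vault can map the token back. Systems
# downstream of the vault never handle the real data.
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}   # token -> original value (secure storage)

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(16)   # random, carries no information
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")  # card number stays in the vault
```

Unlike encryption, the token has no mathematical relationship to the original value, so a stolen token reveals nothing without access to the vault itself.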
However, things are not as simple as they seem: even with a number of security techniques
implemented, security issues remain in a cloud system. Because a cloud system is managed and
accessed over the Internet, many challenges arise in maintaining a secure cloud. Some cloud
security challenges are:
5. Transparency Issues
In cloud computing security, transparency means the willingness of a cloud service provider to
reveal details and characteristics of its security preparedness. Some of these details comprise
policies and regulations on security, privacy, and service level. In addition to willingness and
disposition, when assessing transparency it is important to note how reachable the security-
readiness data and information actually are. It does not matter how much security information
about an organization is at hand if it is not presented in an organized and easily understandable
way for cloud service users and auditors; in that case the organization's transparency must also
be rated relatively low.
7. Managerial Issues
Cloud privacy challenges have not only technical aspects but also non-technical and managerial
ones. Even a properly implemented technical solution or product, if not managed well, is
eventually bound to introduce vulnerabilities. Examples include lack of control, security and
privacy management for virtualization, developing comprehensive service level agreements, and
working through negotiations between cloud service vendors and users.
Infrastructure Security
Here, we discuss the threats, challenges, and guidance associated with securing an organization’s
core IT infrastructure at the network, host, and application levels. Information security
practitioners commonly use this approach; therefore, it is readily familiar to them. We discuss this
infrastructure security in the context of SPI service delivery models (SaaS, PaaS, and IaaS). Non-
information security professionals are cautioned not to simply equate infrastructure security to
infrastructure-as-a-service (IaaS) security. Although infrastructure security is more highly relevant
to customers of IaaS, similar consideration should be given to providers’ platform-as-a-service
(PaaS) and software-as-a-service (SaaS) environments, since they have ramifications to your
customer threat, risk, and compliance management. Another dimension is the cloud business
model (public, private, and hybrid clouds), which is orthogonal to the SPI service delivery model;
what we highlight is the relevance of discussion points as they apply to public and private clouds.
When discussing public clouds the scope of infrastructure security is limited to the layers of
infrastructure that move beyond the organization’s control and into the hands of service providers
(i.e., when responsibility to a secure infrastructure is transferred to the cloud service provider or
CSP, based on the SPI delivery model). Information in this chapter is critical for customers in
gaining an understanding of what security a CSP provides and what security you, the customer,
are responsible for providing.
When looking at the network level of infrastructure security, it is important to distinguish between
public clouds and private clouds. With private clouds, there are no new attacks, vulnerabilities, or
changes in risk specific to this topology that
information security personnel need to consider.
Although your organization’s IT architecture
may change with the implementation of a private
cloud, your current network topology will
probably not change significantly. If you have a
private extranet in place (e.g., for premium
customers or strategic partners), for practical
purposes you probably have the network
topology for a private cloud in place already. The
security considerations you have today apply to a
private cloud infrastructure, too. And the security
tools you have in place (or should have in place)
are also necessary for a private cloud and operate
in the same way. Figure shows the topological
similarities between a secure extranet and a
private cloud. However, if you choose to use
public cloud services, changing security
requirements will require changes to your
network topology. You must address how your
existing network topology interacts with your
cloud provider’s network topology. There are
four significant risk factors in this use case:
Ensuring the confidentiality and integrity of
your organization’s data-in-transit to and from
your public cloud provider
Ensuring proper access control (authentication,
authorization, and auditing) to whatever
resources you are using at your public cloud provider
Ensuring the availability of the Internet-facing resources in a public cloud that are being used
by your organization, or have been assigned to your organization by your public cloud providers
Replacing the established model of network zones and tiers with domains
When reviewing host security and assessing risks, you should consider the context of cloud
services delivery models (SaaS, PaaS, and IaaS) and deployment models (public, private, and
hybrid). Although there are no known new threats to hosts that are specific to cloud computing,
some virtualization security threats—such as VM escape, system configuration drift, and insider
threats by way of weak access control to the hypervisor—carry into the public cloud computing
environment. The dynamic nature (elasticity) of cloud computing can bring new operational
challenges from a security management perspective. The operational model motivates rapid
provisioning and fleeting instances of VMs. Managing vulnerabilities and patches is therefore
much harder than just running a scan, as the rate of change is much higher than in a traditional data
center.
In addition, the fact that the clouds harness the power of thousands of compute nodes, combined
with the homogeneity of the operating system employed by hosts, means the threats can be
amplified quickly and easily—call it the “velocity of attack” factor in the cloud. More importantly,
you should understand the trust boundary and the responsibilities that fall on your shoulders to
secure the host infrastructure that you manage. And you should compare the same with providers’
responsibilities in securing the part of the host infrastructure the CSP manages.
Application or software security should be a critical element of your security program. Most
enterprises with information security programs have yet to institute an application security
program to address this realm. Designing and implementing applications targeted for deployment
on a cloud platform will require that existing application security programs reevaluate current
practices and standards. The application security spectrum ranges from standalone single-user
applications to sophisticated multiuser e-commerce applications used by millions of users. Web
applications such as content management systems (CMSs), wikis, portals, bulletin boards, and
discussion forums are used by small and large organizations. A large number of organizations also
develop and maintain custom-built web applications for their businesses using various web
frameworks (PHP, .NET, J2EE, Ruby on Rails, Python, etc.). According to SANS, until 2007
few criminals attacked vulnerable websites because other attack vectors were more likely to lead
to an advantage in unauthorized economic or information access. Increasingly, however, advances
in cross-site scripting (XSS) and other attacks have demonstrated that criminals looking for
financial gain can exploit vulnerabilities resulting from web programming errors as new ways to
penetrate important organizations. In this section, we will limit our discussion to web application
security: web applications in the cloud accessed by users with standard Internet browsers, such as
Firefox, Internet Explorer, or Safari, from any computer connected to the Internet.
Since the browser has emerged as the end user client for accessing in-cloud applications, it is
important for application security programs to include browser security into the scope of
application security. Together they determine the strength of end-to-end cloud security that helps
protect the confidentiality, integrity, and availability of the information processed by cloud
services.
Data Security
With regard to data-in-transit, the primary risk is in not using a vetted encryption algorithm.
Although this is obvious to information security professionals, it is not common for others to
understand this requirement when using a public cloud, regardless of whether it is IaaS, PaaS, or
SaaS. It is also important to ensure that a protocol provides confidentiality as well as integrity (e.g.,
FTP over SSL [FTPS], Hypertext Transfer Protocol Secure [HTTPS], and Secure Copy Program
[SCP])—particularly if the protocol is used for transferring data across the Internet.
Merely encrypting data and using a non-secured protocol (e.g., “vanilla” or “straight” FTP or
HTTP) can provide confidentiality, but does not ensure the integrity of the data (e.g., with the use
of symmetric streaming ciphers). Although using encryption to protect data-at-rest might seem
obvious, the reality is not that simple. If you are using an IaaS cloud service (public or private) for
simple storage (e.g., Amazon’s Simple Storage Service or S3), encrypting data-at-rest is
possible—and is strongly suggested. However, encrypting data-at-rest that a PaaS or SaaS cloud-
based application is using (e.g., Google Apps, Salesforce.com) as a compensating control is not
always feasible. Data-at-rest used by a cloud-based application is generally not encrypted, because
encryption would prevent indexing or searching of that data.
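The point made earlier, that encryption alone provides confidentiality but not integrity, can be demonstrated concretely. The XOR "stream cipher" below is a deliberately toy construction used only to show the failure mode; the HMAC part uses Python's real standard-library primitives:

```python
# Toy illustration (NOT production crypto): encrypting with a stream
# cipher hides the plaintext, but an attacker can flip ciphertext bits
# undetected. An HMAC over the ciphertext supplies the missing integrity.
import hashlib
import hmac

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy XOR keystream; applying it twice with the same key decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

enc_key, mac_key = b"enc-key", b"mac-key"   # illustrative keys
message = b"PAY $100"
ciphertext = xor_stream(message, enc_key)
tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

# Attacker flips one bit; decryption still "succeeds" but yields
# silently altered data...
tampered = bytes([ciphertext[0] ^ 1]) + ciphertext[1:]
# ...while HMAC verification detects the tampering.
ok = hmac.compare_digest(
    hmac.new(mac_key, tampered, hashlib.sha256).digest(), tag)
```

This is why the text recommends protocols such as HTTPS, FTPS, and SCP for data-in-transit: they combine encryption with integrity protection rather than relying on encryption alone.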
IAM Challenges
One critical challenge of IAM concerns managing access for diverse user populations (employees,
contractors, partners, etc.) accessing internal and externally hosted services. IT is constantly
challenged to rapidly provision appropriate access to the users whose roles and responsibilities
often change for business reasons. Another issue is the turnover of users within the organization.
Turnover varies by industry and function—seasonal staffing fluctuations in finance departments,
for example—and can also arise from changes in the business, such as mergers and acquisitions,
new product and service releases, business process outsourcing, and changing responsibilities. As
a result, sustaining IAM processes can turn into a persistent challenge.
Access policies for information are seldom centrally and consistently applied. Organizations can
contain disparate directories, creating complex webs of user identities, access rights, and
procedures. This has led to inefficiencies in user and access management processes while exposing
these organizations to significant security, regulatory compliance, and reputation risks.
To address these challenges and risks, many companies have sought technology solutions to enable
centralized and automated user access management. Many of these initiatives are entered into with
high expectations, which is not surprising given that the problem is often large and complex. Such
initiatives often span several years and incur considerable cost. Hence, organizations should
approach their IAM strategy and architecture with both business and IT drivers that address the
core inefficiency issues while preserving the efficacy of access controls. Only then will an
organization have a higher likelihood of success and a return on investment.
IAM Definitions
To start, we’ll present the basic concepts and definitions of IAM functions for any service:
Authentication
Authentication is the process of verifying the identity of a user or system (e.g., Lightweight
Directory Access Protocol [LDAP] verifying the credentials presented by the user, where the
identifier is the corporate user ID that is unique and assigned to an employee or contractor).
Authentication usually connotes a more robust form of identification. In some use cases, such as
service-to-service interaction, authentication involves verifying the network service requesting
access to information served by another service (e.g., a travel web service that is connecting to a
credit card gateway to verify the credit card on behalf of the user).
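As an illustrative sketch (not LDAP itself), the following Python shows what credential verification amounts to: comparing a presented secret against a stored salted hash, keyed by the unique user ID. The in-memory directory, user IDs, and passwords are hypothetical; a real deployment would query a directory service instead:

```python
import hashlib
import hmac
import os

# Hypothetical in-memory "directory": user ID -> (salt, PBKDF2 hash).
_directory = {}

def enroll(user_id: str, password: str) -> None:
    """Store a salted, stretched hash of the credential, never the secret."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _directory[user_id] = (salt, digest)

def authenticate(user_id: str, password: str) -> bool:
    """Authentication: verify the presented credential for this identifier."""
    record = _directory.get(user_id)
    if record is None:
        return False
    salt, digest = record
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(candidate, digest)

enroll("e12345", "correct horse battery staple")
assert authenticate("e12345", "correct horse battery staple")
assert not authenticate("e12345", "wrong password")
assert not authenticate("unknown", "anything")
```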
Authorization
Authorization is the process of determining the privileges the user or system is entitled to once the
identity is established. In the context of digital services, authorization usually follows the
authentication step and is used to determine whether the user or service has the necessary privileges
to perform certain operations—in other words, authorization is the process of enforcing policies.
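A minimal sketch of authorization as policy enforcement, assuming a hypothetical role-based policy (all role and permission names below are invented for illustration):

```python
# Hypothetical role -> permission policy.
POLICY = {
    "travel-agent": {"booking:create", "booking:read"},
    "auditor": {"booking:read"},
}

# Hypothetical user -> role assignments (identity already authenticated).
USER_ROLES = {"e12345": {"travel-agent"}, "e67890": {"auditor"}}

def is_authorized(user_id: str, permission: str) -> bool:
    """Authorization: once identity is established, check entitlements
    against policy before allowing the operation."""
    roles = USER_ROLES.get(user_id, set())
    return any(permission in POLICY.get(role, set()) for role in roles)

assert is_authorized("e12345", "booking:create")
assert not is_authorized("e67890", "booking:create")   # read-only auditor
assert not is_authorized("stranger", "booking:read")   # unknown identity
```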
Auditing
In the context of IAM, auditing entails the process of review and examination of authentication,
authorization records, and activities to determine the adequacy of IAM system controls, to verify
compliance with established security policies and procedures (e.g., separation of duties), to detect
breaches in security services (e.g., privilege escalation), and to recommend any changes that are
indicated for countermeasures.
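One audit check named above, separation of duties, can be sketched as a scan over authentication/authorization records; the log entries and action names here are hypothetical:

```python
from collections import defaultdict

# Hypothetical audit records: (user, action, object acted upon).
audit_log = [
    ("e12345", "payment:create", "inv-001"),
    ("e12345", "payment:approve", "inv-001"),   # same user did both steps
    ("e67890", "payment:create", "inv-002"),
    ("e11111", "payment:approve", "inv-002"),   # duties properly split
]

def separation_of_duties_violations(log):
    """Flag objects where a single identity performed two duties that
    policy requires to be split (here: create vs. approve)."""
    actors = defaultdict(lambda: defaultdict(set))
    for user, action, obj in log:
        actors[obj][action].add(user)
    violations = []
    for obj, by_action in actors.items():
        overlap = by_action["payment:create"] & by_action["payment:approve"]
        if overlap:
            violations.append((obj, sorted(overlap)))
    return violations

assert separation_of_duties_violations(audit_log) == [("inv-001", ["e12345"])]
```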
IAM Architecture and Practice
IAM is not a monolithic solution that can be easily deployed to gain capabilities immediately. It is
as much an aspect of architecture (see Figure 5-1) as it is a collection of technology components,
processes, and standard practices. Standard enterprise IAM architecture encompasses several
layers of technology, services, and processes. At the core of the deployment architecture is a
directory service (such as LDAP or Active Directory) that acts as a repository for the identity,
credential, and user attributes of the organization’s user pool. The directory interacts with IAM
technology components such as authentication, user management, provisioning, and federation
services that support the standard IAM practice and processes within the organization. It is not
uncommon for organizations to use several directories that were deployed for environment-
specific reasons (e.g., Windows systems using Active Directory, Unix systems using LDAP) or
that were integrated into the environment by way of business mergers and acquisitions. The IAM
processes to support the business can be broadly categorized as follows:
User management: Activities for the effective governance and management of identity life cycles
Authentication management: Activities for the effective governance and management of the
process for determining that an entity is who or what it claims to be
Authorization management: Activities for the effective governance and management of the
process for determining entitlement rights that decide what resources an entity is permitted to
access in accordance with the organization’s policies
Access management: Enforcement of policies for access control in response to a request from an
entity (user, service) wanting to access an IT resource within the organization
Data management and provisioning: Propagation of identity and data for authorization to IT
resources via automated or manual processes
Monitoring and auditing: Monitoring, auditing, and reporting on user compliance regarding
access to resources within the organization based on the defined policies
IAM processes support the following operational activities:
Provisioning: This is the process of on-boarding users to systems and applications. These processes
provide users with necessary access to data and technology resources. The term typically is used
in reference to enterprise-level resource management. Provisioning can be thought of as a
combination of the duties of the human resources and IT departments, where users are given access
to data repositories or systems, applications, and databases based on a unique user identity.
Deprovisioning works in the opposite manner, resulting in the deletion or deactivation of an
identity or of privileges assigned to the user identity.
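Provisioning and deprovisioning can be sketched as a pair of operations on an identity store; the store, roles, and grants below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    user_id: str
    active: bool = True
    entitlements: set = field(default_factory=set)

# Hypothetical identity store.
store = {}

def provision(user_id: str, role: str) -> Identity:
    """On-boarding: create the identity and grant role-based access."""
    role_grants = {"finance-analyst": {"erp:read", "reports:run"}}
    ident = Identity(user_id, entitlements=set(role_grants.get(role, ())))
    store[user_id] = ident
    return ident

def deprovision(user_id: str) -> None:
    """Off-boarding: deactivate the identity and strip its privileges."""
    ident = store[user_id]
    ident.active = False
    ident.entitlements.clear()

provision("c-0042", "finance-analyst")
assert store["c-0042"].entitlements == {"erp:read", "reports:run"}
deprovision("c-0042")
assert not store["c-0042"].active and not store["c-0042"].entitlements
```

Whether deprovisioning deletes or merely deactivates the identity is a policy choice; deactivation preserves the audit trail tied to that unique user ID.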
Credential and attribute management: These processes are designed to manage the life cycle of
credentials and user attributes (create, issue, manage, revoke) to minimize the business risk
associated with identity impersonation and inappropriate account use. Credentials are usually
bound to an individual and are verified during the authentication process. The processes include
provisioning of attributes, static (e.g., standard text password) and dynamic (e.g., one-time
password) credentials that comply with a password standard (e.g., passwords resistant to dictionary
attacks), handling password expiration, encryption management of credentials during transit and
at rest, and access policies of user attributes (privacy and handling of attributes for various
regulatory reasons).
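Two of these credential-management checks, a password standard and expiration handling, can be sketched as follows (the 12-character and 90-day thresholds are illustrative, not a recommendation):

```python
import string
from datetime import date, timedelta

MAX_AGE = timedelta(days=90)   # illustrative expiration policy

def meets_policy(password: str) -> bool:
    """Toy password standard: minimum length plus a character-class mix.
    Real standards also screen against dictionaries and breach corpora."""
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    return (len(password) >= 12
            and all(any(c in cls for c in password) for cls in classes))

def is_expired(last_changed: date, today: date) -> bool:
    """Expiration handling: force rotation after MAX_AGE."""
    return today - last_changed > MAX_AGE

assert meets_policy("Tr0ub4dor&3x!")
assert not meets_policy("password")           # too short, no class mix
assert is_expired(date(2024, 1, 1), date(2024, 6, 1))
assert not is_expired(date(2024, 5, 1), date(2024, 6, 1))
```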
Entitlement management: Entitlements are also referred to as authorization policies. The
processes in this domain address the provisioning and deprovisioning of privileges needed for the
user to access resources including systems, applications, and databases. Proper entitlement
management ensures that users are assigned only the required privileges (least privileges) that
match with their job functions. Entitlement management can be used to strengthen the security of
web services, web applications, legacy applications, documents and files, and physical security
systems.
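A least-privilege review can be sketched as a set difference between a job function's baseline entitlements and what is actually assigned; the job and privilege names are hypothetical:

```python
# Hypothetical job-function baseline entitlements.
REQUIRED = {"claims-processor": {"claims:read", "claims:update"}}

def excess_privileges(job: str, assigned: set) -> set:
    """Least privilege: anything assigned beyond the job baseline should
    be flagged for removal during entitlement review."""
    return assigned - REQUIRED.get(job, set())

assigned = {"claims:read", "claims:update", "claims:delete", "admin:all"}
assert excess_privileges("claims-processor", assigned) == {"claims:delete",
                                                           "admin:all"}
```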
Compliance management: This process implies that access rights and privileges are monitored
and tracked to ensure the security of an enterprise’s resources. The process also helps auditors
verify compliance with various internal access control policies and standards that include practices
such as segregation of duties, access monitoring, periodic auditing, and reporting. An example is
a user certification process that allows application owners to certify that only authorized users have
the privileges necessary to access business-sensitive information.
Identity federation management: Federation is the process of managing the trust relationships
established beyond the internal network boundaries or administrative domain boundaries among
distinct organizations. A federation is an association of organizations that come together to
exchange information about their users and resources to enable collaborations and transactions
(e.g., sharing user information with the organizations’ benefits systems managed by a third-party
provider). Federation of identities to service providers will support SSO to cloud services.
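The trust relationship behind federated SSO can be caricatured with a shared-secret signed assertion: one organization issues an identity claim, the other accepts it only if the signature verifies. Real federation uses standards such as SAML or OpenID Connect with certificate-based trust; the token format and names below are invented:

```python
import hashlib
import hmac
import json

# Stand-in for the trust established between federated organizations.
SECRET = b"federation-shared-secret"

def issue_assertion(user_id: str, issuer: str) -> bytes:
    """Identity-provider side: sign a minimal identity assertion."""
    body = json.dumps({"sub": user_id, "iss": issuer}).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    return body + b"." + tag

def verify_assertion(token: bytes):
    """Service-provider side: accept the asserted identity only if the
    signature checks out under the trust relationship; else None."""
    body, _, tag = token.rpartition(b".")
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return None
    return json.loads(body)

token = issue_assertion("alice@org-a.example", "org-a")
assert verify_assertion(token) == {"sub": "alice@org-a.example",
                                   "iss": "org-a"}
# A tampered assertion is rejected.
assert verify_assertion(token.replace(b"alice", b"mallory")) is None
```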
Centralization of authentication (authN) and authorization (authZ)
A central authentication and authorization infrastructure alleviates the need for application
developers to build custom authentication and authorization features into their applications.
Furthermore, it promotes a loosely coupled architecture in which applications remain agnostic to the
authentication methods and policies. This approach is also called an “externalization of authN and
authZ” from applications.
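Externalization of authN/authZ can be sketched with a decorator that delegates every decision to a central policy-decision point, keeping the application handler itself free of authorization logic; the PDP, users, and permission names are hypothetical:

```python
import functools

def decide(user: str, permission: str) -> bool:
    """Hypothetical central policy-decision point; in practice this would
    call out to a shared service rather than live inside the application."""
    grants = {"e12345": {"report:view"}}
    return permission in grants.get(user, set())

def requires(permission):
    """Externalized authZ: only this decorator talks to the central PDP,
    so handlers stay agnostic to authentication methods and policies."""
    def wrap(handler):
        @functools.wraps(handler)
        def guarded(user, *args, **kwargs):
            if not decide(user, permission):
                raise PermissionError(f"{user} lacks {permission}")
            return handler(user, *args, **kwargs)
        return guarded
    return wrap

@requires("report:view")
def view_report(user):
    # Pure application logic: no embedded authorization checks.
    return f"report for {user}"

assert view_report("e12345") == "report for e12345"
try:
    view_report("e99999")
    denied = False
except PermissionError:
    denied = True
assert denied
```

Swapping the policy or authentication method then means changing the central `decide` service, not every application.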
Audit and Compliance
Audit and compliance refers to the internal and external processes that an organization implements
to:
Identify the requirements with which it must abide—whether those requirements are
driven by business objectives, laws and regulations, customer contracts, internal
corporate policies and standards, or other factors
Put into practice policies, procedures, processes, and systems to satisfy such requirements
Monitor or check whether such policies, procedures, and processes are consistently
followed
Audit and compliance functions have always played an important role in traditional outsourcing
relationships. However, these functions take on increased importance in the cloud given the
dynamic nature of software-as-a-service (SaaS), infrastructure-as-a-service (IaaS), and platform-
as-a-service (PaaS) environments. Cloud service providers (CSPs) are challenged to establish,
monitor, and demonstrate ongoing compliance with a set of controls that meets their customers’
business and regulatory requirements. Maintaining separate compliance efforts for different
regulations or standards is not sustainable. A practical approach to audit and compliance in the
cloud includes a coordinated combination of internal policy compliance, regulatory compliance,
and external auditing.
Internal Policy Compliance
CSPs, like other enterprises, need to establish processes, policies, and procedures for managing
their IT systems that are appropriate for the nature of the service offering, can be operationalized
in the culture of the organization, and satisfy relevant external requirements. In designing their
service offerings and supporting processes, CSPs need to:
Address the requirements of their current and planned customer base
Establish a strong control foundation that will substantially meet customer requirements,
thereby minimizing the need for infrastructure customization that could reduce
efficiencies and diminish the value proposition of the CSP’s services
Set a standard that is high enough to address those requirements
Define standardized processes to drive efficiencies
The figure shows a life cycle approach for determining, implementing, operating, and monitoring
controls over a CSP. Here is an explanation of each stage of the life cycle:
1. Define strategy
As a CSP undertakes to build out or take a fresh look at its service offerings, the CSP should clearly
define its business strategy and related risk management philosophy. What market segments or
industries does the CSP intend to serve?
This strategic choice determines how high the CSP needs to “set the bar” for its controls. This is
an important decision: setting the bar too low will make it difficult to meet the needs
of new customers and setting it too high will make it difficult for customers to implement and
difficult for the CSP to maintain in a cost-effective manner. A clear strategy will enable the CSP
to meet the baseline requirements of its customers in the short term and provide the flexibility to
incorporate necessary changes while resisting unnecessary or potentially unprofitable
customization.
2. Define requirements
Having defined its strategy and target client base, the CSP must define the requirements for
providing services to that client base. What specific regulatory or industry requirements are
applicable? Are there different levels of requirements for different sets of clients?
The CSP will need to determine the minimum set of requirements to serve its client base and the
incremental industry-specific requirements. For example, the CSP will need to determine whether
it supports all of those requirements as part of a base product offering or whether it offers
incremental product offerings with additional capabilities at a premium, now or in a future release.
3. Define architecture
Driven by its strategy and requirements, the CSP must now determine how to architect and
structure its services to address customer requirements and support planned growth. As part of the
design, for example, the CSP will need to determine which controls are implemented as part of the
service by default and which controls (e.g., configuration settings, selected platforms, or
workflows) are defined and managed by the customer.
4. Define policies
The CSP needs to translate its requirements into policies. In defining such policies, the CSP should
draw upon applicable industry standards as discussed in the sections that follow. The CSP will
also need to take a critical look at its staffing model and ensure alignment with policy requirements.
5. Define processes and procedures
The CSP then needs to translate its policy requirements into defined, repeatable processes and
procedures—again using applicable industry standards and leading practices guidance. Controls
should be automated to the greatest extent possible for scalability and to facilitate monitoring.
6. Ongoing operations
Having defined its processes and procedures, the CSP needs to implement and execute its defined
processes, again ensuring that its staffing model supports the business requirements.
7. Ongoing monitoring
The CSP should monitor the effectiveness of its key control activities on an ongoing basis with
instances of non-compliance reported and acted upon. Compliance with the relevant internal and
external requirements should be realized as a result of a robust monitoring program.
8. Continuous improvement
As issues and improvement opportunities are identified, the CSP should ensure that there is a
feedback loop to guarantee that processes and controls are continuously improved as the
organization matures and customer requirements evolve.
CSPs are typically challenged to meet the requirements of a diverse client base. To build a
sustainable model, it is essential that the CSP establish a strong foundation of controls that can be
applied to all of its clients. In that regard, the CSP can use the concept of governance, risk, and
compliance (GRC) that has been adopted by a number of leading traditional outsourced service
providers and CSPs. GRC
recognizes that compliance is not a point-in-time activity, but rather is an ongoing process that
requires a formal compliance program. Figure 8-2 depicts such a programmatic approach to
compliance.