Unit - II (Cloud Computing)
Simply put, Cloud Computing is a model that offers convenient, on-demand network access to
a pool of shared resources. These may include data storage, databases, servers, networking, tools,
or any other resources that can be accessed through the internet. Cloud computing services come
mainly in three service models: SaaS (Software as a Service), IaaS (Infrastructure as a
Service), and PaaS (Platform as a Service). Each cloud model has its own set of benefits
that can serve the needs of various businesses.
Choosing between them requires understanding these cloud models, evaluating your
requirements, and finding out how the chosen model can deliver your intended set of workflows.
Software as a Service (SaaS)
Software-as-a-Service (SaaS) is a way of delivering services and applications over the Internet.
Instead of installing and maintaining software, we simply access it via the Internet, freeing
ourselves from complex software and hardware management. It removes the need to install
and run applications on our own computers or in our data centers, eliminating the expense of
hardware as well as software maintenance.
SaaS provides a complete software solution that you purchase on a pay-as-you-go basis from a
cloud service provider. Most SaaS applications can be run directly from a web browser without
any downloads or installations required. The SaaS applications are sometimes called Web-based
software, on-demand software, or hosted software.
Advantages of SaaS
● Cost-Effective: Pay only for what you use.
● Reduced time: Users can run most SaaS apps directly from their web browser without
needing to download and install any software. This reduces the time spent in installation
and configuration and can reduce the issues that can get in the way of the software
deployment.
● Accessibility: We can access app data from anywhere.
● Automatic updates: Rather than purchasing new software, customers rely on a SaaS
provider to automatically perform the updates.
● Scalability: It allows the users to access the services and features on-demand.
Disadvantages of SaaS
● SaaS applications are totally dependent on an Internet connection; they are not usable
without one.
● It is difficult to switch between SaaS vendors.
Studies reveal that Supply Chain Management, Business Intelligence, Enterprise Resource
Planning (ERP), and Project and Portfolio Management will see the fastest growth in end-user
spending on SaaS applications through 2022.
Platform as a Service (PaaS)
Advantages of PaaS:
● Simple and convenient for users: It provides much of the infrastructure and other IT
services, which users can access anywhere via a web browser.
● Cost-Effective: It charges for the services provided on a per-use basis thus eliminating
the expenses one may have for on-premises hardware and software.
● Efficiently managing the lifecycle: It is designed to support the complete web
application lifecycle: building, testing, deploying, managing, and updating.
● Efficiency: It allows for higher-level programming with reduced complexity; thus, the
overall development of the application can be more effective.
Disadvantages of PaaS
● Developers must write applications for the platform provided by the PaaS vendor, so
moving an application to another PaaS vendor is a problem (vendor lock-in).
The various companies providing Platform as a Service include AWS Elastic
Beanstalk, Salesforce, Windows Azure, Google App Engine, CloudBees, and IBM SmartCloud.
Besides, PaaS is flexible and delivers the necessary speed, which will rapidly improve
your development times. A typical disadvantage of PaaS is that, since it is built on virtualized
technology, you will have less control over data processing. In addition, it is also less flexible
compared to the IaaS cloud model.
A study by Market Reports World estimates that the global PaaS market will grow at a CAGR of
24.17% during 2019-2023 and will be valued at 28.4 billion USD by the end of 2023.
Infrastructure as a Service
Infrastructure as a Service (IaaS) is a service model that delivers computer infrastructure on an
outsourced basis to support various operations. Typically, IaaS is a service where infrastructure
such as networking equipment, devices, databases, and web servers is provided to enterprises on
an outsourced basis.
It is also known as Hardware as a Service (HaaS). IaaS customers pay on a per-user basis,
typically by the hour, week, or month. Some providers also charge customers based on the
amount of virtual machine space they use.
It provides the underlying operating systems, security, networking, and servers for
developing applications and services, and for deploying development tools, databases, etc.
Advantages of IaaS:
● Cost-Effective: Eliminates capital expense and reduces ongoing cost and IaaS customers
pay on a per-user basis, typically by the hour, week, or month.
● Website hosting: Running websites using IaaS can be less expensive than traditional web
hosting.
● Security: The IaaS Cloud Provider may provide better security than your existing
software.
● Maintenance: There is no need to manage the underlying data center or the introduction
of new releases of the development or underlying software. This is all handled by the
IaaS Cloud Provider.
Disadvantages of IaaS
● The main downside of IaaS is that it is much costlier than the SaaS or PaaS cloud models.
The various companies providing Infrastructure as a Service are Amazon Web Services,
Bluestack, IBM, OpenStack, Rackspace, and VMware.
Whether you are running a startup or a large enterprise, IaaS gives access to computing resources
without the need to invest in them separately.
Anything as a Service
Most cloud service providers nowadays offer Anything as a Service (XaaS), which is a
compilation of all of the above services along with some additional ones.
Advantages of XaaS: As this is a combined service, it has all the advantages of every type of
cloud service.
Virtualization
The machine on which a virtual machine is built is known as the Host Machine, and the virtual
machine itself is referred to as the Guest Machine.
Hypervisor
The hypervisor is firmware or a low-level program that acts as a Virtual Machine Manager.
There are two types of hypervisor:
A Type 1 hypervisor executes on a bare system. LynxSecure, RTS Hypervisor, Oracle VM, Sun
xVM Server, and VirtualLogic VLX are examples of Type 1 hypervisors. The following diagram
shows the Type 1 hypervisor.
The Type 1 hypervisor does not have a host operating system because it is installed directly on
the bare system.
A Type 2 hypervisor is a software interface that runs on a host operating system and emulates the
devices with which a system normally interacts. KVM, VMware Fusion, Virtual Server 2005 R2,
Windows Virtual PC, and VMware Workstation 6.0 are examples of Type 2 hypervisors. (Note
that Microsoft Hyper-V is generally classified as a Type 1 hypervisor, and containers are not
hypervisors at all.) The following diagram shows the Type 2 hypervisor.
Emulation
In Emulation, the virtual machine simulates the hardware and hence becomes independent of it.
In this case, the guest operating system does not require modification.
Paravirtualization
In Paravirtualization, the hardware is not simulated; the guest software runs in its own isolated
domain.
BENEFITS OF VIRTUALIZATION
1. More flexible and efficient allocation of resources.
2. Enhance development productivity.
3. It lowers the cost of IT infrastructure.
4. Remote access and rapid scalability.
5. High availability and disaster recovery.
6. Pay-per-use of the IT infrastructure on demand.
7. Enables running multiple operating systems.
Types of Virtualization:
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data Virtualization
1. Application Virtualization:
Application virtualization helps a user gain remote access to an application from a server. The
server stores all personal information and other characteristics of the application, but the
application can still run on a local workstation through the internet. An example would be a user
who needs to run two different versions of the same software. Technologies that use application
virtualization include hosted applications and packaged applications.
2. Network Virtualization:
Network virtualization is the ability to run multiple virtual networks, each with a separate
control and data plane, co-existing on top of one physical network. These virtual networks can
be managed by individual parties that are potentially confidential to each other.
Network virtualization provides a facility to create and provision virtual networks (logical
switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload
security) within days or even weeks.
3. Desktop Virtualization:
Desktop virtualization allows the users’ OS to be remotely stored on a server in the data centre. It
allows the user to access their desktop virtually, from any location, on a different machine. Users
who want a specific operating system other than Windows Server will need to have a virtual
desktop. The main benefits of desktop virtualization are user mobility, portability, and easy
management of software installation, updates, and patches.
4. Storage Virtualization:
Storage virtualization is an array of servers managed by a virtual storage system. The
servers aren't aware of exactly where their data is stored and instead function more like worker
bees in a hive. It allows storage from multiple sources to be managed and utilized as a
single repository. Storage virtualization software maintains smooth operations, consistent
performance, and a continuous suite of advanced functions despite changes, breakdowns, and
differences in the underlying equipment.
5. Server Virtualization:
This is a kind of virtualization in which masking of server resources takes place. The central
(physical) server is divided into multiple virtual servers by changing the identity number and
processors, so each system can run its own operating system in an isolated manner, while each
sub-server knows the identity of the central server. This increases performance and reduces
operating cost by dividing the main server's resources among sub-server resources. It is
beneficial for virtual migration, reducing energy consumption, reducing infrastructure cost, etc.
6. Data virtualization:
This is the kind of virtualization in which data is collected from various sources and managed
in a single place, without needing to know technical details such as how the data is collected,
stored, and formatted. The data is then arranged logically so that its virtual view can be accessed
remotely by interested parties, stakeholders, and users through various cloud services. Many
major companies provide such services, including Oracle, IBM, AtScale, and CData.
Containers in the Cloud
The following use cases are especially suitable for running containers in the cloud:
● Microservices: containers are lightweight, making them well suited for applications with
microservices architectures consisting of a large number of loosely coupled,
independently deployable services.
● DevOps: many DevOps teams build applications using a microservices architecture, and
deploy services using containers. Containers can also be used to deploy and scale the
DevOps infrastructure itself, such as CI/CD tools.
● Hybrid and multi-cloud: for organizations operating in two or more cloud environments,
containers are highly useful for migrating workloads. They are a standardized unit that
can be flexibly moved between on-premise data centers and any public cloud.
● Application modernization: A common way to modernize a legacy application is to
containerize it and move it as-is to the cloud (a model known as “lift and shift”).
What Is Containerization?
Hopefully, you understand the container concept pretty clearly now. You may have already
guessed that applications hosted in containers are packaged differently from regular
applications. But what is containerization, and how do you create containerized applications?
Container Orchestration
Container orchestration is the process of creating an environment that automates most of the
maintenance tasks for containerized workloads, applications, and services. Most companies rely
on container orchestration platforms that provide fully managed container services to their users.
Pricing Models
There are many pricing models used in the cloud; from the perspective of how prices change
over time, they can be classified into two main types: fixed and dynamic.
1. Pay-per-use Model
In this model, users only pay for what they use. The customer pays as a function of the time or
quantity consumed on a specific service. Amazon Web Services (AWS) and Salesforce, as shown
in Table 1 and Table 2, use this model.
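A minimal sketch of pay-per-use billing makes the idea concrete; the hourly and storage rates below are hypothetical, not actual AWS or Salesforce prices:

```python
def pay_per_use_cost(hours_used, hourly_rate, gb_stored=0, gb_rate=0.0):
    """Bill only for the time and quantity actually consumed."""
    return hours_used * hourly_rate + gb_stored * gb_rate

# Example: a VM at $0.05/hour for a 720-hour month, plus 100 GB at $0.02/GB.
monthly_bill = pay_per_use_cost(720, 0.05, gb_stored=100, gb_rate=0.02)
print(round(monthly_bill, 2))  # 38.0
```

An idle customer is billed nothing, which is exactly what distinguishes this model from a flat subscription.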
Dynamic pricing
Dynamic pricing models are also known as real-time pricing. These models are very flexible and
can be considered the result of a function that takes as parameters the cost, time, and other
factors such as location and the user's perceived value.
In dynamic pricing, the price is calculated by a pricing mechanism whenever there is a request.
Compared with fixed prices, dynamic pricing that reflects the real-time supply-demand
relationship represents a more promising charging strategy: it can better exploit users'
willingness to pay and thus yield larger profits for the cloud provider.
For example, Amazon changes its prices every 10 minutes, and Best Buy and Walmart
implement price changes over 50,000 times per month.
There are many dynamic pricing models like:
● Cost-based, which merges the profit margin with the level of cost.
● Value-based, which takes into consideration the user's perceived value.
● Competition-based, which takes into consideration competitors' prices for comparable
services.
● Customer-based, which considers what the customer is prepared to pay.
● Location-based, where the price is set according to the customer's location.
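A toy pricing function can combine several of the models above: a cost-based floor, a supply/demand surge, and a location factor. All parameters here are illustrative assumptions, not a real provider's formula:

```python
def dynamic_price(base_cost, demand, capacity, margin=0.2, location_factor=1.0):
    """Toy dynamic price: a cost-plus-margin floor, a supply/demand
    surge once utilization passes 50%, and a location adjustment."""
    utilization = demand / capacity
    surge = 1.0 + max(0.0, utilization - 0.5)
    return base_cost * (1 + margin) * surge * location_factor

# Quiet period: 30% utilization, price stays at the cost-plus-margin floor.
print(round(dynamic_price(1.00, 30, 100), 2))  # 1.2
# Busy period: 90% utilization adds a 40% surge on top of the floor.
print(round(dynamic_price(1.00, 90, 100), 2))  # 1.68
```

Such a function would be re-evaluated on every request, which is what makes the price "real-time".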
Types of SLA
A service-level agreement (SLA) is a contract that defines the level of service a customer
expects from a provider. While some SLAs are targeted at individual customer groups, others
address issues relevant to entire companies, because the needs of one user differ from another's.
Here are some types of SLAs used by businesses today and how each one is utilized for specific
situations:
1. Customer-based SLA: This type of agreement is used for individual customers and
comprises all relevant services that a client may need while leveraging only one contract. It
contains details regarding the type and quality of service that has been agreed upon.
For example, a telecommunication service includes voice calls, messaging, and internet services,
but all exist under a single contract.
2. Service-based SLA: This SLA is a contract that includes one identical type of service for all
of its customers. Because the service is limited to one unchanging standard, it is more
straightforward and convenient for vendors.
For example, using a service-based agreement regarding an IT helpdesk would mean that the
same service is valid for all end-users that sign the service-based SLA.
3. Multi-level SLA: This agreement is customized according to the needs of the end-user
company. It allows the user to integrate several conditions into the same system to create a more
convenient service. This type of SLA can be divided into the following subcategories:
● Corporate level: This SLA does not require frequent updates since its issues are
typically unchanging. It includes a comprehensive discussion of all the relevant aspects
of the agreement and applies to all customers in the end-user organization.
● Customer level: This contract discusses all service issues that are associated with a
specific group of customers. However, it does not take into consideration the type of user
services.
For example, when an organization requests that the security level in one of its
departments is strengthened. In this situation, the entire company is secured by one
security agency but requires that one of its customers is more secure for specific reasons.
● Service level: In this agreement, all aspects attributed to a particular service regarding a
customer group are included.
Components of SLA
An SLA highlights what the client and the service provider want to achieve with their
cooperation and outlines the obligations of the participants, the expected performance level, and
the results of cooperation.
An SLA usually has a defined duration time that is provided in the document. The services that
the provider agrees to deliver are often described in detail to avoid misunderstanding, including
procedures of performance monitoring, assessment, and troubleshooting. Here are the following
components necessary for a good agreement:
● Document overview: This first section sets forth the basics of the agreement, including
the parties involved, the start date, and a general introduction of the services provided.
● Strategic goals: Description of the agreed purpose and objectives.
● Description of services: The SLA needs detailed descriptions of every service offered
under all possible circumstances, including the turnaround times. Service definitions
should include how the services are delivered, whether maintenance service is offered,
what the hours of operation are, where dependencies exist, an outline of the processes,
and a list of all technology and applications used.
● Exclusions: Specific services that are not offered should also be clearly defined to avoid
confusion and eliminate room for assumptions from other parties.
● Service performance: Performance measurement metrics and performance levels are
defined. The client and service provider should agree on a list of all the metrics they will
use to measure the provider's service levels.
● Redressing: Compensation or payment should be defined if a provider cannot properly
fulfill their SLA.
● Stakeholders: Clearly defines the parties involved in the agreement and establishes their
responsibilities.
● Security: All security measures that the service provider will take are defined. Typically,
this includes drafting and agreeing on anti-poaching, IT security, and nondisclosure
agreements.
● Risk management and disaster recovery: Risk management processes and a disaster
recovery plan are established and communicated.
● Service tracking and reporting: This section defines the reporting structure, tracking
intervals, and stakeholders involved in the agreement.
● Periodic review and change processes: The SLA and all established key performance
indicators (KPIs) should be regularly reviewed. This process is defined, as well as the
appropriate process for making changes.
● Termination process: The SLA should define the circumstances under which the
agreement can be terminated or will expire. The notice period from either side should
also be established.
● Finally, all stakeholders and authorized participants from both parties must sign the
document to approve every detail and process.
Common Metrics of SLA
Service-level agreements can contain numerous service-performance metrics with corresponding
service-level objectives. A common case in IT-service management is a call center or service
desk. Metrics commonly agreed to in these cases include:
● Abandonment Rate: Percentage of calls abandoned while waiting to be answered.
● ASA (Average Speed to Answer): Average time (usually in seconds) it takes for a call to
be answered by the service desk.
● Resolution time: The time it takes for an issue to be resolved once logged by the service
provider.
● Error rate: The percentage of errors in a service, such as coding errors and missed
deadlines.
● TSF (Time Service Factor): Percentage of calls answered within a definite timeframe,
e.g., 80% in 20 seconds.
● FCR (First-Call Resolution): A metric that measures a contact center's ability to resolve
a customer's inquiry or problem on the first call or contact.
● TAT (Turn-Around Time): Time taken to complete a particular task.
● TRT (Total Resolution Time): Total time taken to complete a particular task.
● MTTR (Mean Time To Recover): Time taken to recover after a service outage.
● Security: The number of undisclosed vulnerabilities, for example. If an incident occurs,
service providers should demonstrate that they've taken preventive measures.
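Several of these call-center metrics are simple ratios. The sketch below computes a few of them with made-up monthly numbers, purely for illustration:

```python
def abandonment_rate(abandoned, offered):
    """Percentage of calls abandoned while waiting to be answered."""
    return 100.0 * abandoned / offered

def time_service_factor(within_threshold, answered_total):
    """TSF: percentage of calls answered within the agreed timeframe."""
    return 100.0 * within_threshold / answered_total

def first_call_resolution(first_contact, resolved_total):
    """FCR: share of issues resolved on the first call or contact."""
    return 100.0 * first_contact / resolved_total

# One illustrative month at a service desk.
print(abandonment_rate(50, 1000))                 # 5.0
print(round(time_service_factor(800, 950), 1))    # 84.2
print(round(first_call_resolution(600, 900), 1))  # 66.7
```

An SLA would pair each metric with a target (e.g., TSF of at least 80%) so that compliance can be checked mechanically.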
Uptime is also a common metric used for data services such as shared hosting, virtual private
servers, and dedicated servers. Standard agreements include the percentage of network uptime,
power uptime, number of scheduled maintenance windows, etc. Many SLAs track to the ITIL
specifications when applied to IT services.
Types of SLA Penalties
A natural reply to any violation is a penalty. An SLA penalty depends on the industry and
business. Here are the two most common SLA penalty types.
1. Financial penalty: This kind of penalty requires a vendor to pay the customer compensation
for damages equal to the amount written in the agreement. The amount will depend on the extent
of the violation and damage and may not fully reimburse what a customer paid for the
eCommerce service or eCommerce support.
● License extension or support: It requires the vendor to extend the license term or offer
additional customer support without charge. This could include development and
maintenance.
2. Service credit: In this case, a service provider will have to provide a customer with
complimentary services for a specific time. To avoid any confusion or misunderstanding between
the two parties over an SLA violation, such penalties must be clearly articulated in the
agreement; otherwise, they won't be legitimate.
● Service availability: It includes factors such as network uptime, data center resources,
and database availability. Penalties should be added as deterrents against service
downtime, which could negatively affect the business.
● Service quality: It involves performance guarantees, the number of errors allowed in a
product or service, process gaps, and other issues that relate to quality.
These penalties must be specified in the language of the SLA, or they won't be enforceable. In
addition, some customers may not think the service credit or license extension penalties are
adequate compensation. They may question the value of continuing to receive a vendor's services
that cannot meet its quality levels.
Consequently, it may be worth considering a combination of penalties and including an
incentive, such as a monetary bonus, for more than satisfactory work.
Revising and Changing an SLA
Since business requirements are subject to change, it's important to revise an SLA regularly. It
will help to always keep the agreement in line with the business's service level objectives. The
SLA should be revised when changes in any of the following occur:
● A company's requirements
● Workload volume
● Customer's needs
● Processes and tools
The contract should have a detailed plan for its modification, including change frequency,
change procedures, and changelog.
1. SLA Calculation: SLA assessment and calculation determine a level of compliance with the
agreement. There are many tools for SLA calculation available on the internet.
2. SLA uptime: Uptime is the amount of time the service is available. Depending on the type of
service, a vendor should provide minimum uptime relevant to the average customer's demand.
Usually, high uptime is critical for websites, online services, and web-based providers, as their
business relies on their accessibility.
3. Incident and SLA violations: This calculation helps determine the extent of an SLA breach
and the penalty level foreseen by the contract. The tools usually calculate a downtime period
during which service wasn't available, compare it to SLA terms and identify the extent of the
violation.
4. SLA credit: If a service provider fails to meet the customer's expectations outlined in the
SLA, a service credit or other type of penalty must be given as a form of compensation. A
percentage of credit depends directly on the downtime period, which exceeded its norm indicated
in a contract.
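The uptime and credit calculations described above can be sketched in a few lines; the monthly fee and the credit tiers here are hypothetical, not taken from any particular provider's SLA:

```python
def uptime_percent(total_minutes, downtime_minutes):
    """Uptime as a percentage of the measurement period."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def service_credit(monthly_fee, uptime, tiers):
    """tiers: (uptime_floor, credit_fraction) pairs in descending order.
    The credit for the worst tier the measured uptime falls below applies."""
    credit = 0.0
    for floor, fraction in tiers:
        if uptime < floor:
            credit = fraction
    return monthly_fee * credit

# A 30-day month with 90 minutes of downtime (illustrative numbers).
total = 30 * 24 * 60
up = uptime_percent(total, 90)
print(round(up, 2))  # 99.79
# Hypothetical tiers: below 99.9% -> 10% credit, below 99.0% -> 25% credit.
print(round(service_credit(100.0, up, [(99.9, 0.10), (99.0, 0.25)]), 2))  # 10.0
```

The credit grows with the depth of the breach, which mirrors how real SLA credit schedules are usually tiered.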
Service level management is the process of managing SLAs that helps companies to define,
document, monitor, measure, report, and review the performance of the provided services. The
professional SLA management services should include:
● Setting realistic conditions that a service provider can ensure.
● Meeting the needs and requirements of the clients.
● Establishing the right metrics for evaluating the performance of the services.
● Ensuring compliance with the terms and conditions agreed with the clients.
● Avoiding any violations of SLA terms and conditions.
An SLA is a preventive means to establish a transparent relationship between both parties
involved and build relationships in the cooperation. Such a document is fundamental to a
successful collaboration between a client and a service provider.
IaaS Networking Options
Networking is one of the fundamental elements of cloud computing and also one of the hazards
to users of cloud computing. Network performance degradation and instability can greatly affect
the consumption of cloud resources. Applications that are relatively isolated or are specially
designed to deal with network disruptions have an advantage running in the cloud.
From a different perspective, network resources can be virtualized and used in cloud computing
just as other resources are. In this section, we first discuss basic use of IP addresses in a cloud
context and then cover virtual networks.
Delivery of cloud services takes place over networks at different levels using different protocols.
This is one of the key differences in cloud models. In PaaS and SaaS clouds, delivery of services
is via an application protocol, typically HTTP. In IaaS, cloud services can be delivered over
multiple layers and protocols—for example, IPSec for VPN access and SSH for command-line
access.
Management of the different layers of the network system is also the responsibility of either the
cloud provider or the cloud consumer, depending on the type of cloud. In a SaaS model, the
cloud provider manages all the network layers. In an IaaS model, the cloud consumer manages
the network levels, except for the physical and data link layers. However, this is a simplification
because, in some cases, the network services relate to the cloud infrastructure and some services
relate to the images. The PaaS model is intermediate between IaaS and SaaS.
Table 1.5 summarizes the management of network layers in different cloud scenarios.
This table is a simplification of the many models on the market. However, it shows that an IaaS
gives cloud consumers considerably more flexibility in network topology and services than PaaS
and SaaS clouds (but at the expense of managing the tools that provide the flexibility).
IP Addresses
One of the first tasks in cloud computing is determining how to connect to the virtual machine.
Several options exist when creating a virtual machine: system-generated, reserved, and VLAN IP
address solutions. System-generated IP addresses are analogous to Dynamic Host Configuration
Protocol (DHCP)-assigned addresses. They are actually static IP addresses, but the IaaS cloud
assigns them. This is the easiest option if all you need is a virtual machine that you can log into
and use.
Reserved IP addresses are addresses that can be provisioned and managed independently of a
virtual machine. Reserved IP addresses are useful if you want to assign multiple IP addresses to a
virtual machine.
IPv6 is an Internet protocol intended to supersede IPv4. The Internet needs more IP addresses
than IPv4 can support, which is one of the primary motivations for IPv6. The last top-level
block of IPv4 addresses was assigned in February 2011. The Internet Engineering Task Force
(IETF) published the IPv6 specification, RFC 2460 (Internet Protocol, Version 6), in 1998.
IPv6 also provides other features not present in IPv4. Network
security is integrated into the design of IPv6, which makes IPSec a mandatory part of the
implementation. IPv6 does not specify interoperability with IPv4 and essentially creates an
independent network. Today usage rates of IPv6 are very low, and most providers operate in
compatibility/tolerance mode. However, that could change.
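Python's standard ipaddress module can illustrate the IPv4/IPv6 difference discussed above; the addresses chosen are standard example ranges:

```python
import ipaddress

# IPv4 addresses are 32-bit; IPv6 addresses are 128-bit.
v4 = ipaddress.ip_address("192.168.1.10")
v6 = ipaddress.ip_address("2001:db8::1")

print(v4.version, v4.is_private)     # 4 True (an RFC 1918 private address)
print(v6.version, v6.max_prefixlen)  # 6 128

# The size gap explains the exhaustion pressure on IPv4.
print(2 ** 32)             # 4294967296 possible IPv4 addresses
print(2 ** 128 > 2 ** 32)  # True
```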
Network Virtualization
When dealing with systems of virtual machines and considering network security, you need to
manage networks. Network resources can be virtualized just like other cloud resources. To do
this, a cloud uses virtual switches to separate a physical network into logical partitions. Figure
1.13 shows this concept.
Figure 1.13. Physical and virtual networks in a cloud
VLANs can act as an extension of your enterprise’s private network. You can connect to it via an
encrypted VPN connection.
A hypervisor can share a single physical network interface with multiple virtual machines. Each
virtual machine has one or more virtual network interfaces. The hypervisor can provide
networking services to virtual machines in three ways:
● Bridging
● Routing
● Network address translation (NAT)
Bridging is usually the default mode. In this mode, the hypervisor works at the data link layer
and makes the virtual network interface externally visible at the Ethernet level. In routing mode,
the hypervisor works at the network layer and makes the virtual network interface externally
visible at the IP level.
In network address translation, the virtual network interface is not visible externally. Instead, it
enables the virtual machine to send network data out to the Internet, but the virtual machine is
not visible on the Internet. Network address translation is typically used to hide virtualization
network interfaces with private IP addresses behind a public IP address used by a host or router.
The NAT software changes the IP address information in the network packets based on
information in a routing table. The checksum values in the packet must be changed as well.
NAT can be used to put more servers on the network than the number of public IP addresses you
have. It does this by port translation. This is one reason IPv6 is still not in wide use: even though
the number of computers exceeds the number of IPv4 addresses, you can do some tricks to share
them. For example, suppose that you have a router and three servers handling HTTP, FTP, and
mail, respectively. You can assign a public IP address to the router and private IP addresses to the
HTTP, FTP, and mail servers, and forward incoming traffic (see Table 1.6).
Table 1.6. Example of Network Address Translation
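The port-translation idea behind Table 1.6 can be sketched as a simple lookup table; the private addresses below are illustrative, matching the HTTP/FTP/mail example above:

```python
# Forwarding table for the router's single public IP address: inbound
# traffic is steered to private servers by destination port.
FORWARDING_TABLE = {
    80: "10.0.0.2",  # HTTP server
    21: "10.0.0.3",  # FTP server
    25: "10.0.0.4",  # mail server
}

def translate(dest_port):
    """Return the private address handling traffic for this public port."""
    return FORWARDING_TABLE.get(dest_port)

print(translate(80))   # 10.0.0.2
print(translate(443))  # None: no rule, so the packet is dropped or rejected
```

A real NAT device would also rewrite the IP headers and recompute checksums, as described above; this sketch shows only the lookup step.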
Virtual Private Cloud (VPC)
Imagine that a cloud provider’s infrastructure is a residential apartment building with multiple
families living inside. Being a public cloud tenant is akin to sharing an apartment with a few
roommates. In contrast, having a Virtual Private Cloud (VPC) is like having your own private
condominium: no one else has the key, and no one can enter the space without your permission.
A VPC’s logical isolation is implemented using virtual network functions and security features
that give an enterprise customer granular control over which IP addresses or applications can
access particular resources. It is analogous to the “friends-only” or “public/private” controls on
social media accounts used to restrict who can or can’t see your otherwise public posts.
Features
VPCs are a “best of both worlds” approach to cloud computing. They give customers many of
the advantages of private clouds, while leveraging public cloud resources and savings. The
following are some key features of the VPC model:
● Agility: Control the size of your virtual network and deploy cloud resources whenever
your business needs them. You can scale these resources dynamically and in real-time.
● Availability: Redundant resources and highly fault-tolerant availability zone
architectures mean your applications and workloads are highly available.
● Security: Because the VPC is a logically isolated network, your data and applications
won’t share space or mix with those of the cloud provider’s other customers. You have
full control over how resources and workloads are accessed, and by whom.
● Affordability: VPC customers can take advantage of the public cloud’s
cost-effectiveness, such as saving on hardware costs, labor times, and other resources.
Benefits
The main features of a VPC readily translate into benefits that help your business achieve agility,
increased innovation, and faster growth.
● Flexible business growth: Because cloud infrastructure resources—including virtual
servers, storage, and networking—can be deployed dynamically, VPC customers can
easily adapt to changes in business needs.
● Satisfied customers: In today’s “always-on” digital business environments, customers
expect uptime ratios of nearly 100%. The high availability of VPC environments enables
reliable online experiences that build customer loyalty and increase trust in your brand.
● Reduced risk across the entire data lifecycle: VPCs enjoy high levels of security at the
instance or subnet level, or both. This gives you peace of mind and further increases the
trust of your customers.
● More resources to channel toward business innovation: With reduced costs and fewer
demands on your internal IT team, you can focus your efforts on achieving key business
goals and exercising core competencies.
Architecture
In a VPC, you can deploy cloud resources into your own isolated virtual network. These cloud
resources—also known as logical instances—fall into three categories.
Compute: Virtual server instances (VSIs, also known as virtual servers) are presented to the user
as virtual CPUs (vCPUs) with a predetermined amount of computing power, memory, etc.
Storage: VPC customers are typically allocated a certain block storage quota per account, with
the ability to purchase more. It is akin to purchasing additional hard drive space.
Recommendations for storage are based on the nature of your workload.
Networking: You can deploy virtual versions of various networking functions into your virtual
private cloud account to enable or restrict access to its resources. These include public gateways,
which are deployed so that all or some areas of your VPC environment can be made available on
the public-facing Internet; load balancers, which distribute traffic across multiple VSIs to
optimize availability and performance; and routers, which direct traffic and enable
communication between network segments. Direct or dedicated links enable rapid and secure
communications between your on-premises enterprise IT environment or your private cloud and
your VPC resources on public cloud.
Three-tier architecture in a VPC
The majority of today’s applications are designed with a three-tier architecture comprised of the
following interconnected tiers:
● The web or presentation tier, which takes requests from web browsers and presents
information created by, or stored within, the other layers to end users.
● The application tier, which houses the business logic and is where most processing takes
place.
● The database tier, comprised of database servers that store the data processed in the
application tier.
To create a three-tier application architecture on a VPC, you assign each tier its own subnet,
which gives it its own IP address range. Each layer is automatically assigned its own unique ACL.
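The subnet-per-tier idea can be sketched with Python's ipaddress module; the VPC address block here is an invented example:

```python
import ipaddress

# Carve one VPC address block into subnets, one per tier of the
# three-tier architecture (the /16 CIDR is an illustrative assumption).
vpc = ipaddress.ip_network("10.0.0.0/16")
web_tier, app_tier, db_tier = list(vpc.subnets(new_prefix=18))[:3]

print(web_tier)  # 10.0.0.0/18   -- web/presentation tier
print(app_tier)  # 10.0.64.0/18  -- application tier
print(db_tier)   # 10.0.128.0/18 -- database tier
```

Each tier gets a non-overlapping IP address range, which is what lets a per-subnet ACL govern exactly one tier.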
Security
VPCs achieve high levels of security by creating virtualized replicas of the security features used
to control access to resources housed in traditional data centers. These security features enable
customers to define virtual networks in logically isolated parts of the public cloud and control
which IP addresses have access to which resources.
Two types of network access controls comprise the layers of VPC security:
● Access control lists (ACLs): An ACL is a list of rules that limit who can access a
particular subnet within your VPC. A subnet is a portion or subdivision of your VPC; the
ACL defines the set of IP addresses or applications granted access to it.
● Security group: With a security group, you can create groups of resources (which may
be situated in more than one subnet) and assign uniform access rules to them. For
example, if you have three applications in three different subnets, and you want them all
to be public Internet-facing, you can place them in the same security group. Security
groups act like virtual firewalls, controlling the flow of traffic to your virtual servers, no
matter which subnet they are in.
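A rough sketch of how an ACL admits or rejects source addresses (the subnet rules below are invented examples, not a real provider's rule format):

```python
import ipaddress

# Minimal ACL: a list of networks allowed to reach a given subnet.
ACL_ALLOWED = [
    ipaddress.ip_network("10.0.64.0/18"),    # internal application tier
    ipaddress.ip_network("203.0.113.0/24"),  # a trusted external range
]

def acl_permits(source_ip):
    """Return True if any ACL rule covers the source address."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ACL_ALLOWED)

print(acl_permits("10.0.70.5"))  # True: inside 10.0.64.0/18
print(acl_permits("8.8.8.8"))    # False: no rule matches
```

Real cloud ACLs also match on ports, protocols, and direction, but the core mechanism is the same membership test over address ranges.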
VPC vs. …
VPC vs. virtual private network (VPN)
A virtual private network (VPN) makes a connection to the public Internet as secure as a
connection to a private network by creating an encrypted tunnel through which the information
travels. You can deploy a VPN-as-a-Service (VPNaaS) on your VPC to establish a secure
site-to-site communication channel between your VPC and your on-premises environment or
other location. Using a VPN, you can connect subnets in multiple VPCs so that they function as
if they were on a single network.
VPC vs. private cloud
Private cloud and virtual private cloud are sometimes—and mistakenly—used interchangeably.
In fact, a virtual private cloud is actually a public cloud offering. A private cloud is a
single-tenant cloud environment owned, operated, and managed by the enterprise, and hosted
most commonly on-premises or in a dedicated space or facility. By contrast, a VPC is hosted on
multi-tenant architecture, but each customer’s data and workloads are logically separate from
those of all other tenants. The cloud provider is responsible for ensuring this logical isolation.
IaaS - storage
IaaS is a good fit in the following situations:
● if the need for computing resources is not constant and can vary greatly from period to
period, and there is no desire to overpay for unused capacity;
● when a company is just starting its way on the market and does not have working capital
in order to buy all the necessary infrastructure - a frequent option among startups;
● there is a rapid growth in business, and the network infrastructure must keep pace with it;
● if you need to reduce the cost of purchasing and maintaining equipment;
● when a new direction is launched, and it is necessary to test it without investing
significant funds in resources.
Difference between object storage, file storage and block storage
Object storage
Object storage is a system that divides data into separate, self-contained units that are stored in
a flat environment, with all objects at the same level. There are no folders or sub-directories like
those used with file storage. Additionally, object storage does not store all data together in a
single file. Objects also contain metadata, which is information about the file that helps with
processing and usability. Users can set the value for fixed-key metadata with object storage, or
they can create both the key and value for custom metadata associated with an object.
Instead of using a file name and path to access an object, each object has a unique number.
Objects can be stored locally on computer hard drives and cloud servers. However, unlike with
file storage, you must use an Application Programming Interface (API) to access and manage
objects.
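A toy in-memory sketch, not any provider's real API, can illustrate the flat namespace, ID-based access, and per-object metadata described above:

```python
import uuid

class ObjectStore:
    """Toy object store: a flat namespace with no folders, where each
    object is fetched by a unique ID and carries key-value metadata."""

    def __init__(self):
        self._objects = {}  # flat: every object sits at the same level

    def put(self, data, metadata):
        object_id = str(uuid.uuid4())  # unique number used instead of a path
        self._objects[object_id] = {"data": data, "metadata": metadata}
        return object_id

    def get(self, object_id):
        return self._objects[object_id]

store = ObjectStore()
oid = store.put(b"hello", {"content-type": "text/plain", "owner": "alice"})
print(store.get(oid)["metadata"]["owner"])  # alice
```

Note there is no path and no hierarchy: the only handle on the data is the ID returned by `put`, which is why object storage is always accessed through an API rather than a file browser.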
Pros
● Handles large amounts of unstructured data: The format of object storage allows for easily
storing and managing a high volume of unstructured data, which is becoming increasingly
important with artificial intelligence, machine learning and big data analytics.
● Affordable consumption model: Instead of paying in advance for a set amount of storage
space, as is common with file storage, you pay only for the object storage you need.
● Unlimited scalability: Because object storage uses a consumption model, you can add as
much additional storage as you need — even petabytes or more.
● Uses metadata: Because the metadata is stored with the objects, users can quickly gain
value from data and more easily retrieve the object they need.
● Advanced search capabilities: Object storage enables users to search for metadata, object
contents and other properties.
Cons
● Cannot lock files: All users with access to the cloud, network or hardware device can
access the objects stored there.
● Slower performance than other storage types: The file format requires more processing
time than file storage and block storage.
● Cannot modify a single portion of a file: Once an object is created, you cannot change
part of it; to update the data you must write a new object in its entirety.
Use cases for object storage
● IoT data management: The ability to quickly scale and easily retrieve data makes object
storage a good choice for the rapidly increasing amounts of IoT data being gathered and
managed, especially in the manufacturing and healthcare industries.
● Email: Organizations that are required to store large volumes of emails for historical and
compliance purposes often turn to object storage as their primary repository for both
scalability and price.
● Backup/recovery: Organizations often turn to object storage for their backup and
recovery storage because performance is less of an issue for this use case.
● Video surveillance: Object storage provides an affordable option for organizations that
need to store many video recordings and keep the footage for several years.
File storage
File storage uses a hierarchical structure where files are organized by the user in folders and
subfolders, which makes it easier to find and manage files. To access a file, the user selects or
enters the path for the file, which includes the sub-directories and file name. Most users manage
file storage through a simple file system, such as File Manager.
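As a small illustration of path-based access (the directory and file names are invented), Python's pathlib walks the hierarchy described above:

```python
import tempfile
from pathlib import Path

# Build a small folder hierarchy and reach a file through its path.
root = Path(tempfile.mkdtemp())
reports = root / "finance" / "2023"  # sub-directories chosen for the example
reports.mkdir(parents=True)
(reports / "q1.txt").write_text("quarterly data")

# The full path -- sub-directories plus file name -- locates the file.
print((reports / "q1.txt").read_text())  # quarterly data
```

Contrast this with object storage: here the location of the data is encoded in the path the user chose, not in an opaque ID.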
Pros
● Easy to access on a small scale: With a small-to-moderate number of files, users can
easily locate and click on a desired file, and the file with the data opens. Users then save
the file to the same or a different location when they’re finished with it.
● Familiar to most users: As the most common storage type for end users, most people with
basic computer skills can easily navigate file storage with little assistance or additional
training.
● Users can manage their own files: Using a simple interface, end users can create, move
and delete their files.
● Allows access rights/file sharing/file locking to be set at user level: Users and
administrators can set a file as write (meaning users can make changes to the file),
read-only (users can only view the data) or locked (specific users cannot access the file
even as read only). Files can also be password-protected.
Cons
● Challenging to manage and retrieve large numbers of files: While hierarchical storage
works well for, say, 20 folders with 10 subfolders each, file management becomes increasingly
complicated as the number of folders, subfolders and files increases. As the volume grows, the
time the search feature needs to find a desired file increases and becomes a significant drain
on employee time throughout an organization.
● Hard to work with unstructured data: While it’s possible to save unstructured data like
text, mobile activity, social media posts and Internet of Things (IoT) sensor data in file
storage, it is typically not the best option for unstructured data storage, especially in large
amounts.
● Becomes expensive at large scales: When the amount of storage space on devices and
networks reaches capacity, additional hardware devices must be purchased.
Use cases for file storage
● Collaboration of documents: While it’s easy to collaborate on a single document with
cloud storage or Local Area Network (LAN) file storage, users must create a versioning
system or use versioning software to prevent overwriting each other’s changes.
● Backup and recovery: Cloud backup and external backup devices typically use file
storage for creating copies of the latest versions of files.
● Archiving: Because of the ability to set permissions at a file level for sensitive data and
the simplicity of management, many organizations use file storage for archiving
documents for compliance or historical reasons.
Block storage
Block storage divides data into fixed-size blocks, each stored separately with its own unique
identifier, and reassembles them on demand when the data is requested.
Pros
● Fast: When all blocks are stored locally or close together, block storage has a high
performance with low latency for data retrieval, making it a common choice for
business-critical data.
● Reliable: Because blocks are stored in self-contained units, block storage has a low fail
rate.
● Easy to modify: Changing a block does not require creating a new block; instead, a new
version is created.
Cons
● Lack of metadata: Block storage does not contain metadata, making it less usable for
unstructured data storage.
● Not searchable: Large volumes of block data quickly become unmanageable because of
limited search capabilities.
● High cost: Purchasing additional block storage is expensive and often cost-prohibitive at
a high scale.
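The "easy to modify" property noted above can be sketched with a toy fixed-size block device (the block size and contents are invented for illustration):

```python
BLOCK_SIZE = 4  # bytes per block; real systems use e.g. 512 B or 4 KiB

# A toy "disk": a list of fixed-size blocks addressed by index.
disk = [bytearray(b"AAAA"), bytearray(b"BBBB"), bytearray(b"CCCC")]

def write_block(index, data):
    """Overwrite exactly one block in place. Unlike object storage,
    the rest of the data does not need to be rewritten."""
    assert len(data) == BLOCK_SIZE
    disk[index][:] = data

write_block(1, b"XXXX")
print(b"".join(disk))  # b'AAAAXXXXCCCC'
```

Because each block is addressed independently, a database can update one record without touching its neighbours, which is exactly why block storage suits business-critical transactional workloads.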
What are the key differences between object storage, block storage and file storage?
When determining which type of storage to use for different types of data, consider the
following:
● Cost: Because the costs involved with block and file storage are higher, many
organizations choose object storage for high volumes of data.
● Management ease: The metadata and searchability make object storage a top choice for
high volumes of data. File storage, with its hierarchical organization system, is more
appropriate for lower volumes of data.
● Volume: Organizations with high volumes of data often choose object or block storage.
● Retrievability: Data is relatively retrievable from all three types of storage, though file
and object storage are typically easier to access.
● Handling of metadata: Although file storage contains very basic metadata, information
with extensive metadata is typically best served by object storage.
● Data protection: While the data is stored, it's essential that it is protected from
breaches and cybersecurity threats.
● Storage use cases: Each type of storage is most effective for different use cases and
workflows. By understanding their specific needs, organizations can select the type that
fits the majority of their storage use cases.
Additionally, organizations are moving data to the cloud, storing it in diverse locations that
introduce complexities, from simple public cloud and private cloud repositories, to complicated
architectures like multiclouds, hybrid clouds, and Software as a Service (SaaS) platforms.
The complexity of cloud architectures, coupled with increasingly demanding data protection and
privacy regulations and vendors' shared-responsibility models, creates many security challenges.
Here are key challenges you might encounter:
Another major factor is compliance. Organizations are required to comply with data protection
and privacy laws and regulations, such as the European Union's General Data Protection
Regulation (GDPR), as well as the Health Insurance Portability and Accountability Act (HIPAA)
of 1996 (US). It can be very difficult for companies to consistently set and implement security
policies across multiple cloud environments, as well as to prove compliance to auditors.
Know your Responsibilities in the Cloud: Using a cloud service does not mean that the cloud
provider is responsible for your data. You and your provider share responsibilities. This shared
responsibility model allows cloud providers to ensure that the hardware and software services
they provide are secure while cloud consumers remain responsible for the security of their data
assets. Cloud providers often offer better security than many companies can achieve on their own.
On the other hand, cloud consumers lose visibility because the cloud vendor is in charge of
infrastructure operations. Challenges can arise when development, operations, hosting, and
security responsibilities get mixed up between in-house teams and cloud vendors.
Ask About Provider’s Processes in Case of a Breach: Cloud vendors should provide transparent
and well-documented plans outlining mitigation and support during breaches. In a well-prepared
response, multiple fail-safes and alarms are triggered immediately during a breach, alerting all
relevant parties that an attack is happening. Another emergency measure is triggering a lockdown
that secures data before it can be transferred or decrypted.
Identify Security Gaps Between Systems: Cloud environments are typically integrated with other
services, some in-house and some third-party. The more systems and vendors you add to the stack,
the more gaps are created. Organizations need to identify each security gap and take measures
that ensure the security of the data and assets shared and used by these systems. While some
measures are implemented by third-party vendors, organizations also need to implement their own
measures to ensure compliance and security. Each industry is required to uphold certain security
practices, and third-party vendors do not always offer the same level of compliance.
Transfer Data Securely: You can implement point-to-point security by combining additional
encryption with SSL for all communications. You can use secure email and file protection tools,
which enable you to track and control who can see your data, who can access that data, when and
how access is revoked (all actions or specific actions like forwarding). You can restrict the types
of data that are allowed to be transferred outside your organization ecosystems. You should also
restrict certain uses of data, and ensure that users and recipients comply with data protection
regulations.
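As a minimal sketch of transport encryption with Python's standard library, a client-side TLS context with certificate verification enabled looks like this (no real connection is made here):

```python
import ssl

# The baseline for encrypting data in transit: a TLS context that
# verifies the server's certificate and hostname before sending anything.
context = ssl.create_default_context()

# create_default_context() enables both checks by default.
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

Any additional application-layer encryption recommended above would be layered on top of such a verified channel, never instead of it.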
Back Up Data Consistently: Create data replicas regularly and store them separately from the
main repository. Consistent backups can help protect your organization from critical data losses,
especially during a data wipeout or a lockdown. Data replicas enable you to continue working
off-line even when cloud assets are not available.
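A minimal sketch of a verified backup, assuming a simple copy-plus-checksum scheme (the file names and contents are invented):

```python
import hashlib
import os
import shutil
import tempfile

def backup_with_checksum(src, dst):
    """Copy a file and return its SHA-256 digest, so the replica can be
    verified against the original later (a minimal consistency check)."""
    shutil.copy2(src, dst)  # copy2 preserves timestamps as well as data
    with open(dst, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Demo with a temporary directory standing in for a separate repository.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "data.txt")
with open(src, "w") as f:
    f.write("critical records")
digest = backup_with_checksum(src, os.path.join(tmp, "data.bak"))
print(len(digest))  # 64 hex characters
```

Storing the digest alongside the replica lets you detect silent corruption before you ever need to restore from it.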
Expose Shadow IT in Your Cloud Deployment: A cloud security policy does not guarantee
proper use of cloud resources. Many employees are not well versed in security policies, or are
unaware of the security risks. When employees install software and download files without
consulting with the IT team, they create a shadow IT infrastructure that introduces many security
risks. There are certain measures organizations can take to protect against shadow IT risks. One
technique is to monitor firewalls, proxies, and SIEM logs to determine IT activity throughout the
organization. Next, you can assess activities and determine the risks introduced by users. Once
you gain a relatively accurate picture of activity and usage, you need to put in place measures
that prevent the transfer of corporate data from trusted systems and devices to unauthorized
endpoints. Another measure that can prevent risks is enforcing device security verification, to
prevent downloads from and to unauthorized devices.
Cloud services are becoming the main part of the infrastructure for many companies. Enterprises
should pay maximum attention to security issues, moving away from typical approaches used in
physical infrastructures, which are often insufficient in an atmosphere of constantly changing
business requirements. Although cloud providers do all they can to guarantee infrastructure
reliability, some of them limit their services to standard security measures, which can and should
be significantly expanded.
Protection Methodology
In order to reduce risks associated with information security, it is necessary to determine and
identify the levels of infrastructure that require attention and protection. For example, the
computing level (hypervisors), the data storage level, the network level, the UI and API level,
and so on.
Next you need to define protection methods at each level, distinguish the perimeter and cloud
infrastructure security zones, and select monitoring and audit tools.
Enterprises should develop an information security strategy that includes the following, at the
very least:
● Regular software update scheduling
● Patching procedures
● Monitoring and audit requirements
● Regular testing and vulnerability analysis
IaaS Information Security Measures
Some IaaS providers already boast advanced security features. It is necessary to carefully
examine the services and systems service providers offer at their own level, as well as conditions
and guarantees of these offerings. Alternatively, consider implementing and utilizing them on
your own.
● Data Encryption: Encryption is the main and also the most popular method of data
protection. Meticulously managing security and encryption key storage control is an
essential condition of using any data encryption method. It is worth noting that the IaaS
provider must never be able to gain access to virtual machines and customer data.
● Network Encryption: It is mandatory also to encrypt network connections, which is
already a gold standard for cloud infrastructure.
● Access Control: Attention must also be paid to access control, for example, by using the
concept of federated cloud. With the help of federated services, it’s easy to organize a
flexible and convenient authentication system of internal and external users. The use of
multi-factor authentication, including OTP, tokens, smart cards, etc., will significantly
reduce the risks of unauthorized access to the infrastructure. Be sure not to forget
hardware and virtual firewalls as a means of providing network access control.
● Cloud Access Security Broker (CASB): A CASB is a unified security tool that allows
administrators to identify potential data loss risks and ensure a high level of protection.
The solution works in conjunction with the IaaS provider’s cloud infrastructure by
enabling users to monitor shared files and prevent data leakage. This way, administrators
know where important content is stored and who has access to the data.
● Vulnerability Control: Implementing and using vulnerability control, together with
regular software updates, can significantly reduce the risks associated with information
security, and it is an absolute must for both the IaaS provider and its clients.
● Monitor, Audit And Identify Anomalies: Monitoring and auditing systems allow you to
track standard indicators of infrastructure performance and identify abnormalities related
to system and service security. Using deep packet inspection (DPI) or intrusion detection
and prevention solutions (IDS/IPS) helps detect network anomalies and attacks.
● Staff Training: Conducting specialized training and adopting a general attitude of focused
attention to the technical competence of staff that have access to the virtual infrastructure
will enhance the overall level of information security.
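The one-time-password (OTP) factor mentioned under Access Control can be illustrated with a minimal RFC 6238 TOTP sketch using only the Python standard library; this is a teaching example, not a production authenticator:

```python
import hashlib
import hmac
import struct

def totp(secret, unix_time, digits=6, step=30):
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = struct.pack(">Q", unix_time // step)   # 30-second time window
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: this secret and timestamp yield "94287082".
print(totp(b"12345678901234567890", 59, digits=8))
```

Because the code depends on a shared secret and the current time window, an attacker who steals only a password still cannot log in, which is the point of the multi-factor schemes recommended above.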
Excel is also part of the programs shipped with Office 365. You can use the powerful features
found in the desktop version of Excel in the cloud-based version too.
Google G Suite
Google G Suite is one of the best app suites used by businesses to increase the productivity of a
team. It is among the best email software on the market and not only provides email solutions but
also offers Google Drive storage and tools like Google Sheets and Google Docs.
Although the efficiency of G Suite depends on the type of business, you need to consider the
pros and cons of using this web-based platform.
Pros:
1. Ease of use: G Suite has a user-friendly interface that enables users to access the software
suite's tools and programs, and share information automatically with users in the same workgroup,
via a single login.
2. Compatibility: G Suite apps are compatible with almost all devices. You can open the apps on a
mobile device, and it is easy to switch between accounts. If you have been using a personal Gmail
account, you can easily switch to a G Suite account.
3. Share information: Data within your Google app is shared with users in the same group.
4. Cost-effective: The basic G Suite package is free and can be used in small organizations with
fewer than 10 users. It offers an email solution, calendar, office, and shared website applications
for each user.
5. Reliability: Google G Suite is one of the most reliable business software suites, with no
scheduled downtime for maintenance purposes. All Google data centers are built with redundant
infrastructure to enhance the apps' uptime.
6. Secure: G Suite is built on the Google Cloud Platform, making it a highly secure platform.
Google products are among the most trusted services in the world, built by a leading company
with a security-first mindset.
7. Customized email domain: Google G Suite enables a business to create its own email domain,
making the company look more professional. This can contribute to more traffic and lead
generation for the company's website.
8. Integration: There are a lot of third-party web application tools integrated with the G Suite.
These tools help businesses create an online presence and save money and time.
9. Advanced features: eDiscovery, search, email archiving, and other advanced features are
available at a low cost.
10. Cloud-based apps: All the apps in G Suite are cloud-based, and this encourages more use
of cloud-based applications and collaboration among users.
Cons:
1. No forms in emailing clients: If you need built-in forms and workflows for your business, you
need to look for other third-party apps that offer that application.
2. Document conversion issues: You may experience challenges converting Google Sheets and
Docs to Microsoft documents and PDF formats. You’re forced to look for a third-party app to
help in conversion.
3. Time-consuming: It may take some time to import data and documents into the system from
other external sources.
4. Limited Hangouts features: Google G Suite has more limited Hangouts features than Slack.
5. Internet connection: For the apps to function, you need internet connectivity 24/7. Slow
connection or no internet limits your access to some apps.
6. Storage facilities: The free package has limited storage space. Although paid packages have
higher storage capacity, they do not offer as much space as Office 365.
7. Integration of desktop email with the mobile client: The mobile version of the app is clunky,
leaving users disappointed by the lack of proper integration.
8. Limited recovery period: If you delete Gmail, or Drive data, it can be recovered from the
admin console within a limited period of time.
9. Single source: The service relies heavily on a single third party; should Google have any
outage or server issues, it will affect the running of your business.
10. Web-based option: G Suite is purely a web-based platform, and if you're used to using
software like Microsoft Office, you may find G Suite features like Google Docs and Sheets not as
effective as Office products.
It can be very easy to get caught up in the bells-and-whistles of the software, or to make
assumptions about what a feature means, only to find out later that the software you just
purchased is missing a key function needed for day-to-day functioning.
Microsoft Office 365 is a great example of the need to understand licensing limitations. You may
read online that you can integrate VoIP calling functionality into Office 365, not realizing that
this capability only exists if you buy a certain license type. A thorough evaluation, and the
advice of an IT professional, is helpful when trying to determine licensing needs.
4. What’s their privacy policy? How do you know they won’t sell your data?
As the saying goes, “If the software is free, you are the product.” Many social media sites are
monetized through advertising. In asking this question, your job is to understand how they will
use the information stored on their servers, either in aggregate or through targeted marketing.
5. How can you ensure the SaaS vendor won’t lose your data?
What are their backup and recovery policies? Do they offer a service level agreement with an
up-time guarantee?
Technology start-ups are particularly vulnerable to data loss. They may be cash-crunched and
cutting corners to put all their time and energy into feature enhancements to gain new customers.
Only when tragedy strikes – a hurricane, fire, flood or burglary – do they realize that their
backup process failed – or that it will take weeks to get back up and running on a new server.
There are countless stories of software companies that have vanished overnight, leaving their
customers without critical accounting, customer or sales data.
With so many readily available, and affordable cloud-hosting backup solutions available, data
loss is inexcusable. Don’t let a SaaS vendor’s mistake cost you your business.
● Naming conventions may be different. What’s an account vs. a customer? What’s a lead
vs. a list?
● Functions may be hidden in unexpected places. If you’ve been using QuickBooks
forever, switching to FreshBooks or Wave may leave you scratching your head over
where to find features that you know must exist, but that aren’t on the page where you’d
expect to find them.
● The software is always evolving. As updates are published, how are new features
communicated to users? Do they send out emails, create walk-throughs, or expect you to
regularly visit their user forum?
● Is onboarding available? Many SaaS vendors offer a series of videos and walk-throughs
to orient new users to the system.
8. Is software customer support included? What types?
Software support can be free or paid; self-service or on-demand. Before you become a customer,
ask about customer support options like:
● Live chat
● Help “More Info” Icons within the software itself
● Phone support
● Support hours (Is it 24/7/365? Is it OK if it’s not?)
● Blogs and forums
● Help desk ticketing
● Facebook community pages
● Vendor or partner consulting services
9. Can we connect this software program to other software systems? Can we extend or customize
the software?
In a prior post, we shared how important it is to select small business software with API
Integration. Small business data silos create problems because it’s so easy to lose sight of the
“truth” and only see one aspect of the business. Ask if the software works natively with Zapier,
PieSync or other integration tools. Look and see if they have software partners that extend the
functionality of this solution.
● According to the latest research from the Cloud Industry Forum, 89 per cent of
organisations surveyed have an in-house server room, although this dropped to 73 per
cent for those employing fewer than 20 people.
● The combination of committed capital and the availability of labour to support it has
to be assessed to determine if a new solution is most effectively housed on-premise.
● Equally, if an organisation is considering expanding a legacy or bespoke application, or
creating an integration with such a system from a new application (e.g. a new workflow
service for an existing order management system), then the level of integration or the
nature of the legacy application may restrict the choice of deployment options to being
on-premise or hosted in a private Cloud or on dedicated infrastructure.
● Available network bandwidth to access the internet was not as significant a practical issue: 80
per cent of organisations (across all size ranges) already had sufficient bandwidth for
normal business tasks at a contracted rate of 10Mbit/s or greater.
● Other commercial considerations organisations have to take into account are matters such
as the urgency of the solution (i.e. Time to Market) where a hosted or Cloud solution has
clear advantages for urgent or temporary projects, balanced against contributing
considerations of available experienced staff in the solution area and/or the ability to
identify a trusted partner for delivery via the Cloud.
● The technical considerations need to start at the planning and strategy phase. CIF's
research identified that 85 per cent of organisations now formally include consideration
of Cloud-based solutions within their IT strategy, which is very positive as this number is
some 32 per cent above the actual number of organisations that have formally adopted at
least one Cloud service.
● The technical delivery team inside any organisation is going to have to look at initiatives
based on attributes such as their temporality/duration, urgency (time to market), and
frequency of changes in the scale of operations, as well as skills and manpower requirements,
in order to determine if a solution can be delivered in-house or should be delivered as a service.
● Other criteria will apply when considering issues such as Test or Back-Up/Disaster
Recovery solutions, where arguably an off-site presence is more appropriate to reflect and
mitigate the real-world risks.
● Overriding all commercial and technical considerations are the Governance, Policy and
Regulatory constraints that the organisation must take into account. Top of this list will
be issues of external regulation, notably around matters such as Data Sovereignty.
● 42 per cent of organisations participating in the research stated they were subject to
external regulation, and 12 per cent were subsidiaries of organisations that were
headquartered internationally.
● In the survey 90 per cent of respondents wanted their data kept on-premise or hosted in
the UK and not held within the wider EEA or other geographies.