Unit 4 Managing & Securing The Cloud
• System Backups: Backups should be audited from time to time, by verifying that randomly selected files of different users can be restored. This may be done by the cloud provider.
• Flows of Data in the System: The managers are responsible for designing a data flow
diagram that shows how the data is supposed to flow throughout the organization.
• Vendor Lock-In: The managers should know how to move their data from one server to another in case the organization decides to switch providers.
• Security Procedures: The managers should know the security plans of the provider,
especially Multitenant use, E-Commerce processing, Employee screening and Encryption
policy.
• Monitoring the Capacity, Planning and Scaling Abilities: The managers should know whether their current cloud provider will meet the organization’s future demand, and what its scaling capabilities are.
• Monitoring Audit Log: To identify errors in the system, logs are audited by the managers on a regular basis (a minimal automation sketch follows this list).
• Solution Testing and Validation: It is necessary to test the cloud services and verify the results to ensure error-free solutions.
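Audit-log monitoring of this kind is easy to automate. The following is a minimal, illustrative Python sketch that counts error entries in a hypothetical log file; the file name and log format are assumptions, not part of any particular cloud product.

from collections import Counter

def scan_audit_log(path="audit.log"):
    # Count log lines by severity. Assumed line format:
    # "2024-01-01T00:00:00 SEVERITY message..."
    severities = Counter()
    with open(path) as log:
        for line in log:
            parts = line.split(maxsplit=2)
            if len(parts) >= 2:
                severities[parts[1]] += 1
    return severities

counts = scan_audit_log()
for level in ("ERROR", "CRITICAL"):
    print(level, counts.get(level, 0))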
Cloud Management Products and Emerging Standards:-
• Several providers have products designed for cloud computing management (VMware,
OpenQRM, CloudKick, and Managed Methods), along with the big players like BMC, HP,
IBM Tivoli and CA.
• Most support the on-the-fly creation and provisioning of new objects and the destruction of unnecessary objects, such as servers, storage, and/or applications (see the provisioning sketch after this list).
• Most provide the usual suite of reports on status such as uptime, response time, quota
use, etc. and have a dashboard that can be drilled into.
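Under the hood, these products drive the providers' own APIs to create and destroy such objects. As an illustration only, the Python sketch below provisions and then terminates an EC2 server with boto3; the AMI ID is a placeholder and configured AWS credentials are assumed.

import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create (provision) a new server object on the fly.
result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = result["Instances"][0]["InstanceId"]
print("provisioned", instance_id)

# Destroy the object once it is unnecessary.
ec2.terminate_instances(InstanceIds=[instance_id])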
• RightScale:
• They offer a free edition, with limitations on features and capacity, designed to introduce you to the product.
• Its Multi-Cloud Engine lets deployments span multiple cloud providers.
• Kaavo:
• Zeus:
• The web server market is crowded, with Apache and, to a lesser extent, IIS dominating it, not to mention the glut of load balancers out there.
• Zeus took its expertise in the application server space and came up with the Application Delivery Controller piece of the Zeus Traffic Controller.
• Zeus currently supports this on the Rackspace and, to a lesser extent, Amazon
platforms.
• Scalr:
• Scalr is a young project hosted on Google Code and Scalr.net that creates dynamic
clusters, similar to Kaavo and RightScale, on the Amazon platform.
• It supports custom building of images for each server or server type, also similar to RightScale.
• However, Scalr does not support as wide a range of platforms, operating systems, applications, and databases as RightScale does.
• Morph:
• Its top-tier product, the Morph CloudServer, is based on the IBM BladeCenter and supports hundreds of virtual machines.
• CloudWatch:
• Amazon’s CloudWatch works on Amazon’s platform only.
• This limits its overall usefulness: it cannot serve as a hybrid cloud management tool.
• Even so, since Amazon’s Elastic Compute Cloud (EC2) is the biggest platform, CloudWatch still covers a very large share of cloud deployments.
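As a concrete illustration of what CloudWatch reports, the sketch below queries an EC2 instance's average CPU utilization over the past hour with boto3; the instance ID is a placeholder and configured AWS credentials are assumed.

import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# One datapoint per 5 minutes for the last hour of CPU utilization.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2), "%")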
Securing the cloud:-
• Cloud security includes keeping data private and safe across online-based infrastructure,
applications, and platforms.
• Cloud providers host services on their servers through always-on internet connections.
Since their business relies on customer trust, cloud security methods are used to keep
client data private and safely stored.
• These measures ensure user and device authentication, data and resource access
control, and data privacy protection.
• Cloud data security refers to the technologies, policies, services and security controls that protect any type of data in the cloud from loss, leakage or misuse through breaches, exfiltration and unauthorized access.
• It means ensuring the security and privacy of data across networks as well as within applications, containers, workloads and other cloud environments.
• The cloud data protection and security strategy must also protect data of all types.
• The core principles of information security and data governance, Confidentiality, Integrity, and Availability (known as the CIA triad), also apply to the cloud:
• Confidentiality: Ensuring data is accessible only to authorized users and systems.
• Integrity: Ensuring data is accurate, consistent, and protected from unauthorized modification.
• Availability: Ensuring the data is fully available and accessible when it’s needed.
• Data in use: Securing data being used by an application or endpoint through user
authentication and access control.
• Data in motion: Ensuring the safe transmission of sensitive, confidential or proprietary data while it moves across the network, through encryption and/or other email and messaging security measures (a TLS example follows this list).
• Data at rest: Protecting data that is being stored on any network location, including the
cloud, through access restrictions and user authentication.
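To make "data in motion" concrete, the sketch below opens a TLS-encrypted connection using Python's standard ssl module, so that data is encrypted while it crosses the network; example.com is only an illustrative host.

import socket
import ssl

# Wrap a TCP connection in TLS; certificate verification is on by default.
context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print("negotiated:", tls_sock.version())  # e.g. TLSv1.3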
• Monitoring: The enterprise needs a way to determine if the access to cloud data is
authorized and appropriate.
• Deploy Encryption: Ensure that sensitive and critical data is encrypted both in transit and at rest. Not all vendors offer encryption, and the enterprise should consider implementing a third-party encryption solution for added protection (see the encryption-and-backup sketch after this list).
• Back up the data: While vendors have their own backup procedures, it's essential to
back up cloud data locally as well.
• Implement Identity and Access Management (IAM): IAM technology and policies ensure that the right people have appropriate access to data, and this framework needs to encompass the cloud environment.
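The sketch below combines the encryption and backup practices above: it encrypts a file client-side with the third-party cryptography library, then copies the ciphertext to S3 with boto3. The bucket name and file paths are placeholders, and AWS credentials are assumed to be configured.

import boto3
from cryptography.fernet import Fernet  # pip install cryptography

# 1. Encrypt the data before it leaves the machine (client-side encryption).
key = Fernet.generate_key()  # in practice, store this key in a key manager
cipher = Fernet(key)
with open("report.csv", "rb") as f:
    ciphertext = cipher.encrypt(f.read())
with open("report.csv.enc", "wb") as f:
    f.write(ciphertext)

# 2. Back the encrypted copy up to the cloud (placeholder bucket name).
s3 = boto3.client("s3")
s3.upload_file("report.csv.enc", "example-backup-bucket", "backups/report.csv.enc")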
• Cloud Identity is an Identity as a Service (IDaaS) solution that centrally manages users
and groups.
• There are several identity services that validate services such as web sites, transactions, transaction participants, clients, etc.
• Cloud Identity also gives you more control over the accounts that are used in your
organization.
• For example, if developers in your organization use personal accounts, such as Gmail
accounts, those accounts are outside of your control. When you adopt Cloud Identity,
you can manage access and compliance across all users in your domain.
• When you adopt Cloud Identity, you create a Cloud Identity account for each of your
users and groups.
• You can then use Identity and Access Management (IAM) to manage access to Google
Cloud resources for each Cloud Identity account.
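For illustration, a Google Cloud IAM policy binds roles to members. The sketch below shows that structure as a Python dict; it mirrors the JSON shape IAM policies use, but every identity in it is a made-up placeholder.

# Roles bound to members (all identities are placeholders).
policy = {
    "bindings": [
        {
            "role": "roles/storage.objectViewer",  # read-only access to objects
            "members": ["user:alice@example.com"],
        },
        {
            "role": "roles/storage.admin",  # full control of storage resources
            "members": ["group:storage-admins@example.com"],
        },
    ]
}

for binding in policy["bindings"]:
    print(binding["role"], "->", ", ".join(binding["members"]))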
1. Single Sign-On (SSO):
• SSO has a single authentication server that manages access to multiple other systems.
• With single sign-on, employees, partners and customers get easy, fast and secure access to all SaaS, mobile and enterprise applications with a single authentication.
• SSO allows the user to log in only one time and manages access to the other systems.
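As a minimal sketch of the idea (not any particular SSO product), the code below has one "authentication server" issue an HMAC-signed token at login that several applications can then verify without asking the user to log in again. The shared secret is a stand-in for a properly distributed key.

import hashlib
import hmac

SHARED_SECRET = b"demo-secret"  # placeholder; distributed securely in practice

def issue_token(username):
    # Auth server: sign the username once, at login.
    sig = hmac.new(SHARED_SECRET, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}:{sig}"

def verify_token(token):
    # Any participating application: accept the token without a new login.
    username, sig = token.rsplit(":", 1)
    expected = hmac.new(SHARED_SECRET, username.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("alice")      # user authenticates one time
print(verify_token(token))        # another system accepts it: True
print(verify_token(token + "x"))  # tampered token is rejected: False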
2. Federated Identity Management (FIDM):
• FIDM describes the technologies and protocols that enable a user to package security authorizations/identification across security domains.
3. OpenID:
• Google, Yahoo!, Flickr, MySpace, WordPress.com are some of the companies that
support OpenID.
4. Multi-Factor Authentication (MFA):
• MFA is more secure than the traditional method of entering usernames and passwords.
• Cloud providers make it easy for organizations to enable Multi-Factor Authentication.
• For example, users may be asked to insert a USB device into their system to log in, along with a password. MFA provides more security than the classic username and password method.
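As an illustration of one common second factor, the sketch below generates and checks a time-based one-time password (TOTP) with the third-party pyotp library; the secret is generated on the spot, where a real system would store it per user at enrollment.

import pyotp  # pip install pyotp

# Enrollment: generate a per-user secret (usually shown as a QR code so the
# user's authenticator app can store it).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: in addition to the password, the user must supply the 6-digit code
# currently displayed by their authenticator.
code = totp.now()
print("accepted:", totp.verify(code))      # True within the time window
print("accepted:", totp.verify("000000"))  # almost certainly False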
Benefits of Identity Management In Cloud Computing:
• Smooth Collaboration: SaaS is increasingly being designed and utilized as the hub for connecting with virtual networks of several suppliers, distributors, and trading partners. Because SaaS is cloud-based, it is easy to establish new connections between identification and access.
• Improved On-Demand Support: Cloud-based solutions protect organizations from the problems that result from churn. Experts can provide 24*7 support and monitoring whenever required.
• Centralized Management System: Businesses can manage all services and programs in one place with the help of cloud-based services. All identity management can be done with a single click on a single dashboard.
Storage Area Network (SAN):-
• Storage Area Networks are typically used to provide access to data storage.
• These ensure that storage devices such as disks, tape drives, etc. can be accessed by an operating system as system storage devices.
• A SAN uses Fibre Channel for connecting its several data storage devices.
• A Storage Area Network is more complex than Network Attached Storage (NAS).
• NAS, by contrast, depends on the Local Area Network and requires a TCP/IP network.
• iSCSI, FCoE, FCP, and FC-NVMe are the protocols used in SANs.
Host Layer:
• The devices in this layer allow the operating system to connect to the storage devices via the storage area network.
Fabric Layer:
• The networking devices in the storage area network constitute the fabric layer.
• All these devices help move data in the storage area network, from source to
destination.
Storage Layer:
• The storage devices in the storage area network together form the storage layer.
• These devices include database servers, file servers, hard disks, magnetic tape, etc.
• All the storage devices contain a number known as the logical unit number (LUN).
• They can be uniquely identified in the storage area network using this number.
Benefits of SAN:
• Effective Storage Usage: The consolidated and centralized SAN architecture enables users to effectively leverage available storage capacity to its maximum potential.
• Disaster Recovery (DR) for critical Data: SAN systems come with native support for
enterprise DR applications. In the event of a disaster (natural or man-made), you can
recover your critical workloads in no time and ensure business continuity without fail.
• High Availability Architecture: Equipped with redundant storage controllers and RAID,
storage area networks do not have a single point of failure. In the event of a key
component failure, high availability SANs make sure that the data remains available with
minimum downtime.
• Highly Scalable Block-Level Storage: Storage area networks can support thousands of
drives, with RAID arrays and expansion units, to build petabyte-scale systems. Users can
start small with terabytes of storage capacity and easily scale up as data grows.
• Data Loss Resistant Infrastructure: In addition to native DR support, SAN systems also support backup capabilities, such as immutable snapshots and more, to make sure that in the event of a disaster, your critical block data is safe and recoverable.
Cloud Disaster Recovery:-
• The term cloud disaster recovery refers to the strategies applied for backing up applications, resources, and data into a cloud environment.
• If disaster hits, enterprises can restore data from backups kept in cloud environments.
• Cloud Disaster Recovery (CDR) is based on a service that lets you recover full system functionality remotely, into a protected virtual environment.
• Companies no longer have to waste a lot of time transmitting data backups from their in-house databases or hard drives in order to restore operations after a disaster.
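As one concrete example of cloud-side backup, the sketch below takes a point-in-time snapshot of an EC2 volume with boto3, which can later seed a restored volume after a disaster; the volume ID is a placeholder and AWS credentials are assumed to be configured.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Snapshot a (placeholder) volume; the provider stores it redundantly.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly DR backup",
)
print("snapshot started:", snapshot["SnapshotId"], snapshot["State"])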
Types of disasters:
• Natural disasters: Include events like earthquakes or floods. If an event strikes the area containing the servers hosting an organization’s cloud service, it could disrupt services and require immediate disaster recovery operations.
• Human disasters: These include anything related to human beings, such as accidental
data loss, inadvertent misconfigurations, or malicious third-party access (Ransomware,
Malware, Data Breaches).
• Technical disasters: Include things that could go wrong with the technology, such as loss
of network connectivity or power failures.
1. Preventive: Ensuring your systems are as secure and reliable as possible, using tools and
techniques to prevent a disaster from occurring in the first place. This may include
backing up critical data or continuously monitoring environments for configuration
errors and compliance violations.
2. Detective: For rapid recovery, you'll need to know when a response is necessary. These measures focus on detecting or discovering unwanted events as they happen in real time (a drift-detection sketch follows this list).
3. Corrective: These measures are aimed at planning for potential DR scenarios, ensuring
backup operations to reduce impact, and putting recovery procedures into action to
restore data and systems quickly when the time comes.
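A detective control can be as simple as comparing the live configuration against an approved baseline, as in the illustrative sketch below; both configurations here are made-up examples.

# Detect configuration drift from an approved baseline (values illustrative).
baseline = {"encryption": "on", "public_access": "off", "mfa_required": "on"}
live     = {"encryption": "on", "public_access": "on",  "mfa_required": "on"}

drift = {
    key: (baseline[key], live.get(key))
    for key in baseline
    if live.get(key) != baseline[key]
}
for key, (expected, actual) in drift.items():
    print(f"ALERT: {key} is {actual!r}, expected {expected!r}")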
1. Analysis:
• The analysis phase includes a complete risk assessment and impact analysis of the organization's existing IT infrastructure.
• After identifying the risks, the IT department can identify potential weaknesses and
disasters.
• The organization can then evaluate how its current infrastructure stands against the identified challenges and determine which workloads need protection.
2. Implementation:
• The implementation phase helps the organization outline the steps and technologies
needed to address disasters.
• The goal is to devise a plan that allows the organization to promptly implement all
necessary measures while responding to disasters.
• Preparedness: A detailed plan explaining how the organization will respond during disaster events, including clear roles and responsibilities.
• Response: The manual and automated measures the organization will implement in
response to a disaster event
• Recovery: These are manual and automated measures in place to help the organization
quickly recover data it needs to resume normal operations
3. Testing:
• Organizations need to test their cloud-based disaster recovery strategies and plans and update them regularly.
• This helps ensure that employees remain adequately trained and that the plan remains relevant.
• Testing also ensures that the automated processes and technologies are working
correctly and ready for use.
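One simple automated test is to restore a backup copy and verify its integrity, as in the sketch below. The expected checksum would be recorded when the backup was taken; the file name and checksum here are placeholders.

import hashlib

def sha256_of(path):
    # Checksum the file in 1 MiB chunks so large backups fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "checksum-recorded-at-backup-time"  # placeholder
actual = sha256_of("restored.db")              # placeholder restored file
print("restore OK" if actual == expected else "restore FAILED: checksum mismatch")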