Unit 4
Security management covers all aspects of protecting an organization's assets, including computers, people, and buildings, against risk. A security management strategy begins by identifying these assets, then developing and implementing policies and procedures to protect them, and maintaining and maturing these programs over time.
Security management and monitoring involves the processes and tools used to ensure the protection of an
organization's information systems, assets, and data from security threats. This field is broad and
encompasses several key areas, each with its own scope and activities:
1. Network Security Monitoring
Scope: Monitoring the organization's network traffic and communication to detect and respond to
security threats.
Activities:
o Using intrusion detection/prevention systems (IDS/IPS).
o Monitoring firewalls, routers, and switches for suspicious traffic.
o Packet inspection and flow analysis.
o Analyzing network behavior to detect anomalies.
o Event correlation and alerting through security information and event management (SIEM) systems (see the sketch after this list).
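As a rough illustration of SIEM-style event correlation, the Python sketch below groups failed-login events per source IP and raises an alert when a threshold is exceeded inside a sliding window. The log format, window, and threshold are illustrative assumptions, not any particular SIEM product's schema.

from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical parsed log events: (timestamp, source_ip, event_type)
events = [
    (datetime(2024, 1, 1, 10, 0, 5), "10.0.0.7", "failed_login"),
    (datetime(2024, 1, 1, 10, 0, 9), "10.0.0.7", "failed_login"),
    (datetime(2024, 1, 1, 10, 0, 12), "10.0.0.7", "failed_login"),
    (datetime(2024, 1, 1, 10, 1, 0), "10.0.0.9", "failed_login"),
]

WINDOW = timedelta(minutes=1)   # correlation window (assumed)
THRESHOLD = 3                   # alert after this many failures (assumed)

def correlate(events):
    """Yield alerts when failed logins from one IP reach THRESHOLD within WINDOW."""
    by_ip = defaultdict(list)
    for ts, ip, kind in sorted(events):
        if kind != "failed_login":
            continue
        by_ip[ip].append(ts)
        # keep only timestamps inside the sliding window
        by_ip[ip] = [t for t in by_ip[ip] if ts - t <= WINDOW]
        if len(by_ip[ip]) >= THRESHOLD:
            yield f"ALERT: {len(by_ip[ip])} failed logins from {ip} within {WINDOW}"

for alert in correlate(events):
    print(alert)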
2. Endpoint Security Monitoring
Scope: Ensuring the security of devices (workstations, servers, mobile devices) connected to the
organization's network.
Activities:
o Monitoring for malware or suspicious activity on devices.
o Implementing and monitoring antivirus and endpoint detection and response (EDR)
systems.
o Vulnerability management to ensure systems are patched.
o Device logging and activity monitoring.
3. Data Security Monitoring
Scope: Ensuring that sensitive data is protected and its use is monitored.
Activities:
o Monitoring data access, usage, and transmission.
o Implementing encryption for data at rest and in transit (a sketch follows this list).
o Auditing data storage locations for compliance with regulations (e.g., GDPR, HIPAA).
o Monitoring database activities for anomalous queries or access patterns.
o Ensuring secure data deletion practices.
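To make encryption of data at rest concrete, here is a minimal sketch using the third-party Python cryptography package (Fernet symmetric encryption). The inline key generation and the sample record are simplifying assumptions; in practice the key would come from a key management service.

from cryptography.fernet import Fernet

# In production the key would be retrieved from a key management service,
# not generated inline next to the data.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"patient_id=123;diagnosis=redacted"   # example sensitive record

ciphertext = fernet.encrypt(record)     # this is what gets stored at rest
plaintext = fernet.decrypt(ciphertext)  # decrypt only when authorized

assert plaintext == record
print("encrypted length:", len(ciphertext))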
4. Cloud Security Management
Scope: Managing and monitoring security in cloud environments, which often have shared
responsibility between cloud providers and the organization.
Activities:
o Implementing cloud-native security tools (e.g., AWS CloudTrail, Azure Security Center).
o Monitoring for misconfigurations in cloud environments (see the sketch after this list).
o Auditing cloud access logs and user activity.
o Managing cloud identity and access management (IAM) policies.
o Ensuring cloud data is encrypted and secure.
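The idea behind misconfiguration monitoring can be sketched as a scan of a resource inventory against a declarative baseline. The resource fields and rules below are illustrative assumptions rather than any specific provider's API.

# Illustrative cloud resource inventory (as might be exported from a provider API).
resources = [
    {"name": "logs-bucket", "type": "storage", "public": False, "encrypted": True},
    {"name": "backup-bucket", "type": "storage", "public": True, "encrypted": False},
]

# Baseline rules: each returns an issue string or None.
RULES = [
    lambda r: f"{r['name']}: publicly accessible" if r.get("public") else None,
    lambda r: f"{r['name']}: encryption at rest disabled" if not r.get("encrypted") else None,
]

def scan(resources):
    """Return a list of misconfiguration findings for the inventory."""
    findings = []
    for resource in resources:
        for rule in RULES:
            issue = rule(resource)
            if issue:
                findings.append(issue)
    return findings

for finding in scan(resources):
    print("MISCONFIGURATION:", finding)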
5. Compliance and Risk Management
Scope: Ensuring that the organization meets regulatory, legal, and policy requirements regarding
security and data protection.
Activities:
o Continuous monitoring of security policies and standards.
o Ensuring compliance with frameworks such as ISO 27001, NIST, or industry-specific
regulations.
o Performing regular security audits and vulnerability assessments.
o Monitoring third-party vendor security and compliance.
o Risk assessments to identify, mitigate, and prioritize potential security risks.
6. Physical Security Monitoring
Scope: Protecting physical assets and ensuring they are not compromised.
Activities:
o Monitoring physical access control systems (e.g., biometric scanners, access badges).
o Surveillance systems monitoring.
o Protecting sensitive physical locations (e.g., data centers).
o Ensuring that physical security aligns with cyber security policies (e.g., ensuring that
only authorized personnel can access certain systems).
9. User Behavior Monitoring
Scope: Monitoring user activities and behaviors to detect insider threats or risky actions.
Activities:
o Implementing User and Entity Behavior Analytics (UEBA) to detect unusual behavior (a simple sketch follows this list).
o Monitoring for suspicious activity like downloading large amounts of data or using
unapproved applications.
o Enforcing acceptable use policies through monitoring tools.
o Training users on safe behaviors and flagging suspicious activities.
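A toy sketch of the statistical idea behind UEBA: flag a user whose daily download volume deviates strongly from their own baseline. The sample values and the z-score threshold are assumptions to be tuned per environment.

import statistics

# Hypothetical daily download volumes (MB) for one user over recent days.
history = [120, 95, 130, 110, 105, 98, 125]
today = 2400  # today's volume to evaluate

mean = statistics.mean(history)
stdev = statistics.stdev(history)

z_score = (today - mean) / stdev
if z_score > 3:  # threshold is a tunable assumption
    print(f"Suspicious: today's download {today} MB, z-score {z_score:.1f}")
else:
    print("Within normal behavior")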
10. Security Metrics and Reporting
Scope: Measuring and reporting on the effectiveness of the organization’s security posture.
Activities:
o Tracking key security metrics like number of incidents, response times, and vulnerabilities (see the sketch after this list).
o Reporting security status to management and regulatory bodies.
o Continuous improvement based on metrics and lessons learned from past incidents.
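As a small worked example, the sketch below computes two common metrics, incident count and mean time to resolve, from hypothetical incident records.

from datetime import datetime

# Hypothetical incident records: detection and resolution timestamps.
incidents = [
    {"id": 1, "detected": datetime(2024, 3, 1, 9, 0), "resolved": datetime(2024, 3, 1, 11, 30)},
    {"id": 2, "detected": datetime(2024, 3, 5, 14, 0), "resolved": datetime(2024, 3, 5, 15, 0)},
]

# Duration of each incident in hours.
durations = [(i["resolved"] - i["detected"]).total_seconds() / 3600 for i in incidents]

print("Incidents this period:", len(incidents))
print("Mean time to resolve (hours):", round(sum(durations) / len(durations), 2))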
Conclusion
Security management comes in various forms. Three common types of security management strategy are information, network, and cyber security management.
Information security management includes implementing security best practices and standards designed
to mitigate threats to data like those found in the ISO/IEC 27000 family of standards. Information security
management programs should ensure the confidentiality, integrity, and availability of data.
Many organizations have internal policies for managing access to data, but some industries have external
standards and regulations as well. For example, healthcare organizations are governed by the Health
Insurance Portability and Accountability Act (HIPAA), and the Payment Card Industry Data Security
Standard (PCI DSS) protects payment card information.
Cybersecurity management refers to a more general approach to protecting an organization and its IT
assets against cyber threats. This form of security management includes protecting all aspects of an
organization’s IT infrastructure, including the network, cloud infrastructure, mobile devices, Internet of
Things (IoT) devices, and applications and APIs.
Cloud-based services are now a crucial component of many businesses, with technology providers
adhering to strict privacy and data security guidelines to protect the privacy of user information. Cloud
security standards assist and guide organizations in ensuring secure cloud operations.
1. NIST Cybersecurity Framework
NIST is a US federal agency that creates metrics and standards to boost competitiveness in the science and technology industries. The National Institute of Standards and Technology (NIST) developed the Cybersecurity Framework to support compliance with US regulations such as the Federal Information Security Management Act (FISMA) and the Health Insurance Portability and Accountability Act (HIPAA). NIST places a strong emphasis on classifying assets according to their commercial value and adequately protecting them.
2. ISO-27017
This standard provides guidance on information security controls for the provision and use of cloud services, building on the controls in ISO/IEC 27002.
3. ISO-27018
The protection of personally identifiable information (PII) in public clouds that serve as PII processors
is covered by this standard. Despite the fact that this standard is especially aimed at public-cloud
service providers like AWS or Azure, PII controllers (such as a SaaS provider processing client PII in
AWS) nevertheless bear some accountability. If you are a SaaS provider handling PII, you should think
about complying with this standard.
4. CIS controls
Organizations can secure their systems with the help of the Center for Internet Security (CIS) Controls, a set of consensus-based, openly available safeguards. Each control is rigorously reviewed by a number of professionals before a conclusion is reached.
To easily access a list of evaluations for cloud security, consult the CIS Benchmarks customized for
particular cloud service providers. For instance, you can use the CIS-AWS controls, a set of controls
created especially for workloads using Amazon Web Services (AWS).
5. FISMA
In accordance with the Federal Information Security Management Act (FISMA), all federal agencies and their contractors are required to safeguard information systems and assets. Under FISMA, NIST was given authority to define the framework's security standards, which it did in NIST SP 800-53.
6. Cloud Architecture Framework
These frameworks, which frequently cover operational effectiveness, security, and cost-value factors, can be viewed as best-practice standards for cloud architects. One example, the Well-Architected Framework developed by Amazon Web Services, aids architects in designing workloads and applications on the Amazon cloud. Customers gain a reliable resource for architecture evaluation because this framework is based on a collection of questions for analyzing cloud environments.
7. GDPR
The General Data Protection Regulation (GDPR) governs data protection and privacy in the European Union. Even though this law applies only to the European Union, you should keep it in mind if you store or otherwise handle any personal information of EU residents.
8. SOC Reporting
A “System and Organization Controls 2” (SOC 2) report is a form of audit of the operational processes used by IT businesses offering any service. SOC 2 reporting is a widely used standard for cybersecurity risk management systems. The SOC 2 audit report shows that your company’s policies, practices, and controls are in place to meet the five trust service principles: security, availability, processing integrity, confidentiality, and privacy. If you offer software as a service, potential clients might request proof that you adhere to SOC 2 standards.
9. PCI DSS
The PCI DSS (Payment Card Industry Data Security Standard) provides a set of security criteria for all merchants that accept credit or debit cards and for businesses that handle cardholder data. The PCI DSS specifies fundamental technological and operational requirements for safeguarding cardholder data, and the standard is intended to protect cardholders from identity theft and credit card fraud.
10. HIPAA
The Health Insurance Portability and Accountability Act (HIPAA), passed by the US Congress to safeguard individual health information, also has parts specifically dealing with information security. Businesses that handle medical data must abide by HIPAA. The HIPAA Security Rule (HSR) is the most relevant part in terms of information security: it specifies rules for protecting people’s electronic personal health information that a covered entity generates, acquires, uses, or maintains.
Organizations subject to HIPAA regulations need risk evaluations and risk management plans to reduce
threats to the availability, confidentiality, and integrity of the crucial health data they manage. Assume
your company sends and receives health data via cloud-based services (SaaS, IaaS, PaaS). If so, it is
your responsibility to make sure the service provider complies with HIPAA regulations and that you
have implemented best practices for managing your cloud setups.
11. CIS AWS Foundations Benchmark
Any business that uses Amazon Web Services cloud resources can help safeguard sensitive IT systems and data by adhering to the CIS AWS Foundations Benchmark. The CIS (Center for Internet Security) Benchmarks are objective, consensus-driven configuration standards developed by communities of security professionals to help businesses improve their information security. The CIS AWS Foundations Benchmark provides procedures for hardening AWS accounts to build a solid foundation for running workloads on AWS.
12. ACSC Essential Eight
The ACSC Essential Eight (which grew out of the earlier ASD Top 4) is a list of eight cybersecurity mitigation strategies for organizations of all sizes. The Australian Signals Directorate (ASD) and the Australian Cyber Security Centre (ACSC) developed the Essential Eight to improve security controls, protect organizations’ computing resources and systems, and protect data from cybersecurity attacks.
The ITIL service lifecycle is a powerful tool for improving IT service management and aligning IT
services with business goals. The ITIL (Information Technology Infrastructure Library) lifecycle is a
framework for IT Service Management (ITSM) that provides structured guidance on delivering IT
services in an efficient and customer-focused way. It is widely used in enterprises to ensure the alignment
of IT services with business goals. The ITIL lifecycle consists of five stages, each focusing on different
aspects of IT service management.
1. Service Strategy
Objective: Define the IT services that will meet business objectives and requirements.
Key Focus: The long-term vision and strategy for IT services.
Activities:
o Service Portfolio Management: Determine which services should be provided, continued,
or discontinued.
o Financial Management: Ensure cost-effectiveness of services and justify IT spending.
o Demand Management: Anticipate and respond to demand for services, balancing supply.
o Business Relationship Management: Maintain good relationships with customers and
stakeholders, understanding their needs.
Outcome: Strategic planning for IT services that supports business objectives, creates value, and aligns IT
with the business.
2. Service Design
Objective: Design IT services, along with the practices and processes necessary to meet business
needs.
Key Focus: Translate business requirements into effective service solutions.
Activities:
o Service Catalog Management: Define and maintain a catalog of all services offered.
o Service Level Management: Negotiate and monitor service level agreements (SLAs).
o Capacity Management: Ensure IT infrastructure is capable of meeting service needs
without over-provisioning.
o Availability Management: Ensure services are available as required.
o IT Service Continuity Management: Plan for disaster recovery and business continuity.
o Security Management: Design services with security considerations in mind.
o Supplier Management: Manage relationships with third-party vendors and suppliers.
Outcome: Well-designed services, processes, and infrastructure that meet current and future business
needs.
3. Service Transition
Objective: Manage the transition of new or changed services into the live environment while
minimizing disruptions.
Key Focus: Smooth and controlled changes, ensuring the reliability and quality of services before
they go live.
Activities:
o Change Management: Control and authorize changes to minimize the impact on existing
services.
o Release and Deployment Management: Plan, schedule, and manage the release of new
services into the production environment.
o Configuration Management: Keep track of all components (hardware, software,
documentation) and how they interact.
o Knowledge Management: Ensure relevant knowledge is available to help IT staff and
users.
o Service Asset and Configuration Management (SACM): Manage all service assets and
their configurations.
Outcome: Successful implementation of new or modified services with minimal risk and disruption.
4. Service Operation
Objective: Ensure that IT services are delivered effectively and efficiently on a day-to-day basis.
Key Focus: Operational stability and efficiency, delivering services according to the SLAs.
Activities:
o Incident Management: Restore normal service operation as quickly as possible after an
incident (service disruption).
o Problem Management: Identify and remove the root causes of incidents.
o Request Fulfillment: Manage service requests from users (e.g., password resets, software
installation).
o Event Management: Monitor for significant events (e.g., thresholds being exceeded) and
act upon them.
o Access Management: Ensure authorized users can access services and prevent
unauthorized access.
Outcome: Stable, efficient, and consistent delivery of services, maintaining the agreed service levels.
5. Continual Service Improvement
Objective: Continuously improve services, processes, and service delivery to create more value for the business.
Key Focus: Identifying areas for improvement and ensuring those improvements are
implemented.
Activities:
o Reviewing Performance: Analyze data and metrics to identify underperformance.
o Assessing Processes: Evaluate existing processes for efficiency and effectiveness.
o Customer Feedback: Gather feedback to identify opportunities for improvement.
o Benchmarking: Compare performance against industry standards or best practices.
o Implementing Improvements: Make recommendations for improvements and implement
them.
Outcome: Continuous improvement in service quality, efficiency, and alignment with business objectives.
Each stage of the ITIL lifecycle interacts with the others, creating a feedback loop where services are
constantly evaluated, improved, and realigned with the evolving needs of the enterprise. This holistic
approach ensures that IT services not only support the business but also drive it forward.
The ITIL service lifecycle has many benefits for modern IT service management:
1. Practical and organized: The lifecycle provides a clear plan for managing IT services from start to
finish.
2. Clear roles and responsibilities: Each stage of the lifecycle outlines specific roles and
responsibilities, ensuring accountability and efficiency.
3. Focuses on continuous improvement: The lifecycle encourages ongoing efforts to improve IT
services, keeping organizations flexible and able to adapt to changes.
4. Aligns with business needs: Starting with Service Strategy, the lifecycle ensures that IT services
support the organization’s overall goals.
ISO/IEC 27001 and ISO/IEC 27002 are internationally recognized standards for information security management. Together, they form
a critical part of the ISO 27000 series, a set of standards designed to help organizations manage the
security of assets such as financial information, intellectual property, employee details, and information
entrusted by third parties.
ISO/IEC 27001
Overview
ISO/IEC 27001 is a specification for an Information Security Management System (ISMS). It provides a
systematic approach to managing sensitive company information, ensuring it remains secure. The
standard outlines how to establish, implement, maintain, and continually improve an ISMS, with a focus
on managing risks and protecting information confidentiality, integrity, and availability.
2. Management Responsibility:
o Senior management must be involved in and committed to the ISMS, ensuring that
security aligns with business objectives.
3. Security Objectives:
o Organizations need to define clear security objectives, tied to the organization's risk
assessment, and regularly measure performance against these objectives.
4. Continual Improvement:
o Organizations must regularly review and improve their ISMS to respond to new threats,
vulnerabilities, and changes in the business environment.
5. Documentation Requirements:
o Documented policies, procedures, and controls must be maintained and updated to reflect
the current state of information security management.
6. Internal Audits:
o The organization must conduct regular internal audits to ensure compliance with ISO/IEC
27001 requirements and identify areas for improvement.
8. ISMS Scope:
o The scope of the ISMS must be clearly defined, covering specific areas, departments, or
the entire organization depending on needs.
Certification:
ISO/IEC 27001 certification is a formal process where an accredited certification body audits an
organization’s ISMS to ensure it meets the standard's requirements. Achieving certification
demonstrates to stakeholders, clients, and regulators that the organization takes information
security seriously.
ISO/IEC 27002
Overview
ISO/IEC 27002 is a code of practice that provides best-practice guidance on implementing information
security controls. It complements ISO/IEC 27001 by providing detailed descriptions of the information
security controls that can be selected and implemented as part of the ISMS.
1. Security Policies:
o Guidelines for creating, approving, and managing security policies that are consistent
with an organization's security objectives.
4. Asset Management:
o Identifying organizational assets (data, devices, systems) and ensuring they are properly
managed and protected.
5. Access Control:
o Establishing rules for controlling access to information systems, ensuring that only
authorized individuals can access sensitive data.
6. Cryptography:
o Guidelines for the use of cryptographic controls to protect the confidentiality, integrity,
and availability of information.
8. Operations Security:
o Managing and controlling the operation of information systems, including backups,
logging, and monitoring for suspicious activities.
9. Communications Security:
o Ensuring secure communication within and outside the organization, protecting the
transfer of data over networks.
12. Compliance:
o Ensuring that information security controls comply with legal, regulatory, and contractual
requirements.
ISO/IEC 27002 provides a list of controls divided into categories such as those above. Each control describes an objective and the best practices for achieving it, helping organizations design and implement the right safeguards.
ISO/IEC 27001 defines the overall requirements for an ISMS, focusing on risk management and
governance.
ISO/IEC 27002 is a guidance document that provides detailed best practices for implementing
security controls that can be chosen based on the risks identified during the ISMS development in
ISO/IEC 27001.
In other words, while ISO/IEC 27001 explains what needs to be done, ISO/IEC 27002 provides guidance
on how to implement specific controls.
Conclusion
ISO/IEC 27001 and 27002 are essential tools for enterprises looking to implement robust information
security management systems. They provide both a strategic framework (ISO/IEC 27001) and practical
guidance (ISO/IEC 27002) for safeguarding critical business information, ensuring compliance, and
managing risk effectively.
Organizations that certify their ISMS to the requirements of ISO/IEC 27001 gain several significant benefits, including regulatory compliance, a systematic approach, reduced risk, reduced costs, and market advantage.
Technical controls - Primarily implemented in information systems, using software, hardware, and
firmware components.
Organizational controls - Implemented by defining rules to be followed by users, equipment, software,
and systems.
Legal controls - Implemented by ensuring that rules follow and enforce the laws, regulations, contracts,
and other similar legal instruments the organization must comply with.
Physical controls - Implemented by using equipment or devices that interact physically with people and
objects.
Human resource controls - Implemented by providing people with knowledge, education, skills, or
experience to enable them to perform their activities securely.
The following processes are recommended security management focus areas for securing services in the cloud:
• Availability management (ITIL)
• Access control (ISO/IEC 27002, ITIL)
• Vulnerability management (ISO/IEC 27002)
• Patch management (ITIL)
• Configuration management (ITIL)
• Incident response (ISO/IEC 27002)
• System use and access monitoring (ISO/IEC 27002)
In the context of cloud delivery models (Software as a Service - SaaS, Platform as a Service - PaaS, and
Infrastructure as a Service - IaaS) and deployment models (private and public cloud), security
management plays a critical role in ensuring the confidentiality, integrity, and availability of services and
data. Each cloud model brings different levels of control, responsibility, and risk for both the cloud
provider and the cloud consumer.
Here’s a breakdown of relevant security management functions for each SPI cloud delivery model across
private and public deployment models:
In SaaS, the cloud provider manages most of the infrastructure and applications, while the consumer
focuses on the configuration and use of the application.
Security Management Functions in Private Cloud:
Access Control: The enterprise must ensure strict identity and access management (IAM) to
control who has access to the SaaS applications.
Data Encryption: Ensure that data is encrypted both at rest and in transit, especially if the SaaS
application is handling sensitive information.
Data Sovereignty: Implement controls to ensure data remains compliant with local regulations
regarding where it is stored and processed.
Compliance Audits: Regular audits to ensure the private cloud provider’s adherence to industry-
specific security standards (e.g., HIPAA, GDPR).
Application Security: Monitor for vulnerabilities in the SaaS application and perform regular
security testing (e.g., penetration testing).
Security Management Functions in Public Cloud:
Vendor Risk Management: Assess the security posture of the SaaS provider, ensuring they adhere
to SLAs, data protection, and regulatory requirements.
Data Backup and Recovery: Ensure that the SaaS provider has robust backup and disaster
recovery mechanisms in place, and that you understand your role in recovery processes.
User and Device Authentication: Use multi-factor authentication (MFA) and role-based access
control (RBAC) to limit exposure to unauthorized access.
Security Monitoring: Implement logging and monitoring to track suspicious activity and detect
potential breaches within the SaaS environment.
Data Isolation: Ensure that the public cloud provider uses proper data isolation mechanisms to
separate your data from that of other tenants.
In PaaS, the cloud provider manages the infrastructure, while the consumer controls application
deployment and configuration.
Security Management Functions in Private Cloud:
Secure Development Lifecycle: Implement security checks during application development and
testing phases within the PaaS environment.
Access Controls: Use robust access controls to limit developer and user access to critical PaaS
environments.
Vulnerability Management: Regularly patch and update applications deployed on the PaaS to
protect against vulnerabilities.
Network Segmentation: Ensure proper network segmentation within the private cloud to limit the
exposure of sensitive data and applications.
Application Security Controls: Implement strong application security testing and review (e.g.,
code review, static and dynamic testing).
API Security: Secure APIs that are used to interface with the PaaS platform, using techniques like
rate limiting, strong authentication, and encryption.
Secure Data Storage: Ensure that data stored within the PaaS platform is properly encrypted and
access is controlled through IAM policies.
Compliance Management: Implement compliance monitoring to ensure the public PaaS provider
adheres to relevant legal and regulatory requirements.
Monitoring and Logging: Use tools provided by the public cloud PaaS platform (e.g., AWS
CloudWatch, Azure Monitor) to track usage and detect anomalies.
Backup and Disaster Recovery: Ensure proper backup mechanisms are in place for the
applications and data hosted within the PaaS, with tested recovery plans.
In IaaS, the cloud provider manages the underlying infrastructure, while the consumer has full control
over the virtual machines, networking, and storage.
Shared Responsibility Model: Understand the shared responsibility model, where the cloud
provider secures the infrastructure and the consumer is responsible for securing their data,
applications, and networks.
Identity and Access Management (IAM): Implement fine-grained IAM controls to restrict access to IaaS resources, including MFA, RBAC, and least-privilege principles (a minimal RBAC sketch follows this list).
Virtual Network Security: Use public cloud networking features such as Virtual Private Clouds
(VPCs), Security Groups, and Network Access Control Lists (ACLs) to secure data and traffic.
Data Integrity and Encryption: Ensure proper encryption of data, both at rest (using tools like
AWS KMS or Azure Key Vault) and in transit using SSL/TLS.
Security Monitoring and Auditing: Use cloud-native monitoring services (e.g., AWS CloudTrail,
Azure Security Center) to log, audit, and detect security events.
Compliance and Governance: Utilize cloud provider tools to enforce governance policies and
ensure compliance with regulations such as GDPR, HIPAA, and others.
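To make RBAC and least privilege concrete, here is a minimal sketch with hypothetical roles and permissions; real IaaS deployments would express this in the provider's IAM policy language rather than in application code.

# Hypothetical role-to-permission mapping following least privilege.
ROLE_PERMISSIONS = {
    "auditor": {"read:logs"},
    "developer": {"read:logs", "deploy:app"},
    "admin": {"read:logs", "deploy:app", "manage:iam"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant access only if the role explicitly includes the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("auditor", "deploy:app"))    # False: auditors cannot deploy
print(is_allowed("developer", "deploy:app"))  # True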
Private Cloud:
o Provides greater control over security settings, as the organization manages its own cloud
environment.
o Requires in-house expertise to manage and secure infrastructure, applications, and
services.
o Security responsibilities rest largely with the organization, including physical and
network security.
o Customization of security policies is easier, and organizations can tailor them to specific
industry standards.
Public Cloud:
o Involves shared responsibility between the cloud provider and the consumer (e.g.,
provider handles physical security, consumer manages data protection and access
control).
o Scalability and agility are higher, but at the cost of less direct control over infrastructure
and data.
o Security controls are often pre-configured and limited to what the cloud provider offers.
o Compliance and governance are trickier, as organizations need to ensure that their public
cloud provider complies with relevant regulations.
1. Data Protection: Whether in public or private cloud, ensuring data is encrypted and access is
limited is crucial.
2. Compliance: Both deployment models must adhere to industry-specific regulations (e.g., GDPR,
HIPAA), though managing compliance in a public cloud can be more complex.
3. Access Control: Proper IAM, MFA, and role-based access controls are essential to secure access
in all cloud models.
4. Monitoring and Response: Continuous security monitoring and the ability to quickly respond to
incidents are critical functions in both private and public cloud environments.
5. Shared Responsibility: Particularly in public cloud models, understanding the delineation of
responsibilities between the provider and consumer is crucial for security management.
By tailoring these security management functions to the specific cloud delivery and deployment model in
use, organizations can effectively mitigate risks and ensure the security of their data and services.
AVAILABILITY MANAGEMENT
Cloud services are not immune to outages (failures or interruptions), and the severity and scope of the impact on the customer can vary based on the situation, as it depends on the criticality of the cloud application and its relationship to internal business processes.
1. Impact on business: In the case of business-critical applications where businesses rely on the
continuous availability of service, even a few minutes of service failure can have a serious impact on
the organization’s productivity, revenue, customer satisfaction, and service-level compliance.
2. Impact on customers: During a cloud service disruption, affected customers will not be able to access the cloud service and in some cases may suffer degraded performance or user experience. For example, when a storage service is disrupted, it will affect the availability and performance of a computing service that depends on the storage service.
For example, on December 20, 2005, Salesforce.com (the on-demand customer relationship
management service) said it suffered from a system outage that prevented users from accessing the
system during business hours. Users “experienced intermittent access” because of a database cluster
error in one of the company’s four global network nodes, company officials said in a statement the day
following the outage.
Factors Affecting Availability:
A cloud service’s availability and its ability to recover from an outage depend on a few factors, including the cloud service provider’s data center architecture, application architecture, hosting location redundancy, diversity of Internet service providers (ISPs), and data storage architecture.
Following is a list of the major factors:
Redundant design of Software as a Service (SaaS) and Platform as a Service (PaaS) applications.
The architecture of the cloud service data center should be fault-tolerant.
Redundant network connectivity and geographic diversity help the service withstand disasters in most cases.
Customers of the cloud service should respond quickly to outages together with the Cloud Service Provider's support team.
Sometimes an outage affects only a specific region or area of the cloud service, which makes the situation harder to troubleshoot.
The software and hardware used to deliver the cloud service should be reliable.
The network infrastructure should be efficient and able to cope with DDoS (distributed denial of service) attacks on the cloud service.
Lack of proper security against internal and external threats (e.g., privileged users abusing their privileges) reduces availability.
Regular testing and maintenance of the cloud infrastructure and applications can help identify and
fix issues before they cause downtime.
Proper capacity planning is essential to ensure that the cloud service can handle peak traffic and
usage without becoming overloaded.
Adequate backups and disaster recovery plans can help minimize the impact of outages or data loss
incidents.
Monitoring tools and alerts can help detect and respond to issues quickly, reducing downtime and
improving overall availability.
Ensuring compliance with industry standards and regulations can help minimize the risk of security
breaches and downtime due to compliance issues.
Continuous updates and patches to the cloud infrastructure and applications can help address
vulnerabilities and improve overall security and availability.
Transparency and communication with customers during outages can help manage expectations and
maintain trust in the cloud service provider.
Managing your IaaS virtual infrastructure in the cloud depends on five factors:
• Availability of a CSP network, host, storage, and support application infrastructure. This factor depends
on the following:
— CSP data center architecture, including a geographically diverse and fault-tolerant architecture.
— Reliability, diversity, and redundancy of Internet connectivity used by the customer and the
CSP.
— Reliability and redundancy architecture of the hardware and software components used for
delivering compute and storage services.
— Availability management process and procedures, including business continuity processes
established by the CSP.
— Web console or API service availability. The web console and API are required to manage the
life cycle of the virtual servers. When those services become unavailable, customers are unable to
provision, start, stop, and deprovision virtual servers.
— SLA. Because this factor varies across CSPs, the SLA should be reviewed and reconciled, including exclusion clauses (see the availability calculation after this list).
• Availability of your virtual servers and the attached storage (persistent and ephemeral) for compute services (e.g., Amazon Web Services’ S3 and Amazon Elastic Block Store).
• Availability of virtual storage that your users and virtual server depend on for storage service.
This includes both synchronous and asynchronous storage access use cases.
Synchronous storage access use cases demand low data access latency and continuous availability, whereas asynchronous use cases are more tolerant of latency and availability gaps.
Examples of synchronous storage use cases include database transactions, video streaming, and user authentication; inconsistency or disruption in synchronous storage has a higher impact on overall server and application availability. A common example of an asynchronous use case is a cloud-based storage service for backing up your computer over the Internet.
• Availability of your network connectivity to the Internet or virtual network connectivity to IaaS
services. In some cases, this can involve virtual private network (VPN) connectivity between your
internal private data center and the public IaaS cloud (e.g., hybrid clouds).
• Availability of network services, including DNS, routing services, and authentication services required to connect to the IaaS service.
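As noted in the SLA factor above, an availability percentage translates directly into permitted downtime. The sketch below converts illustrative SLA targets into monthly downtime minutes using a simplified 30-day month.

MINUTES_PER_MONTH = 30 * 24 * 60  # simplified 30-day month

def allowed_downtime_minutes(sla_percent: float) -> float:
    """Maximum monthly downtime permitted by an availability SLA."""
    return MINUTES_PER_MONTH * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% availability -> {allowed_downtime_minutes(sla):.1f} minutes/month downtime")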
Similar to SaaS service monitoring, customers who are hosting applications on an IaaS platform
should take additional steps to monitor the health of the hosted application. For example, if
you are hosting an e-commerce application on your Amazon EC2 virtual cloud, you should
monitor the health of both the e-commerce application and the virtual server instances.
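A minimal health-check sketch a customer might run against a hosted application endpoint is shown below. The URL and timeout are assumptions, and the third-party requests package is used for the HTTP call.

import time
import requests

APP_URL = "https://shop.example.com/health"  # hypothetical e-commerce health endpoint
TIMEOUT_SECONDS = 5

def check_health(url: str) -> bool:
    """Return True if the application answers with HTTP 200 within the timeout."""
    start = time.monotonic()
    try:
        response = requests.get(url, timeout=TIMEOUT_SECONDS)
    except requests.RequestException as exc:
        print("DOWN:", exc)
        return False
    latency = time.monotonic() - start
    print(f"status={response.status_code} latency={latency:.2f}s")
    return response.status_code == 200

check_health(APP_URL)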
Customers should demand that CSPs become more transparent about their cloud security operations to help customers understand and plan complementary security management functions. By and large, CSPs are responsible for the vulnerability, patch, and configuration (VPC) management of the infrastructure (networks, hosts, applications, and storage) that is CSP-managed and operated, as well as the third-party services that they may rely on. However, customers are not spared from their VPC duties and should understand the VPC aspects for which they are responsible. A VPC management scope should address end-to-end security and should include
customer-managed systems and applications that interface with cloud services. As a standard practice,
CSPs may have instituted these programs within their security management domain, but typically the
process is internal to the CSP and is not apparent to customers. CSPs should assure their customers of
their technical vulnerability management program using ISO/IEC 27002 type control and assurance
frameworks.
The scope of patch management responsibility for customers will have a low-to-high relevance in the
order of SaaS, PaaS, and IaaS services—that is, customers are relieved from patch management duties in
a SaaS environment, whereas they are responsible for managing patches for the whole stack of software
(operating system, applications, and database) installed and operated on the IaaS platform. Customers are
also responsible for patching their applications deployed on the PaaS platform.
Vulnerability alerts
Customers should understand the means by which PaaS providers, companies, or communities supporting the PaaS programming language disseminate vulnerability-related information to customers. PaaS providers can choose email, RSS, or a web portal to communicate with their customers. Likewise, you should choose the appropriate methods to stay informed of any new vulnerability in the platform or the third-party service providers.
Configuration standards
The OS, application server, database, and web server must be installed and configured in accordance
with least-privilege and security hardening principles to reduce their overall attack surface. For example,
the Center for Internet Security publishes Internet security benchmarks for major OS, databases, and
application servers based on recognized best practices for deployment, configuration, and operation of
networked systems. The center’s security-enhancing benchmarks encompass all three factors in Internet-
based attacks and disruptions: technology (software and hardware), process (system and network
administration), and human (end user and management behavior).
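The spirit of checking a configuration against a hardening benchmark can be sketched as follows; the expected sshd_config directives are illustrative stand-ins, not the actual CIS benchmark values.

# Illustrative hardening checks in the spirit of a CIS-style benchmark.
EXPECTED = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "X11Forwarding": "no",
}

def audit_sshd(path: str = "/etc/ssh/sshd_config") -> list[str]:
    """Compare sshd_config directives against the expected hardened values."""
    actual = {}
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            # skip blank lines and comments, keep "Directive value" pairs
            if len(parts) >= 2 and not line.lstrip().startswith("#"):
                actual[parts[0]] = parts[1]
    return [f"{key} should be '{val}', found '{actual.get(key, 'unset')}'"
            for key, val in EXPECTED.items() if actual.get(key) != val]

for finding in audit_sshd():
    print("FINDING:", finding)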
Configuration management
This refers to centralized configuration management where the appropriate configuration information is
necessary to manage a large number of nodes and zones in a public IaaS cloud. Numerous configuration
management tools are available, including open source tools (e.g., Puppet) and tools from commercial
vendors such as BMC, Configuresoft, HP, Microsoft, and IBM. However, configuration management of
virtual servers hosted in the cloud will require customization per CSP, given the uniqueness of the CSP-
specific management API.
Internet policy
Allow traffic between customer virtual servers and hosts on the Internet (e.g., allow only ports 22, 80, and
443 to servers). Deny all outbound traffic initiated from customer virtual servers.
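Expressed as a small rule table, the policy above might look like the following sketch, which evaluates inbound and outbound flows; ports and semantics are simplified and do not reflect any specific provider's security-group model.

ALLOWED_INBOUND_PORTS = {22, 80, 443}  # per the policy: SSH, HTTP, HTTPS only

def is_permitted(direction: str, port: int) -> bool:
    """Inbound traffic is allowed only on the listed ports; all outbound traffic
    initiated from the customer virtual servers is denied, as the policy states."""
    if direction == "inbound":
        return port in ALLOWED_INBOUND_PORTS
    return False  # deny all outbound

print(is_permitted("inbound", 443))   # True
print(is_permitted("inbound", 3389))  # False
print(is_permitted("outbound", 80))   # False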
PRIVACY
“The rights and obligations of individuals and organizations with respect to the collection, use, retention,
and disclosure of personal information.”
Data lifecycle refers to a process that helps organizations manage the different phases in the life of certain critical objects, such as vendor, customer, employee, and material data, throughout their lifecycle. This begins with initial creation and continues through the lifecycle to end of life.
It is necessary for organizations to understand that each stage of a lifecycle provides its own set of defined activities. By clearly identifying the various phases, stages, or statuses of data objects in their data lifecycle (the foundation of the data quality lifecycle), organizations are able to manage them efficiently and effectively. Defining this basis is crucial to any data quality uplift activities that your organization may plan for its data objects.
Creation
This first phase focuses on the initial acquisition, entry, creation, or capture of the data object. In our
example, the creation of an employee as a data object begins when a legal contract binds a potential
candidate to a firm, transforming this potential candidate to an ‘employee’ with a job start date.
In another example, the creation of a product begins with activities such as defining and designing
the product. These activities only pertain to the creation stage and are no longer applicable once the
product becomes an active product.
Active
This phase focuses on the usage of data and takes into consideration how the data is made available for use, processed, modified, or shared. In our example, a product in an active stage could be produced, purchased, sold, utilized, etc. On the other hand, an employee in an active stage could receive a salary, must maintain working hours, can subscribe to training, is granted access to certain environments, and has an active badge.
Personal information should be managed as part of the data used by the organization. It should be
managed from the time the information is conceived through to its final disposition.
Transformation
• Derivation: Are the original protection and use limitations maintained when data is transformed or
further processed in the cloud?
• Aggregation: Is data in the cloud aggregated so that it is no longer related to an identifiable individual
(and hence is no longer considered PII)?
• Integrity: Is the integrity of PII maintained when it is in the cloud?
Storage
• Access control: Are there appropriate controls over access to PII when stored in the cloud so that only
individuals with a need to know will be able to access it?
• Structured versus unstructured: How is the data stored to enable the organization to access and manage
the data in the future?
• Integrity/availability/confidentiality: How are data integrity, availability, and confidentiality maintained
in the cloud?
• Encryption: Several laws and regulations require that certain types of PII should be stored only when
encrypted. Is this requirement supported by the CSP?
Archival
• Legal and compliance: PII may have specific requirements that dictate how long it should be stored and
archived. Are these requirements supported by the CSP?
• Off-site considerations: Does the CSP provide the ability for long-term off-site storage that supports
archival requirements?
• Media concerns: Is the information stored on media that will be accessible in the future? Is the information stored on portable media that may be more susceptible to loss? Who controls the media, and what is the organization’s ability to recover such media from the CSP if needed?
• Retention: For how long will the data be retained by the CSP? Is the retention period consistent with the
organization’s retention period?
Destruction
• Secure: Does the CSP destroy PII obtained by customers in a secure manner to avoid potential breach of
the information?
• Complete: Is the information completely destroyed? Does the destruction completely erase the data, or
can it be recovered?
The impact differs based on the specific cloud model used by the organization, the phase of the personal information lifecycle described earlier, and the nature of the organization.
Access
Data subjects have a right to know what personal information is held and, in some cases, can make a
request to stop processing it. This is especially important with regard to marketing activities; in some
jurisdictions, marketing activities are subject to additional regulations and are almost always addressed in
the end user privacy policy for applicable
organizations. In the cloud, the main concern is the organization’s ability to provide the individual with
access to all personal information, and to comply with stated requests. If a data subject exercises this right
to ask the organization to delete his data, will it be possible to ensure that all of his information has been
deleted in the cloud?
Compliance
What are the privacy compliance requirements in the cloud? What are the applicable laws, regulations,
standards, and contractual commitments that govern this information, and who is responsible for
maintaining the compliance? How are existing privacy compliance requirements impacted by the move to
the cloud? Clouds can cross multiple jurisdictions; for example, data may be stored in multiple countries,
or in multiple states within the United States. What is the relevant jurisdiction that governs an entity’s
data in the cloud and how is it determined?
Storage
Where is the data in the cloud stored? Was it transferred to another data center in another country? Is it
commingled with information from other organizations that use the same CSP? Privacy laws in various
countries place limitations on the ability of organizations to transfer some types of personal information
to other countries. When the data is stored in the cloud, such a transfer may occur without the knowledge
of the organization, resulting in a potential violation of the local law.
Retention
How long is personal information (that is transferred to the cloud) retained? Which retention policy
governs the data? Does the organization own the data, or the CSP? Who enforces the retention policy in
the cloud, and how are exceptions to this policy (such as litigation holds) managed?
Destruction
How does the cloud provider destroy PII at the end of the retention period? How do organizations ensure
that their PII is destroyed by the CSP at the right point and is not available to other cloud users? How do
they know that the CSP didn’t retain additional copies? Cloud storage providers usually replicate the data
across multiple systems and
sites—increased availability is one of the benefits they provide. This benefit turns into a challenge when
the organization tries to destroy the data—can you truly destroy information once it is in the cloud? Did
the CSP really destroy the data, or just make it inaccessible to the organization? Is the CSP keeping the
information longer than necessary so that it can mine the data for its own use?
Privacy breaches
How do you know that a breach has occurred, how do you ensure that the CSP notifies you when a breach
occurs, and who is responsible for managing the breach notification process (and costs associated with the
process)? If contracts include liability for breaches resulting from negligence of the CSP, how is the
contract enforced and how is it determined who is at fault?
In the privacy arena, lack of specifics on data collection with providers creates misunderstandings down
the road. For instance, one global outsourcer said, “Clients come in expecting the right things in security,
but the wrong things in privacy. They are expecting best practices, but they don’t know what they are.”
There are comprehensive security frameworks and standards (such as the ISO 27000 series, NIST
guidelines, etc.), and organizations know how to implement them. There is no universally adopted
privacy standard—instead, there are conflicting laws, regulations, and views on what privacy is and what
it requires from organizations to protect it. Many organizations want to do what they perceive to be “the
right thing”; however, their perception may be different from the law. As a result, there may be different
expectations regarding what privacy means between the organization and the CSP, and no agreed best
practices.
It is essential that service-level agreements (SLAs) are initially defined before any information is
provided or shared, because it is very hard to negotiate them later. If you start the request for proposal
(RFP) process with an SLA target, you will be able to disqualify providers who cannot meet your stated
needs. Well-defined security and privacy SLAs should be part of the statement of work (SOW). Ensure
that your SLAs have teeth with specific penalty clauses. Do not cede command of service-level
negotiation to the provider.
Moreover, organizations face the risk that, as different data elements about individuals are collected and later merged, the combined information may exceed what is needed for the original purpose, and the organization may be in potential violation of local laws.
Cloud computing places a diverse collection of user and business information in a single location. As data
flows through the cloud, strong data governance is needed to ensure that the original purpose of collection
and limitation on use is attached to the data. This is critical when organizations create a centralized
database, because future applications can easily combine the data via expanded views that are utilized for
new purposes never approved by data subjects. The ability to combine data from multiple sources
increases the risk of unexpected uses by governments. Governments in different countries could ask CSPs
to report on particular types of behaviors or to monitor activities of particular types or categories of users.
The possibility that a CSP could be obliged to inform a government or a third party about user activities
might be troubling to the provider as well as to its users.
Security Principle
Security is one of the key requirements to enable privacy. This principle specifies that personal data
should be protected by reasonable security safeguards against such risks as loss or unauthorized access,
destruction, use, modification, or disclosure of data.
How long data should be retained and when it should be destroyed is still a challenge for most companies.
Data growth has led to definitions of policies and procedures for data retention and destruction. Most
policies have been driven or imposed by legislation and regulations, such as the Health Insurance
Portability and Accountability Act of 1996 (HIPAA), the Sarbanes-Oxley Act (SOX), and other federal
and state compliance requirements.
The actual deletion process is sometimes loosely defined. But when data copies, data backups, or archives
are deleted, are they really gone? Deleting a file only marks the space (or blocks) it occupies as usable.
Until the blocks are actually overwritten, the data is still there and can be retrieved. In fact, the disk space
occupied by deleted files must be overwritten with other data several times before the entirety of the files
is deemed irretrievable (a minimum of seven times per the U.S. federal government’s guidelines).
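A simplified sketch of the multi-pass overwrite idea is shown below. Note that on modern SSDs, journaling filesystems, and virtualized storage, overwriting in place is not guaranteed to reach every physical copy, so this is only an illustration of the principle.

import os

def overwrite_and_delete(path: str, passes: int = 7) -> None:
    """Overwrite a file's contents with random bytes several times, then remove it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as fh:
        for _ in range(passes):
            fh.seek(0)
            fh.write(os.urandom(size))
            fh.flush()
            os.fsync(fh.fileno())  # push each pass to the storage device
    os.remove(path)

# Example usage (hypothetical file):
# overwrite_and_delete("old_customer_export.csv")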
In many cases, disk or tape media is reused to store more data; therefore, data deletion typically does not
constitute much of an issue. However, when leased IT assets, such as servers or disk arrays, must be
returned, when obsolete systems are replaced, or when storage media has reached end-of-life, special care
must be taken to ensure that any data once stored is irretrievable.
Encryption can play a key role in the destruction process. Encrypted data can be destroyed even when
organizations lose track of their data by destroying the encryption key—data can no longer be decrypted
and hence is rendered inaccessible. This is especially beneficial when the data is kept by CSPs—
encrypted data can be destroyed without the involvement of the CSPs.
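Crypto-shredding as described above can be sketched as follows: the data handed to the CSP is stored only as ciphertext, and discarding the organization's key renders it unreadable. The example uses the third-party Python cryptography package and simplifies key handling.

from cryptography.fernet import Fernet

key = Fernet.generate_key()                        # kept by the organization, never given to the CSP
ciphertext = Fernet(key).encrypt(b"customer PII")  # this ciphertext is what the CSP stores

# To "destroy" the data, discard every copy of the key.
key = None

# Without the key the ciphertext can no longer be decrypted, so the data is
# effectively destroyed even if the CSP retains copies of the ciphertext.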
The problem begins when there is a lack of clearly defined policies around data destruction in cloud computing. Virtual storage devices can be reallocated to new users without the data first being deleted. Personal information stored on such a device may then be available to the new user, potentially violating individual rights, laws, and regulations. Servers or disks can be decommissioned
without much thought as to whether data is still accessible. There are several approved methods of data
destruction, including media destruction, disk degaussing, multiple data overwrites with random byte
patterns, and destruction of keying material for encrypted data.
Transfer Principle
This principle specifies that data should not be transferred to countries that don’t provide the same level
of privacy protection as the organization that collected the information. In a cloud computing
environment, infrastructure is shared between organizations; therefore, there are threats associated with
the fact that the data is stored and processed remotely, and there is increased sharing of platforms between
users, which increases the need to protect
privacy of data stored in the cloud. Another feature of cloud computing is that it is a dynamic
environment; for example, service interactions can be created in a more dynamic way than in traditional
e-commerce. Services can potentially be aggregated and changed dynamically by customers, and service
providers can change the provisioning of services. In such scenarios, personal and sensitive data can
move around within a single CSP infrastructure and across CSP organizational boundaries. Integrated services provided by multiple CSPs further increase the possibility of data transfer to third parties.
Any such transfer should be disclosed to the data subject prior to collection. In many cases there is a need for
unambiguous consent by the individual to the data transfer. Typically the organization is required to agree
to the provider’s standard terms of service without any scope for negotiation. The terms are likely to be
biased in the provider’s favor, and the organization may not know all the entities that are involved in the
process, and hence is rendered unable to provide an accurate notice to the data subjects.
Accountability Principle
This principle states that an organization is responsible for personal information under its control and
should designate an individual or individuals who are accountable for the organization’s compliance with
the remaining principles. Accountability within cloud computing can be achieved by attaching policies to
data and mechanisms to ensure that these policies are adhered to by the parties that use, store, or share
that data, irrespective of the jurisdiction in which the information is processed. The way forward is for organizations to value accountability and build mechanisms for accountable, responsible decision making while handling data. Specifically, accountable organizations ensure that obligations to protect data are observed by all processors of the data, irrespective of where that processing occurs.