UNIT IV
CLOUD SECURITY DESIGN PATTERNS
Introduction to Design Patterns, Cloud bursting, Geo-tagging, Secure Cloud Interfaces, Cloud
Resource Access Control, Secure On-Premise Internet Access, Secure External Cloud
4.1 INTRODUCTION
Cloud security design patterns refer to established and proven architectural approaches that
are employed to enhance the security of cloud-based systems. These patterns provide a
framework for designing and implementing robust security measures to protect sensitive data,
applications, and infrastructure in the cloud. Cloud security design patterns address various security
challenges, including unauthorized access, data breaches, and system vulnerabilities, by
incorporating best practices and industry standards.
thereby minimizing the potential for misuse.
2. Data Protection
Data Protection refers to measures taken to safeguard digital data from unauthorized
access, corruption, or loss. This can include encryption (transforming data into
unreadable format unless decrypted), secure file storage, and ensuring that data is only
shared with authorized parties.
It also includes practices to comply with privacy regulations (like the General Data
Protection Regulation (GDPR)), which mandates that organizations must protect
users' personal data and allow individuals control over their information. Organizations
typically implement data protection through secure networks, data classification
policies, and access management.
3. Network Security
Network Security involves strategies and technologies designed to protect the
integrity, confidentiality, and availability of a computer network and its data. This
includes protecting the network from threats such as cyber-attacks, unauthorized
access, and data breaches.
Tools used in network security include firewalls, which filter incoming and outgoing
traffic, Intrusion Detection Systems (IDS) to detect suspicious activity, and Virtual
Private Networks (VPNs) to securely connect remote users to a network. Network
security ensures that sensitive data transmitted over networks remains secure and that
systems are protected from external threats.
4. Identity and Trust Management
Identity Management (IdM) is the process of managing the lifecycle of user
identities, including their creation, maintenance, and deletion. IdM tools often use a
centralized directory (such as Active Directory) to store user credentials, roles, and
permissions.
Trust Management ensures that both users and systems are trusted to interact with
sensitive data or services. For example, Public Key Infrastructure (PKI) helps
manage digital certificates, ensuring that a user or system’s identity is verified before
trust is established. Together, identity and trust management systems help authenticate
users, ensure secure access, and establish accountability.
5. Secure Storage and Backup
Secure Storage involves storing data in a way that is resistant to unauthorized access
or theft. This can include encrypting data at rest (data stored on disks or servers) and
ensuring that storage systems are physically and logically protected against
unauthorized access.
Backup is the process of creating copies of critical data to protect against data loss
from accidental deletion, hardware failure, or cyber-attacks like ransomware. Secure
backup practices involve encrypting backup files, storing them off-site (e.g., in a cloud
or remote data center), and ensuring regular backups are performed and tested for
reliability.
6. Monitoring and Logging
Monitoring refers to the continuous tracking of systems, networks, or applications to
detect unusual behavior, performance issues, or potential security threats. Monitoring
tools generate alerts or reports based on predefined conditions, such as a spike in traffic
or unauthorized access attempts.
Logging involves recording events and actions in a system over time, creating an audit
trail that can be used for troubleshooting, security analysis, and forensic investigations.
Logs can capture user activity, system errors, or unusual events, and are crucial for
tracking security breaches or system malfunctions.
7. Resilience and Disaster Recovery
Resilience is the ability of a system or network to continue operating effectively even
during disruptions or failures. This involves building systems that are fault-tolerant,
such as through redundancy (having backup components), load balancing, and
distributed architectures that minimize the impact of failures.
Disaster Recovery refers to strategies and procedures that allow an organization to
restore its critical systems and data after a disaster or failure. This includes creating
recovery plans, ensuring off-site backups, and establishing Recovery Time Objectives
(RTO) and Recovery Point Objectives (RPO) to ensure minimal downtime and data
loss during a disaster.
8. Compliance and Governance
Compliance refers to an organization’s adherence to external laws, regulations, and
standards related to data protection, privacy, and security. Examples of compliance
requirements include General Data Protection Regulation (GDPR), Health
Insurance Portability and Accountability Act (HIPAA), and Payment Card
Industry Data Security Standard (PCI DSS).
Governance involves the internal processes and policies that ensure an organization
meets its legal, regulatory, and security obligations. It also focuses on ensuring that the
organization’s objectives align with its security practices, risk management, and
accountability structures. Effective governance involves leadership, policies, and
regular audits to ensure compliance with relevant laws and internal standards.
These design patterns provide a structured approach for architects, developers and security
professionals to implement and maintain a secure cloud environment. They serve as a guide to
help organizations mitigate risks, protect data and ensure the integrity and confidentiality of their
cloud-based systems.
Widely used cloud design patterns
Serverless computing
Auto scaling
Load balancing
Elasticity
High availability
Disaster recovery
Data replication and synchronization
Hybrid cloud
Microservices architecture
Data partitioning
1. Serverless Computing
Serverless computing is a cloud computing model where developers write code that is
executed on-demand without worrying about managing servers. The cloud provider automatically
handles infrastructure scaling and resource management, allowing you to pay only for the compute
time you use.
2. Auto Scaling
Auto scaling automatically adjusts the amount of computing resources (such as virtual
machines or containers) based on the demand of an application. It ensures that resources are
increased during high traffic and scaled back during low traffic, optimizing cost and performance.
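The scale-out/scale-in decision can be sketched as a simple threshold rule; the thresholds and instance limits below are illustrative assumptions, not any provider's defaults:

```python
def desired_instances(current, cpu_utilization, low=0.30, high=0.70,
                      min_instances=1, max_instances=10):
    """Scale out above `high` average CPU, scale in below `low`, within limits."""
    if cpu_utilization > high:
        return min(current + 1, max_instances)
    if cpu_utilization < low:
        return max(current - 1, min_instances)
    return current  # demand is in the comfortable band; no change

print(desired_instances(3, 0.85))  # high load -> scale out to 4
```

Real auto-scaling policies add cooldown periods and step sizes so the fleet does not oscillate around the thresholds.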
3. Load Balancing
Load balancing distributes incoming network traffic across multiple servers to ensure that
no single server becomes overwhelmed. This improves the availability and performance of
applications by efficiently utilizing server resources and preventing downtime.
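A minimal round-robin dispatcher, one common load-balancing strategy, can be sketched in Python (the server names are made up for illustration):

```python
import itertools

class RoundRobinBalancer:
    """Cycle through backend servers so requests are spread evenly."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
print([lb.next_server() for _ in range(4)])  # web-1, web-2, web-3, web-1
```

Production balancers layer health checks and weighting on top of this, skipping servers that fail their probes.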
4. Elasticity
Elasticity is the ability of a cloud system to dynamically scale resources up or down based
on fluctuating demand. This allows applications to maintain performance during peak times while
reducing costs when demand is low.
5. High Availability
High availability refers to a system’s ability to remain operational and accessible with
minimal downtime. This is achieved by designing systems with redundancy, failover mechanisms,
and multiple instances across different locations to ensure services are always available.
6. Disaster Recovery
Disaster recovery involves strategies and procedures to restore IT systems and data after a
disaster, such as a hardware failure, cyber-attack, or natural disaster. It ensures that businesses can
recover critical applications and data with minimal downtime.
7. Data Replication and Synchronization
Data replication is the process of copying data across multiple systems or locations to
ensure redundancy and high availability. Data synchronization keeps this replicated data
consistent in real-time, ensuring that changes made in one location are reflected across all copies.
8. Hybrid Cloud
A hybrid cloud combines on-premises infrastructure with public and/or private cloud
services, enabling organizations to leverage the benefits of both environments. It allows for
greater flexibility, with workloads running in the cloud or on-premises based on security, cost, and
performance needs.
9. Microservices Architecture
Microservices architecture is a software design where an application is broken down into
small, independent services that handle specific functions. Each microservice can be developed,
deployed, and scaled independently, which helps increase flexibility and maintainability.
10. Data Partitioning
Data partitioning is the practice of dividing large datasets into smaller, more manageable
chunks, or partitions, to improve performance and scalability. This allows for faster querying and
storage management, particularly in distributed systems and large-scale databases.
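Hash-based partitioning, one common scheme, can be sketched as follows; the partition count and key format are illustrative:

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Route a record key to one of N partitions using a stable hash.
    (Python's built-in hash() varies per process, so a digest is used instead.)"""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

# Records with the same key always land in the same partition.
assert partition_for("customer-42", 8) == partition_for("customer-42", 8)
```

Because the mapping depends only on the key, any node can compute where a record lives without consulting a central directory.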
These are just a few examples of widely used cloud design patterns. Each pattern addresses
specific challenges and objectives and organizations may combine multiple patterns to create
architectures that suit their specific requirements and goals.
Benefits of cloud design patterns
Security
Agility and flexibility
Performance and scalability
Best practice implementation
Overall, the use of cloud design patterns provides organizations with a structured and
standardized approach to designing and implementing cloud architectures. These patterns enable
organizations to leverage the scalability, availability, cost optimization, Security and flexibility
benefits offered by cloud computing while addressing common challenges and achieving desired
system characteristics.
4.2 CLOUD BURSTING
How does cloud bursting work?
Cloud bursting works by seamlessly extending an organization's computing resources
beyond their primary infrastructure to meet increased demand or workload spikes. It involves
leveraging additional resources from a public cloud provider when the capacity of the primary
infrastructure is insufficient to handle the workload.
Here's a high-level overview of how cloud bursting typically works:
Primary infrastructure
Resource monitoring
Burst trigger
Bursting decision
Public cloud provisioning
Load balancing and traffic routing
Workload execution
Scaling optimization
Burst termination
It’s important to note that implementing cloud bursting effectively requires careful planning,
architectural considerations, and the use of appropriate technologies and tools.
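The burst-trigger and bursting-decision steps above can be sketched as a simple capacity check; the threshold value and "units" of capacity are illustrative assumptions:

```python
def route_workload(requested_units, on_prem_capacity, burst_threshold=0.8):
    """Return (units served on-premise, units burst to the public cloud).

    Work is kept on the primary infrastructure until utilization would
    exceed `burst_threshold`; the overflow is routed to the public cloud.
    """
    on_prem_limit = int(on_prem_capacity * burst_threshold)
    on_prem = min(requested_units, on_prem_limit)
    return on_prem, requested_units - on_prem

print(route_workload(130, 100))  # (80, 50): 50 units burst to the cloud
```

Real burst triggers typically watch sustained metrics (CPU, queue depth) rather than a single instantaneous request count, to avoid bursting on momentary spikes.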
4.3 GEOTAGGING
Geotagging in the context of cloud security refers to the process of associating geographical
information with data or resources stored or processed in the cloud. It involves adding metadata or
tags that indicate the geographic location or region associated with specific data or resources.
Geographic location identification
Data localization and compliance
Access controls and authorization
Network security and threat detection
Incident response and forensics
1. Geographic Location Identification
Geographic location identification involves determining the physical location of a
user, device, or system, typically through IP addresses, GPS data, or other network-based
methods. This is often used for providing location-based services, restricting access based
on location, or optimizing content delivery (like serving content from a nearby server).
2. Data Localization and Compliance
Data localization refers to the practice of storing and processing data within
specific geographic boundaries, usually for legal or regulatory reasons. Compliance refers
to adhering to laws, regulations, and industry standards that mandate where and how data
should be stored, processed, and protected (e.g., GDPR in the EU, or China's
Cybersecurity Law). This helps ensure that sensitive data is handled in accordance with
local laws, especially regarding privacy and security.
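A geotag-based residency check might look like the following sketch; the tag names and the allowed-regions policy are assumptions made up for illustration:

```python
# Hypothetical policy: personal data must stay in EU regions (e.g. for GDPR).
ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}

def is_compliant(resource_tags: dict) -> bool:
    """Check a resource's geotags against the data-residency policy."""
    if resource_tags.get("data_class") != "personal":
        return True  # this policy only constrains personal data
    return resource_tags.get("region") in ALLOWED_REGIONS

print(is_compliant({"data_class": "personal", "region": "us-east-1"}))  # False
```

Run against an inventory of tagged resources, a check like this turns the geotag metadata into an enforceable localization control.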
1. Enhanced access control
2. Compliance with data localization
3. Location-based threat detection
4. Incident response and forensics
Organizations should carefully evaluate the advantages and disadvantages of geotagging in the
context of their specific security requirements and privacy considerations.
4.4 SECURE CLOUD INTERFACES
It’s important to note that the architecture of secure cloud interfaces can vary based on the
specific cloud service model (Infrastructure as a Service, Platform as a Service, or Software as a
Service) and the specific security measures implemented by the cloud service provider.
How do secure cloud interfaces work?
Secure cloud interfaces work by establishing a secure and reliable communication channel
between users or client applications and cloud-based services. Here’s a high-level overview of how
secure cloud interfaces typically work:
Authentication
Secure communication channel establishment
Authorization
Data encryption
Access controls
API security
Monitoring and logging
Security controls
1. Authentication
Authentication is the process of verifying the identity of a user, device, or system. It
ensures that the entity requesting access is who it claims to be. Common methods of
authentication include passwords, biometrics (e.g., fingerprints), smart cards, and multi-factor
authentication (MFA), where two or more verification methods are used together to increase
security.
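As one concrete example of MFA, the time-based one-time passwords (TOTP, RFC 6238) shown by authenticator apps can be generated with nothing but the standard library; this sketch follows the RFC's algorithm:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Generate a TOTP code (RFC 6238) from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    # The moving factor: number of `step`-second intervals since the epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: this secret at T=59 yields the 8-digit code 94287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))
```

Because server and client derive the same code from the shared secret and the current time, possession of the secret acts as the "something you have" factor alongside a password.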
2. Secure Communication Channel Establishment
Secure communication channel establishment refers to the process of setting up a protected
connection between two systems or entities to ensure that the data exchanged is not intercepted or
tampered with. This is commonly achieved using protocols like Transport Layer Security (TLS)
or Secure Sockets Layer (SSL), which encrypt data transmitted over the network and
authenticate the identities of the communicating parties.
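In Python, for instance, the standard library's `ssl` module applies these protections by default when a context is created the recommended way; a brief sketch:

```python
import ssl

# create_default_context() enables certificate verification and hostname
# checking, the two properties that make the TLS channel trustworthy.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
# A client would then wrap a TCP socket before sending any data:
#   ctx.wrap_socket(sock, server_hostname="example.com")
```

Disabling either check (as some code samples found online do) silently removes the authentication half of TLS and invites man-in-the-middle attacks.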
3. Authorization
Authorization is the process of determining whether an authenticated user or system has
the rights to access specific resources or perform certain actions. After a user is authenticated,
authorization mechanisms ensure that the user only accesses what they're permitted to, typically
based on roles or permissions (e.g., Role-Based Access Control (RBAC) or Attribute-Based
Access Control (ABAC)).
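A minimal RBAC check can be sketched as a role-to-permissions lookup; the role names and permission sets are invented for illustration:

```python
# Hypothetical role-to-permission mapping.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_authorized(role: str, action: str) -> bool:
    """Authorization step: does the (already authenticated) user's role permit the action?"""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("viewer", "delete"))  # False
print(is_authorized("editor", "write"))   # True
```

Keeping permissions attached to roles rather than to individual users is what makes RBAC manageable at scale: granting access becomes a role assignment, not a policy edit.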
4. Data Encryption
Data encryption is the process of converting readable data (plaintext) into an unreadable
format (ciphertext) to protect it from unauthorized access. Encryption uses algorithms and keys to
transform the data, and only authorized parties with the correct decryption key can access the
original information. It’s used to protect data at rest (stored data) and data in transit (data being
transmitted across networks).
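The plaintext-to-ciphertext round trip can be illustrated with a deliberately toy stream cipher built from a hash-derived keystream. This is for illustration only; real systems must use a vetted cipher such as AES-GCM (e.g., via a maintained cryptography library), which this is not:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """TOY keystream: chain SHA-256 over key/nonce/counter blocks."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it twice recovers the plaintext."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

ct = xor_cipher(b"secret-key", b"nonce-01", b"payroll data")
pt = xor_cipher(b"secret-key", b"nonce-01", ct)  # same operation decrypts
print(pt)  # b'payroll data'
```

The sketch shows the essential property of symmetric encryption: without the key, the ciphertext is unreadable; with it, decryption is the same cheap operation as encryption.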
5. Access Controls
Access controls are security mechanisms that restrict who can access a system, application,
or network and what actions they can perform. This is enforced by implementing rules and
policies that determine which users or entities are granted permission to view, modify, or delete
data. Common access control models include Role-Based Access Control (RBAC), Mandatory
Access Control (MAC), and Discretionary Access Control (DAC).
6. API Security
API (Application Programming Interface) security refers to the practice of ensuring that
APIs (which allow different software systems to communicate with each other) are protected from
malicious use, data breaches, and unauthorized access. API security measures include using
authentication (e.g., OAuth), encryption, rate limiting, and input validation to safeguard against
attacks like SQL injection and denial-of-service (DoS).
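Rate limiting, one of the API protections mentioned above, is commonly implemented as a token bucket; a sketch with illustrative capacity and refill values:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling at `refill_per_sec`."""
    def __init__(self, capacity=5, refill_per_sec=1.0):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Top up tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should respond with HTTP 429 Too Many Requests

bucket = TokenBucket(capacity=3)
print([bucket.allow() for _ in range(5)])  # first 3 allowed, then throttled
```

Applied per API key or per client IP, this bounds how fast any one caller can hammer the interface, blunting brute-force and DoS attempts.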
7. Monitoring and Logging
Monitoring and logging involve continuously tracking system activity and recording it for
analysis, auditing, and troubleshooting purposes. Monitoring refers to real-time observation of
system performance and security, while logging captures detailed records of events, such as user
actions, errors, and security incidents. Together, they provide visibility into system health and help
detect unusual behavior or security breaches.
8. Security Controls
Security controls are safeguards or countermeasures put in place to protect systems and
data from threats and vulnerabilities. These controls can be preventive, like firewalls and
encryption, detective, like intrusion detection systems (IDS), or corrective, like backup and
disaster recovery systems. The purpose of security controls is to reduce risk, prevent breaches, and
ensure compliance with security policies.
Encryption and data protection
Fine-grained access control
what happened, when, and by whom, which can be critical for identifying breaches or
inappropriate access.
Monitoring involves actively observing system activities in real-time to detect anomalies,
suspicious behavior, or performance issues. It often includes the use of intrusion detection
systems (IDS), security information and event management (SIEM) tools, and real-time
alerts to respond to potential threats quickly.
6. Encryption and Data Protection
Encryption is the process of converting data into a coded format to prevent unauthorized
access. Only users with the correct decryption key or password can read the original data.
Encryption is critical for protecting sensitive data both in transit (e.g., using TLS/SSL for
secure web communications) and at rest (e.g., encrypting databases or files stored on
servers).
Data protection involves not just encryption but also strategies like data masking,
tokenization, and backup to ensure data is safeguarded from unauthorized access,
corruption, or loss. It also includes compliance with regulations like GDPR or CCPA,
which set legal standards for handling personal data.
7. Fine-Grained Access Control
Fine-grained access control refers to the practice of applying highly specific and detailed
permissions to data and systems, often based on attributes such as user roles, geographic location,
time of access, or other contextual factors. Instead of applying broad access rights, fine-grained
control allows organizations to restrict or allow access to very specific resources. For instance, in
a document management system, one user may be allowed to view a document but not edit it,
while another user can have full access to modify or delete the document. This approach provides
a more granular level of security and ensures that users can only access exactly what they need.
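An ABAC-style fine-grained check can be sketched as a policy function over user and resource attributes; the attributes and the policy itself are invented for illustration:

```python
from datetime import time

def can_access(user, resource, now):
    """Hypothetical policy: high-sensitivity documents may be accessed only by
    the owning department, and only during business hours (09:00-17:00)."""
    if resource["sensitivity"] == "high" and user["department"] != resource["owner_dept"]:
        return False
    return time(9, 0) <= now <= time(17, 0)

doc = {"sensitivity": "high", "owner_dept": "finance"}
print(can_access({"department": "finance"}, doc, time(10, 30)))  # True
print(can_access({"department": "sales"}, doc, time(10, 30)))    # False
```

Because the decision is computed from attributes at request time, the same policy adapts automatically as users change departments or documents change classification, without editing per-user permissions.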
Secure email gateways
Network segmentation
Regular updates and patch management
Secure on-premise internet access is crucial for safeguarding an organization's internal network
and data from external threats. By implementing these security measures and best practices,
organizations can establish a protected connection to the internet while mitigating the risks
associated with unauthorized access, data breaches, and other cyber threats.
1. Perimeter security:
Firewall: Deploy a firewall at the network perimeter to filter and monitor incoming
and outgoing traffic.
2. Secure Web Gateway (SWG):
Web proxy: Use a web proxy to inspect and filter web traffic, blocking access to
malicious websites and enforcing content policies.
3. Virtual Private Network (VPN)
VPN gateway: Set up VPN gateways to provide secure remote access for
authorized users. Use protocols like IPsec (Internet Protocol Security) to establish
encrypted connections.
4. Network segmentation:
VLANs/ Subnets: Divide the internal network into segments or subnets to isolate
and control access to different parts of the network.
5. Secure DNS:
Secure DNS resolver: Employ a secure DNS resolver that blocks access to
malicious domains and performs DNS filtering to prevent DNS-based attacks and data
exfiltration.
6. Endpoint protection:
Antivirus and endpoint security: Deploy antivirus software and endpoint security
solutions on all devices to detect and block malware, enforce security policies, and
monitor for suspicious activities.
Regular patching and updates
Intrusion detection and prevention
Continuous monitoring and incident response
Employee security awareness training
Regular security assessments
Regular updates ensure that systems are protected from known security risks. Vulnerabilities in
outdated software are a primary target for attackers, so maintaining up-to-date software is essential
for reducing the attack surface of your environment.
6. Intrusion Detection and Prevention
Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) monitor
network traffic and system activities for signs of malicious activity or policy violations. IDS is
used to detect suspicious behavior, while IPS can actively block or mitigate attacks in real-time.
Both systems help to identify potential threats before they cause significant damage, providing an
additional layer of defense against cyber-attacks.
7. Continuous Monitoring and Incident Response
Continuous monitoring involves tracking the health, security, and performance of
systems in real-time to identify potential threats, anomalies, or vulnerabilities. Incident response
is the set of actions taken when a security breach or attack is detected. This includes detecting the
attack, containing it, mitigating its impact, recovering affected systems, and learning from the
incident to improve future security. A strong incident response plan helps reduce downtime and
data loss in the event of an attack.
8. Employee Security Awareness Training
Employee security awareness training educates employees on best practices for
protecting the organization’s data and systems. Training typically covers topics such as
recognizing phishing attacks, using strong passwords, safeguarding sensitive data, and complying
with security policies. Since human error is one of the most common causes of security breaches,
training employees to identify and avoid security threats is a crucial part of a layered security
approach.
9. Regular Security Assessments
Regular security assessments involve evaluating the organization's security posture through
techniques like vulnerability assessments, penetration testing, and security audits. These
assessments help identify weaknesses in systems, networks, or policies before attackers can
exploit them. Regular assessments ensure that security controls are effective, and they help
organizations stay ahead of emerging threats and comply with security standards or regulations.
Secure External Cloud
Secure external cloud usage involves leveraging the benefits of cloud computing while
implementing security measures to safeguard sensitive information, maintain privacy, and
mitigate the risks associated with using external cloud services.
1. Data Encryption
Data encryption is the process of converting readable data (plaintext) into an
unreadable format (ciphertext) using cryptographic algorithms. The purpose is to protect
sensitive data from unauthorized access. Encryption can be applied to data at rest (stored
data) and data in transit (data being transmitted over networks). Only authorized users with
the appropriate decryption key can access the original data. Common encryption standards
include AES (Advanced Encryption Standard) for data at rest and TLS/SSL for data in
transit.
2. Access Controls
Access controls are security mechanisms that restrict access to systems, networks,
and data based on predefined rules. Access control mechanisms define who can access
what resources and what actions they can perform. Common models include:
Role-Based Access Control (RBAC): Access is assigned based on a user's role (e.g.,
admin, user).
Attribute-Based Access Control (ABAC): Access is granted based on attributes like the
user's location, time of access, or other contextual factors.
Mandatory Access Control (MAC): Access is determined by system-enforced policies
and cannot be changed by users.
3. Network Security
Network security involves protecting the integrity, confidentiality, and availability
of data and services in a network environment. It includes practices, policies, and tools to
safeguard the network infrastructure from unauthorized access, misuse, and attacks. Key
components of network security include:
1. Firewalls to filter traffic.
2. Intrusion Detection and Prevention Systems (IDPS) to detect and stop
malicious activities.
3. Virtual Private Networks (VPNs) to encrypt remote access to networks.
4. Network segmentation to isolate sensitive areas of the network from other segments.
typically includes detecting the incident, analyzing its impact, containing the threat, and
recovering from the attack. A well-structured incident response plan helps organizations
react quickly to mitigate damage and restore services.
7. Service Level Agreements (SLAs)
Service Level Agreements (SLAs) are formal agreements between a service
provider and a customer that define the expected level of service. SLAs specify key
metrics such as:
Availability (e.g., 99.9% uptime)
Response times for issue resolution
Performance levels
Support and maintenance terms
SLAs are crucial in managing expectations and ensuring that the service provider
meets the agreed-upon standards. In the context of cybersecurity, SLAs often include
provisions for security incident response times, backup and recovery times, and system
availability.
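The availability figures in an SLA translate directly into allowed downtime; a quick worked calculation:

```python
def allowed_downtime_minutes(availability: float, period_hours: float) -> float:
    """Minutes of downtime permitted by an availability target over a period."""
    return period_hours * 60 * (1 - availability)

# "Three nines" (99.9%) permits roughly 8.76 hours of downtime per year:
print(round(allowed_downtime_minutes(0.999, 24 * 365), 1))  # 525.6 min/year
print(round(allowed_downtime_minutes(0.999, 24 * 30), 1))   # 43.2 min per 30-day month
```

Working the arithmetic both ways (target to minutes, or observed outage to achieved percentage) is how SLA compliance and penalty clauses are actually evaluated.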
8. Data Backup and Disaster Recovery
Data backup is the process of making copies of data to ensure it can be restored in
case of data loss, corruption, or breach. Backups are typically stored in different locations
(e.g., on-site or in the cloud) and may be automated for regular, consistent protection.
Disaster recovery (DR) is the strategy for recovering IT systems, applications, and data
after a catastrophic event, such as a natural disaster, hardware failure, or cyberattack. A
disaster recovery plan outlines procedures for restoring services and minimizing
downtime. It typically includes backup strategies, system restoration steps, and key
personnel roles during an emergency.
Data backup and disaster recovery
Continuous monitoring and auditing
Network Segmentation: Dividing the network into isolated segments to limit the exposure
of sensitive data. This can include creating private networks (VPCs) within a cloud
provider's infrastructure.
Zero Trust Network: Implementing the "never trust, always verify" model, where every
request to access network resources, even from within the organization, is treated as
untrusted until verified.
4. Compliance and Regulatory Measures
Compliance refers to adherence to laws, regulations, and guidelines to protect data,
ensure privacy, and meet industry standards. Regulatory measures ensure that the
organization adheres to the legal requirements of the industry or geographic region they
operate in. Some common compliance frameworks in cloud environments include:
General Data Protection Regulation (GDPR): European Union law governing the
collection, processing, and storage of personal data.
Health Insurance Portability and Accountability Act (HIPAA): U.S. law for protecting
healthcare information.
Payment Card Industry Data Security Standard (PCI-DSS): Standard for securing
credit card transactions.
ISO/IEC 27001: International standard for managing information security.
Cloud providers typically offer tools to help meet these compliance requirements, but it's the
responsibility of the organization to configure resources to comply.
5. Data Privacy and Residency
Data privacy involves ensuring that sensitive information is protected and handled in
accordance with privacy laws and organizational policies. It is essential to maintain
confidentiality, integrity, and accessibility of personal data.
Data Residency refers to the geographic location where data is stored. Many countries
have data protection laws requiring that data be stored in specific regions (e.g., European
countries require personal data to be stored within the EU due to GDPR).
Cloud providers often give customers control over where their data is stored and processed
(e.g., selecting the region or availability zone).
In the cloud, organizations need to implement encryption, data masking, and
tokenization to protect sensitive data, and they must also understand the local and
international data residency laws that apply to their operations.
6. Threat Monitoring and Incident Response
Threat monitoring involves actively observing systems for potential security
incidents, while incident response focuses on how organizations respond when a breach
or attack occurs.
Threat Monitoring: Use tools like Security Information and Event Management
(SIEM), intrusion detection systems, and cloud-native services to continuously monitor for
anomalies or signs of malicious activity.
Incident Response: Having an incident response plan in place is critical. This plan should
define the steps to take when a security breach occurs, including containment,
investigation, communication, and recovery.
Automation: Automated security tools (e.g., AWS GuardDuty, Azure Security Center)
can detect threats and initiate actions, such as quarantining an infected server or blocking
suspicious IP addresses.
A strong incident response plan helps minimize the impact of security breaches and
enables quick recovery.
7. Service Level Agreements (SLAs)
A Service Level Agreement (SLA) is a contract between a service provider (e.g., a
cloud provider) and the customer, outlining the level of service the provider will deliver.
SLAs in cloud computing often cover:
Uptime and Availability: The guaranteed uptime of services (e.g., 99.9% availability).
Performance Metrics: The expected speed and performance (e.g., response time,
throughput).
Support Response Times: How quickly the provider will respond to issues or service
requests.
Security Commitments: What security controls the provider will put in place, such as
encryption, incident response, and access controls.
Organizations should review SLAs to ensure they meet their business and security
requirements. In some cases, custom SLAs might be negotiated for critical services.
8. Data Backup and Disaster Recovery
Data backup is the process of making copies of critical data to ensure its availability
in case of loss or corruption. Disaster recovery (DR) is the strategy and procedures for
restoring systems and data in the event of an unexpected disruption, such as a natural
disaster, cyberattack, or system failure.
Backup Solutions: Use cloud-native backup tools (e.g., AWS Backup, Azure Backup) to
automate and manage regular backups. These tools can store data in multiple locations,
enhancing redundancy.
Disaster Recovery Planning: Cloud providers often offer built-in disaster recovery
services like AWS Elastic Disaster Recovery or Azure Site Recovery. These services
enable the creation of failover environments across regions or availability zones, reducing
downtime in case of failure.
A comprehensive backup and disaster recovery plan ensures that data can be quickly
restored, and business operations can continue after an incident.
9. Continuous Monitoring and Auditing
Continuous monitoring refers to the ongoing process of monitoring IT systems to
detect vulnerabilities, performance issues, and security incidents in real time. Auditing
involves reviewing records of system activity to identify potential risks or breaches.
Monitoring Tools: Cloud providers offer monitoring solutions (e.g., AWS CloudWatch,
Azure Monitor) to provide real-time visibility into system performance, security
incidents, and application logs.
Auditing Tools: CloudTrail (AWS) and Azure Activity Logs provide detailed audit
trails, allowing you to track who accessed what resources and when. Regular audits help
ensure compliance with security policies and regulatory standards.
Alerting: Set up automated alerts to notify administrators of critical events such as failed
login attempts, unusual traffic patterns, or system misconfigurations.
Continuous monitoring and auditing are vital for identifying and addressing security
risks proactively, ensuring ongoing compliance, and improving the organization’s overall
security posture.
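A sliding-window alert on failed logins, one simple monitoring rule of the kind described above, can be sketched as follows (the threshold and window size are illustrative):

```python
from collections import deque

class FailedLoginMonitor:
    """Raise an alert when `threshold` failures occur within `window_sec` seconds."""
    def __init__(self, threshold=5, window_sec=60):
        self.threshold = threshold
        self.window = window_sec
        self.events = deque()  # timestamps of recent failures

    def record_failure(self, timestamp):
        self.events.append(timestamp)
        # Drop failures that have aged out of the sliding window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold  # True => raise an alert

mon = FailedLoginMonitor()
alerts = [mon.record_failure(t) for t in (0, 5, 10, 15, 20)]
print(alerts[-1])  # True: fifth failure within 60 s triggers an alert
```

SIEM platforms apply the same windowed-threshold idea across many event types at once, correlating them into higher-level alerts.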
2 Mark Questions
1. What is a secure external cloud?
2. Give the expansions of the following: IDS, IPS, SWG, DLP, DNS, VPN, IAM, MFA.
3. List the key aspects of security design.
4. What is meant by a burst trigger?
5. How is IDP used in cloud security?
6. Describe secure on-premise internet access.
Big Questions
1. Briefly explain the concept of Cloud Bursting.
2. Explain Secure Cloud Interfaces.
3. Explain Secure External Cloud with an example.
4. Explain cloud Geotagging.
5. Briefly explain secure on-premise internet access.
6. Explain storage and network access control options.