Security and Privacy of Data (Final Practice Set)
[PEC-801A]
1. Select which of the following best describes the stages of the cloud-based information life cycle?
a. Creation, Storage, Deletion
b. Ingestion, Processing, Ejection
c. Generation, Archiving, Purging
d. Capture, Manipulation, Retention
2. Cite the concept that involves restricting access to the physical location where data is stored or processed.
Answer - Physical Security.
3. Identify the type of attack that involves flooding a network with traffic in order to overwhelm it.
Answer - DDoS attack
4. Cite an example of a vulnerability that can be exploited by an attacker to gain unauthorized access to a
system.
Answer - Weak password
5. Identify the type of cryptography that uses a single key for both encryption and decryption.
Answer - Symmetric key cryptography.
6. Identify the type of attack that involves tricking users into divulging sensitive information.
Answer – Phishing
7. Identify the type of cryptography that uses different keys for encryption and decryption.
Answer - Asymmetric key cryptography.
8. Cite an example of a vulnerability that can be exploited by an attacker to execute arbitrary code on a system.
Answer - Cross-site scripting (XSS)
11. Define the technology that is commonly used for inter-tenant network segmentation.
Answer - Virtual Local Area Networks (VLANs)
14. Describe the method that can be used for storage isolation to prevent unauthorized access.
Answer - data encryption
20. Define the virtualization technique that involves dividing a single physical server into multiple isolated
virtual environments
Answer - server virtualization
21. Identify the main benefit of using secure isolation strategies for data.
Answer - Reduced risk of unauthorized access
22. Recall the strategy that achieves the physical separation of data from a network.
Answer - Air gap
23. State which of these is not a common technique for virtual data isolation.
Answer - network segmentation.
24. Identify the type of isolation strategy that involves creating temporary network connections for backup.
Answer - cloud air gap
25. Name the isolation strategy that allows granting specific users permissions to access certain data.
Answer - Access Control Lists (ACLs)
26. Identify the isolation strategy that segments a network into smaller isolated sections.
Answer - network segmentation
27. State which of the following isolation techniques creates a self-contained environment for running
applications separate from the main system.
Answer - virtual machines
28. Identify the MOST suitable strategy for an organization that wants to isolate its development
environment from its production environment.
Answer - Virtual machines
29. State which isolation method protects data at rest by making it unreadable without a decryption key.
Answer - data encryption
30. Recall the isolation technique that replaces sensitive data with fictitious values while preserving the data
format.
Answer - data masking
31. Recognize which of the following principles covers isolating data based on its classification (confidential,
public, etc.).
Answer - data governance
32. Select the isolation strategy that involves replacing sensitive data elements with unique identifiers.
Answer - tokenization
33. Recognize the type of data isolation strategy that uses a firewall.
Answer - network segmentation
34. Recognize the isolation technique that allows authorized users to access specific data subsets within
a larger data set.
Answer - Attribute based access control (ABAC)
35. Select the strategy for isolating data based on its origin (internal, external).
Answer - network segmentation
36. Name the security control that complements data isolation strategies by monitoring and detecting
suspicious activity
Answer - Intrusion Detection System (IDS)
37. Identify the missing word in the following statement. When implementing data isolation, it's crucial to
consider the trade-off between security and _________.
Answer - performance
38. Name the isolation strategy that is primarily used to protect data in transit across networks.
Answer - data encryption
39. Recognize which technique isolates highly sensitive data on a separate server with stricter access controls.
Answer - Hardware security modules (HSMs)
40. Select the reason for regularly reviewing and updating data isolation policies.
Answer - ensuring continued effectiveness
41. Identify the objective of secure isolation of physical and logical infrastructure.
Answer - protect against unauthorized access
45. State which of the following aims to mitigate the threat of unauthorized access to network resources
Answer - implementing strong password policies
50. Recall how secure storage isolation can help with compliance with data protection regulations.
Answer - By limiting access to authorized personnel only
51. Select which is a common attack vector against network infrastructure.
Answer - Distributed Denial of Service (DDoS) attack
52. Select which of the following can enhance the security of logical infrastructure.
Answer - Implementing multi-factor authentication
54. State how secure isolation of compute infrastructure can mitigate insider threats.
Answer - By implementing role-based access control
58. Select which of the following can enhance the security of storage infrastructure
Answer - Implementing data encryption at rest
61. Identify which of the following can enhance the security of compute infrastructure
Answer - Implementing regular security patches
62. Recall how secure isolation of logical infrastructure contributes to regulatory compliance.
Answer - By ensuring data integrity and confidentiality
63. Identify what could be the potential consequences of inadequate network isolation
Answer - Unauthorized access to sensitive data
67. State how secure isolation of compute resources contributes to regulatory compliance.
Answer - By ensuring access controls are in place
70. Select the most appropriate protocol to monitor for suspicious login attempts
Answer – SSH
72. Recall the resource you consult for responding to a security incident
Answer - Incident response plan
73. Identify the primary purpose of the “Generation” phase in the cloud-based information life cycle.
Answer - Creating new information or data
74. Identify in which phase of the cloud-based information life cycle data typically undergoes processing or
analysis.
Answer - manipulation
75. Select what the “Archiving” phase of the cloud-based information life cycle involves.
Answer - Storing data for long-term retention
76. Select which phase of the cloud-based information life cycle focuses on securely removing data that is no
longer needed.
Answer – purging
77. Identify the primary goal of the “Ejection” phase in the cloud-based information life cycle.
Answer - Storing data for future use
78. Select the purpose of the “Ejection” phase in the cloud-based information life-cycle
Answer - Moving data to a different storage location
79. Identify which phase of the cloud-based information life cycle involves ensuring that data is compliant
with regulatory requirements.
Answer – Retention
80. Identify the type of attack that involves an attacker intercepting communication between two parties and
altering the data exchanged.
Answer - Man-in-the-middle (MitM)
81. Identify which security measure helps prevent unauthorized access to sensitive data by restricting user
permissions.
Answer - Access Control Lists (ACLs)
82. Identify a common threat to data confidentiality in cloud computing environments.
Answer - Insider threats
84. Select which of the following is a method used to protect data integrity
Answer - Digital Signatures
85. Select a common attack vector for compromising data confidentiality in web applications.
Answer - Cross-site scripting (XSS)
87. Select which cryptographic technique involves transforming plaintext into ciphertext using an
encryption algorithm and a key.
Answer - encryption
89. Select which technique involves replacing sensitive data with non-sensitive placeholders or symbols.
Answer - obfuscation
91. Identify which technique is commonly used to protect credit card information during transactions.
Answer – Tokenization
93. Identify which technique involves masking or hiding portions of data to protect sensitive information
Answer - Data redaction
99. Select which technique is used to ensure the permanent deletion of data from a storage device.
Answer - Secure wiping
100. Identify the purpose of conducting regular audits and monitoring in data deletion processes
Answer - Verifying compliance with policies
101. Select which phase of the data deletion process involves securely removing data that is no longer
needed.
Answer – Purging
102. Indicate how data deletion assurance contributes to regulatory compliance.
Answer - By complying with data protection laws
105. Select which phase of the data management life cycle involves identifying and categorizing tenant data
based on its sensitivity and importance
Answer – classification
107. Identify which data protection strategy involves transforming data into an unreadable format using a
cryptographic algorithm.
Answer – Tokenization
109. Classify which data protection strategy involves replacing sensitive data with non-sensitive
placeholders or symbols.
Answer - data masking
111. Select what verifying the authenticity and validity of a digital certificate is called.
Answer - certificate validation
117. Select an appropriate action after receiving a high-severity SIEM alert.
Answer - Escalate to security team
118. Identify the root cause of a security incident using SIEM data alongside
a. Vulnerability Scanner reports
b. Network traffic capture files
c. User activity logs
d. All of these
121. Define the primary purpose of enforcing access control for cloud infrastructure-based services.
Answer - to ensure compliance with industry regulations
122. State what multi-factor authentication is primarily used for in cloud infrastructure-based services.
Answer - Authorizing access
123. Select which of the following is an example of a “something you are” factor in multi-factor authentication.
Answer - fingerprint Scan
124. Identify which of the following is a potential drawback of multi-factor authentication.
Answer - Increased complexity for users
126. Identify which of the following is NOT a common access control requirement for cloud infrastructure.
Answer - data encryption
127. Predict the access control method that verifies the identity of users accessing cloud infrastructure
resources.
Answer - Authentication
128. Identify the access control mechanism used to determine what actions users are allowed to
perform on cloud resources.
Answer - Authorization
129. Predict the role of encryption in access control for cloud infrastructure.
Answer - It protects data from unauthorized access
134. Predict which access control challenge is associated with managing access control policies
across heterogeneous cloud environments and service providers.
Answer – Interoperability
135. Select the purpose of continuous monitoring in access control enforcement for cloud infrastructure-
based services
Answer - Detecting security threats in real-time
136. Identify the access control mechanism which allows administrators to define policies that specify who
is allowed to access which resources and under what conditions.
Answer - Policy-Based Access Control (PBAC)
137. Select the main advantage of using Attribute Based Access Control (ABAC) over Role-Based Access
Control (RBAC) in cloud environments
Answer - ABAC provides more granular control over access rights
138. Identify the access control principle that users should only be granted the minimum level of access
necessary to perform their job functions.
Answer - principle of least privilege
139. Select the technology is commonly used for centralized management of user identities and access
permissions in cloud environments
Answer - Identity and Access Management (IAM)
140. Predict the primary purpose of access control enforcement in cloud environments.
Answer - To restrict access to authorized users only
141. Select the enforcement mechanism used to verify the identity of users accessing cloud
infrastructure-based services.
Answer – Authentication
142. Select which of the following access control models is commonly used in cloud environments and allows
access rights to be based on attributes of the user, resources, and environment.
Answer - Attribute-Based Access Control (ABAC)
143. Select the technology commonly used for implementing Multi-Factor Authentication in cloud
environments
Answer - One-Time Password (OTP)
144. Predict which access control challenge arises from the need to manage access permissions for temporary
or transient cloud resources.
Answer - Dynamic resource provisioning
145. Identify the purpose of Access Control Lists (ACLs) in cloud environments.
Answer - To manage access permissions for network resources
146. Select the access control challenge associated with managing access permissions for cloud
resources distributed across multiple geographic regions.
Answer - Geo-location-based access control
150. In a cloud environment, select the role Single Sign-On (SSO) plays in access control.
Answer - It enables users to authenticate once and access multiple resources without re-authenticating.
1. Explain the cloud data lifecycle and its phases.
The cloud data lifecycle encompasses the various phases that a specific piece of data undergoes, starting with
its initial creation or capture and concluding with its eventual archival or deletion when it’s no longer
needed. Different Phases of Cloud Data Lifecycle : Data Creation, Data Storage, Data Processing, Data Sharing,
Data Archival and Data Deletion.
2. Explain the primary goal of data protection for confidentiality and integrity
Confidentiality : It means that only authorized individuals/systems can view sensitive or classified
information. The data being sent over the network should not be accessed by unauthorized individuals.
Integrity : The idea here is to make sure that data has not been modified. Corruption of data is a failure to
maintain data integrity. To check if our data has been modified or not, we make use of a hash function.
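The hash-based integrity check described above can be sketched in Python (a minimal illustration using SHA-256 from the standard library; the sample messages are invented):

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of the data."""
    return hashlib.sha256(data).hexdigest()

# Compute and store a digest when the data is created.
original = b"transfer $100 to account 42"
stored_digest = digest(original)

# Later, recompute the digest: a match means the data is unmodified.
assert digest(original) == stored_digest

# Any change to the data yields a different digest, revealing tampering.
tampered = b"transfer $900 to account 42"
assert digest(tampered) != stored_digest
```

Even a one-byte change flips roughly half the bits of the digest, so corruption or tampering is detected with overwhelming probability.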
Data Redaction removes sensitive or confidential information from documents before they are shared with
other parties, typically for legal reasons.
It involves physically obscuring the text in question by deleting it or covering it up with a black marker or
other material so that the document’s contents cannot be easily accessed.
Encryption transforms readable data (plaintext) into an unreadable format (ciphertext) using algorithms and
cryptographic keys. This ensures data confidentiality by making the data inaccessible to unauthorized users
without the decryption key. It secures communication channels, protects data at rest and in transit, and
complies with regulatory requirements, thereby safeguarding sensitive information from interception,
unauthorized access, and breaches.
a. Protection Against Unauthorized Access
b. Secure Communication Channels
c. Data Integrity and Confidentiality
d. User Authentication and Access Control
e. Enhanced Security for Cloud Services
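To make the plaintext-to-ciphertext transformation concrete, here is a deliberately simplified symmetric-cipher sketch: a toy XOR stream cipher whose keystream is derived from SHA-256. The key and message are invented, and this is for illustration only; real systems use vetted algorithms such as AES.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing the key with a counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared-secret"          # toy example key, NOT a real secret
plaintext = b"patient record #1234"
ciphertext = xor_cipher(key, plaintext)

assert ciphertext != plaintext                   # unreadable without the key
assert xor_cipher(key, ciphertext) == plaintext  # same key decrypts
```

The sketch shows the essential property of symmetric cryptography: without the shared key, the ciphertext is unintelligible; with it, the original plaintext is recovered exactly.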
Public Key Infrastructure (PKI) is a framework of encryption and cybersecurity procedures that protect the
integrity and confidentiality of data transferred between users over the internet. Main components of PKI
include a pair of keys (one public and one private), a certificate authority (CA), registration authority (RA).
• The public key is disclosed and accessible to everyone, used to encrypt messages and data. The private
key is kept secret by the user and is used to decrypt the messages encrypted with the public key.
• The certificate authority (CA) is the entity that issues and verifies digital certificates. The registration
authority (RA) verifies the identity of the users requesting information from the CA.
• A digital certificate is akin to a digital passport that provides proof of the identity of an individual or
organization online. Digital certificates are issued by certificate authorities and are a cornerstone of PKI.
6. Explain the role of a certificate authority (CA) in a PKI System.
Certificate authority (CA) issues an entity's certificate and acts as a trusted component within a private PKI.
A certificate is a digital document, signed by a CA, and used to prove the owner of a public key, within a PKI.
The certificate has a number of attributes, such as usage of the key, Client authentication, Server
authentication or Digital signature and the public key. The certificate also contains the subject name which is
information identifying the owner. Any certificate issued by the CA is trusted by all entities that trust the CA.
The exact role of a CA will depend on its position within a CA hierarchy.
Phishing is a fraud technique where a malicious actor sends messages impersonating a legitimate individual
or organization, usually via email or other messaging system.
• Email Phishing : Email is the most popular phishing medium. Scammers register fake domains that
impersonate real organizations and send thousands of requests to their targets.
• Spear Phishing : It works like common phishing attacks, using communications from a seemingly trusted
source to trick victims. However, a spear phishing attack targets a specific individual or set of individuals
rather than sending generic messages to many users in the hope that one falls for the trick.
Inter-tenant network segmentation is a security practice used in multi-tenant environments, such as cloud
services, to ensure that different tenants (users or organizations) sharing the same infrastructure are
isolated from each other. It prevents tenants from accessing each other's data and resources. This ensures
that each tenant's information is kept private and secure, reducing the risk of data breaches & unauthorized
access. Techniques such as virtual LANs (VLANs), virtual private networks (VPNs), and software-defined
networking (SDN) are commonly used to create separate network segments for each tenant.
• Enhanced Security : By isolating tenants, inter-tenant network segmentation protects against lateral
movement of threats within the shared infrastructure. If one tenant's network is compromised, the
attacker cannot easily move to another tenant's network.
• Access Control : Network policies and access control lists (ACLs) are applied to regulate traffic between
segments, ensuring that only authorized communications occur between tenants and shared resources.
• Efficient Use of Resources : It allows service providers to manage and allocate network resources more
effectively, ensuring fair usage and performance optimization for all tenants.
11. Describe the primary goal of storage isolation strategies in multi-tenant environments. [Ques. 10]
12. State the technique that is used for storage isolation other than data encryption.
• Backup and Recovery : Backing up data regularly is an important aspect of data protection, as it ensures
that data is preserved in the event of data loss or corruption.
• Access Control : It is a method to restrict access to sensitive information to only authorized users. This can
be achieved through the use of passwords, multi-factor authentication and role-based access control.
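The role-based access control mentioned above can be sketched as follows (the users, roles, and permissions are made-up examples):

```python
# RBAC: permissions attach to roles, and users are assigned roles.
ROLE_PERMISSIONS = {
    "admin":   {"read", "write", "delete"},
    "analyst": {"read"},
}
USER_ROLES = {
    "alice": "admin",
    "bob":   "analyst",
}

def is_allowed(user: str, action: str) -> bool:
    """Deny by default: unknown users or roles get no permissions."""
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("alice", "delete")
assert is_allowed("bob", "read")
assert not is_allowed("bob", "write")
assert not is_allowed("mallory", "read")  # unknown users are denied
```

Granting permissions to roles rather than individuals keeps the policy small and auditable: changing a user's access is a one-line role reassignment.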
13. Describe multi-tenancy in cloud computing.
In cloud computing, multitenancy means that multiple customers of a cloud vendor are using the same
computing resources. Despite the fact that they share resources, cloud customers are not aware of each other,
and their data is kept totally separate. Multitenancy is a crucial component of cloud computing; without it,
cloud services would be far less practical. Multitenant architecture is a feature in many types of public cloud
computing, including IaaS, PaaS, SaaS, containers, and serverless computing.
Virtualization is a technology used to create virtual representations of servers, storage, networks, and
other physical machines. Virtualization software mimics the functions of physical hardware to run multiple
virtual machines simultaneously on a single physical machine. Organizations can implement virtualization to
use hardware resources efficiently and get greater returns on their investment. It also powers cloud
computing services that help organizations manage infrastructure more efficiently.
Data Isolation is a fundamental principle in data security and privacy, ensuring that data from different users,
applications, or tenants is kept separate and inaccessible to unauthorized parties.
• Access Controls : Data isolation employs strict access control mechanisms such as role-based access
control (RBAC) and access control lists (ACLs) to ensure that only authorized users can access specific
data sets. This minimizes the risk of unauthorized access.
• Confidentiality : Data isolation maintains the confidentiality of sensitive information by keeping it
separate from other data. This is particularly important in multi-tenant environments where multiple
clients share the same infrastructure.
Data Minimization is a principle in data security and privacy that involves collecting, processing, and
retaining only the minimum amount of personal data necessary for a specific purpose.
• Limited Data Collection : By collecting only the data that is strictly necessary, organizations reduce the
amount of sensitive information that could potentially be exposed in a data breach. This minimizes the
risk of unauthorized access to personal information.
• Efficient Data Handling : Managing smaller datasets reduces the complexity and cost of data storage and
processing. It also simplifies data governance and enhances the efficiency of data management practices.
17. State the main advantages of using secure isolation strategies. [Ques. 16]
• Access Policies : Implement robust access control mechanisms, such as role-based access control (RBAC)
and multi-factor authentication (MFA), to ensure only authorized users can access isolated environments.
• Network Segmentation: Use network segmentation techniques, such as VLANs and subnets, to isolate
network traffic and prevent unauthorized access between segments.
• Continuous Monitoring: Implement continuous monitoring solutions to detect and respond to security
incidents promptly. This includes monitoring access logs, network traffic, and system performance.
• Physical Infrastructure : Secure isolation of physical infrastructure involves physically separating and
securing hardware components, such as servers, networking equipment, and storage devices. This is
achieved through measures like physically segregating servers in locked racks or cages, using biometric or
card-based access controls, and employing surveillance systems to monitor physical access.
• Logical Infrastructure : Secure isolation of logical infrastructure refers to the separation of virtualized or
logical components, such as virtual machines (VMs), containers, networks, and databases. Techniques
include using hypervisors to create isolated VMs, implementing software-defined networking (SDN) for
network segmentation, employing access controls and encryption to protect data within logical partitions.
Compute isolation involves segregating computing resources such as servers, virtual machines (VMs), and
containers to ensure that workloads and applications run independently and securely.
• Ensures that workloads running on shared physical servers do not interfere with each other.
• Prevents unauthorized access to sensitive computing resources and data.
• Techniques : Virtualization and Containerization
Network isolation involves partitioning or segmenting network traffic to prevent unauthorized access
between different segments or entities within a network.
• Protects against unauthorized access and lateral movement within the network.
• Ensures that only authorized users and devices can communicate with specific network segments.
• Techniques : VLANs (Virtual LANs) and Firewalls and Access Control Lists (ACLs)
22. State secure isolation strategies for network resources. [Ques. 22]
Storage isolation involves separating and securing data stored within storage systems to prevent
unauthorized access and ensure data confidentiality and integrity.
• Ensures that data is accessible only to authorized users or systems.
• Protects against data breaches and theft by encrypting stored data and enforcing strict access controls.
• Techniques : Logical Partitions and Encryption
To secure isolation, the following components of subjects (users, applications, or entities accessing
resources) are crucial:
• Identity : The unique identifier associated with each subject, such as a username, user ID, or application
ID. It ensures that each subject can be distinctly identified and managed, preventing unauthorized access.
Identity management systems help enforce policies and track activities related to each subject.
• Access Permissions : The specific rights and privileges assigned to a subject, determining what actions
they can perform on particular resources. Access permissions, managed through mechanisms like role-
based access control (RBAC) and access control lists (ACLs), ensure that subjects can only interact with
resources they are explicitly authorized to use.
• Authentication Credentials : The means by which a subject proves their identity, such as passwords,
biometric data, or cryptographic keys. Strong authentication mechanisms, including multi-factor
authentication (MFA), ensure that only verified subjects can access isolated environments. This adds a
layer of security, preventing unauthorized entities from gaining access.
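As an illustration of one-time-password credentials used in MFA, the following sketch follows the standard HOTP/TOTP construction (RFC 4226/6238) using only Python's standard library; the shared secret is a placeholder:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    # HMAC the current 30-second time-step counter with the shared secret.
    counter = struct.pack(">Q", int(at) // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"per-user-shared-secret"  # placeholder; provisioned per user in practice
code = totp(secret, time.time())
assert len(code) == 6 and code.isdigit()
```

Each 30-second window yields one 6-digit code, so a stolen code is useless once the window closes; servers typically verify with a small tolerance for clock skew.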
26. Compare secure isolation strategies for compute, network, and storage. [Ques. 20, 21, 23]
27. Explain the need for secure isolation of storage. [Ques. 23]
Proactive monitoring involves continuously identifying potential issues within IT infrastructure and
applications before they escalate into significant challenges. This approach allows businesses to address
problems before they cause application crashes or performance degradation. By resolving issues early,
proactive monitoring ensures smoother operations and enhances user experience.
32. Categorize the types of incidents that require immediate incident response.
A security incident, or security event, is any digital or physical breach that threatens the confidentiality,
integrity, or availability of an organization’s information systems or sensitive data. Security incidents can
range from intentional cyberattacks by hackers or unauthorized users, to unintentional violations of security
policy by legitimate authorized users. Some of the most common security incidents include - Ransomware,
Phishing and social engineering, DDoS attacks, Supply chain attacks, Insider threats, etc.
Indications of unauthorized access and malicious traffic can be classified into several categories based on the
nature of the anomalies and behaviours observed :
Unusual Account Activity :
- Multiple failed login attempts, logins from unusual geographic locations, or logins at odd hours can
indicate unauthorized access attempts.
- Unexplained changes in user privileges or roles, such as a standard user gaining administrative rights
without proper authorization, can signal unauthorized activity.
Network Traffic Anomalies:
- Large or unexplained data transfers, especially to external or untrusted IP addresses, can indicate data
exfiltration or malicious communication.
- Use of uncommon or unexpected network protocols, especially those typically associated with malicious
activity (e.g., unauthorized use of remote desktop protocols), can indicate malicious traffic.
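A minimal sketch of detecting the "multiple failed login attempts" indicator from an authentication log (the log entries and threshold are invented):

```python
from collections import Counter

# Each entry: (username, success flag). In practice these come from auth logs.
log = [
    ("alice", True), ("bob", False), ("bob", False),
    ("bob", False), ("bob", False), ("carol", True),
]

THRESHOLD = 3  # flag accounts with this many failed attempts or more

failures = Counter(user for user, ok in log if not ok)
suspicious = [user for user, count in failures.items() if count >= THRESHOLD]

assert suspicious == ["bob"]
```

Real detection pipelines add time windows, source IPs, and geolocation to the same counting idea, and forward the flagged accounts to a SIEM for alerting.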
All incidents are events, but all events are not incidents.
• An event is an observed change to the normal behavior of a system, environment, process, workflow or
person. Examples: router ACLs were updated, firewall policy was pushed.
• An alert is a notification that a particular event (or series of events) has occurred, which is sent to
responsible parties for the purpose of spawning action. Examples: the events above sent to on-call
personnel.
• An incident is an event that negatively affects the confidentiality, integrity, and/or availability (CIA) of an
organization in a way that impacts the business. Examples: attacker posts company credentials online,
attacker steals customer credit card database, worm spreads through network.
36. Define encryption and how does it contribute to data protection.
Encryption is a form of data security in which information is converted to ciphertext. Only authorized people
who have the key can decipher the code and access the original plaintext information. Encryption not only
ensures the confidentiality of data or messages but it also provides authentication and integrity, proving that
the underlying data or messages have not been altered in any way from their original state. Encryption
ensures - Privacy and security, Regulations, Secure internet browsing, Safety of sensitive data.
Data redaction is a data masking technique that enables you to mask (redact) data by removing or
substituting all or part of the field value. This helps protect sensitive personally identifying data. When
designing a data privacy strategy, data redaction is often considered as a first step. This entails reviewing
your sensitive data, and determining :
• What sensitive data should be de-identified with redaction?
• Which redaction technique should be used (full, partial, lookup)?
• Once redacted, does this field still maintain the data value/utility for analysis downstream?
Example : when applying redaction to a Social Security Number (SSN), the result might be ‘XXX-XX-XXXX.’
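The full and partial redaction techniques above can be sketched with a regular expression (the SSN pattern and sample record are illustrative):

```python
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_ssn(text: str, partial: bool = False) -> str:
    # Full redaction replaces the whole value; partial keeps the last 4 digits.
    if partial:
        return SSN.sub(lambda m: "XXX-XX-" + m.group()[-4:], text)
    return SSN.sub("XXX-XX-XXXX", text)

record = "Applicant SSN: 123-45-6789"
assert redact_ssn(record) == "Applicant SSN: XXX-XX-XXXX"
assert redact_ssn(record, partial=True) == "Applicant SSN: XXX-XX-6789"
```

Note that, unlike encryption, redaction is irreversible: once the digits are replaced, the original value cannot be recovered from the document.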
Tokenization is the process of exchanging sensitive data for non-sensitive data called “tokens” that can be
used in a database or internal system without bringing it into scope. A token is a piece of data that stands in
for another, more valuable piece of information. Tokens have virtually no value on their own; they are only
useful because they represent something valuable, such as a credit card primary account number (PAN) or
Social Security number (SSN).
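A minimal sketch of a token vault (the class name, token format, and sample PAN are invented; production tokenization relies on a hardened, access-controlled vault service):

```python
import secrets

class TokenVault:
    """In-memory token vault: real values never leave the vault."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, value: str) -> str:
        # The token is random, so it carries no information about the value.
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # Only systems with vault access can recover the original value.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")  # a sample card PAN
assert token.startswith("tok_")
assert "4111" not in token                     # token reveals nothing
assert vault.detokenize(token) == "4111-1111-1111-1111"
```

Downstream systems store and pass around only the token, keeping databases and logs out of PCI scope while the vault alone holds the mapping back to the PAN.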
Obfuscation is the process of making code difficult to understand or reverse-engineer by transforming it into
a complex, hard-to-read form. Encryption, on the other hand, is the process of converting software code into
ciphertext using an encryption algorithm and a key. The primary differences are :
• Confidentiality : Encryption provides confidentiality for sensitive information by converting code into
ciphertext, making it unreadable to anyone who does not have the decryption key. Obfuscation, on the
other hand, does not provide confidentiality, as the code remains in a readable form, just more difficult to
understand.
• Tamper protection : Encryption can detect if the encrypted data has been altered, as the decryption
process will fail if the ciphertext has been tampered with. Obfuscation does not provide tamper
protection, as the code remains in a readable form and can be easily modified by an attacker.
• Security Strength : Encryption uses mathematical algorithms and keys to encrypt and decrypt data, which
provide a strong level of security. The strength of encryption depends on the strength of the algorithm
and key length used. Obfuscation, on the other hand, relies on making code more complex and difficult to
understand, but does not provide a mathematical guarantee of security.
Data archiving is the process of identifying and moving data that is no longer relevant for daily operations
from primary storage to secondary storage for long-term retention.
• Automated Archiving : Implementing automated systems that archive data based on predefined schedules
(e.g., monthly, quarterly). This ensures regular and consistent archiving without manual intervention.
• Manual Archiving : User-Initiated Archiving allows tenants or administrators to manually select and
archive data. This procedure is useful when specific data needs to be archived due to operational needs or
compliance requirements.
• Ad-Hoc Archiving : Manually archiving data on an as-needed basis, typically driven by specific events such
as the end of a project, legal requirements, or organizational policies.
• Tiered Storage Solutions : Moving archived data to low-cost, high-capacity storage solutions designed for
infrequent access, such as magnetic tapes, optical disks, or cloud-based cold storage services. This helps
reduce storage costs while ensuring data is preserved.
• Data Deduplication : Implementing data deduplication techniques to eliminate duplicate copies of
repeating data, reducing the storage footprint of archived data.
• Compression: Compressing data before archiving to save storage space. Compression techniques reduce
the size of data files, making long-term storage more cost-effective.
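Deduplication and compression can be sketched together with a content-addressed store. The `ArchiveStore` class below is a hypothetical in-memory illustration in which identical content hashes to the same key and is stored once, compressed:

```python
import hashlib, zlib

class ArchiveStore:
    """Sketch of content-addressed archiving: deduplicate by SHA-256 digest,
    compress before storing. Illustrative, in-memory only."""

    def __init__(self):
        self._chunks = {}  # digest -> compressed bytes

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self._chunks:           # duplicate content stored once
            self._chunks[digest] = zlib.compress(data, level=9)
        return digest

    def get(self, digest: str) -> bytes:
        return zlib.decompress(self._chunks[digest])

store = ArchiveStore()
d1 = store.put(b"quarterly report " * 100)
d2 = store.put(b"quarterly report " * 100)  # identical content, same digest
assert d1 == d2 and len(store._chunks) == 1  # deduplicated
assert store.get(d1) == b"quarterly report " * 100
```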
• Data Encryption : All data within a network should be fully encrypted; this ensures that would-be
cybercriminals are unable to decipher the data in the event of a data breach. For data within a network to
be fully secured, all data states should be encrypted; failure to encrypt all data states leaves it vulnerable
to theft or corruption.
• Data Backup : Backing up data to the cloud is one of the best ways to guard against data loss. Cloud data
backup should be done on a frequent and regular basis; this is especially important for mission-critical
data whose loss or corruption can severely hinder normal business processes and operations.
• Password Protection : Password control is the primary line of defense in safeguarding the data within
your network. Sensitive information should be password protected such that only users who know the
password can access the data.
• Identity and Access Management : One of the major ways to secure data is to regulate the users that have
access to a network. Access to a network should only be granted to individuals who need the relevant data
to carry out their job duties; access should be terminated as soon as the data in the network is no longer
needed.
• Intrusion Detection and Intrusion Prevention Systems : Part of keeping your data secure is monitoring
and regulating the traffic in and out of your network. Prompt identification of network threats allow for
necessary measures to be implemented before any significant data corruption or data loss occurs.
Intrusion detection and prevention software are applications that constantly monitor network traffic for
well-known threats.
5. Write a report on data retention policies and identify their key elements in the implementation phase.
Data retention refers to the practice of storing data for a specific period of time. Proper data retention is
important for organizations to ensure that they have access to the data they need to operate effectively, while
also complying with any legal or regulatory requirements.
A data retention policy is a set of guidelines that an organization follows for retaining and disposing of data,
based on regulatory requirements and internal needs.
• Types of data to be retained : The policy should specify what types of data are to be retained, such as
financial, legal, health, or personal data.
• Retention periods : The policy should outline the retention periods for each type of data, based on
regulatory requirements and business needs. The retention period should be long enough to meet the
business requirements and regulatory obligations but not longer than necessary to avoid any unnecessary
data storage.
• Storage location : The policy should specify where the data should be stored, whether on-premises, in the
cloud, or in a hybrid storage environment.
• Access controls : The policy should specify who has access to the data and the procedures for accessing it.
This should include guidelines for how data is accessed, who can access it, and when access is granted.
• Data destruction : The policy should specify how data is destroyed at the end of its retention period. This
should include guidelines for securely deleting the data or disposing of physical media.
• Record-keeping : The policy should outline procedures for keeping records of data retention and
destruction. This includes details about who is responsible for the data, when it was created, and when it
was destroyed.
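A policy's retention periods can be enforced mechanically. The sketch below assumes hypothetical category names and retention periods; real values come from the applicable regulations and business requirements:

```python
from datetime import datetime, timedelta

# Hypothetical retention periods per data category (illustrative values)
RETENTION = {
    "financial": timedelta(days=7 * 365),
    "health":    timedelta(days=10 * 365),
    "personal":  timedelta(days=2 * 365),
}

def is_due_for_destruction(category: str, created: datetime, now: datetime) -> bool:
    """True once a record has outlived its category's retention period."""
    return now - created > RETENTION[category]

now = datetime(2025, 1, 1)
assert is_due_for_destruction("personal", datetime(2020, 1, 1), now)       # past 2 years
assert not is_due_for_destruction("financial", datetime(2020, 1, 1), now)  # within 7 years
```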
6. State the importance of PKI in key management policy. Justify it with proper application.
Public key infrastructure (PKI) is the framework that governs the issuance of digital certificates. It helps to
protect sensitive data and gives unique identities to users and systems, thereby ensuring secure
communications. PKI uses a pair of keys, a public key and a private key, to achieve security. Because public
keys are exposed to attack, an intact infrastructure is needed to manage them.
Managing Keys in the Cryptosystem : The security of a cryptosystem relies on its keys. Thus, it is important
that we have a solid key management system in place.
8. How do you assure the deletion of tenant data? Explain with an example.
Assuring the complete and secure deletion of tenant data involves implementing robust procedures and
technologies to ensure that data is irretrievable after deletion.
• Physical Deletion : Completely removing data from storage media, making it unrecoverable. This involves
overwriting the data or using specialized tools.
• Cryptographic Erasure : Deleting the encryption keys that protect the data, rendering the data
inaccessible and unusable.
Some methods to assure data deletion are :
• Overwriting : Writing over the storage space with random data multiple times to ensure the original data
cannot be recovered. Standards such as DoD 5220.22-M recommend specific patterns for overwriting.
• Secure Erase Commands : Using built-in commands in storage devices (e.g., ATA Secure Erase) to securely
erase all data.
• Destruction of Physical Media : Physically destroying the storage media (e.g., shredding hard drives or
burning tapes) to ensure data cannot be recovered.
Example Process Flow:
- Tenant Request : Tenant submits a deletion request.
- Data Identification : Service provider identifies all tenant data.
- Secure Deletion : Data is securely deleted using overwriting and secure erase methods.
- Verification : Service provider verifies data deletion.
- Audit and Notification : Audit trail is generated, and tenant is notified.
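The overwriting step above can be sketched in Python. Note the caveat in the comments: on SSDs and journaling filesystems, file-level overwriting is not a guarantee, so this illustrates the idea rather than a certified sanitization procedure:

```python
import os, secrets, tempfile

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random data before unlinking it.
    Sketch only: on SSDs and journaling filesystems, in-place overwriting
    does not guarantee erasure; prefer device-level secure erase or
    cryptographic erasure in practice."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # push each pass to the device
    os.remove(path)

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"tenant data to be purged")
    path = tmp.name
overwrite_and_delete(path)
assert not os.path.exists(path)
```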
• Data Breach Prevention : Implementing robust data protection measures reduces the risk of data
breaches, which can result in significant financial losses, reputational damage, and legal consequences.
• Threat Reduction : Protecting data helps mitigate various security threats such as malware, ransomware,
and insider threats, ensuring the integrity and availability of critical information.
• Disaster Recovery : Effective data protection strategies include regular backups and disaster recovery
plans, ensuring that data can be quickly restored in the event of a hardware failure, cyber-attack, or
natural disaster.
• Data Quality : Protecting data ensures its accuracy, completeness, and reliability, which is crucial for
informed decision-making and strategic planning.
• Efficient Use of Resources : Implementing data protection practices can streamline data management
processes, reducing redundancy and improving overall efficiency.
• Cost Avoidance : Data breaches and security incidents can be extremely costly due to fines, remediation
costs, and lost business. Effective data protection helps avoid these expenses.
• Phishing : Phishing involves stealing data, such as a user's password, that an attacker can use to break into
a network. Attackers gain access to this data by tricking the victim into revealing it. Phishing remains one
of the most commonly used attack vectors - many ransomware attacks, for instance, start with a phishing
campaign against the victim organization.
• Email attachments : One of the most common attack vectors, email attachments can contain malicious
code that executes after a user opens the file.
• Account takeover : Attackers can use a number of different methods to take over a legitimate user's
account. They can steal a user's credentials (username and password) via phishing attack, brute force
attack, or purchasing them on the underground market. Attackers can also try to intercept and use a
session cookie to impersonate the user to a web application.
• Lack of encryption : Unencrypted data can be viewed by anyone who has access to it. It can be intercepted
in transit between networks, as in an on-path attack, or simply viewed inadvertently by an intermediary
along the network path.
• Insider threats : An insider threat is when a known and trusted user accesses and distributes confidential
data, or enables an attacker to do the same. Such occurrences can be either intentional or accidental on
the part of the user. External attackers can try to create insider threats by contacting insiders directly and
asking, bribing, tricking, or threatening them into providing access. Sometimes malicious insiders act of
their own accord, out of dissatisfaction with their organization or for some other reason.
11. Explain some data protection measures for ensuring confidentiality and integrity in cybersecurity.
• Role-Based Access Control (RBAC) : Restricting access to data based on the roles and responsibilities of
users within the organization. Only authorized personnel can access specific data sets.
• Multi-Factor Authentication (MFA) : Adding an extra layer of security by requiring multiple forms of
verification (e.g., password and fingerprint) before granting access to sensitive data.
• Data Masking : Obscuring sensitive data with random characters, making it accessible only to those with
the proper authorization while maintaining its usability for authorized operations.
• Tokenization : Replacing sensitive data elements with non-sensitive equivalents (tokens) that can be
mapped back to the original data only by those with access to the tokenization system.
• Digital Signatures : Attaching a digital signature to data to verify the authenticity and integrity of the data.
Digital signatures use public-key cryptography to ensure that data has not been altered.
• Firewalls : Implementing firewalls to control incoming and outgoing network traffic based on
predetermined security rules, preventing unauthorized access to the network.
• Intrusion Detection and Prevention Systems (IDPS) : Monitoring network traffic for suspicious activity
and taking action to prevent potential threats.
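Data masking, for instance, can be as simple as replacing all but the last four digits of a card number; the function below is a minimal illustration:

```python
import re

def mask_pan(pan: str) -> str:
    """Mask all but the last four digits of a card number (toy example)."""
    digits = re.sub(r"\D", "", pan)          # drop spaces and dashes
    return "*" * (len(digits) - 4) + digits[-4:]

assert mask_pan("4111 1111 1111 1111") == "************1111"
```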
The cloud data lifecycle is a dynamic process encompassing data creation, management, and utilization
within cloud computing environments. Different Phases of Cloud Data Lifecycle are :
• Data Creation : Data is gathered from various sources, including sensors, devices, applications, human
interactions, social media posts, and IoT temperature readings. Data creation is the first step in turning
raw information into actionable knowledge.
• Data Storage : Data finds a secure place in cloud storage infrastructure, encompassing object, block, and
database storage options. This stage is pivotal in the data lifecycle, where data is protected and prepared
for future use and retrieval.
• Data Processing : Processing transforms data into a usable format, making tasks like data analysis,
machine learning, and artificial intelligence applications feasible. This process converts unstructured data
into structured data that can power automation and insights.
• Data Sharing : Data is accessible to approved users and programs for collaboration and utilization. This
phase entails securely distributing it inside a restricted framework to ensure only authorized parties can
interact with the data.
• Data Archival : Data is archived to meet compliance requirements and to ensure long-term storage.
Archiving ensures that previous data is still available when needed and reduces storage expenses for data
utilized less often. It is essential for ensuring data integrity and compliance with legal requirements.
• Data Deletion : When data no longer serves a purpose or when required by laws and regulations, it is
destroyed. Information must be securely deleted or removed at this step to ensure that data cannot be
accessed or retrieved.
13. Explain about VLANs that can be used for inter-tenant network segmentation.
A VLAN (Virtual Local Area Network) is a logical grouping of devices that are all connected to the same
network regardless of physical location. VLANs are an essential component of contemporary networking,
allowing network traffic to be segmented and managed.
VLANs enable logical partitioning inside a single switch, resulting in multiple virtual local area networks
where physical switch segmentation is not a possibility. These partitions enable the division of a large
network into smaller, more manageable broadcast domains, thereby improving network security, efficiency,
and flexibility.
VLANs prevent unauthorized access and associated security issues by isolating guest devices from the
internal network. This isolation guarantees that visitors have access to the resources they require while
safeguarding the internal network’s integrity and security.
• Encryption at the Storage Level : Encrypting data at rest ensures that stored data is unreadable without
the appropriate decryption keys. This means that even if unauthorized individuals access the physical
storage media, they cannot read the encrypted data.
• Tenant-Specific Encryption Keys : In multi-tenant environments, each tenant can be assigned unique
encryption keys. This ensures that data belonging to one tenant cannot be decrypted and accessed by
another tenant, even if the data resides on the same physical storage.
• Encryption Key Management : Secure and effective management of encryption keys, including key
generation, distribution, and rotation, ensures that only authorized users can decrypt and access the data.
• Unauthorized Physical Access : In the event of unauthorized physical access to storage devices, encryption
ensures that data remains protected. The encrypted data is rendered useless without the decryption keys.
• Multi-Tenancy Security : In cloud environments or shared data centers, encryption is crucial for
maintaining tenant isolation. It prevents cross-tenant data leakage and ensures that each tenant’s data
remains secure and isolated.
15. Explain the role of multi-tenancy that improve resource utilization in cloud environments.
In cloud computing, multitenancy means that multiple customers of a cloud vendor are using the same
computing resources. Despite the fact that they share resources, cloud customers are not aware of each other,
and their data is kept totally separate. Multitenancy is a crucial component of cloud computing; without it,
cloud services would be far less practical. Multitenant architecture is a feature in many types of public cloud
computing, including IaaS, PaaS, SaaS, containers, and serverless computing.
In cloud computing, applications and data are hosted in remote servers in various data centers and accessed
over the Internet. Data and applications are centralized in the cloud instead of being located on individual
client devices (like laptops or smartphones) or in servers within an organization. Many modern applications
are cloud-based, which is why, for example, a user can access their Facebook account and upload content
from multiple devices.
• Better use of resources : One machine reserved for one tenant is not efficient, as that one tenant is not
likely to use all of the machine's computing power. By sharing machines among multiple tenants, use of
available resources is maximized.
• Lower costs : With multiple customers sharing resources, a cloud vendor can offer their services to many
customers at a much lower cost than if each customer required their own dedicated infrastructure.
Server virtualization is the practice of using software to partition a physical server into several distinct
and isolated virtual servers. Each virtual server can independently run its own operating system. Server
virtualization hides not only the number and identity of individual physical servers, CPUs, and operating
systems, but also the underlying server resources. Cloud computing and hybrid clouds are built on it.
17. Explain the concept of multi-tenancy in cloud computing and how it differs from single-tenancy
architectures. [Ques. 15]
18. Compare and contrast server virtualization and network-virtualization, highlighting their respective
benefits and use cases.
21. Identify components subject to secure isolation. [Short Ques. 18, 25]
22. Identify the common attack vectors and threats. [Ques. 10]
23. Describe the common threat mitigation techniques. (How to reduce threat severity)
• Phishing : Phishing happens when people with malicious motives send fraudulent communications to
users with the intent of obtaining sensitive information such as credit card and login details or of
installing malware. Users should analyse emails thoroughly, hover over the links they contain, and check
whether each link redirects to a genuine website.
• DDoS Attacks : A Denial-of-Service attack aims at interfering and compromising network availability.
Having efficient DDoS mitigation services in place can help defeat such attacks and with regular testing,
the mitigation can work as planned.
• Malware : Malware is a collective term for different types of malicious software, such as ransomware,
which blocks access to key components of the network; spyware, which covertly gathers sensitive
information by transmitting data from the hard drive; and various viruses that disrupt certain
components and affect the system. The best way to prevent this is to run the latest version of
anti-malware software on all devices to seek out and destroy malicious programs.
• SQL injection : SQL injection exploits vulnerable websites to gain access to stored data. The
attacker may also run DELETE and DROP queries to destroy data. The threat can be detected manually
through targeted testing and mitigated with strong validation of every entry point in the application.
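The validation point can be made concrete with parameterized queries, the standard defense against SQL injection. The sketch below uses Python's built-in sqlite3 module and a throwaway in-memory table to contrast a vulnerable string-built query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # VULNERABLE: string concatenation lets the input rewrite the query
    return conn.execute("SELECT role FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Parameterized query: input is bound as data, never parsed as SQL
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
assert find_user_unsafe(payload) == [("admin",)]  # injection leaks every row
assert find_user_safe(payload) == []              # payload matches nothing
```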
24. Describe the common attack vectors and threats. [Ques. 10]
25. Differentiate secure isolation of physical and logical infrastructure. [Ques. 18, 19]
26. Distinguish Secure isolation strategies from common threats mitigation techniques. [Ques. 23]
Network isolation involves partitioning or segmenting network traffic to prevent unauthorized access
between different segments or entities within a network.
- It protects against unauthorized access and lateral movement within the network.
- Ensures that only authorized users and devices can communicate with specific network segments.
- Techniques :
• Virtual LANs (VLANs) : Creating separate broadcast domains within a single physical network
infrastructure. VLANs segment the network logically, ensuring that devices on one VLAN cannot
communicate directly with devices on another VLAN without proper routing.
• Network Segmentation : Dividing the network into smaller, isolated segments or subnets, each with
its own security policies and controls. This limits the scope of access and reduces the potential
impact of security breaches.
• Firewalls : Deploying firewalls to enforce strict access control policies between network segments.
Firewalls can filter traffic based on IP addresses, port numbers, protocols, and application-level
information.
• Secure Tunnels : Establishing VPNs to create secure, encrypted tunnels over shared or public
networks. VPNs provide isolated communication channels for remote users and sites, ensuring data
confidentiality and integrity.
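A first-match-wins rule table is the core of the firewall technique above. The sketch below shows default-deny filtering between segments; the rule set and addresses are invented for illustration:

```python
import ipaddress

# Hypothetical ordered rule set: first match wins, default deny
RULES = [
    {"action": "allow", "src": "10.0.1.0/24", "port": 443},  # web tier -> HTTPS
    {"action": "deny",  "src": "10.0.2.0/24", "port": 22},   # guest VLAN blocked from SSH
    {"action": "allow", "src": "10.0.0.0/16", "port": 22},   # internal hosts allowed SSH
]

def decide(src_ip: str, port: int) -> str:
    addr = ipaddress.ip_address(src_ip)
    for rule in RULES:
        if addr in ipaddress.ip_network(rule["src"]) and port == rule["port"]:
            return rule["action"]
    return "deny"  # default deny between segments

assert decide("10.0.1.5", 443) == "allow"
assert decide("10.0.2.9", 22) == "deny"       # earlier deny shadows the broad allow
assert decide("192.168.1.1", 443) == "deny"   # unknown source: default deny
```

Ordering matters: the guest-VLAN deny must precede the broader internal allow, mirroring how real firewalls evaluate rules top-down.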
28. Distinguish between abuse of system privileges and misuse of system resources.
Incident response is the strategic, organized response an organization mounts following a cyberattack.
The response is executed according to planned procedures that seek to limit damage and repair breached
vulnerabilities in systems. Professionals use incident response plans to manage security incidents. Having a
clearly defined incident response plan limits attack damage, lowers costs, and saves time after a security breach.
• Preparation : Incident response plans ensure that organizations are prepared to handle cybersecurity
incidents effectively. They include predefined roles and responsibilities for incident response team
members, ensuring clarity and swift action when an incident occurs.
• Early Detection : Incident response plans define methods and tools for early detection of cybersecurity
incidents. This includes monitoring systems, network traffic, and logs to identify unusual or suspicious
activities that may indicate an incident.
• Immediate Response : Once an incident is detected, the plan outlines immediate response actions to
contain the incident, minimize its impact, and prevent further escalation. This can involve isolating
affected systems, blocking malicious activity, or shutting down compromised services temporarily.
• Mitigation Strategies : Incident response plans include strategies to mitigate the effects of an incident.
This might involve restoring data from backups, patching vulnerabilities, or implementing additional
security controls to prevent similar incidents in the future.
• Recovery Planning : They also outline steps for recovery, ensuring that affected systems and services are
restored to normal operation as quickly as possible while ensuring the integrity and security of the
environment.
30. Select appropriate tools and techniques for proactive activity monitoring.
Proactive monitoring involves continuously identifying potential issues within IT infrastructure and
applications before they escalate into significant challenges. This approach allows businesses to address
problems before they cause application crashes or performance degradation. By resolving issues early,
proactive monitoring ensures smoother operations and enhances user experience. It involves using a
combination of tools and techniques to detect, prevent, and respond to potential security threats before they
can cause significant damage.
• Intrusion Detection and Prevention Systems (IDPS) : Real-time monitoring of network traffic to detect and
prevent suspicious activities, signature-based detection, anomaly-based detection. Example – Suricata.
• Security Information and Event Management (SIEM) Systems : Aggregation and correlation of logs and
events from various sources, real-time alerting, advanced analytics, and reporting. Example – IBM QRadar
• Endpoint Detection and Response (EDR) : Continuous monitoring of endpoints for suspicious activities,
threat hunting, behavior analysis, automated response actions. Example - Microsoft Defender
• Vulnerability Management Tools : Regular scanning for vulnerabilities in systems and applications,
prioritization of vulnerabilities based on risk, automated patch management. Example - Rapid7
• User and Entity Behavior Analytics (UEBA) : Monitoring user and entity behavior for deviations from the
norm, leveraging ML to detect insider threats and compromised accounts. Example - Splunk UEBA
31. Analyze the benefits of integrating incidence response with threat intelligence. [Ques. 29]
32. Classify the types of events commonly monitored by security information and event management (SIEM) systems.
• Tools: Splunk, IBM QRadar, ArcSight, LogRhythm.
• Techniques: Aggregation and correlation of logs and events from various sources, real-time alerting,
advanced analytics, and reporting.
Ensuring that personal data are only accessed by authorized users—and for authorized purposes—requires
some method of tracking transactions and who has accessed the data and when. Automatic, tamper-proof
logging of transactions involving identity data is a best-practice method for enabling both institutional and
personal oversight of how these data are being used.
Any logs or audit data collected must comply with privacy and data protection requirements for the ID
system in question. At a minimum, logs should be :
- protected from unauthorized access (and have that use monitored);
- protected from unauthorized copying or exfiltration; and
- devoid of personal data
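Tamper-proof logging is often approximated with a hash chain, where each entry commits to the hash of the previous one; the following is a minimal sketch, not a production audit log:

```python
import hashlib, json

class HashChainLog:
    """Append-only log: each entry commits to the previous entry's hash,
    making after-the-fact modification detectable (sketch only)."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis value

    def append(self, event: dict) -> None:
        record = {"event": event, "prev": self._prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self._prev = record["hash"]
        self.entries.append(record)

    def verify(self) -> bool:
        prev = "0" * 64
        for rec in self.entries:
            body = {"event": rec["event"], "prev": rec["prev"]}
            body_hash = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != body_hash:
                return False
            prev = rec["hash"]
        return True

log = HashChainLog()
log.append({"user": "u42", "action": "read", "record": "id-7"})
log.append({"user": "u42", "action": "update", "record": "id-7"})
assert log.verify()
log.entries[0]["event"]["action"] = "delete"  # rewrite history
assert not log.verify()                       # chain breaks, tampering detected
```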
Poor user management can have significant adverse effects on system security, undermining the
confidentiality, integrity, and availability (CIA) triad of information security.
• Increased Risk of Unauthorized Access : Poor user management often leads to lax access controls, such as
weak passwords, excessive user privileges, or outdated access permissions. This increases the risk of
unauthorized access to sensitive data, systems, and resources by insiders or external attackers who
exploit poorly managed user accounts.
• Data Breaches and Information Leakage : Users with inappropriate access rights or credentials that are
not promptly revoked pose a risk of data breaches. Unauthorized users could potentially access
confidential information, leading to data leaks, intellectual property theft, or exposure of sensitive
customer data.
• Compromised System Integrity : Poorly managed user accounts may lead to compromised system
integrity due to the installation of unauthorized software or modification of critical system configurations.
Attackers could exploit compromised accounts to introduce malware, manipulate data, or disrupt system
operations, leading to system downtime or loss of data integrity.
35. Justify the need for security information and event management (SIEM) systems in modern cyber-
security operations. [Ques. 32]
36. Predict the challenges associated with ensuring quality of service (QoS) in security operations.
Quality of Service (QoS) refers to traffic control mechanisms that seek either to differentiate performance
based on application or network-operator requirements or to provide predictable or guaranteed performance
to applications, sessions, or traffic aggregates. QoS is commonly measured in terms of packet delay and
various kinds of packet loss. Some of the challenges associated with it are :
• Security Operation Centres (SOCs) often deal with an overwhelming number of security alerts and false
positives. The high volume of alerts can lead to alert fatigue, where critical threats might be overlooked or
ignored due to the sheer number of alerts that analysts must process.
• Ensuring seamless integration and interoperability among various security tools and technologies is
complex. Poor integration can result in data silos, incomplete threat visibility, and inefficiencies in
incident response processes, negatively affecting QoS.
• Managing and analyzing vast amounts of security data generated by different sources. Inadequate data
management can lead to missed indicators of compromise, slow decision-making, and ineffective threat
hunting, undermining the SOC’s effectiveness.
37. Summarize the role of identity management in enhancing cyber security posture.
Identity and Access Management (IAM) is instrumental in averting unauthorized access, a critical factor in
safeguarding sensitive data and systems. Implementing a robust IAM strategy is paramount in the digital age,
where cyber threats loom large. It mitigates the risks associated with data breaches and cyber-attacks by
ensuring that access to resources is strictly regulated. Through the implementation of access control policies
and Single Sign-On (SSO) technologies, IAM provides a secure and user-friendly environment. These
measures not only protect against external threats but also address internal vulnerabilities.
• Access Control : Identity management systems help ensure that only authorized users can access specific
resources and data. This reduces the risk of unauthorized access and potential data breaches, as access is
tightly controlled based on verified identities.
• Authentication : Identity management incorporates strong authentication mechanisms, such as multi-
factor authentication (MFA). Enhanced authentication methods reduce the likelihood of credential theft
and unauthorized access, significantly bolstering security.
• Role-Based Access Control (RBAC) : Identity management systems implement RBAC, where permissions
are assigned based on the user's role within the organization. This ensures that users have access only to
the resources necessary for their job functions, minimizing the risk of excessive privileges and potential
abuse.
• Lifecycle Management : Identity management handles the full lifecycle of user identities, from creation
and modification to deactivation. Automated lifecycle management ensures that access rights are
promptly updated as users change roles or leave the organization, reducing the risk of orphaned accounts
and unauthorized access.
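The RBAC and lifecycle points above can be sketched in a few lines; role names, users, and permission strings here are invented for illustration:

```python
# Hypothetical role -> permission mapping (names are illustrative)
ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "admin":   {"read:reports", "write:reports", "manage:users"},
}

USER_ROLES = {"dana": "analyst", "sam": "admin"}

def is_allowed(user: str, permission: str) -> bool:
    """Grant access only if the user's role carries the permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("dana", "read:reports")
assert not is_allowed("dana", "manage:users")   # least privilege
assert not is_allowed("eve", "read:reports")    # unknown user: deny by default

# Lifecycle management: deactivating the identity revokes access promptly
del USER_ROLES["sam"]
assert not is_allowed("sam", "write:reports")   # no orphaned access
```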
Cloud Firewalls are cloud-deployed, software-based security products that help prevent cyber-attacks. They
form a protective shield around the cloud assets, defending them from untrusted internet traffic. Cloud
assets include cloud platforms, data stored on clouds, infrastructure, and applications. Additionally,
cloud-based firewalls also protect the internal/private network and on-premises assets. Also known as
Firewall-as-a-Service (FWaaS), these security products can be offered by third-party vendors as a service.
• Firewalls filter incoming and outgoing network traffic to prevent unauthorized access and protect cloud
resources from attacks. By enforcing security policies, firewalls help mitigate threats such as denial-of-
service (DoS) attacks, port scanning, and intrusion attempts.
• Firewalls help segment the cloud environment into distinct security zones, isolating different workloads
and applications.
• Firewalls enforce access controls by allowing or denying traffic based on IP addresses, protocols, ports,
and other criteria.
• Firewalls provide detailed logging and monitoring of network traffic, offering visibility into potential
security incidents.
A honeypot is a security mechanism that creates a virtual trap to lure attackers. An intentionally
vulnerable computer system allows attackers to exploit it while defenders study their activities to
improve security policies. Honeypots are a type of deception technology that helps security teams
understand attacker behavior patterns. Teams can use honeypots to investigate cybersecurity breaches and collect intel on
how cybercriminals operate. They also reduce the risk of false positives, when compared to traditional
cybersecurity measures, because they are unlikely to attract legitimate activity.
A honeypot looks exactly like a genuine computer system. It has the applications and data that cyber
criminals use to identify an ideal target. A honeypot can pretend to be a system that contains sensitive
consumer data, such as credit card or personal identification information. The system can be populated with
decoy data that may draw in an attacker looking to steal and use or sell it. As the attacker breaks into the
honeypot, the IT team can observe how the attacker proceeds, taking note of the various techniques they
deploy and how the system’s defenses hold up or fail. This can then be used to strengthen the overall
defenses used to protect the network.