Unit-4 CC
Cloud security refers to protecting data stored online via cloud computing environments
(instead of data centers) from theft, deletion, and leakage. There are many protective
methods that help secure the cloud; these measures include access control, firewalls,
penetration testing, obfuscation, tokenization, virtual private networks (VPN), and not using
public internet connections.
When it comes to network security concerns, the cloud itself is not the issue – rather,
the challenge lies within the policies and technologies for security and control of that
technology. Put simply? Human error is one of the top reasons for data breaches in the
cloud. In fact, Gartner estimates that by 2022, at least 95 percent of cloud security failures
will be the customer’s fault due to misconfigurations and mismanagement.
Therefore, it is not an issue of whether or not the cloud is secure but if the customer is using
the cloud securely.
Too often, misconfigured cloud-based systems lead to data breaches. For instance, in
2019 Capital One was breached by a malicious actor who stole the sensitive data of more
than 100 million people, even though the attack did not follow traditional hacker patterns.
The breach was the result of a misconfigured open-source web application firewall (WAF),
which Capital One used in its operations hosted on Amazon Web Services. The
misconfigured WAF was permitted to list all the files in any AWS data buckets and read the
contents of each file. The misconfiguration allowed the intruder to trick the firewall into
relaying requests to a key back-end resource on AWS.
Once the breach happened, 100 million U.S. citizens were impacted and 140,000 Social
Security numbers and 80,000 bank account numbers were compromised. In total, the
breach cost Capital One roughly $150 million.
From April to May 2020, the Cloud Security Alliance (CSA) conducted a survey of
experienced cloud security architects, designers, and operators from large organizations to,
in part, determine the challenges of public cloud workloads in 2020. After surveying 200
respondents, they found that anticipated security challenges included:
Visibility
Data Privacy
IAM Procedures
Configuration Management
Compliance Requirements
At the same time, the diversity of production workloads in the public cloud was also
expected to increase in 2021, including the use of container platforms, function-as-a-
service/serverless approaches, and cloud provider services. Use of virtual machines was also
expected to increase.
Don’t just migrate to the cloud – prevent security threats by following these tips:
1. Understand what you’re responsible for – different cloud services require varying
levels of responsibility. For instance, while software-as-a-service (SaaS) providers ensure
that applications are protected and that data security is guaranteed, IaaS environments may
not have the same controls. To ensure security, cloud customers need to double check with
their IaaS providers to understand who’s in charge of each security control.
2. Control user access – a huge challenge for enterprises has been controlling who has
access to their cloud services. Too often, organizations accidentally expose their cloud
storage services publicly, despite warnings from cloud providers to avoid allowing storage
drive contents to be accessible to anyone with an internet connection. CSO advises that only load
balancers and bastion hosts should be exposed to the internet. Further, do not allow Secure
Shell (SSH) connections directly from the internet as this will allow anyone who finds the
server location to bypass the firewall and directly access the data. Instead, use your cloud
provider’s identity and access control tools while also knowing who has access to what data
and when. Identity and access control policies should grant the minimum set of privileges
needed and only grant other permissions as needed. Configure security groups to have the
narrowest focus possible and where possible, use reference security group IDs. Finally,
consider tools that let you set access controls based on user activity data.
3. Secure credentials – AWS access keys can be exposed on public websites, source code
repositories, unprotected Kubernetes dashboards, and other such platforms. Therefore, you
should create and regularly rotate keys for each external service while also restricting access
on the basis of IAM roles. Never use root user accounts – these accounts should only be
used for specific account and service management tasks. Further, disable any user accounts
that aren't being used to further limit potential paths that attackers can exploit.
4. Implement MFA – your security controls should be so rigorous that if one control fails,
other features keep the application, network, and data in the cloud safe. By tying MFA
(multi-factor authentication) to usernames and passwords, attackers have an even harder
time breaking in. Use MFA to limit access to management consoles, dashboards, and
privileged accounts.
5. Increase visibility – to see issues like unauthorized access attempts, turn on security
logging and monitoring once your cloud has been set up. Major cloud providers supply some
level of logging tools that can be used for change tracking, resource management, security
analysis, and compliance audits. (An illustrative sketch covering tips 3 through 5 follows this list.)
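To make tips 3 through 5 more concrete, here is a minimal, hedged sketch using the AWS SDK for Python (boto3). The user name, trail name, and bucket name are placeholders, and the calls only illustrate the general idea; real key rotation, MFA enforcement, and audit logging will be driven by your own policies and tooling.

```python
# Hedged, illustrative sketch only. Assumes boto3 credentials are configured;
# "example-user" and "example-trail-bucket" are placeholders.
from datetime import datetime, timezone, timedelta

import boto3

iam = boto3.client("iam")
cloudtrail = boto3.client("cloudtrail")

USER = "example-user"                  # placeholder IAM user name
TRAIL_BUCKET = "example-trail-bucket"  # placeholder S3 bucket for audit logs
MAX_KEY_AGE = timedelta(days=90)

# Tip 3: flag long-lived access keys so they can be rotated.
for key in iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]:
    age = datetime.now(timezone.utc) - key["CreateDate"]
    if age > MAX_KEY_AGE:
        print(f"Key {key['AccessKeyId']} is {age.days} days old - rotate it")
        # A real rotation would create a new key, switch consumers over, then
        # deactivate and finally delete the old key, e.g.:
        # iam.update_access_key(UserName=USER, AccessKeyId=key["AccessKeyId"],
        #                       Status="Inactive")

# Tip 4: verify that the user has an MFA device registered.
if not iam.list_mfa_devices(UserName=USER)["MFADevices"]:
    print(f"{USER} has no MFA device - enforce MFA before granting access")

# Tip 5: increase visibility by turning on CloudTrail logging.
cloudtrail.create_trail(Name="org-audit-trail", S3BucketName=TRAIL_BUCKET)
cloudtrail.start_logging(Name="org-audit-trail")
```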
A vulnerability is any mistake or weakness in a system's security procedures, design,
implementation, or internal controls that may violate the system's security policy.
A vulnerability assessment process may involve automated and manual techniques with
varying degrees of rigor and an emphasis on comprehensive coverage. Using a risk-based
approach, vulnerability assessments may target different technology layers, the most
common being host, network, and application-layer assessments.
Vulnerability assessments provide security teams and other stakeholders with the
information they need to analyze and prioritize potential remediation risks in the proper
context. Vulnerability assessments are a critical component of the vulnerability
management and IT risk management lifecycles, helping protect systems and data from
unauthorized access and data breaches.
Organizations of any size, or even individuals who face an increased risk of cyberattacks, can
benefit from some form of vulnerability assessment. Still, large enterprises and other
organizations subject to ongoing attacks will benefit most from vulnerability analysis.
Because security vulnerabilities can enable hackers to access IT systems and applications,
enterprises need to identify and remediate weaknesses before they can be exploited.
Vulnerability assessment applies various methods, tools, and scanners to identify weak
areas, threats, and risks. Everything depends on how well a given system's weaknesses can
be discovered in order to address that specific need. Below are the different types of
vulnerability assessment:
1. Network-based scans
These help identify possible network security attacks and zero in on vulnerable
systems on wired or wireless networks. (A minimal port-probe sketch follows this list of scan types.)
2. Host-based scans
Host-based scans are used to locate and identify vulnerabilities in servers, workstations or
other network hosts. This type of scan usually examines ports and services that may also be
visible to network-based scans. It also provides excellent visibility into the configuration
settings and patch history of scanned systems.
3. Application Scans
Application scans test websites to discover known software vulnerabilities. They also
identify security weaknesses in web applications and their source code through automated
scans of the front end or through static or dynamic source code analysis.
4. Database Scans
Database scans help identify weak points in a database in order to prevent malicious
attacks by cybercriminals. They also identify rogue databases or insecure environments and
classify sensitive data across an organization's infrastructure.
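As a concrete illustration of the simplest kind of network-based check, the sketch below probes a handful of TCP ports on a single host using only the Python standard library. The host name and port list are placeholders; real scanners such as Nmap or OpenVAS go far beyond this.

```python
# Minimal sketch of a network-based check: probe a few TCP ports on one host.
# "scanme.example.com" and the port list are placeholders for illustration.
import socket

HOST = "scanme.example.com"
PORTS = [21, 22, 23, 80, 443, 3389]  # a handful of commonly targeted ports

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in PORTS:
        state = "open" if port_is_open(HOST, port) else "closed/filtered"
        print(f"{HOST}:{port} -> {state}")
```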
A vulnerability assessment provides the following benefits:
o Remediation actions to close any gaps and protect sensitive systems and
information.
o Meet cybersecurity compliance and regulatory needs for areas like HIPAA and PCI
DSS.
o The process of locating and reporting the vulnerabilities provides a way to detect
and resolve security problems by ranking the vulnerabilities before someone or
something can exploit them.
o In this process, operating systems, application software, and networks are scanned to
identify vulnerabilities, including inappropriate software design, insecure
authentication, and similar weaknesses.
Below is the step-by-step vulnerability assessment process used to identify system
vulnerabilities.
1. Goals and Objectives: Define the goals and objectives of the vulnerability analysis.
2. Scope: While performing the assessment and test, the assignment's scope needs to
be clearly defined. The following three possible scopes exist:
Black Box Testing: A software testing method in which the functionality of software
applications is tested without knowledge of the internal code structure, implementation
details, or internal paths. Black box testing focuses mainly on the inputs and outputs of
software applications and is based entirely on software requirements and specifications. It
is also known as behavioral testing.
White Box Testing: A software testing technique in which the internal structure, design, and
coding of software are tested to verify the flow of inputs and outputs and to improve
design, usability, and security.
In white box testing, code is visible to testers, so it is also called clear box testing, open box
testing, transparent box testing, code-based testing, or glass box testing.
Grey Box Testing: A software testing technique used to test a software product or
application with partial knowledge of its internal structure. The purpose of grey box testing
is to search for and identify defects caused by improper code structure or improper use of
applications.
In this process, context-specific errors related to web systems are commonly identified. It
increases testing coverage by concentrating on all layers of a complex system.
Grey box testing is a combination of black box testing and white box testing.
Step 2) Test Execution: Run the assessment tools against the scoped systems. The traffic
they generate and observe travels as packets. A packet is the unit of data routed between
an origin and a destination. When any file, such as an e-mail message, an HTML file, or a
Uniform Resource Locator (URL) request, is sent from one place to another on the internet,
the TCP layer of TCP/IP divides the file into several "chunks" for efficient routing. Each of
these chunks is uniquely numbered and includes the internet address of the destination;
these chunks are called packets.
o When all the packets have arrived, they are reassembled into the original file by the
TCP layer at the receiving end while the assessment tools are running.
Step 3) Vulnerability Analysis: Now define and classify network or System resources and
assign priority to the resources (low, medium, high). Identify potential threats to each
resource and develop a strategy to deal with the most prioritized problems. Define and
implement ways to minimize the consequences if an attack occurs.
Step 4) Remediation: The vulnerability assessment results are used to patch key flaws or
problems, whether simply via a product update or through something more involved, from
installing new security tools to enhancing security procedures. In Step 3, we prioritized the
problems to ensure that the most urgent flaws are handled first. It is also worth noting that
some problems may have so little impact that they are not worth the cost and downtime
required for remediation.
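The prioritization idea behind Steps 3 and 4 can be sketched in a few lines: rank findings so the most severe, most exposed issues are remediated first. The findings and scores below are invented placeholders.

```python
# Sketch of Steps 3/4: rank findings so the most urgent flaws are fixed first.
# The findings and scores below are illustrative placeholders, not real data.
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: float         # e.g. a CVSS-like score from 0.0 to 10.0
    internet_exposed: bool   # externally reachable assets get priority

findings = [
    Finding("Outdated TLS configuration on internal app", 5.3, False),
    Finding("Unauthenticated admin console", 9.8, True),
    Finding("Missing OS patches on file server", 7.5, False),
]

# Sort by exposure first, then by severity, highest risk first.
for f in sorted(findings, key=lambda f: (f.internet_exposed, f.severity), reverse=True):
    print(f"{f.severity:>4} exposed={f.internet_exposed} {f.title}")
```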
1. Active Testing: In active testing, a tester introduces new test data and analyzes the
results. During the testing process, the testers build a mental model of the software,
which grows further during interaction with the software under test.
While doing the test, the tester actively works out new test cases and new ideas;
that is why it is called active testing.
2. Passive Testing: Passive testing monitors the results of running the software under
test without introducing new test cases or data.
3. Network Testing: Network testing is the process of measuring and recording the
current state of network operation over a period of time.
Testing is mainly done to predict how the network will operate under load or to find
the problems created by new services. The following network characteristics need
to be tested:
o Utilization levels
o Number of Users
o Application Utilization
4. Distributed Testing: Distributed tests are applied to distributed applications, that is,
applications that work with multiple clients simultaneously. Testing a distributed
application means testing its client and server parts separately, but with a
distributed testing method we can test them all together.
The test parts interact with each other during the test run, which keeps them
properly synchronized; synchronization is one of the most crucial points in
distributed testing.
Security in cloud computing is a major concern. Proxy and brokerage services should be
employed to restrict a client from accessing the shared data directly. Data in the cloud
should be stored in encrypted form.
Security Planning
Before deploying a particular resource to the cloud, one should need to analyze several
aspects of the resource, such as:
o Select the resource that needs to move to the cloud and analyze its sensitivity to risk.
o Consider cloud service models such as IaaS, PaaS, and SaaS. These models require the
customer to be responsible for security at different service levels.
o Understand the cloud service provider's system regarding data storage and its
transfer into and out of the cloud.
o The risk in cloud deployment mainly depends upon the service models and cloud
types.
Security Boundaries
The Cloud Security Alliance (CSA) stack model defines the boundaries between each service
model and shows how different functional units relate. A particular service model defines
the boundary between the service provider's responsibilities and those of the customer. The
following diagram shows the CSA stack model:
Key Points to CSA Model
o IaaS is the most basic level of service, with PaaS and SaaS the next two levels of
service above it.
o Moving upwards, each service inherits the capabilities and security concerns of the
model beneath.
o IaaS has the lowest integrated functionality and security level, while SaaS has the
highest.
o This model describes the security boundaries at which cloud service providers'
responsibilities end and customers' responsibilities begin.
o Any protection mechanism below the security boundary must be built into the system
and maintained by the customer.
Although each service model has a security mechanism, security requirements also depend
on where these services are located: in a private, public, hybrid, or community cloud.
Consideration must also be given to the following areas:
o access control
o audit trail
o certification
o authority
The service model should include security mechanisms working in all of the above areas.
Since the data stored in the cloud can be accessed from anywhere, we need to have a
mechanism to isolate the data and protect it from the client's direct access.
Brokered cloud storage access is a way of isolating storage from direct client access. In this
approach, two services are created:
1. A broker with full access to storage but no access to the client.
2. A proxy with no access to storage but with access to both the client and the broker.
When the client issues a request for data:
1. The client's data request goes to the external service interface of the proxy.
2. The proxy forwards the request to the broker.
3. The broker requests the data from the cloud storage system.
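The separation described above can be sketched as two small classes: a proxy that faces the client but holds no storage access, and a broker that can reach storage but is never exposed to the client. The class names and the in-memory "storage" are purely illustrative.

```python
# Illustrative sketch of brokered cloud storage access: the client only ever
# talks to the proxy, and only the broker can reach the storage system.
class CloudStorage:
    def __init__(self) -> None:
        self._objects = {"report.pdf": b"...encrypted bytes..."}  # placeholder data

    def get(self, key: str) -> bytes:
        return self._objects[key]

class Broker:
    """Has full access to storage, but is never exposed to the client."""
    def __init__(self, storage: CloudStorage) -> None:
        self._storage = storage

    def fetch(self, key: str) -> bytes:
        return self._storage.get(key)

class Proxy:
    """External service interface: reachable by the client, no storage access."""
    def __init__(self, broker: Broker) -> None:
        self._broker = broker

    def handle_client_request(self, key: str) -> bytes:
        # The proxy forwards the request to the broker, which queries storage.
        return self._broker.fetch(key)

storage = CloudStorage()
proxy = Proxy(Broker(storage))
print(proxy.handle_client_request("report.pdf"))
```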
Encryption
Encryption helps to protect the data from being hacked. It protects the data being
transferred and the data stored in the cloud. Although encryption helps protect data from
unauthorized access, it does not prevent data loss.
The difference between "cloud security" and "cloud security architecture" is that the former
is built from problem-specific measures while the latter is built around threats. A cloud
security architecture can reduce or eliminate the security holes that point-solution
approaches almost certainly leave behind.
It does this by building down: defining threats starting with the users, moving to the cloud
environment and service provider, and then to the applications. Cloud security architectures
can also reduce redundancy in security measures, which contributes to threat mitigation
and reduces both capital and operating costs.
The cloud security architecture also organizes security measures, making them more
consistent and easier to implement, particularly during cloud deployments and
redeployments. Security is often undermined when it is illogical or overly complex; such
flaws can be identified with a proper cloud security architecture.
Elements of cloud security architecture
The best way to approach cloud security architecture is to start with a description of the
goals. The architecture has to address three things: the attack surface represented by
external access interfaces, the protected asset set that represents the information being
protected, and the vectors designed to attack the system, directly or indirectly, anywhere,
including within the cloud itself.
The goal of the cloud security architecture is accomplished through a series of functional
elements. These elements are often considered separately rather than part of a coordinated
architectural plan. These elements include access security (access control), network security,
application security, contractual security, and monitoring, sometimes called service security.
Finally, there is data protection, which consists of measures implemented at the protected-asset level.
A complete cloud security architecture addresses the goals by unifying the functional
elements.
Security and the security architecture for the cloud are not single-player processes. Most
enterprises will keep a large portion of their IT workflow within their data centers, local
networks, and VPNs. The cloud adds additional players, so the cloud security architecture
should be part of a broader shared responsibility model.
This shared responsibility model divides the components of a cloud application into layers,
with the top layers being the responsibility of the customer and the lower layers being the
responsibility of the cloud provider. Each separate function or component of the application
is mapped to the appropriate layer depending on who provides it. The contract then
describes how each party responds.
The concept of identity management and access control in cloud computing covers most
areas of technology; access control is merging and aligning with other related activities.
Some of these are automated using single sign-on capabilities; others operate in a
standalone, segregated fashion.
IAM includes people, processes, and systems that manage access to enterprise resources.
This is achieved by ensuring that the identity of an entity is verified (who are they, can they
prove who they are) and then granting the correct level of access based on the assets,
services, and protected resources being accessed.
In line with best practice, one-time passwords should be utilized as a risk reduction and
mitigation technique.
The key phrases that form the basis and foundation for IAM in the enterprise include
provisioning and deprovisioning, centralized directory services, privileged user management,
and authorization and access management. Each is discussed in the following sections.
Provisioning and deprovisioning are critical aspects of identity management and access
control in cloud computing: think of setting up and removing users.
In the same way as you would set up an account for a user entering your organization who
requires access to resources, provisioning is the process of creating accounts that allow
users to access appropriate systems and resources within the cloud environment.
The ultimate goal of user provisioning is to standardize, streamline, and create an efficient
account creation process while creating a consistent, measurable, traceable, and auditable
framework for providing access to end-users.
Deprovisioning is the process whereby a user account is disabled when the user no longer
requires access to the cloud-based services and resources.
This is not just limited to a user leaving the organization but may also be due to a user
changing a role, function, or department.
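A hedged sketch of provisioning and deprovisioning against a generic user directory follows. The UserDirectory class and its group model are invented for illustration; a real implementation would call your identity provider's API (Active Directory, a cloud IAM service, an HR-driven IGA tool, and so on).

```python
# Illustrative provisioning/deprovisioning flow against an invented in-memory
# directory; real implementations would call the identity provider's API.
class UserDirectory:
    def __init__(self) -> None:
        self._users: dict[str, dict] = {}

    def provision(self, username: str, groups: list[str]) -> None:
        """Create the account and grant only the groups the role requires."""
        self._users[username] = {"groups": set(groups), "enabled": True}
        print(f"provisioned {username} with groups {sorted(groups)}")

    def change_role(self, username: str, groups: list[str]) -> None:
        """Role change: replace entitlements rather than accumulating them."""
        self._users[username]["groups"] = set(groups)

    def deprovision(self, username: str) -> None:
        """Disable first (reversible, auditable); deletion can follow later."""
        self._users[username]["enabled"] = False
        self._users[username]["groups"].clear()
        print(f"deprovisioned {username}")

directory = UserDirectory()
directory.provision("asmith", ["finance-readonly"])
directory.change_role("asmith", ["audit-readonly"])  # user changes department
directory.deprovision("asmith")                       # user leaves the organization
```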
As when building a house or large structure, the foundation is key. In the world of IAM, the
directory service forms the foundation for IAM and security both in an enterprise
environment and within a cloud deployment.
The primary protocol for centralized directory services is the Lightweight Directory Access
Protocol (LDAP), built on and focused around the X.500 standard. LDAP works as an
application protocol for querying and modifying items in directory service providers like
Active Directory.
Active Directory is a database-based system that offers authentication, directory, policy, and
other services to a network. Essentially, LDAP acts as a communication protocol to interact
with Active Directory.
LDAP directory servers store their data hierarchically (similar to domain name system [DNS]
trees and UNIX file structures) with a directory record’s distinguished name (DN) read from
the individual entries back through the tree, up to the top level.
Each entry in an LDAP directory server is identified through a DN. Access to directory
services should be part of the IAM solution and should be as robust as the core
authentication modes used.
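To show what querying the directory looks like in practice, here is a small sketch using the third-party ldap3 library. The server address, base DN, and search filter are placeholders, and the anonymous bind is for illustration only; a real deployment would bind with credentials over LDAPS/TLS.

```python
# Hedged sketch using the ldap3 library; the host, base DN, and filter are
# placeholders. Real deployments must bind with credentials over TLS/LDAPS.
from ldap3 import Server, Connection, SUBTREE

server = Server("ldap://directory.example.com")
conn = Connection(server, auto_bind=True)  # anonymous bind, illustration only

# Entries are addressed hierarchically; the base DN anchors the search in the tree.
conn.search(
    search_base="ou=People,dc=example,dc=com",
    search_filter="(uid=asmith)",
    search_scope=SUBTREE,
    attributes=["cn", "mail", "memberOf"],
)
for entry in conn.entries:
    print(entry.entry_dn)  # the distinguished name read back up through the tree
```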
The use of privileged identity management (PIM) features is strongly encouraged for
managing access of the administrators of the directory.
If these are hosted locally rather than in the cloud, the IAM service requires connectivity to
the local LDAP servers, in addition to any applications and services for which it is managing
access.
Within cloud environments, directory services are heavily utilized and depended upon as
the go-to trusted source by the IAM framework as a secure repository of identity and access
information.
Again, trust and confidence in the accuracy and integrity of the directory services are must-
haves.
As the name implies, privileged user management focuses on the process and ongoing
requirements to manage the lifecycle of user accounts with the highest privileges in a
system.
Privileged accounts typically carry the highest risk and impact because compromised
privileged user accounts can lead to significant permissions and access rights being
obtained, thus allowing the user or attacker to access resources and assets that may
negatively affect the organization.
The key components from a security perspective relating to privileged user management
should, at a minimum, include the ability to track usage, authentication successes and
failures, and authorization times and dates; log successful and failed events; enforce
password management, and contain sufficient levels of auditing and reporting related to
privileged user accounts.
Many organizations monitor this level of information for standard or general users, which
would be beneficial and useful in the event of an investigation; however, the privileged
accounts should capture this level of detail by default because attackers often target and
compromise a general or standard user, with the view to escalating privileges to a more
privileged or admin account.
Although a number of these components are technical by nature, the overall requirements
used to manage them should be driven by organizational policies and procedures.
Note that segregation of duties can form an extremely effective mitigation and risk
reduction technique around privileged users and their ability to effect major changes.
Access to devices, systems, and resources forms a key driver for use of cloud services (broad
network access); without it, the overall benefits that the service may provide are reduced to
the enterprise, and legitimate business or organizational users are isolated from their
resources and assets.
In the same way that users require authorization and access management to be operating
and functioning to access the required resources, security requires these service
components to be functional, operational and trusted to enforce security within cloud
environments. In its simplest form, authorization determines the user’s right to access a
certain resource.
(Think of entry onto a plane with your reserved seat or when you may be visiting an official
residence or government agency to visit a specified person.) Access management is focused
on the manner and way in which users can access relevant resources, based on their
credentials and characteristics of their identity.
Think of a bank or other highly secure venue where only certain employees or personnel can
access the main safe or highly sensitive areas.
Note that both authorization and access management are point-in-time activities that rely
on the accuracy and ongoing availability of resources and functioning processes, segregation
of duties, privileged user management, password management, and so on, to operate and
provide the desired levels of security.
If one of the mentioned activities is not carried out regularly as part of an ongoing managed
process, it can weaken the overall security posture.
"Data at rest" is data currently in storage, typically on a computer's or server's hard disk.
Data at rest contrasts with data in transit — also called data in motion — which is the state
of data as it travels from one place to another. It also contrasts with data in use — data
loaded into memory and actively in use by a software program.
Suppose Bob wants to send Alice a picture of a cheeseburger. Bob took the picture on his
smartphone, which has stored it ever since — the cheeseburger photo is currently data at
rest. Bob views the photo and attaches it to an email, which loads the photo into memory —
it becomes data in use (specifically by his phone's photo viewer and email applications). Bob
taps "Send," and the email with the attached photo travels over the Internet to Alice's email
service; it has become data in transit.
Each state of data — at rest, in transit, in use — faces the risk of discovery or exposure by a
malicious party. However, the risks are not the same across all of these states. For instance,
data in transit can be intercepted by an unauthorized party, while data at rest cannot,
because it does not move.
Data at rest still makes an attractive target for attackers, who may aim to encrypt the data
and hold it for ransom, steal the data, or corrupt or wipe the data.
No matter the method, the end goal is to access the data at rest and take malicious action,
often with financial gain in mind:
A data breach can occur if data at rest is moved or leaked into an unsecured
environment. Data breaches can be intentional, as when an external attacker
or malicious insider purposefully accesses the data to copy or leak it. They can also
be accidental, such as when a server is left exposed to the public Internet, leaking
the data stored within.
Unauthorized or excessive access to data at rest also puts it at risk. Attackers may
fake or steal credentials to gain access.
Physical theft can impact data at rest if someone steals the laptop, tablet,
smartphone, or other device on which the data at rest lives.
What is data at rest encryption?
Encryption is the process of scrambling data in such a way that it can only be unscrambled
by using a key (a key is a string of randomized values, like "FFBD29F83C2DA1427BD"). Hard
disk encryption is the technology used to encrypt data at rest.
Data at rest encryption is like locking away important papers in a safe. Only those with the
key can access the stored papers; similarly, only parties with the encryption key can access
data at rest.
Encrypting data at rest protects it from negative outcomes like data breaches, unauthorized
access, and physical theft. Without the key, the data is useless.
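As a small illustration of the locked-safe analogy, the sketch below encrypts some bytes with a symmetric key using the cryptography library's Fernet interface. Key management (keeping the key in a KMS or HSM rather than next to the data) is the hard part and is not shown.

```python
# Minimal data-at-rest encryption sketch using the cryptography library's
# Fernet (symmetric) interface. Key storage/management is out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice, keep this in a KMS/HSM, not on disk
fernet = Fernet(key)

plaintext = b"customer records: ..."     # placeholder sensitive data
ciphertext = fernet.encrypt(plaintext)   # what actually gets written to storage

# Without the key, the ciphertext is useless; with it, the data is recoverable.
assert fernet.decrypt(ciphertext) == plaintext
print(ciphertext[:32], b"...")
```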
(Note that encryption is also crucial for protecting data in transit. The main technology for
encrypting data in transit is Transport Layer Security (TLS).)
How does identity and access management (IAM) protect data at rest?
Restricting who can access data is a crucial part of protecting it. The more people who can
access data, the greater the chances of a breach. And without strong access controls,
unauthorized parties may be able to alter, copy, steal, or destroy data at rest. In fact, many
ransomware attacks use lateral movement to acquire the credentials they need to access,
and then alter, data at rest.
Identity and access management (IAM) is the practice of managing a user's identity and
what they are allowed to do. IAM helps keep data at rest secure by authenticating users and
checking their authorization for viewing and editing data at rest.
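A toy sketch of that idea: authenticate who the caller is, then check what they are authorized to do before returning stored data. The users, roles, and permissions below are invented.

```python
# Toy IAM sketch: authenticate the caller, then authorize the requested action
# before handing back data at rest. All users/permissions below are invented.
PERMISSIONS = {
    "analyst": {"read"},
    "dba": {"read", "write"},
}
USERS = {"asmith": "analyst", "jdoe": "dba"}  # identity -> role

def access_data(username: str, action: str) -> str:
    role = USERS.get(username)
    if role is None:
        raise PermissionError("unknown identity (authentication failed)")
    if action not in PERMISSIONS[role]:
        raise PermissionError(f"{username} ({role}) is not authorized to {action}")
    return f"{action} granted on stored data for {username}"

print(access_data("asmith", "read"))   # allowed
try:
    access_data("asmith", "write")     # not allowed for the analyst role
except PermissionError as err:
    print("denied:", err)
```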
Before the Internet and cloud computing, data at rest was kept on a user's computer or on
an organization's on-premise servers. However, as many organizations move to the cloud,
data at rest is stored on remote servers managed by an external vendor. Without direct
access to the data, organizations that use cloud infrastructure should evaluate their
providers' cloud storage security measures and make sure their cloud deployments are
configured correctly.
Cloud security posture management (CSPM) tools can help automate the process of
identifying security misconfigurations that could compromise data at rest.
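In the same spirit as a CSPM tool, the hedged sketch below uses boto3 to flag S3 buckets that do not fully block public access. It is one narrow, illustrative check, not a substitute for a full CSPM product.

```python
# CSPM-style sketch: flag S3 buckets that do not block public access.
# Illustrative only; assumes boto3 credentials with read access to S3 settings.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())
    except ClientError:
        fully_blocked = False  # no public-access-block configuration at all
    if not fully_blocked:
        print(f"Possible misconfiguration: bucket '{name}' may allow public access")
```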
Additionally, Zero Trust services such as Cloudflare Zero Trust can protect data at rest
whether it is stored locally or remotely in the cloud, by controlling access, filtering out
malicious web traffic, and verifying devices.
Data theft can devastate any company, resulting in lost profits, regulatory enforcement,
litigation, and reputational damage that can be difficult to overcome. Every organization
must protect its customer data and assure that sensitive information is kept safe.
That said, the data in your company’s possession is held in different states – and each of
these states has particular vulnerabilities. A security tactic that works for one state may be
inefficient for another. Knowing the biggest threats for each type of data can help you to
design controls that will keep all of your clients’ personally identifiable information safe.
To create the best information security system for your data, you’ll first need to understand
the differences among data at rest, data in motion, and data in use.
Data at Rest
“Data at rest” is data that is not being used or transferred. This means that the data is likely
being stored on a hard drive, flash drive, or another device. The term can also refer to data
stored in a cloud service, such as Microsoft Azure or Amazon Web Services (AWS). Data at
rest is easier to secure, but thieves typically see this data as a more attractive target
because there’s more of it to steal. Data at rest is also more vulnerable to malicious attacks
from employees who have access to the storage network.
Data in Motion/Flight
This refers to data moving from one location to another, such as between two storage
devices. This includes downloads, transfers, or any other means of taking data from one
place to another. Of the three states, data in motion is by far the most vulnerable. In
particular, Man in the Middle (MitM) attacks (where a malicious actor inserts himself into a
transaction or transfer) are easier to execute with data in motion.
Data in Use
This refers to data being accessed or used at any given moment. If you are creating,
deleting, editing, or processing your data, then it is “in use” until such time as it is stored (at
rest) or transferred (in motion).
What Are the Best Practices for Keeping Data at Rest Safe?
As we discussed previously, data at rest can be a tempting target for hackers. The following
defense methods can help you protect your stored data.
Data encryption translates your data into a code that is undecipherable to anyone without
the correct encryption keys. Encrypting your data while it is stored means that even if
unauthorized access happens, the encrypted data will not be of use to the thieves.
Encryption is a powerful tool that can keep information from falling into the wrong hands.
It’s imperative that you understand where your data is stored and what safeguards are in
place. If you're using cloud storage, your provider will likely have a number of protections in
place; those are a great addition to a strong defense strategy. Relying on these outside
controls alone, however, can have disastrous results. Whatever cloud services you use, you
still have a responsibility to create security controls for your storage and make sure the data
is safe.
What Are the Best Practices for Keeping Data in Motion/Flight Safe?
Data in motion is harder to protect than stored data. These security measures will provide a
solid foundation for your data protection efforts.
Your company will need to access and transmit data to provide the best service for your
clients; leaving critical data in storage is simply not an option. This means that your private
network must be secured and protected so that data is safe while in transit. There are a
variety of ways to secure your network, such as implementing firewalls or using encrypted
connections and SSL/TLS certificates. Whatever method you choose, your goal should be to
ensure that no outside actors are able to see or acquire sensitive data while it moves within
your company.
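As a small example of an encrypted connection for data in motion, the sketch below opens a TLS-wrapped socket with certificate verification using Python's standard library; the endpoint is a placeholder.

```python
# Sketch: protect data in motion with TLS, verifying the server certificate.
# "api.example.com" is a placeholder endpoint.
import socket
import ssl

HOST, PORT = "api.example.com", 443

context = ssl.create_default_context()  # verifies certificates by default
with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated", tls_sock.version())  # e.g. 'TLSv1.3'
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        print(tls_sock.recv(200))
```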
An important part of data protection is knowing who has access to it and who is responsible
for any transmissions. This means that restricting data access on a need-to-know basis and
implementing access logs can help you track when data is being moved and who was
responsible for moving it. If a breach occurs at this level, access control will make it easier to
determine where it originated and what must be done to prevent future breaches.
Encourage Good Security Hygiene
The safety of data transmitted within your company depends on the security of your staff’s
accounts. Usually data will be transmitted via email or messaging apps, and your employees
need to have unique names and strong passwords to keep that data secure. Multi-factor
authentication is also a great option for protecting data, and your employees should be
encouraged to implement such protection on their accounts.
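Multi-factor authentication is commonly delivered as a time-based one-time password (TOTP). The sketch below uses the third-party pyotp library to show a second factor being verified alongside a password check; the secret and the login function are illustrative only.

```python
# Sketch of MFA with a time-based one-time password using the pyotp library.
# The secret would normally be provisioned once and stored per user account.
import pyotp

secret = pyotp.random_base32()  # enrolled into the user's authenticator app
totp = pyotp.TOTP(secret)

def login(password_ok: bool, submitted_code: str) -> bool:
    """Both factors must pass: something you know plus something you have."""
    return password_ok and totp.verify(submitted_code)

code = totp.now()  # what the authenticator app would display right now
print(login(password_ok=True, submitted_code=code))      # True
print(login(password_ok=True, submitted_code="000000"))  # almost certainly False
```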
Endpoint security should also be a concern, and any laptops or mobile devices used by your
staff must be secured. Training your staff on the importance of data security should be an
integral part of your overall data management plan.
Your best defense in the face of data theft is preparedness. Part of that preparedness is a
risk management system tailored to your company’s needs. Organizing this important
information in spreadsheets can result in redundancies or worse, risks that slip through the
cracks. How can you best streamline your risk management and protect your customers’
data?
ZenGRC is an integrated software solution that allows you to view your entire risk landscape
in real time. This innovative platform will help you track and assign risks for all data,
whether it’s at rest or in motion.
Virtualized security, or security virtualization, refers to security solutions that are software-
based and designed to work within a virtualized IT environment. This differs from
traditional, hardware-based network security, which is static and runs on devices such as
traditional firewalls, routers, and switches.
Virtualized security is now effectively necessary to keep up with the complex security
demands of a virtualized network, plus it’s more flexible and efficient than traditional
physical security. Here are some of its specific benefits:
Virtualized security can take the functions of traditional security hardware appliances (such
as firewalls and antivirus protection) and deploy them via software. In addition, virtualized
security can also perform additional security functions. These functions are only possible
due to the advantages of virtualization, and are designed to address the specific security
needs of a virtualized environment.
For example, an enterprise can insert security controls (such as encryption) between the
application layer and the underlying infrastructure, or use strategies such as micro-
segmentation to reduce the potential attack surface.
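To make the micro-segmentation idea tangible, here is a toy sketch that expresses per-workload allow rules and denies any east-west traffic not explicitly permitted; the workload names and rules are invented.

```python
# Toy micro-segmentation sketch: only explicitly allowed workload-to-workload
# flows are permitted; everything else is denied by default. Names are invented.
ALLOWED_FLOWS = {
    ("web-frontend", "app-tier", 8443),
    ("app-tier", "orders-db", 5432),
}

def flow_permitted(src: str, dst: str, port: int) -> bool:
    return (src, dst, port) in ALLOWED_FLOWS

print(flow_permitted("web-frontend", "app-tier", 8443))   # True
print(flow_permitted("web-frontend", "orders-db", 5432))  # False: lateral movement blocked
```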
The increased complexity of virtualized security can be a challenge for IT, which in turn leads
to increased risk. It’s harder to keep track of workloads and applications in a virtualized
environment as they migrate across servers, which makes it more difficult to monitor
security policies and configurations. And the ease of spinning up virtual machines can also
contribute to security holes.
It’s important to note, however, that many of these risks are already present in a virtualized
environment, whether security services are virtualized or not. Following enterprise
security best practices (such as spinning down virtual machines when they are no longer
needed and using automation to keep security policies up to date) can help mitigate such
risks.
Traditional physical security is hardware-based, and as a result, it’s inflexible and static. The
traditional approach depends on devices deployed at strategic points across a network and
is often focused on protecting the network perimeter (as with a traditional firewall).
However, the perimeter of a virtualized, cloud-based network is necessarily porous and
workloads and applications are dynamically created, increasing the potential attack surface.
Traditional security also relies heavily upon port and protocol filtering, an approach that’s
ineffective in a virtualized environment where addresses and ports are assigned
dynamically. In such an environment, traditional hardware-based security is not enough; a
cloud-based network requires virtualized security that can move around the network along
with workloads and applications.
There are many features and types of virtualized security, encompassing network
security, application security, and cloud security. Some virtualized security technologies are
essentially updated, virtualized versions of traditional security technology (such as next-
generation firewalls). Others are innovative new technologies that are built into the very
fabric of the virtualized network.