Understanding Comprehensive Database Security: Technical White Paper
Executive Summary
Part 1: Database Security
Complexity Weakens Security
Time Weakens Security
Understanding the Need for a Formal Database Security Program
Three Strategic Objectives of a Formal Database Security Program
Seven Components of Defense in Depth
Part 2: People and Physical Security
People Security Perimeter
Physical and Infrastructure Security Perimeters
Conclusion
Appendix A: Tools Cited
Executive Summary
You Own Your Data: Protect It with a Formal Database Security Program
Organizations own and are responsible for their data, and to secure that data in a
database, especially large databases supporting ERP platforms such as SAP, PeopleSoft,
and the Oracle E-Business Suite, a formal database security program is required.
For example, how important are vendor-provided security patches? In practice,
vendor-provided security patches provide little security on their own and are a
woefully inadequate approach. For many organizations they cause more
problems than they solve and are therefore never applied.
To meet the three strategic objectives for a database security program, the required
processes can be grouped into seven components. All seven components are
required, and the order in which the components must be implemented reflects
their relative importance.
The older the database, the harder it is to secure. Over time, technical security
exploits are found and commonly, the older the database version, the more
vulnerabilities exist.
1. Oracle Inc., http://www.oracle.com/us/support/library/lifetime-support-technology-069183.pdf.
Scheduling Downtime for Patch Updates Creates Risk and Is Often Avoided
Complicating the overall situation is the need to schedule business downtime to test
changes. Downtime interrupts operations and can have an adverse material impact
on company revenue. Databases cannot be tested independently of the application
they are supporting. Any patch or configuration change, either for the database or
the application, requires business testing proportionate to the size and scope of
the patch or configuration change. If the business perceives little-to-no benefit
in testing and scheduling downtime to apply security patches, security
vulnerabilities can easily accumulate over time.
Figure 2: Database security decays over time due to complexity, usage, application changes,
upgrades, published security exploits, etc.
While database security tends to deteriorate over time, what is needed to secure
databases does not: Effective database security requires a formal program to be in
place that addresses people, processes, and tools.
1. Minimize the attack surface. This is about reducing your total exposure to
security vulnerabilities. Nearly all database security vulnerabilities require a
valid database session (connection) to exploit; and it’s noteworthy that a valid
session is easiest to gain through an application or configured tool. To proactively
defend and protect a database requires that vulnerabilities be minimized as much
as possible. In security parlance, this is called reducing the attack surface. To
minimize the attack surface, both physical and virtual database perimeters must
be inventoried, reduced, and secured, and removing sensitive data is a key step.
For organizations running large Oracle applications, the attack surface includes
not just the application itself, but all the supporting tools, utilities, and third-party
applications — many of which require direct database connections.
2. Classify databases and act appropriately. Overall, a risk-based approach is
required to provide database security. The risk-based approach must focus effort
and resources on specific organizational data in individual databases. Databases
and sensitive data must be appropriately classified to define requirements and to
make decisions in response to real-time events.
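The classification step above can be sketched in code. The following is a minimal, hypothetical example of a risk-tier scheme driven by the most sensitive class of data a database holds; the tier names, tags, and thresholds are illustrative assumptions, not a standard.

```python
# Illustrative sketch: classify a database into a risk tier based on the
# most sensitive class of data it holds. Tags and thresholds are
# hypothetical examples, not an authoritative classification scheme.

SENSITIVITY_WEIGHTS = {
    "pci": 3,       # payment card data
    "phi": 3,       # protected health information
    "pii": 2,       # personally identifiable information
    "internal": 1,  # internal-only business data
}

def classify(database_tags):
    """Return a risk tier based on the most sensitive data present."""
    score = max((SENSITIVITY_WEIGHTS.get(t, 0) for t in database_tags), default=0)
    if score >= 3:
        return "high"
    if score == 2:
        return "medium"
    return "low"

print(classify({"pci", "internal"}))  # high
print(classify({"internal"}))         # low
```

A real program would attach such tiers to each entry in the database inventory so that monitoring rules and response runbooks can be selected by tier.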
3. Perform intelligent and business-focused auditing and monitoring. Capturing
audit data is easy; using it is not. Audit data must be transformed into actionable
information. Monitoring must provide the constant vigilance required to support
and enforce multiple layers of defense. Baseline configurations must be protected
against drift, and trust in operational processes must be verified.
To meet the three strategic objectives for a database security program (plan of
action), the required processes can be grouped into seven components (see Table 2).
All seven components are required, and the order in which the components must be
implemented reflects the relative importance of the component. Due to the inherent
risks associated with not having certain processes in place, some must be put in
place before others. Lastly, the more time that elapses between implementing the
seven components and the initial go-live, the more resources it will take to establish
an effective database security program.
The seven steps required for true defense in depth in a database security program
are described in Table 2 and Figure 3.
Especially for large Oracle ERP applications, production databases are commonly
copied or cloned to create test, support, and development environments (see Figure
4). This process is complicated when business-sponsored operational projects
request additional database copies of varying lifetimes. As projects reach different
milestones, database copies are deleted, and new copies are made. Likewise, new
production databases can be introduced.
Depending on the nature and risk classification of the data, organizations subject
to the Payment Card Industry (PCI) data security standard or the Health Insurance
Portability and Accountability Act (HIPAA) will need to closely consult their
compliance requirements for automated discovery and scanning.
The inventory process to identify both databases and sensitive data must be
programmatic. For organizations with hundreds, if not thousands, of databases,
tools such as Imperva, IBM Guardium, and Integrigy AppSentry can greatly assist in
automating the discovery process.
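As a rough illustration of what such discovery tools automate, the sketch below probes hosts for well-known database listener ports. This is only a crude first pass under assumed defaults; the commercial tools named above do far more (fingerprinting, credentialed scans, sensitive-data sampling), and the hosts and ports here are examples.

```python
# Illustrative sketch: probe hosts for well-known database listener
# ports as a first pass at database discovery. Port-to-product mapping
# assumes vendor defaults; production listeners are often moved.
import socket

DB_PORTS = {1521: "Oracle", 1433: "SQL Server", 3306: "MySQL", 5432: "PostgreSQL"}

def probe(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def discover(hosts):
    """Yield (host, port, product) for every open database port found."""
    for host in hosts:
        for port, product in DB_PORTS.items():
            if probe(host, port):
                yield host, port, product

for hit in discover(["127.0.0.1"]):
    print(hit)
```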
Default configurations are rarely, if ever, designed with security best practices in
mind. The default configurations for databases are designed for the speed and
efficiency of the initial installation and setup. This includes the use of well-known
default passwords, simple password requirements, and the creation of high-
privileged as well as demo and test accounts (users). Such configurations are
vulnerabilities that must be closed.
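A small sketch of one such check: flagging accounts whose names appear on a list of well-known default or demo accounts. The names below are a few commonly cited Oracle examples; a real check would use a much larger list and also verify whether default passwords are still in place.

```python
# Illustrative sketch: flag accounts that match a list of well-known
# default/demo accounts. The list here is a small sample of commonly
# cited Oracle defaults; real tools carry far larger lists and also
# test for unchanged default passwords.

KNOWN_DEFAULT_ACCOUNTS = {"scott", "outln", "dbsnmp", "mdsys", "hr"}

def flag_default_accounts(accounts):
    """Return the subset of account names that match known defaults."""
    return sorted(a for a in accounts if a.lower() in KNOWN_DEFAULT_ACCOUNTS)

print(flag_default_accounts(["SCOTT", "APP_OWNER", "HR"]))  # ['HR', 'SCOTT']
```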
2
“Change management (ITSM),” Wikipedia, https://en.wikipedia.org/wiki/Change_management_(ITSM).
3
F or more information on the DISA STIG for Oracle, see http://iase.disa.mil/stigs/app-security/database/
Pages/index.aspx, accessed Dec. 22, 2015.
Large organizations and large applications face significant risks with access
management over time, as applications are updated and changed, and accounts are
also changed due to human resource activities (hires, transfers, layoffs, and so on).
Consequently, if not addressed, least-privilege security deteriorates as privileges
accumulate and/or are forgotten.
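Privilege accumulation of this kind can be detected by periodically diffing current grants against an approved baseline. The sketch below shows the idea with hypothetical account and privilege names.

```python
# Illustrative sketch: detect least-privilege drift by diffing the
# grants currently in the database against an approved baseline.
# Account names and privileges are hypothetical examples.

def privilege_drift(baseline, current):
    """Return {account: privileges present but never approved}."""
    drift = {}
    for account, privs in current.items():
        extra = set(privs) - set(baseline.get(account, ()))
        if extra:
            drift[account] = extra
    return drift

baseline = {"app_ro": {"SELECT"}, "app_rw": {"SELECT", "UPDATE"}}
current  = {"app_ro": {"SELECT", "DELETE"}, "app_rw": {"SELECT", "UPDATE"}}
print(privilege_drift(baseline, current))  # {'app_ro': {'DELETE'}}
```

Run on a schedule, any non-empty result becomes a security event to investigate: either the baseline is updated through change control, or the extra privilege is revoked.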
―― Authentication and Authorization — Who can log in to the database and what
they can do once they log in is referred to respectively as authentication and
authorization. Authentication defines whom you trust to log in to your databases
and authorization defines what you trust them to do once they have logged in.
Database authorization is very complex. The number of features provided by
databases such as Oracle allows a very large number of privileges to be granted.
Privileges include who can create users, read data on certain tables, or read data
from any table as well as who can update and change data.
―― Administration — Who is allowed to edit or alter accounts, and when and why
accounts are changed, must be governed by a formal process. Each change to an
account must be documented; otherwise the change may be deemed a security event.
―― De-provisioning — All accounts follow a lifecycle. Accounts that are no longer used
create multiple security vulnerabilities and must be closed. Like provisioning, a formal
process is required to document who, when, how, and why each account is closed.
For example, with a password safe such as CyberArk, Oracle database passwords can be
separated from SQL Server and/or network assets and operating system root accounts.
Tools such as PowerBroker can take this concept further, restricting privileges
as well as passwords.
Auditing provides the data with which to verify trust. Users are authorized only for
certain privileges. Auditing identifies when users’ privileges have changed or when
they exercise privileges they were never authorized to have or use.
A common audit framework should be applied to all databases. This will give a
single lens through which to view database activity and make security decisions.
The foundation of the framework should be a set of security events and actions
that are audited and logged in all databases (see Figure 7). These security events
and actions should be derived from, and mapped back to, the key compliance and
security standards most organizations must meet, such as HIPAA, PCI, and SOX
(see Figure 8).
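Such a framework can be represented very simply as a mapping from audited events to the standards that motivate them. The event names and mappings below are simplified illustrations, not authoritative interpretations of HIPAA, PCI, or SOX.

```python
# Illustrative sketch: a common audit framework expressed as a mapping
# from audited security events to the compliance standards that require
# them. Event names and mappings are simplified examples only.

AUDIT_EVENTS = {
    "login_failure":        {"PCI", "SOX", "HIPAA"},
    "account_created":      {"PCI", "SOX"},
    "privilege_granted":    {"PCI", "SOX"},
    "audit_policy_changed": {"PCI", "SOX", "HIPAA"},
}

def events_for(standard):
    """Return the audited events mapped to a given standard."""
    return sorted(e for e, stds in AUDIT_EVENTS.items() if standard in stds)

print(events_for("HIPAA"))  # ['audit_policy_changed', 'login_failure']
```

Because the mapping is explicit, auditors can be shown exactly which logged events satisfy which requirement, and the same event set can be pushed to every database regardless of vendor.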
Database auditing cannot be done manually, and audit logs must be sent to
a centralized solution for safekeeping, segregation of duties, and monitoring
(correlation and reporting). Such tools are referred to as Database Activity
Monitoring (DAM) solutions.
DAM solutions work through agents deployed on database servers, and/or by
intercepting and reading network traffic between clients and servers. Agent data and
intercepted traffic are relayed to a centralized DAM server for analysis. One
advantage of DAM solutions is that they can be implemented transparently to
databases and applications; no configuration of native database auditing is required.
A database security program must have a monitoring solution that is outside the
database. Not only does this create a segregation of duties by placing monitoring
outside the reach of DBAs, but it also allows correlation among and between
databases and other assets (such as firewalls, VPN activity, and applications).
In this scenario, the runbook would identify that both the DBA and IT security teams
should be immediately notified — sending an email and opening a ticket will not
suffice. Assuming a 24x7 security operations center or service is raising the alert,
phone calls would be made until a “live” person was reached (no voice mails). The
runbook would specify to find a “live” person by calling the defined list and then
escalate up the chain-of-command until a decision maker was reached — possibly
as far as the CIO or CEO. The decision makers would then need to determine if the
newly created account is malicious or whether an approved ticket or service request
exists. Two decision makers are needed to ensure segregation of duties is enforced
and to promote better decision making (two people usually reach a better decision
than one).
Vulnerability management is the process that identifies and guides decision making
about risk. The risk is always defined by the data, and the vulnerability management
process is sometimes erroneously focused on vendor security patches. Organizations
own their data; organizations need to secure their data.
Perfect, bug-free software does not exist. There may be exceptions, hopefully, such
as the military’s command-and-control systems for nuclear weapons, but because
humans write software, security vulnerabilities will always exist, both through
design flaws and error and through the inherent cleverness of human attackers.
Relying on vendor security patches and vulnerability processes does not work for
several reasons:
―― Many vendor patches do not work; they do not fully address the vulnerability and/
or some patches cause other problems.
―― Sentrigo Inc., a database security products vendor, conducted a survey that
found a full two-thirds of Oracle DBAs (206 of 305) had never applied
Oracle CPUs.
―― There are two major reasons for the trend, Slavik Markovich, chief technology
officer at Sentrigo, said. The first and most important is that most DBAs fear
the consequences of installing a patch on a running database, he said.
―― “To apply the CPU, you need to change the binaries of the database,” he said.
“You change the database behavior in some ways that may affect application
performance,” he said. So applying security patches to a database typically
involves testing them against the applications that feed off the database, he
said. “This is a very long and very hard process to do, especially if you are
in enterprises with a large number of databases and applications,” he said.
Applying these patches means months of labor and sometimes significant
downtime, both of which most companies can’t afford, he said.
―― Another problem is that companies that want to install the most recent Oracle
patches need to first ensure that they have already installed the previous
patch set, Markovich said. So companies that fail to keep up with the latest
patches keep falling further behind with each patch set release, he said.4
―― Other vendor patches are only released for specific modules or products, not all.
―― Still other vulnerabilities are addressed only for those customers paying for
premier or enhanced support.
―― Lastly, vendors exist to make profits, and maximizing security reduces profits.
Relying exclusively on a vendor’s vulnerability management processes and
decision making will not secure an organization’s data.
―― When vendor database security patches are released, making changes to the
database might need to wait until the application’s vendor certifies the new
permutation of combined patch levels. Each change to a database must be
made according to the application’s overall tested combination of patches and
configurations for the operating system, database, and application as well as all
supporting utilities and third-party tools that comprise the overall application.
Likewise, large applications and organizations usually require longer testing cycles,
and it is often difficult to schedule system downtimes with operations. The entire
process of testing and applying patches and making configuration changes to
databases to mitigate vulnerabilities can take weeks, if not months. In this context,
applying security patches can lead to overall insecurity if resources are diluted in
pursuit of mitigating low-risk vulnerabilities with little security impact.
Vulnerability management must address the risks as a whole and starts with
the data; sensitive data and databases must be inventoried. The risk calculation
consists of what data is specifically at risk, the technical exploit required, and the
likelihood and effort required. For example, sensitive information, if encrypted, is
protected at rest. If a vulnerability exposes data at rest, the risk may be deemed
lower, even if the vulnerability itself is not mitigated, because the encrypted data
remains protected.
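A toy version of this risk calculation is sketched below. The 1-10 scales, the weighting, and the discount for encryption at rest are arbitrary assumptions made for illustration; any real scoring model would be tuned to the organization's own classification scheme.

```python
# Illustrative sketch: a toy risk score combining data sensitivity,
# exploit likelihood, and a mitigating control (encryption at rest).
# The scales, formula, and 50% discount are hypothetical examples.

def risk_score(sensitivity, likelihood, encrypted_at_rest):
    """Score 0-100 from 1-10 inputs; encryption discounts at-rest exposure."""
    score = sensitivity * likelihood
    if encrypted_at_rest:
        score *= 0.5  # assumed discount: data at rest remains protected
    return min(100, score)

print(risk_score(sensitivity=9, likelihood=6, encrypted_at_rest=True))   # 27.0
print(risk_score(sensitivity=9, likelihood=6, encrypted_at_rest=False))  # 54
```

The point of even a crude score like this is to force patching effort toward the highest-risk databases first, rather than treating every vendor patch as equally urgent.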
4. http://www.computerworld.com/article/2538688/security0/update--two-thirds-of-oracle-dbas-don-t-apply-security-patches.html.
Securing sensitive data requires a formal process for several reasons. It cannot be
left to guesswork and optimistic assumptions. There are usually multiple legal and
compliance requirements and mandates to protect sensitive data. Moreover,
sensitive data is hard to find and, once located, difficult to keep track of.
Securing sensitive data is also complicated by the restrictions posed by the features
and functionalities of both the application and the database itself. Not all technical
solutions will be feasible, and the exact technical solution deployed will be highly
dependent on the technology stack.
Of all of the seven components of database security discussed in this white paper,
protection is the hardest to implement.
Because of the success of P2PE and tokenization for payment data, the industry has
begun to look to other areas where sensitive data can be moved to more secure
locations. Protected Health Information (PHI), covered by HIPAA, is fast becoming a
candidate for P2PE and tokenization.
A process for the protection of sensitive data is depicted in Figure 10. To protect
sensitive data, a risk-based approach should be used to direct appropriate resources
to the highest-risk data and databases. The process starts with the privacy policies
of the organization. These policies determine what data is sensitive, who can access
it, and acceptable solutions for protecting it. Client contractual, regulatory, and
compliance requirements usually heavily influence organizational privacy policies.
Using the privacy policies, the database inventory and discovery processes then identify
the sensitive data specifically to be secured. The inventory of sensitive data usually is
kept in the following format: server-database-table-column-steward.
The steward is the employee responsible for the security of the specific data
element. Typically, the data steward is the business owner. For example, the
senior human resource officer will be the data steward for U.S. Social Security
numbers — in large part because of the senior human resource officer’s fiduciary
responsibilities as defined by HIPAA.
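The server-database-table-column-steward inventory format described above can be sketched as a simple record type. The field values below are hypothetical examples.

```python
# Illustrative sketch: one entry in a sensitive-data inventory, in the
# server-database-table-column-steward form described in the text.
# All values shown are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class SensitiveDataEntry:
    server: str
    database: str
    table: str
    column: str
    steward: str  # business owner responsible for this data element

entry = SensitiveDataEntry(
    server="erp-db01", database="HRPROD",
    table="EMPLOYEES", column="SSN", steward="Senior HR Officer",
)
print(entry.steward)  # Senior HR Officer
```

Keeping the steward as part of each record ensures every sensitive column has a named, accountable owner, which is what makes the inventory actionable rather than merely descriptive.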
The concept of “live data” is important to understand when deciding how to secure
sensitive data. Data is often copied, as noted earlier, from production to test and
development environments as well as to data warehouses. Regardless of where data
exists, if it is live-production sensitive data, it must be protected. Constant vigilance
for the detection of rogue sensitive data is required. Copies and backups of tables and
sometimes of entire databases are often much softer targets to hack and compromise.
Rogue data and databases are previously unknown and/or not inventoried.
Automated tools should be used for the sensitive data inventory and scanning
process. These tools can also be used for configuration management and should
be integrated into your SIEM when rogue sensitive data is found. Imperva and IBM
Guardium provide such functionality. These tools use technology such as regular
expressions to parse each packet of data coming from a database to determine if
certain patterns, such as credit card or U.S. Social Security numbers, exist. Rules
can then be created and applied, and usage can be mapped and audited. Additionally,
rules can be defined to dynamically limit or block the use of sensitive data.
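The core of that pattern matching can be sketched with a few regular expressions. These patterns are deliberately simple illustrations; real scanners layer additional checks (for example, Luhn validation of candidate card numbers) to cut false positives.

```python
# Illustrative sketch: regular-expression detection of U.S. Social
# Security numbers and candidate credit card numbers, the basic
# technique commercial scanners apply to database traffic. Patterns
# here are simplified and will produce false positives in practice.
import re

PATTERNS = {
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan(text):
    """Return the set of pattern names that match the given text."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

print(sorted(scan("SSN 123-45-6789 on file")))  # ['us_ssn']
```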
Once located, specific data protection solutions must be deployed to secure the
sensitive data. Depending on the database’s production status, different tools
and approaches will be used to secure the sensitive data within it — including
tokenization and encryption solutions such as those offered by CardConnect,
Vormetric, and Voltage. The regulatory and compliance requirements for the
protection of sensitive data determine, to a large extent, what tools and approaches
should be used, as well as the attestation (such as logs and reports) required to
prove that the protection is in place and working.
Vendors such as Vormetric and Voltage both provide solutions for the protection of
sensitive data. Table 3 provides a high-level description for the approaches used to
protect sensitive data.
Encryption
Encryption is often discussed as a solution for the protection of sensitive data. Some
organizations may even be forced to encrypt data because of client contractual, legal, or
compliance mandates. Encryption, while useful, has several drawbacks.
Aside from certification and integration requirements with the applications they
support, database encryption usually carries a measurable performance impact.
Where encryption is supported, there is a performance cost, and more encryption
is usually not better.
Encryption is not an access control solution. All encrypted data in a database will be
encrypted the same, regardless of the user creating or viewing the data. Database
encryption is very similar to a hotel where all guests get the same room key. Non-
hotel guests cannot get into the hotel, but all guests have full access to all guest
rooms. Encryption of sensitive data does not protect against malicious privileged
employees and contractors. For example, DBAs and developers with direct access to
the database will in many or most cases have full access to unencrypted data.
―― Planning
―― Implementation
―― Ongoing
Planning
Building a database security program requires planning if for no other reason than
that most organizations cannot implement all of the components at once.
To effectively size and scale the components, the inventory of in-scope databases
and the sensitive data discovery within these databases must occur first. Once the
inventories have been completed, planning can commence on data protection and the
selection and implementation of a DAM solution. This selection process may involve a
pilot or proof-of-concept before making an investment.
Implementation
The implementation phase usually centers on the implementation of the DAM and
sensitive data protection solutions. Access management also commonly involves
large cleanup efforts in the implementation phase as rogue accounts are purged and
consistent standards for authentication and authorization are introduced.
Ongoing
Over time, databases change size and new databases are created. Rogue databases
and data must be purged. Configurations and setups drift and must be corrected. All
of this requires constant vigilance.
Figure 12: The relationship of data security to people and physical security
Database security, as with any security topic, begins and ends with people. People’s
decisions and actions lead to security events and insecurity. In terms of database
security, people who have privileged access to use, maintain, and administer the
database, or who have elevated privileges to all data within the database, are of
paramount concern.
The scope of this white paper does not allow a detailed discussion of operational
security processes. At a high level, who you should trust to be privileged database
users and why you should trust them should be part of a mature human resource
program that includes the use of background checks. Trust must be verified, and
background checks are a vital tool to identify what people should be trusted with
what data and how much they should be trusted.
Outsourced Employees
Another common oversight with people security is that managed services, cloud
providers, and hosting companies commonly use contractors and third parties.
Offshore operations centers are often separate legal entities, if not completely
separate third-party companies servicing multiple clients from the same location.
You own your data. You need to ensure that your security requirements flow down to
each person who has access to your data.
Regardless of whether or not you want to trust people, your trust perimeter is
physically defined by who has physical access to the servers (either directly or via a
network) and who has the passwords.
If you are effectively restricting physical access to your database servers, databases
will rarely be accessed from the database server’s console — that is, directly from
inside the data center. DBAs are rarely (usually never) allowed access to the data
center; database access is mostly through applications, tools, and utilities.
Work areas for DBAs and high-privileged and trusted employees must be restricted.
For example, they should not be on the first floor near windows, and their laptops
should be encrypted and secured with locking cables. If a DBA lost a laptop, what
would be at risk?
Network Design
Network design has a large impact on physical and infrastructure security for
databases. How flat is your network? Do you use DMZs? Do you have both internal
and external DMZs? Do you have separate network segments for applications,
databases, storage, and administration (for example, hypervisor consoles and
monitoring)? Or can your databases be directly accessed from any wireless access
point or conference room wall jack in your office? Often an organization will
implement controls such as firewalls, proxies, intrusion protection, and monitoring
and then fail to implement them for internal users. Do you still allow any-any access
between non-related servers in the data center? If you are outsourced, are your
databases directly accessible from any conference room or wireless access point in
your vendor’s locations throughout the U.S., Asia, and Europe?
Managing Passwords
Another indication of overall IT security maturity is how passwords are managed.
Large Oracle applications, through the installation processes, create hundreds of
passwords for a single environment (operating system, database, and application).
If password safes are not being used to safeguard and protect passwords, and
to segregate access to passwords to appropriate personnel only, overall security
usually is weak. For example, are Oracle database passwords separated from SQL
Server, network assets, and operating system root accounts?
Still another measure of security is use of tools such as Puppet and Chef. Puppet
and Chef automate the installation, configuration, and maintenance of servers.
These tools greatly promote the use and maintenance of secure baseline
configurations for operating systems and virtual machine guests.
Lastly, is your organization ISO 27000 certified, working toward an ISO 27000
certification, or using ISO 27000 as a guiding principle? The International
Organization for Standardization’s ISO 27000 series comprises best-practice
standards for information security.5 ISO 27001, in particular, is a formal
specification for the management of information security risks and is equally
applicable to all types of organizations, both public and private.
5. International Organization for Standardization, “ISO/IEC 27001 - Information security management,” http://www.iso.org/iso/home/standards/management-standards/iso27001.htm, accessed Dec. 23, 2015.
1. Inventory
2. Configuration
3. Access
4. Auditing
5. Monitoring
6. Vulnerability
7. Protection
All seven components are required, and each consists of formal processes executed
by people using tools: security is a process; tools do not provide security, people
do. The chief thrust of the seven components can be summarized as follows:
1. Limit access to the database — Create physical and virtual perimeters to reduce
access, especially direct database access, as much as possible.
2. Classify databases and act appropriately — Data must define the acceptable
level of risk for each database. Focus resources on the most important data and
databases first. Use a layered approach with a minimum configuration baseline
as a consistent framework to be applied to all databases.
3. Verify trust — Know whom to trust and verify that trust. Especially for direct
database connections outside applications, auditing must enforce and verify trust
and, in general, must transform audit data into actionable, business-focused
information.
The table below summarizes the tools mentioned in this white paper.
© 2010-2016. Rimini Street, Inc. All rights reserved. Rimini
Street and the Rimini Street logo are registered trademarks
of Rimini Street, Inc. All other brand and product names
are trademarks or registered trademarks of their
respective holders. LT-US-061316