SAP HANA Data Management and Performance On IBM Power Systems
In partnership with
IBM Academy of Technology
Redpaper
IBM Redbooks
July 2021
REDP-5570-01
Note: Before using this information and the product it supports, read the information in “Notices” on page v.
Notices  v
Trademarks  vi
Preface  vii
Authors  vii
Now you can become a published author, too!  viii
Comments welcome  viii
Stay connected to IBM Redbooks  ix
Chapter 1. Introduction  1
1.1 Approaching SAP HANA on IBM Power Systems  2
1.1.1 Memory footprint  4
1.1.2 Start times  4
1.1.3 Backup  5
1.1.4 High availability  5
1.1.5 Exchanging hardware  5
1.1.6 Remote database connectivity  6
1.1.7 Conclusion  6
Related publications  85
IBM Redbooks  85
Online resources  85
Help from IBM  86
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, Db2®, FlashCopy®, IBM®, IBM Cloud®, POWER®, POWER9™, PowerVM®, Redbooks®, Redbooks (logo)®, System z®, XIV®
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Red Hat is a trademark or registered trademark of Red Hat, Inc. or its subsidiaries in the United States and
other countries.
Other company, product, or service names may be trademarks or service marks of others.
This IBM® Redpaper® publication provides information and concepts about how to use SAP
HANA and IBM Power Systems features to manage data and performance efficiently.
The target audience of this paper includes architects, IT specialists, and systems
administrators who deploy SAP HANA and manage data and SAP system performance.
Authors
This paper was produced in close collaboration with the IBM SAP International Competence
Center (ISICC) in Walldorf, SAP Headquarters in Germany and IBM Redbooks.
Dino Quintero is a Power Systems Technical Specialist with Garage for Systems. He has 25
years of experience with IBM Power Systems technologies and solutions. Dino shares his
technical computing passion and expertise by leading teams that are developing technical
content in the areas of enterprise continuous availability, enterprise systems management,
high-performance computing (HPC), cloud computing, artificial intelligence (including
machine and deep learning), and cognitive solutions. He also is a Certified Open Group
Distinguished IT Specialist. Dino holds a Master of Computing Information Systems degree
and a Bachelor of Science degree in Computer Science from Marist College.
Adriana Melges Quintanilha Weingart is an IBM Thought Leader and an Open Group certified
Distinguished Technical Specialist who works as an Infrastructure Architect for
SAP solutions on IBM Cloud®, reviewing exceptions and proposing viable alternatives to the
solution architects and customers as part of the Boarding Solutions team. With more than 22
years of experience in IT/SAP, and as an IBM employee for 16 years, she also supported
Global, LA, and Brazilian customers in Banking and Consumer Products industries as an
SAP and Middleware Subject Matter Expert, working closely with customers, Business
Partners, and other IBM teams. Adriana is a member of IBM Academy of Technology and the
IBM Technology Leadership Council in Brazil. She has authored other IBM Redbooks®
publications and participates as a speaker in IBM and non-IBM technical conferences.
Faisal Siddique has been an IBM Power Systems Lab Services Technical Specialist since 2011.
Faisal's specialties are IBM Power Systems, Linux on Power, SAP HANA on Power, and
Spectrum Scale, including the IBM® Elastic Storage Server. Faisal has implemented major
SAP HANA projects in MEA and MEP.
Wade Wallace
IBM Redbooks, Poughkeepsie Center
Damon Bull
Vinicius Cosmo Cardoso
Cleiton Freire
Eric Kass
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Chapter 1. Introduction
This chapter introduces the features of SAP HANA on IBM Power Systems that help manage
data and performance efficiently.
The authors of this publication are architects and engineers who have been implementing SAP
HANA systems for years, and who often are asked to provide their insights about designing
systems. The inquiries are diverse, the conditions are varied, and the answers depend on the
individual situation. However, in designing this book, the team attempted to anticipate the
questions that are most likely to be asked.
The authors intend that this book be used in a task-based fashion to find answers to
questions, such as the following examples:
Which hardware do you choose?
How do you plan for backups?
How do you minimize start time?
How do you migrate data to SAP HANA?
How do you connect to remote systems?
How much hardware do you need (memory, CPU, network adapters, and storage)?
How do you reduce the memory footprint?
What high availability (HA) options exist?
Where do you get support?
How do you answer other questions?
Consider this paper as a starting guide to learn what questions to ask and to answer the last
question first. For any remaining questions, contact the ISICC.
The authors understand the goal of every SAP HANA installation is to be as resilient,
available, inexpensive, fast, simple, extensible, and manageable as possible within
constraints, even if some of the goals are contradictory.
This publication is unique in its intention to exist as a living document. SAP HANA technology
and Power Systems technology that supports SAP HANA change so quickly that any static
publication is out-of-date only months after distribution. The development teams intend to
keep this book as up-to-date as possible.
So, where does the process to define the requirements of an SAP HANA system begin?
Before any sizing is performed, your business goals must be established, and these goals
come from the changing IT industry.
The typical motivation for moving forward to an SAP HANA-based system is the path toward
digital transformation, which requires systems to perform real-time digitalization processing.
For example, the requirements for processing pervasive user applications that use SAP Fiori
and real-time analytics have a different processing footprint (Open Data Protocol
[OData]-based) than the classical form-based processing before output and after input into
SAP GUI applications.
IBM bases its sizing on tables that are established by the benchmark team, who are highly
proficient in SAP performance analysis. Conceptually, calculating SAP HANA sizing from a
classical AnyDB system begins by determining the SAP Application Performance Standard
(SAPS), which is a standard measure of workload capacity, of your present system.
After you size the memory, you find that the number of hardware permutations that fits your
requirements is overwhelming. To narrow your choices, you decide whether to use a scale-up
or scale-out infrastructure. As a best practice, use a scale-up infrastructure because although
some automation is available to help select which data can be spanned across multiple
nodes, you still need some manual management when you implement a scale-out structure.
When you use a scale-up infrastructure, consider that different hardware implementations
have different performance degradation characteristics when scaling-up memory usage. As
systems become larger, the memory to CPU architecture plays an important role in the
following situations:
How distant the memory is from the CPU (affinity).
How proficiently the system can manage cache to memory consistency.
How well the CPU performs virtual address translation of large working sets of memory.
Note: Acquisition costs and licensing become complicated when running mixed workloads.
Ask your customer representative to contact the ISICC to have them help provide an
optimal cost configuration. Contacting the ISICC is a good idea in general because we
want to know what you are planning, and it is our job to help make the offering as attractive
as possible to fit your needs.
Customers that have SAP incidence support for SAP on IBM products continue to enjoy that
support with SAP HANA. SAP HANA on Power Systems support channels are intricately
integrated into SAP development and support.
The people who are supporting SAP on AIX, SAP with IBM Db2®, or SAP on IBM i, and
System z® are members of the same team that support SAP HANA on Power Systems.
If you migrate to SAP HANA but remain with IBM, your support teams do not change. For
questions regarding anything SAP, open an SAP support ticket with an IBM coordinated
support channel, such as BC-OP-AIX, BC-OP-AS4, BC-OP-LNX-IBM, or BC-OP-S390; and for
issues regarding interaction with other databases (DBs), BC-DB-DB2, BC-DB-DB4, and
BC-DB-DB6.
1.1.1 Memory footprint
In contrast to the disk-storage-based DB engines that are used by classical SAP Enterprise
Resource Planning (ERP), a memory footprint with memory-based DB systems is a
continuous issue. A memory footprint is not a static calculation, because the footprint changes
over time: if your archiving rate cannot keep up with your data generation rate, your memory
footprint tomorrow is greater than it is today.
Data growth is an issue that continuously requires attention because it is likely that your
application suite changes. Therefore, the characteristics of your data generation also likely
change.
An archiving strategy is your primary application-level control for data growth, but some
applications, such as SAP Business Warehouse (BW), support near-line-storage as an
alternative. For data that is not archived for various reasons, SAP HANA supports a selection
of hardware technologies for offloading data that is not used frequently (warm data or cold
data) to places other than expensive RAM with expensive SAP licensing.
The available technologies to alleviate large data and data growth are SAP HANA Native
Storage Extension (NSE), which is a method of persisting specific tables on disk and loading
data into caches as necessary (it is similar in function to classical DB engines), extension
nodes (slower scale-out nodes), and SAP HANA Dynamic Tiering (SAP HANA to SAP HANA
near-line storage).
Note: Some of the options that are offered by SAP HANA are not available for all
applications. SAP S/4HANA has different restrictions than SAP BW, and both SAP
S/4HANA and SAP BW have more restrictions than SAP HANA native applications.
SAP provides reports to run on your original system to provide data placement assistance,
and the results are typically good suggestions. You must be prepared to distribute your data
differently if prototypes demonstrate other configurations are necessary.
For more information about planning SAP archiving and managing various technologies for
data that is accessed at different frequencies, see Chapter 2, “SAP HANA data growth
management” on page 7.
1.1.2 Start times
Although the hardware and operating system methods of retaining persistent data in memory
vary, in SAP HANA persistent memory is referenced as memory-mapped files. The
operating system or hardware provides a memory-based file system (such as a RAM disk)
that is “seen” as a file system that is mounted in a path.
1.1.3 Backup
SAP HANA provides an interface for backup tools that is called Backint. SAP publishes a list
of certified third-party tools that conform to that standard. If you plan on using methods, such
as storage-based snapshot or flash copies, quiesce the SAP HANA system before taking a
storage-based snapshot. A quiesce state is necessary to apply subsequent logs (the record
of transactions that occur since the time of the quiesce) to an image that is restored from the
IBM FlashCopy® copy.
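As an illustrative sketch (not an official procedure), the following SQL statements show how a
quiesce point is typically framed from the SAP HANA side when a storage-based snapshot is
taken. The backup ID is a placeholder that is obtained from the backup catalog
(M_BACKUP_CATALOG), and the external ID is an arbitrary label for your storage copy:
-- Prepare the database snapshot (the quiesce point for the storage-level copy)
BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT 'FlashCopy snapshot';
-- Take the storage-level FlashCopy now, then confirm the snapshot as a valid recovery point
BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID <backup_id> SUCCESSFUL 'flashcopy-20210701';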
1.1.4 High availability
SAP HANA System Replication (SAP HSR) accomplishes the task of transferring changes
between hosts by transferring segments of the log (segments of the record of the changes to
the DB that are used for transactional recovery). Changes that are received by backup hosts
are sent to the local DB in a fashion that is similar to when undergoing forward recovery.
1.1.5 Exchanging hardware
The SAP standard method of exchanging hardware (for example, any hardware with the
same endianness) is by using SAP HSR. For more information, see SAP Note 1984882.
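For illustration only, the following commands sketch the typical hdbnsutil calls for replicating
to replacement hardware. The host name, instance number, and site names are placeholders,
and the replication and operation modes depend on your requirements:
# On the current primary, enable SAP HANA System Replication and name the site
hdbnsutil -sr_enable --name=SITEA
# On the replacement hardware, register the secondary against the primary host and instance
hdbnsutil -sr_register --remoteHost=hanahost-a --remoteInstance=00 --replicationMode=sync --operationMode=logreplay --name=SITEB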
Note: Some links for more information throughout this publication require an S-user ID
(S-ID) to access them. For more information about SAP users and authorizations, see this SAP web
page.
1.1.6 Remote database connectivity
SAP provides a wealth of data connectivity options. Some methods, such as Remote
Function Call (RFC) and Database Multiconnect, are provided at the layer of the SAP system.
Other methods, such as Smart Data Access (SDA) and Smart Data Integration (SDI), are
integrated directly into SAP HANA. Extract-transform-load (ETL) methods include SAP
Landscape Transformation (SLT) and the SAP Data Migration Option (DMO) for upgrades.
SDA and SDI do not require SAP HANA-specific adapters for connectivity to IBM i. The IBM i
case is a prime example of the use of a generic Open Database Connectivity (ODBC) adapter
for SDA, and the generic Camel JDBC adapter for SDI.
1.1.7 Conclusion
Facilitating designs of superior SAP HANA systems is the goal of this publication.
Although your productive, development, and test systems include different reliability,
availability, serviceability (RAS); connectivity; and security requirements, the aspects to
consider are universal.
Where tradeoffs must be made between cost, RAS, and complexity, the decisions are unique
to your situation. The intent of this paper, and of the service and support that you receive from
the ISICC and the IBM SAP development team, is to help optimize your decisions so that you
feel comfortable and confident with the final architecture of your design.
Chapter 2. SAP HANA data growth management
This chapter introduces the data temperature concept, which is used as a guide to decouple
data types based on their criticality. Data temperature is helpful for companies that
are deciding when to move their data to different but still accessible data tiers.
Different SAP data tiering solutions that are supported on IBM Power Systems servers are
described in this chapter. The purpose of this chapter is to help you decide on the most
suitable solution among the different available solutions.
In an SAP HANA database, main memory and disk areas are used, which increases the total
cost of ownership (TCO) and affects performance over time.
Before scaling up or scaling out the SAP HANA database, think about options for decoupling
your data location by defining what data always must be in memory and available with the
highest performance for applications and users. Also, consider what data is less frequently
accessed so that it is available from a lower performance data tier with no effect on the business
operations.
You can define which data is accessed infrequently so that it can be available to users in a
reasonably performing and cheaper storage tier. This concept is called the data temperature.
The use of data tiering options for your SAP HANA database includes the following benefits:
Reduce data volume and growth in the hot store and SAP HANA memory.
Avoid performance issues on SAP HANA databases because too much data must be
loaded into the main memory.
Avoid needing to scale up or scale out over time.
Ensure lower TCO.
SAP offers the following data tiering solutions that are supported on SAP HANA on IBM
Power Systems:
Near-Line Storage (NLS) (cold data)
SAP HANA Extension Node (warm data)
SAP HANA Native Storage Extension (NSE) (warm data)
SAP HANA Dynamic Tiering (warm data)
SAP IQ is a column-based, petabyte-scale relational database software system that is
used for business intelligence, data warehousing, and data marts. Produced by Sybase Inc.
(which is now an SAP company), its primary function is to analyze large amounts of data in a
low-cost, high availability (HA) environment.
SAP developed NLS for use with SAP NetWeaver BW and SAP BW/4HANA only.
The required SAP BW/4HANA version and support package level is SAP BW/4HANA 1.0
SPS 00 or higher.
For more information about the minimum release level for SAP BW, see SAP Note 1796393.
Implementing SDA to access the NLS data is optional. SAP HANA SDA optimizes running
queries by moving as much processing as possible to the database that is connected through
SDA. The SQL queries work in SAP HANA on virtual tables. The SAP HANA Query
Processor optimizes the queries and runs the relevant parts in the connected database; then,
it returns the result to SAP HANA and completes the operation.
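As a sketch of this mechanism, and under the assumption of an SAP IQ near-line store, the
following statements create a remote source and a virtual table. The adapter name, driver,
host, and table names are illustrative and must be adapted to your remote database:
-- Register the remote database as a remote source through its ODBC driver
CREATE REMOTE SOURCE "NLS_IQ" ADAPTER "iqodbc"
CONFIGURATION 'Driver=libdbodbc17_r.so;ServerNode=iqhost:2638'
WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=NLSUSER;password=<password>';
-- Create a virtual table that points to a remote table; queries on it are pushed down through SDA
CREATE VIRTUAL TABLE "SAPBW1"."VT_NLS_SALES" AT "NLS_IQ"."<NULL>"."NLSUSER"."SALES";
SELECT COUNT(*) FROM "SAPBW1"."VT_NLS_SALES";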
The use of an NLS solution with SAP HANA SDA is supported as of SAP NetWeaver BW
7.40 SP8 or higher, or SAP BW/4HANA 1.0 or higher.
Note: To use the SDA solution, the SAP BW application team must configure the SAP BW
objects.
The architecture of NLS implementation with SAP NetWeaver BW on SAP HANA and SAP
BW/4HANA with SDA is shown in Figure 2-3.
For the SDA implementation, SAP developed and provided the packages that are supported
on Power Systems servers.
For more information about the implementation of NLS for SAP BW, see SAP Note 2780668.
Note: As a best practice, the SAP recommendation for memory sizing is:
RAMdynamic = RAMstatic
For example, if you use a footprint of 500 GB, your SAP HANA database memory size must
be at least 1 TB.
The SAP HANA Extension Node can operate twice as much data with the same amount of
memory and fewer cores. For example, if you expect to have up to 1 TB of footprint in your
Extension Node, you can have up to 250 GB of memory for the Extension Node.
The ideal use case for Extension Node is SAP BW on SAP HANA or SAP BW/4HANA
because the SAP BW application controls the data distribution, partitioning, and access
paths. For SAP HANA native applications, all data categorization and distribution must be
handled manually.
SAP HANA Extension Node is built into the SAP HANA platform and supported on Power
Systems.
For more information about the implementation of Extension Node, see the following SAP
Notes:
SAP Note 2486706
SAP Note 2643763
To activate NSE, you must configure your warm data-related tables, partitions, or columns
to be page loadable by running SQL DDL commands.
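For example, the following DDL statements (a minimal sketch; the schema, table, and
partition identifiers are placeholders) switch objects to the page-loadable load unit and back:
-- Convert an entire table (and its dependent objects with CASCADE) to warm, page-loadable storage
ALTER TABLE "SAPABAP1"."ZSALES_HIST" PAGE LOADABLE CASCADE;
-- Convert only one partition of a partitioned table
ALTER TABLE "SAPABAP1"."ZSALES_HIST" ALTER PARTITION 2 PAGE LOADABLE;
-- Revert the table to fully in-memory, column-loadable storage
ALTER TABLE "SAPABAP1"."ZSALES_HIST" COLUMN LOADABLE CASCADE;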
After the table, partition, or column is configured to use NSE, it is no longer loaded into
memory. However, it is readable by using the buffer cache. The performance for accessing
data on that table is slower than accessing it in-memory.
Also, depending on the amount of data that is moved to NSE, SAP HANA can start faster
during scheduled and unexpected maintenance.
Supportability
NSE supports SAP HANA native applications, SAP S/4HANA, and SAP Suite on SAP
HANA (SoH).
Note: SAP recommends the use of NSE with SAP S/4HANA or SoH with Data Aging only.
If Data Aging is used in SAP S/4HANA or SoH with SAP HANA 2.0 SPS4, NSE is used for
storing the historical partitions of tables in aging objects. To learn more about this use case
for SAP S/4HANA, see SAP Note 2816823.
Based on the workload on the table, partition, or column over time, the NSE Advisor identifies
the frequently accessed and rarely accessed objects so that the system administrators can
decide which objects can be moved to NSE.
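The following statements sketch how the NSE Advisor is typically enabled and queried. The
parameter and view names (cs_access_statistics and M_CS_NSE_ADVISOR) are taken from SAP
HANA 2.0 SPS4 documentation and are assumptions that you should verify against your revision:
-- Enable collection of column store access statistics for the NSE Advisor
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('cs_access_statistics', 'collection_enabled') = 'true' WITH RECONFIGURE;
-- After a representative workload ran, review the load-unit recommendations
SELECT * FROM M_CS_NSE_ADVISOR;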
NSE is included with SAP HANA 2.0 SPS4, and is supported on Power Systems. For more
information about the use of NSE, see SAP Note 2799997.
SAP HANA Dynamic Tiering is not included in the standard installation package; therefore,
you must download an extra component and install it on SAP HANA.
In SAP HANA Dynamic Tiering, you can create two types of warm tables: extended and
multistore. The extended table type is disk-based; therefore, all data is stored in disk. The
multistore table type is an SAP HANA partitioned table with some partitions in memory and
some partitions on disk.
The distribution of the data among the in-memory store tables, extended tables, and
multistore tables is shown in Figure 2-6.
SAP HANA Dynamic Tiering can be installed in the same server that is hosting SAP HANA or
in a separate dedicated host server. You can also install a second SAP HANA Dynamic
Tiering host as a standby host for HA purposes.
The operating system process for the SAP HANA Dynamic Tiering host is hdbesserver, and
the service name is esserver.
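After the service is provisioned, you can verify that it is running with standard SAP tools; for
example (the instance number 00 is a placeholder):
# List the SAP HANA processes of instance 00; hdbesserver (service esserver) is the dynamic tiering process
sapcontrol -nr 00 -function GetProcessList
# Alternatively, as the <sid>adm user, check the process table directly
ps -ef | grep hdbesserver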
SAP HANA Dynamic Tiering supports only low-level tenant database isolation. Any attempt to
provision the SAP HANA Dynamic Tiering service (esserver) to a tenant database with high-level
isolation fails. After the SAP HANA Dynamic Tiering implementation, the SAP HANA Dynamic
Tiering service stops working if you attempt to configure the tenant database with high isolation.
es_log_threshold_size
  Description: Size (in megabytes) of the log file threshold for point-in-time recovery in
  dynamic tiering.
  Default: The value is based on database space size. For databases less than 1 TB, the
  threshold is 10% of database space size or 2 GB, whichever is greater. For larger
  databases, the value is 1% of database space size.
num_partition_buffer_cache
  Description: Number of main and temp buffer cache partitions. Must be a power of 2;
  otherwise, the value is rounded to the nearest power of 2 - 64.
  Default: None. The value is determined at run time based on the number of CPUs, and is
  not user visible.
Table 2-2 Restart service for these SAP HANA Dynamic Tiering configuration properties to take effect
Database Section Properties
es_log_backup_interval
  Description: Specifies the time interval (in minutes) after which the eslog (the log for the
  dynamic tiering extended store; these files are copied into the backup directory and
  represent the active log) is backed up. This property is set at the database level.
  Default: 15 (minutes)
es_log_threshold_size
  Description: Specifies the minimum available size (in megabytes) of the partition (not the
  size of eslog) on which the eslog volume is mounted and allowed to grow. If the size of the
  partition falls under the specified size, dynamic tiering returns a warning or error and
  prompts you to free up or add space to the file system. This property is set at the SYSTEM
  level.
  Default: The default value is based on database space size. For databases less than 1 TB,
  the threshold is 10% of database space size or 2 GB, whichever is greater. For larger
  databases, the value is 1% of database space size.
The dynamic tiering service must be restarted for changes to the trace section properties to
take effect, as listed in Table 2-3. These properties control the size and number of the
esserver and esserver_console trace files that are in the HANA trace directory.
Table 2-3 Restart service for these SAP HANA Dynamic Tiering configuration properties to take effect
Trace Section Properties
Table 2-4 Restart service for these SAP HANA Dynamic Tiering configuration properties to take effect
Zrlog Section Properties
For a complete list of the parameters, see SAP HANA Dynamic Tiering: Administration Guide.
Note: Some host deployments are designed for small, nonproduction environments, but
are supported in production environments.
In this deployment, organizations can use all Power Systems benefits with the
flexibility of LPAR support and low network latency.
Figure 2-9 More than one SAP HANA Dynamic Tiering server deployment
2.2.5 SAP HANA on IBM Power Systems and SAP HANA Data Tiering
solutions
With Power Systems, a scale-up or scale-out SAP HANA database environment can be
implemented.
With support for multiple LPARs, organizations can consolidate multiple SAP HANA instances
or SAP HANA nodes of the same instance (multi-host) on a single Power Systems server by
using its simplicity of management with a low-latency network.
By using IBM PowerVM® (the Power Systems hypervisor), you can virtualize up to 16
production SAP HANA instances on the LPARs of a single Power Systems server (IBM Power
Systems E950 or IBM Power Systems E980 server). It is also possible to move memory and
CPUs among the LPARs with flexible granularity (for more information, see SAP Note
2230704). This Note is being updated by SAP.
PowerVM allows more granular scaling and dynamically changing allocation of system
resources. You can avoid adding hardware that can cause higher energy, cooling, and
management needs.
Power Systems is the best solution for implementing SAP HANA scale-up and scale-out
modes and the data tiering solutions that are shown in this section. For more information, see
SAP HANA Server Solutions with Power Systems.
You use the Console Interface (CI) to perform an SAP HANA Data Tiering Same Host
Deployment; that is, a deployment on the same server where SAP HANA is installed, with no
additional SAP HANA nodes.
For more information and detailed procedures, see SAP HANA Dynamic Tiering: Master
Guide, SAP HANA Dynamic Tiering: Installation and Update Guide, and SAP HANA Dynamic
Tiering: Administration Guide at SAP HANA Dynamic Tiering.
Note: Power Systems environments require the suitable IBM XL C/C++ redistributable
libraries. Download and install the suitable runtime environment for the latest updates from
the supported IBM C and C++ Compilers page at the IBM Support Portal. Install the
libraries on the SAP HANA and SAP HANA Dynamic Tiering hosts. These libraries are not
required for an Intel-based hardware platform environment.
Now, you can download the SAP HANA Dynamic Tiering revision that is compatible with the
SAP HANA 2.0 SP level you have in place or are installing.
Example 2-1 SAP HANA Dynamic Tiering installation command for starting the installation
./hdblcm --component_dirs=/<full_path_option>
Example 2-2 SAP HANA Dynamic Tiering installation: Choose an action window
SAP HANA Lifecycle Management - SAP HANA Database 2.00.048.03.1605873454
************************************************************************
Choose an action
Example 2-3 SAP HANA Dynamic Tiering installation: Choose components to be installed or
updated window
Choose components to be installed or updated:
------------------------------------------------------------------------------
1 | all | All components
2 | es | Install SAP HANA Dynamic Tiering version
2.0.043.00.12711
5. You are prompted to add another host. Enter n, as shown in Example 2-4.
Example 2-4 SAP HANA Dynamic Tiering installation: Add hosts window
Verifying files...
Do you want to add hosts to the system? (y/n) [n]:
6. You are prompted to enter the System Database User Name and password. Enter SYSTEM
and its password, as shown in Example 2-5.
Example 2-5 SAP HANA Dynamic Tiering installation: User and password window
Enter System Database User Name [SYSTEM]:
Enter System Database User (SYSTEM) Password:
7. You are prompted to add the paths for SAP HANA Dynamic Tiering data and log volume
paths. In this case, the paths are /hana/data/dtes/TST for the data volumes and
/hana/log/dtes/TST for the log volumes, as shown in Example 2-6.
During the installation process, you see messages that are similar to the examples that are
shown in Example 2-8.
Example 2-9 SAP HANA Dynamic Tiering role: Running the SAP HANA Lifecycle Management tool
./hdblcm
SAP HANA Lifecycle Management - SAP HANA Database 2.00.048.03.1605873454
************************************************************************
Choose an action
----------------------------------------------------------------------------------------
---
1 | add_host_roles | Add Host Roles
2 | add_hosts | Add Hosts to the SAP HANA Database System
3 | check_installation | Check SAP HANA Database Installation
4 | configure_internal_network | Configure Inter-Service Communication
5 | configure_sld | Configure System Landscape Directory Registration
6 | extract_components | Extract Components
7 | print_component_list | Print Component List
8 | remove_host_roles | Remove Host Roles
9 | rename_system | Rename the SAP HANA Database System
10 | uninstall | Uninstall SAP HANA Database Components
11 | unregister_system | Unregister the SAP HANA Database System
12 | update | Update the SAP HANA Database System
13 | update_component_list | Update Component List
14 | update_components | Install or Update Additional Components
15 | update_host | Update the SAP HANA Database Instance Host
integration
16 | exit | Exit (do nothing)
2. Select the SAP HANA host to be assigned the extra SAP HANA Dynamic Tiering role. In
this case, only one host is available, as shown in Example 2-10.
Example 2-10 SAP HANA Dynamic Tiering role: Host selection for adding role
System Properties:
TST /hana/shared/TST HDB_ALONE
HDB00
version: 2.00.048.03.1605873454
host: linux-y743 (Database Worker (worker))
edition: SAP HANA Database
3. In Select additional host roles for host '<host>', select the host. In this case, only
one host is available. Add the <sid>adm ID password for it, as shown in Example 2-11.
-------------------------------------------------------------------------------
---
1 | extended_storage_worker | Dynamic Tiering Worker
(extended_storage_worker)
Enter comma-separated list of additional roles for host ' linux-y743' [1]:
Enter System Administrator (TSTadm) Password:
4. You are prompted to confirm all added parameters. Confirm and enter y, as shown in
Example 2-12.
Example 2-12 SAP HANA Dynamic Tiering role: Installation summary window
Summary before execution:
=========================
At the end of the process, you see a summary of the installation, as shown in
Example 2-13.
Figure 2-10 SAP HANA Dynamic Tiering: Yellow status in SAP HANA Studio
Figure 2-11 SAP HANA Dynamic Tiering: Service esserver in SAP HANA Studio
To create the extended storage, you must have the system privilege EXTENDED STORAGE
ADMIN.
Note: The extended storage is created with the SYSTEM ID now, but for all subsequent
activities, you need another user ID with all necessary privileges, which becomes the owner
of the extended storage and multistore tables.
Note: In this demonstration, a single tenant is the initial tenant, and the SAP HANA
instance previously never contained more tenants. Therefore, the SAP HANA Dynamic
Tiering service is automatically provisioned to the tenant database.
2. For this demonstration, create one extended storage with 1 GB of available allocated
space. Right-click the tenant database and click Open SQL Console to open the SQL
console. Then, run the command that is shown in Example 2-14.
Example 2-14 SAP HANA Dynamic Tiering: Command for creating extended storage
CREATE EXTENDED STORAGE AT 'linux-y743' SIZE 1000 MB;
Figure 2-12 SAP HANA Dynamic Tiering: Result for extended storage creation command-line interface
3. Click the Overview tab in SAP HANA Studio. You see the status of SAP HANA Dynamic
Tiering as “Running”, as shown in Figure 2-13.
Figure 2-13 SAP HANA Dynamic Tiering: Running status in SAP HANA Studio
Your SAP HANA Dynamic Tiering is now ready for you to create an Extended Table or
multistore table.
To create the user ID by using SAP HANA Studio, complete the following steps:
1. Click Tenant → Security, right-click Users, and then, click New User.
2. Define a name for the user and add the System Privileges CATALOG READ, EXTENDED
STORAGE ADMIN, and IMPORT. You do not have to force the password change in the
first log on if you prefer.
In this case, the user ID is DTUSER. Log in to the tenant with the user ID that you defined.
Example 2-15 SAP HANA Dynamic Tiering: Extended table creation command-line interface
CREATE TABLE "DTUSER"."CUSTOMER_ES" (
C_CUSTKEY integer not null,
C_NAME varchar(25) not null,
C_ADDRESS varchar(40) not null,
C_PHONE char(15) not null,
primary key (C_CUSTKEY)
) USING EXTENDED STORAGE;
Figure 2-15 SAP HANA Dynamic Tiering: Extended table CUSTOMER_ES creation
In the left pane of the window, the table is identified as an EXTENDED table in the SAP HANA
Catalog for user DTUSER.
Important: Foreign keys between two extended tables or between an extended table and
an in-memory table are not supported.
To insert data, use the same syntax as a common in-memory column store table, as shown in
Example 2-16 and in Figure 2-16.
Example 2-16 SAP HANA Dynamic Tiering: Insert data into table CUSTOMER_ES command-line
interface
INSERT INTO "DTUSER"."CUSTOMER_ES"
(C_CUSTKEY, C_NAME, C_ADDRESS, C_PHONE)
VALUES
(1,'CUSTOMER 1','ADDRESS 1', 19999999999);
Figure 2-16 SAP HANA Dynamic Tiering: Insert data into table CUSTOMER_ES - SAP HANA Studio window
Figure 2-17 SAP HANA Dynamic Tiering: Contents of the extended table CUSTOMER_ES in SAP HANA Studio
Example 2-17 SAP HANA Dynamic Tiering: Creating a multistore table command-line interface
CREATE TABLE "DTUSER"."SALES_ORDER" (
S_SALESOKEY integer not null,
S_CUSTOMER integer not null,
S_VALUE decimal(15,2) not null,
S_DATE date not null,
primary key (S_SALESOKEY,S_DATE))
PARTITION BY RANGE ("S_DATE")
(
USING DEFAULT STORAGE
(PARTITION '2010-12-31' <= VALUES < '9999-12-31')
USING EXTENDED STORAGE
(PARTITION '1900-12-31' <= VALUES < '2010-12-31'));
In SAP HANA Studio, you see that the multistore table symbol differs from the extended
table symbol, as shown in Figure 2-18 on page 33.
The Data Manipulation Language (DML) operations on the table do not differ from any
other table type.
2. Run a query from the TABLE_PARTITIONS table, as shown in Example 2-18, and you see
that the new table features two partitions: one in the default store, and another in extended
storage, as shown in Figure 2-19.
Example 2-18 SAP HANA Dynamic Tiering: Query SALES_ORDER table from the
TABLE_PARTITIONS table
SELECT SCHEMA_NAME, TABLE_NAME, PART_ID, STORAGE_TYPE FROM TABLE_PARTITIONS
WHERE TABLE_NAME = 'SALES_ORDER' AND SCHEMA_NAME = 'DTUSER'
Figure 2-19 SAP HANA Dynamic Tiering: Results from TABLE_PARTITIONS table
3. Insert a row into the table so that it is stored in the default store by running the command
that is shown in Example 2-19.
Example 2-19 SAP HANA Dynamic Tiering: Inserting a row in the default store partition of the
SALES_ORDER table
INSERT INTO "DTUSER"."SALES_ORDER"
(S_SALESOKEY, S_CUSTOMER, S_VALUE, S_DATE)
VALUES
(1,1,120,'2011-12-11');
Example 2-20 Checking the count of default store rows of the table SALES_ORDER
SELECT RECORD_COUNT FROM M_CS_TABLES WHERE TABLE_NAME = 'SALES_ORDER' AND
SCHEMA_NAME = 'DTUSER';
Figure 2-20 SAP HANA Dynamic Tiering: Results for the count of default store rows of the table SALES_ORDER
4. Insert one more row in the extended storage of the table SALES_ORDER, as shown in
Example 2-21.
Example 2-21 Inserting a row in the extended storage partition of the SALES_ORDER table
INSERT INTO "DTUSER"."SALES_ORDER"
(S_SALESOKEY, S_CUSTOMER, S_VALUE, S_DATE)
VALUES
(1,1,120,'2009-12-11');
From the table M_ES_TABLES (the table that shows data that is stored in the extended
storage) as shown in Example 2-22, you see that one record exists in the extended storage
as part of SALES_ORDER table, as shown in Figure 2-21.
Example 2-22 Checking the count of extended storage rows of the table SALES_ORDER
SELECT * FROM M_ES_TABLES WHERE TABLE_NAME = 'SALES_ORDER' AND SCHEMA_NAME =
'DTUSER';
Figure 2-21 SAP HANA Dynamic Tiering: Results for the count of extended storage rows of the table SALES_ORDER
This chapter describes different solutions that can help speed up starting large SAP HANA
DBs to help minimize downtime.
At the same time that persistent memory dramatically increases system performance, it
enables a fundamental change in computing architecture.
Some SAP documentation refers to persistent memory as Non-Volatile Memory (NVM), while
IBM Documentation often uses the term Storage Class Memory (SCM). The term
Non-Volatile DIMM (NVDIMM) persistent memory also is used.
The Storage Networking Industry Association (SNIA) defined a programming model that
describes an architecture of how operating systems can provide persistent memory services
and how application software can use them. The PowerVM/Linux on Power Systems
implementation of this programming model is shown in Figure 3-1 on page 37.
As shown at the bottom of Figure 3-1, the PowerVM hypervisor presents the persistent
memory devices to the operating system in a technology agnostic manner. This process is
referred to as the PowerVM Persistent Memory Architecture. This abstraction enables the
adoption of new persistent memory technologies, attachment technologies, and device form
factors with minimal effect on the operating system and virtualization management code.
Depending on the physical device capabilities, the PowerVM hypervisor can virtualize
persistent memory devices and segment them into smaller capacity volumes, which can be
assigned to different logical partitions (LPARs).
After persistent memory is assigned to an LPAR, individual devices are presented by the
Linux operating system as generic non-volatile DIMM devices, /dev/nmem<#>. The
management tool ndctl is used to interface with the nvdimm driver to configure and provision
these nvdimm devices into regions, namespaces, and persistent memory volumes.
A region, which groups one or more NVDIMM devices, is commonly formed from devices
on the same NUMA node.
A namespace is a partition of all or part of a region and is associated with a mode,
which determines the access methods to the persistent memory. The following modes are available:
File system direct access (fsdax): Persistent memory is presented as a block device and
supports XFS and EXT4 file systems. This mode provides direct access (DAX) support,
which bypasses the Linux page cache and performs reads and writes directly to the
device. For direct access through load and store instructions, the device can be mapped
into the address space of the application process with mmap(). The default mode of a
namespace is fsdax.
Device direct access (devdax): Persistent memory is presented as a character device.
This mode also provides DAX support.
For SAP HANA, only the fsdax mode is used. Figure 3-2 shows an example of the fsdax stack
making NVDIMM devices available to applications.
Specifically, SAP HANA supports placing column-store main data structures in persistent
memory. The main data structures are highly compressed, read-only (after creation), and
represent 95% of database data.
Figure 3-3 SAP HANA memory components and persistent memory data
SAP HANA main data, which is organized in column-wise data structures, can be written to
files in the DAX file system. However, instead of using standard file I/O read and write calls,
SAP HANA employs memory-mapped file I/O, as shown in Figure 3-3. By mapping the files
directly into its address space, the application can use load and store CPU operations to
manipulate the data.
This virtual persistent memory (vPMEM) technology is integrated into the IBM PowerVM
hypervisor for POWER9 systems. It provides applications with a high-speed persistent RAM disk
storage solution that persists across operating system and logical partition (LPAR) restarts.
The PowerVM Persistent Memory architecture allows for multiple types of memory to be
defined and deployed for different use cases. Currently, the vPMEM solution creates
persistent storage volumes from standard system DRAM, which provides high-speed
persistent access to data for applications that are running in an LPAR.
For this solution, no special memory or storage devices are required; only unused available
system DRAM is needed. Future enhancements are intended to allow other types of memory
to be accessed for different use cases.
vPMEM volumes are created as part of a specific LPAR definition and managed on the
system Hardware Management Console (HMC). Each defined LPAR on a system can have a
dedicated vPMEM volume.
Individual vPMEM volumes are not sharable between LPARs, and vPMEM volumes are not
transferable to another LPAR, nor can they be resized; instead, they are deleted and new vPMEM
volumes are created with the wanted size. They are sized on logical memory block (LMB)
granularity, where an LMB is the unit of memory that is used by the hypervisor to manage
DRAM memory. By default, an LMB is 256 MB system-wide.
After the application uses this persistent system memory volume as a disk resource, any data
that is stored in the vPMEM device persists if the LPAR is restarted.
Access to vPMEM volumes by the Linux operating system is provided by the standard
non-volatile memory device (libnvdimm) subsystem in the Linux kernel and the corresponding
ndctl utilities. The resulting vPMEM volumes are then mounted on the Linux file system as
Direct Access (DAX) type volumes.
When SAP HANA detects the presence of DAX vPMEM volumes, it starts copying main
column store table data into these defined persistent volumes. Through its default settings,
SAP HANA attempts to copy all compatible column store table data into the vPMEM volumes,
which maintains a small amount of space in the LPAR DRAM for column store table
metadata. This situation creates a persistent memory copy of the table data that SAP HANA
then uses for query and transactional processing. SAP HANA can also be configured to copy
into vPMEM only specific column store tables, or even specific partitions of individual column
store tables.
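For reference, the following sketch shows how the DAX-mounted volumes are typically made
known to SAP HANA and how persistent memory use can be switched per table. The mount
points and table name are placeholders, and the parameter name
(basepath_persistent_memory_volumes) is an assumption to verify against your SAP HANA revision:
# global.ini, [persistence] section: one DAX mount point per NUMA node, separated by semicolons
basepath_persistent_memory_volumes = /hana/pmem/nvmem0;/hana/pmem/nvmem1

-- Enable or disable persistent memory for an individual column store table
ALTER TABLE "SAPABAP1"."ZSALES_HIST" PERSISTENT MEMORY ON IMMEDIATE CASCADE;
ALTER TABLE "SAPABAP1"."ZSALES_HIST" PERSISTENT MEMORY OFF IMMEDIATE CASCADE;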
To access the column store table data on the vPMEM volumes, SAP HANA creates
memory-mapped pointers from the DRAM memory structures to the column store table data.
Considering these vPMEM volumes are allocated from memory, accessing the table data is
done at memory speeds with no degradation in performance compared to when the column
store data is stored without vPMEM in DRAM.
When SAP HANA shuts down, the column store table data persists in the vPMEM volumes.
The next time SAP HANA starts, it detects the presence of the column store table data in the
vPMEM volumes and skips loading that data. SAP HANA re-creates memory structure
pointers to the columnar table data that is stored on the vPMEM volumes, which results in a
reduction in SAP HANA start times, as shown in Figure 3-6.
Figure 3-6 A large SAP HANA OLAP database start and shutdown times with and without vPMEM
This chart shows that substantial time savings occur at SAP HANA start for a large OLAP DB when
all of the DB's column store data is allocated in vPMEM volumes. Time savings also are
seen at SAP HANA shutdown because less DRAM memory must be
programmatically tagged as freed before ceding control to the operating system.
Prerequisites
The following minimum hardware and software levels are required to configure and implement
SAP HANA with IBM PowerVM Virtual Persistent Memory:
IBM POWER9 System with Firmware FW940.
IBM HMC V9.1.940.
SAP HANA 2 SPS04 revision 44.
SLES 15 SP1:
– Kernel version 4.12.14-197.21.1.
– Ndctl version 64.1-3.3.1.
To run the SAP Hardware and Cloud Measurement Tool (HCMT) with vPMEM, the minimum
tool version is SAP HANA v2.0 SPS04 revision 46.
The next subsections of this chapter provide guidance about enabling vPMEM usage with
SAP HANA.
SAP Note 2786237 describes several tools to assist in the correct sizing of persistent
memory:
SAP HANA Quicksizer for greenfield deployments.
Sizing report for SoH and S/4HANA (SAP Note 1872170).
Sizing report for BWoH and BW/4HANA (SAP Note 2296290).
SQL reports attached to the SAP Note 2786237 for an overview of memory usage in a
current system.
Note: The ratio restriction between DRAM and PMEM that is documented in SAP Note
2786237 does not apply to the POWER platform.
A rough estimation can be obtained by a simple query to the DB that provides the amount of
memory that is in use by all column store tables (as shown in Example 3-1) and adding 10 - 15% of
memory to allow for DB growth over time.
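A query along the following lines (an illustrative sketch, not the publication's Example 3-1) sums
the current column store footprint in gigabytes; add 10 - 15% to the result to allow for growth:
SELECT ROUND(SUM(MEMORY_SIZE_IN_TOTAL) / 1024 / 1024 / 1024, 1) AS "Column store size (GB)"
FROM M_CS_TABLES;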
If less LPAR DRAM memory is used by SAP HANA to store the column store table data that is
on the vPMEM volume, the RAM memory allocation for the LPAR can be reduced by a similar
amount to avoid the use of more system memory that is required for the LPAR. This memory
definition adjustment is done in the LPAR profile on the HMC.
The resource allocation of CPU and memory for a system’s LPARs can be queried by
performing a resource dump from the HMC. The following methods are available:
Within the HMC, which is described in How to Initiate a Resource Dump from the HMC -
Enhanced GUI.
By logging on to the LPAR's HMC command-line interface (CLI) with a privileged HMC
user account, such as hscroot, and running the command that is shown in Example 3-2.
Example 3-2 Starting a resource dump from the HMC command-line interface
startdump -m <system name> -t resource -r 'hvlpconfigdata -affinity -domain'
Substitute <system_name> with the system name that is defined on the HMC that is hosting
the LPAR.
Both methods create a resource dump file in the /dump directory that is timestamped.
Depending on the size of the system, it can take a few minutes before the dump file is ready.
You can view the list of resource dump files on the HMC that is listed in chronological order by
running the command that is shown in Example 3-3.
Example 3-3 Listing the resource dump files in the /dump directory
ls -ltr /dump/RSCDUMP*
The last file in the list can be viewed by running the less command, as shown in Example 3-4.
Example 3-5 Main section of the RSCDUMP file listing CPU and memory resources assigned to LPARs
|-----------|-----------------------|---------------|------|---------------|---------------|-------|
| Domain | Procs Units | Memory | | Proc Units | Memory | Ratio |
| SEC | PRI | Total | Free | Free | Total | Free | LP | Tgt | Aloc | Tgt | Aloc | |
|-----|-----|-------|-------|-------|-------|-------|------|-------|-------|-------|-------|-------|
| 0 | | 1200 | 0 | 0 | 2048 | 311 | | | | | | 0 |
| | 0 | 1200 | 0 | 0 | 2048 | 311 | | | | | | 0 |
| | | | | | | | 1 | 1200 | 1200 | 1023 | 1023 | |
|-----|-----|-------|-------|-------|-------|-------|------|-------|-------|-------|-------|-------|
| 1 | | 1200 | 0 | 0 | 2048 | 269 | | | | | | 0 |
| | 1 | 1200 | 0 | 0 | 2048 | 269 | | | | | | 0 |
| | | | | | | | 1 | 1200 | 1200 | 1024 | 1024 | |
|-----|-----|-------|-------|-------|-------|-------|------|-------|-------|-------|-------|-------|
| 2 | | 1200 | 0 | 0 | 2048 | 312 | | | | | | 0 |
| | 2 | 1200 | 0 | 0 | 2048 | 312 | | | | | | 0 |
| | | | | | | | 1 | 1200 | 1200 | 1024 | 1024 | |
|-----|-----|-------|-------|-------|-------|-------|------|-------|-------|-------|-------|-------|
| 3 | | 1200 | 0 | 0 | 2048 | 311 | | | | | | 0 |
| | 3 | 1200 | 0 | 0 | 2048 | 311 | | | | | | 0 |
| | | | | | | | 1 | 1200 | 1200 | 1025 | 1025 | |
|-----|-----|-------|-------|-------|-------|-------|------|-------|-------|-------|-------|-------|
In Example 3-5, the columns of data that are of interest in this context have the following
meanings:
Domain SEC
The socket number in which the cores and memory are installed. In Example 3-5, the
system features four sockets: 0 - 3.
Domain PRI
The NUMA domain number. Example 3-5 has four NUMA domains (0 - 3), and each
NUMA domain aligns to a socket number. Some Power Systems servers have two NUMA
domains per socket.
Procs Total
The number of processors in a 1/100th of a processor increment. Because PowerVM can
allocate subprocessor partitions in 1/100th of a single core, this number is 100 times
larger than the actual number of cores on the NUMA domain. Example 3-5 shows that each
socket has a total of 12 cores.
Procs Free/Units Free
The total number of 1/100th of a core processor resource that is available.
Memory Total
The total amount of memory that is available on the NUMA domain. This number is four
times larger than the actual memory in gigabytes that is available. Example 3-5 shows
each socket has 512 GB of RAM installed, for a total system capacity of 2 TB.
Memory Free
The amount of memory that is not in use by assigned LPARs on the NUMA domain. Again,
this value is four times larger than the actual available memory in gigabytes.
This measurement is an important detail in determining the amount of memory that can be
used for a vPMEM volume because this value decreases after the creation of the vPMEM
volume.
Example 3-5 shows sockets 0, 2, and 3 all have approximately 75 GB of available memory,
and socket 1 has about 65 GB of available memory. This system has a vPMEM volume of
700 GB that is assigned to the running LPAR.
If the system has vPMEM volumes assigned, this memory allocation is not listed in this
output. The memory values for the LPARs are the ones that are assigned to the LPAR’s
memory allocation in the profile. To determine the approximate amount of memory vPMEM
volumes are taking on a specific socket, add up the memory allocations for the LPARs on that
socket and subtract that value from the Memory Total. Taking this result and subtracting the
value from Memory Free shows the amount of RAM that is used by the vPMEM volume, as
shown in Example 3-6.
Using Example 3-5 on page 45 for socket 0 gives the values that are shown in Example 3-7.
Considering the same memory allocation is assigned across all four nodes, the total vPMEM
device that is allocated to this LPAR is approximately 714 GB.
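As a worked sketch of that arithmetic with the socket 0 values from Example 3-5
(Memory Total = 2048, LPAR memory Aloc = 1023, Memory Free = 311, all in quarter-gigabyte units):
# Quarter-GB units used by vPMEM on socket 0: Memory Total - LPAR Aloc - Memory Free
echo $(( 2048 - 1023 - 311 ))            # 714 units, which is 714 / 4 = 178.5 GB on this socket
# The same allocation exists on all four sockets: 714 units x 4 sockets / 4 units per GB
echo $(( (2048 - 1023 - 311) * 4 / 4 ))  # approximately 714 GB of vPMEM in total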
vPMEM volumes are configured at the level of LPARs. Currently, creation, renaming, and
deletion of vPMEM volumes are supported. To perform these operations, the LPAR must be in
the not activated state.
Creating a vPMEM memory device for an LPAR is done on the HMC by modifying the
properties of the LPAR. Complete the following steps:
1. Click System definition in the HMC to get a list of available LPARs, and then, click the
LPAR’s name to get the general details of the partition. Then, click the Persistent Memory
property to show the details for the vPMEM volumes, as shown in Figure 3-7.
Figure 3-8 Persistent Memory window: Empty list of defined vPMEM volumes
3. Add a descriptive volume name, the total size in megabytes of the vPMEM device, and
select the Affinity check box. Click OK, which creates a single vPMEM device for
the LPAR.
After a vPMEM volume exists, you can rename or delete it on the Persistent Memory page.
In addition to the graphical configuration, the HMC command-line tools can be used. For
example, use the lshwres command to list all vPMEM volumes of all LPARs on a managed
system:
$ lshwres -r pmem -m ish359-HanaP-9009-42A-SN7800440 --level lpar
lpar_name=lsh30221,lpar_id=21,curr_num_volumes=0,curr_num_dram_volumes=0,max_num_dram_volumes=4
lpar_name=lsh30222,lpar_id=20,curr_num_volumes=0,curr_num_dram_volumes=0,max_num_dram_volumes=4
lpar_name=dummy2,lpar_id=5,curr_num_volumes=0,curr_num_dram_volumes=0,max_num_dram_volumes=4
Figure 3-10 An 8 TB system with memory that is allocated across four NUMA domains
Figure 3-10 also shows an 8 TB system with memory that is allocated across four NUMA
domains. Creating a 4 TB vPMEM device with NUMA affinity creates one vPMEM device per
NUMA node, each 1 TB.
The benefit of dividing the vPMEM volume into segments and affinitizing them to NUMA boundaries is that applications can access data in physically aligned NUMA node memory ranges. Considering that data is accessed in this aligned manner, storing data NUMA-optimized is best for throughput and access latency performance.
Affinitized vPMEM volumes are the only option that is supported by SAP HANA.
Figure 3-11 An 8 TB system with memory that is allocated across four NUMA nodes
Figure 3-11 also shows an 8 TB system with memory that is allocated across four NUMA nodes. This configuration creates a 4 TB non-affinitized vPMEM device, which results in a single 4 TB device that is striped across all NUMA nodes.
Currently, this vPMEM device option is not supported for SAP HANA.
The persistent memory volumes are then initialized, enabled, and activated with the standard operating system non-volatile DIMM control (ndctl) commands. Although these utilities are not provided by default in a base-level Linux installation, they are included in the Linux distribution. Install them by running the distribution's package repository commands; for example, for Red Hat, run the command that is shown in Example 3-8 on page 51.
For SUSE Enterprise Linux Server, run the command that is shown in Example 3-9.
Example 3-9 SUSE Enterprise Linux Server command-line interface installation of the ndctl package
zypper install ndctl
On Power Systems servers, each vPMEM volume is initialized and activated automatically. A corresponding number of /dev/nmem and /dev/pmem devices is available, based on the NUMA nodes that are assigned to the LPAR, as shown in Example 3-10.
If the /dev/pmem devices are not created automatically by the system during the initial
operating system start, they must be created. Example 3-11 shows the set of ndctl
commands to initialize the raw /dev/nmem device, where X is the device number (for example,
/dev/nmem0).
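The exact command set is in Example 3-11. For orientation only, a typical ndctl invocation to bring a raw namespace online in fsdax mode looks similar to the following sketch; the namespace name namespace0.0 is an assumption and corresponds to /dev/nmem0.
# Reconfigure the raw namespace behind /dev/nmem0 into fsdax mode, which exposes
# a block device (/dev/pmem0) that can be formatted and mounted with DAX.
ndctl create-namespace --force --reconfig=namespace0.0 --mode=fsdax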
New device definitions are then created as /dev/pmemX. These new disk devices must be
formatted, as shown in Example 3-12.
Example 3-12 Creating the XFS file system on the vPMEM device
# mkfs -t xfs -b size=64k -s size=512 -f /dev/pmemX
When mounting the vPMEM volumes, use the /dev/disk/by-uuid identifier for the volumes.
These values are stable regarding operating system renaming of devices on restart of the
operating system. Also, these volumes must be mounted by using the -o dax option, as
shown in Example 3-13.
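Example 3-13 shows the exact commands. As a minimal sketch, assuming a hypothetical mount point /hana/pmem/pmem0 and the device /dev/pmem0, the sequence typically looks as follows:
# Look up the stable UUID of the vPMEM block device.
UUID=$(blkid -s UUID -o value /dev/pmem0)
# Create the mount point and mount the volume with DAX (direct access) enabled.
mkdir -p /hana/pmem/pmem0
mount -o dax /dev/disk/by-uuid/${UUID} /hana/pmem/pmem0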
vPMEM volumes are not traditional block device volumes. Therefore, normal block device disk monitoring tools (for example, iostat and nmon) cannot monitor the I/O to the vPMEM devices. However, a normal directory monitoring tool (for example, du) works because the files use the available storage space of the vPMEM volume.
Next, the SAP HANA instance configuration must be updated to use the new vPMEM volumes. Update the global.ini file to add the file system directory paths to the basepath_persistent_memory_volumes parameter in the [persistence] section, with each directory separated by a semicolon, as shown in Example 3-15.
Example 3-15 Entry in the global.ini file defining the paths to the vPMEM volume directories
[persistence]
basepath_persistent_memory_volumes =
/path/to/first/directory;/path/to/second/directory;…
This parameter option is an offline change only, which requires the restart of SAP HANA to
enable it.
On first start, SAP HANA by default copies all column store table data (or as much as possible) from persistent disk into the newly configured and defined vPMEM volumes. With partitioned column store tables, SAP HANA assigns partitions to vPMEM volumes in a round-robin fashion to distribute the column store table partitions evenly across the entire vPMEM memory NUMA assignment. When all column store data for a table is loaded into the vPMEM volumes, SAP HANA maintains a small amount of column store table metadata in normal LPAR DRAM memory.
Example 3-16 Changing the default behavior of SAP HANA to not load all tables on SAP HANA start
[persistent_memory]
table_default = OFF
The table_default parameter is dynamic. If a column store table's data is in the vPMEM volumes, performing an SAP HANA UNLOAD of the unneeded columnar table data removes the table data from the persistent device. Then, a LOAD operation is needed to reload the table data into system DRAM, or SAP HANA can be shut down so that the old column store table data is removed from the vPMEM volumes. On the next start, SAP HANA loads all column store table data into DRAM.
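For illustration, assuming a hypothetical table MYSCHEMA.MYTABLE, the unload and reload operations are standard SQL statements that can be run from hdbsql:
hdbsql> UNLOAD "MYSCHEMA"."MYTABLE"
hdbsql> LOAD "MYSCHEMA"."MYTABLE" ALL
The UNLOAD statement evicts the table data from memory and from the vPMEM volume; the LOAD statement with the ALL option reloads all columns of the table.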
This setting can be overridden by the preference settings on the table, partition, or column
level.
Note: To specify different sets of vPMEM volumes for different SAP HANA tenants, use
SAP Note 2175606 to first segment tenants to separate GALs. Then, define the persistent
memory volumes in the .ini files at the database level.
Individual column store tables that are to use the vPMEM volumes can be moved by running
the SQL command ALTER TABLE, as shown in Example 3-17.
Example 3-17 Altering a table to move the column store table to PMEM
ALTER TABLE "<schema_name>"."<table_name>" PERSISTENT MEMORY ON IMMEDIATE CASCADE;
This command immediately moves the table from LPAR DRAM to the vPMEM devices, and it
persists for all future restarts of SAP HANA.
For columns and partitions, the only way to load data into vPMEM volumes is by running one
of the CREATE COLUMN TABLE commands, as shown in Example 3-18.
# Create a table in which the named partitions use persistent memory
CREATE COLUMN TABLE … PARTITION .. PERSISTENT MEMORY ON
Column store table data can also be unloaded from the vPMEM devices and removed from the vPMEM volumes. Example 3-19 shows the commands that remove the column store table data from the vPMEM volume and unload the data from all memory.
After these commands run, the column store table data is no longer loaded into any memory area (DRAM or vPMEM). Table data must be reloaded by future query processing or manually by running the SQL command that is shown in Example 3-20 on page 54.
If the table persistent memory setting is changed to OFF or ON, it can be reset to DEFAULT by
running the SQL ALTER TABLE command that is shown in Example 3-21.
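Example 3-21 shows the exact statement. As an assumption-based sketch for a hypothetical table, the reset typically takes the following form:
hdbsql> ALTER TABLE "MYSCHEMA"."MYTABLE" PERSISTENT MEMORY DEFAULT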
For example, to check which persistent memory volumes SAP HANA is using, query the M_PERSISTENT_MEMORY_VOLUMES monitoring view:
hdbsql> select * from M_PERSISTENT_MEMORY_VOLUMES where PORT=30603
HOST,PORT,VOLUME_ID,NUMA_NODE_INDEX,PATH,FILESYSTEM_TYPE,IS_DIRECT_ACCESS_SUPPORTED,TOTAL_SIZE,USED_SIZE
"lsh30117",30603,3,0,"/hana/shared/pmem/pmem0/JE6/mnt00001/hdb00003.00003","xfs","true",401517510656,15582494720
"lsh30117",30603,3,1,"/hana/shared/pmem/pmem1/JE6/mnt00001/hdb00003.00003","xfs","true",402590728192,15930228736
The output shows that SAP HANA found and uses two persistent memory-based XFS file
systems. One file system is backed by memory on NUMA node 0; the other is backed by
memory on NUMA node 1.
Table 3-1 Software version dependencies for using vPMEM on SAP HANA on Power Systems servers
| Component | Version |
|-----------|---------|
| Operating systems | SUSE Enterprise Linux Server 15 SP1 or later, or Red Hat 8.2 or later |
SAP HANA can use tmpfs volumes to store columnar table data in the same way that it uses vPMEM volumes: the tmpfs memory volumes are mounted on the operating system's file system directory structure for storage of table data for fast access. SAP refers to the tmpfs volumes solution as the SAP HANA Fast Restart Option.
As with vPMEM persistent memory volumes, SAP HANA can use tmpfs volumes to store
columnar table data in configured LPAR DRAM volumes. The mechanisms by which SAP
HANA is configured to use tmpfs volumes are identical to vPMEM volume usage.
Like vPMEM volumes, access to the files that are stored in a tmpfs file system is performed at DRAM speed. In this regard, accessing data from tmpfs and vPMEM volumes has the same performance. Also, no special hardware is needed to create tmpfs volumes because the DRAM memory that is allocated to the LPAR is used.
However, unlike vPMEM volumes that are created with the Affinity option, a tmpfs volume is not automatically aligned to any specific NUMA node. NUMA node memory alignment details are gathered as a preparatory step and used when creating the tmpfs volumes at the operating system level.
One benefit that tmpfs file systems have over vPMEM volumes is that tmpfs volumes can be created to grow dynamically as they fill, so they can accommodate larger than expected data growth. However, this dynamic characteristic of tmpfs file systems has the side effect that more LPAR DRAM memory than expected can be consumed, which takes memory away from applications that need it to function. Hence, correctly sizing the tmpfs volumes is still important. Alternatively, the tmpfs volumes can be created to use a fixed amount of LPAR DRAM.
A quick check at the operating system shows the available NUMA nodes and how much
memory is allocated to each node, as shown in Example 3-22.
Example 3-22 Determining the amount of RAM that is allocated to each NUMA node of an LPAR
grep MemTotal /sys/devices/system/node/node*/meminfo
The command that is shown in Example 3-22 produces an output that shows the amount of
memory that is available on each of the NUMA nodes that the operating system assigned.
In this output (see Example 3-23), the system has four NUMA nodes, each installed with
roughly 512 GB of system DRAM.
For the system in Example 3-23, allocate four different tmpfs devices, one for each NUMA node. The mount command includes an option that assigns the memory for the tmpfs to a named NUMA node. Example 3-23 shows the four directories to which the tmpfs file systems will be mounted.
Example 3-24 shows how to create the file systems by using the mount command options.
In Example 3-24:
<tmpfs file system name> is the device name. Use any descriptive name.
-t tmpfs is the file system type, in this case tmpfs.
-o mpol=prefer:X specifies the NUMA node number from which to assign the memory for the tmpfs.
/<directory to mount file system> is the location on the operating system file system path where the tmpfs file system is mounted. This directory is accessible and readable from the operating system level. Check that this directory exists, as for any normal mount command.
In Example 3-23, the system has four NUMA nodes; therefore, four directories can be created and four different tmpfs file systems can be mounted (as shown in Example 3-25 on page 59) by substituting <SID> with the SID of the SAP HANA DB.
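Example 3-25 shows the complete set of commands. As a minimal sketch for NUMA node 0 only, assuming the directory layout /hana/tmpfs0/<SID>, the creation and mount steps look as follows (repeat for nodes 1 - 3):
# Create the mount point and mount a tmpfs file system whose memory is
# preferentially allocated from NUMA node 0.
mkdir -p /hana/tmpfs0/<SID>
mount tmpfs<SID>0 -t tmpfs -o mpol=prefer:0 /hana/tmpfs0/<SID>
# To cap the memory that the file system can consume, add a fixed size instead
# of letting it grow dynamically, for example: -o mpol=prefer:0,size=250G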
By using these options (see Example 3-25), the amount of memory that is allocated to each tmpfs is dynamically sized based on what SAP HANA stores in the file systems. This option is the preferred option because the file system grows as SAP HANA table data is migrated from RAM to the tmpfs file system.
If you must statically allocate an amount of memory to the tmpfs file system, use the -o
size=<size in GB> option to allocate statically a fixed size of LPAR DRAM to the tmpfs file
systems.
Note: Unlike vPMEM volumes, the tmpfs volumes are not formatted as an XFS file system and are not mounted by using the -o dax option. This difference is because of the different file system format of tmpfs volumes, and because SAP HANA can differentiate and support both types of file system formats for persistently storing columnar table data.
Note: For more setup information, see the SAP HANA Administration Guide for SAP HANA Platform - SAP HANA Fast Restart Option (which is available at this web page) and SAP Note 270084.
In total, the combined size of main memory and persistent memory is the same as with the tmpfs or the Rapid-Cold-Start solution. Figure 3-13 on page 60 shows the architecture for vPMEM with SAP HANA on Power Systems.
Note: More details about the configuration and sizing for virtual persistent memory can be
found at the following website:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102502
| Capability | vPMEM | tmpfs |
|------------|-------|-------|
| Multiple LPARs on a system can be assigned memory volumes. | Yes, and volume DRAM is taken from spare system memory. | Yes, a tmpfs volume is created within each LPAR's memory. |
| Partitioned columnar table data is assigned in a round-robin fashion to memory volumes. | Yes, with Affinity enabled. | Yes, when each file system is aligned to NUMA nodes in the mount command. |
| Memory volume is Live Partition Migration (LPM) capable. | Not yet, but coming in a future release. | Yes |
| Dependency on system firmware, HMC code, or operating system release. | Yes, as outlined in the previous section. | Can be used with any POWER9 supported versions. |
Non-Volatile Memory Express (NVMe) adapters, with their high-speed flash memory, became
a popular technology to store data that needs low latency and high throughput. The NVMe
architecture and its access protocols became an industry standard, which makes it easy to
deploy through a device driver addition to the operating system.
When you use NVMe in this context, you use the flash modules to store the DB persistently as though they are disk-based storage. SAP HANA reads data from these devices on start as it does from regular storage area network (SAN)-based storage. The benefit is that the read operations from NVMe are faster than from SAN disk-based storage, which provides for a faster SAP HANA start and for persistence of data across SAP HANA, operating system, and system restarts.
NVMe devices provide the following key benefits over SAN disk-based storage devices:
Increased queue depth, which provides decreased I/O times.
Lower latencies for read and write operations.
Higher I/O throughput than traditional disk fabric systems (for example, Fibre Channel)
because of the location of the adapters on a PCIe adapter slot.
NVMe adapters are made up of multiple individual flash memory modules. This architecture allows the operating system to access multiple storage modules per NVMe adapter independently.
Figure 3-15 shows a sample output that lists the devices on an individual NVMe adapter.
Figure 3-15 also shows that four modules with 745 GB per module are used, and 3 TB total
storage for the adapter.
Figure 3-16, Figure 3-17, and Figure 3-18 show a few examples.
Figure 3-16 RAID 1 storage volumes for SAP HANA data and logs by using NVMe adapters
Figure 3-17 RAID 1 arrays for SAP HANA data and logs that are created
Figure 3-18 RAID 0 and RAID 1 arrays for SAP HANA data and logs
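The arrays that are shown in these figures are typically built with the Linux mdadm utility (the same tool that is used for the SAN mirroring example later in this section). As a hedged sketch with hypothetical device names, a pair of RAID 1 arrays for the SAP HANA data and log volumes across two NVMe adapters can be created as follows:
# Mirror one flash module from each of two NVMe adapters for the HANA data volume.
mdadm --create /dev/md/hana_data --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
# Create a second mirrored array in the same way for the HANA log volume.
mdadm --create /dev/md/hana_log --level=1 --raid-devices=2 /dev/nvme0n2 /dev/nvme1n2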
For environments with high write and change activity on the flash modules, mirroring an NVMe volume with a traditional disk-based storage volume can help preserve data integrity if an NVMe flash module fails because of wear.
Note: Monitoring and alerting must be set up so that corrective actions can be taken instantly if a card fails.
For read performance, testing shows that NVMe adapter volumes are up to 2 - 4x faster than
disk-based storage solutions (depending on I/O block sizes) as shown in Table 3-3.
Creating a RAID 0 volume over multiple NVMe devices increases the throughput on write
operations on block sizes greater than or equal to 64 KB nearly by a factor of 1.7. On block
sizes greater than 256 KB, the factor is nearly 2.
Creating a RAID 0 volume on the multiple memory modules of one NVMe device has no
positive effect on performance over storage on a single memory module.
A large DB takes some time to load from SAN disk-based storage through connectivity solutions, such as Fibre Channel, that are connected to HDD-based or SSD-based volumes. NVMe storage that is installed in the host provides faster access to the SAP HANA data for loading and writing.
However, flash modules are subject to wear because of data writes or changes. To protect
valuable data, a hybrid of the traditional SAN disk-based storage and NVMe storage can be
used to provide faster SAP HANA start with protection of the data on disk-based storage.
Then, a mirrored RAID 1 volume is created by assigning one SAN RAID volume to one NVMe
RAID 0 volume, which is represented by the gray boxes in Figure 3-19 on page 66.
When creating the RAID 1 mirror between the NVMe volumes and the SAN storage volumes, you can set a preference for the operating system to read from the NVMe volumes by flagging the SAN volume with the --write-mostly option of the mdadm array utility. In this case, the flag is assigned to the RAID 0 device name of the external SAN volume. Example 3-26 shows the Linux man page excerpt for mdadm.
Example 3-26 The mdadm --write-mostly option to prefer one RAID device for reading
-W, --write-mostly
subsequent devices listed in a --build, --create, or --add command will be
flagged as 'write-mostly'. This is valid for RAID 1 only and means that the
'md' driver will avoid reading from these devices if at all possible. This
can be useful if mirroring over a slow link.
In this case, for the RAID 1 device, specify the SAN storage device as --write-mostly in the
mdadm --create command, as shown in Example 3-27.
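Example 3-27 shows the exact command. A minimal sketch, assuming hypothetical device names for the NVMe RAID 0 array and the SAN volume, looks like this:
# Devices that are listed after --write-mostly are avoided for reads, so reads are
# served from the NVMe array while writes still go to both halves of the mirror.
mdadm --create /dev/md/hana_data_mirror --level=1 --raid-devices=2 \
      /dev/md/nvme_raid0 --write-mostly /dev/mapper/san_volume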
3.4.2 Summary
Mirroring an internally installed NVMe adapter to an external SAN volume of the same size provides the benefit of a rapid SAP HANA start because data is read from the NVMe adapter, while the RAID 1 mirroring of the data to the external SAN disk provides ultimate data protection.
vPMEM and tmpfs for fast restart also provide resiliency of the data across an SAP HANA restart and, in the case of vPMEM, across an LPAR restart. However, data that is stored in these solutions must be restored from the persistence layer after a full system restart.
Optane DCPMM memory is implemented by installing new Optane DCPMM memory cards into existing system DRAM DIMM slots. Real DRAM capacity must be sacrificed to use the Optane memory solution.
Optane memory capacities are provided in 128 GB, 256 GB, and 512 GB sizes per DCPMM module, which is a much higher capacity than DIMM memory modules, which have a maximum capacity of 64 GB per DIMM.
Rules for the use of DCPMM modules are complicated and vary depending on the memory
mode that is used. However, with a maximum of 12 DIMM modules per socket that uses six
DIMM modules of 64 GB and the maximum of six 512 GB DCPMM memory modules, a
socket maximum memory configuration is 3.4 TB. This memory configuration is compared to
a maximum memory configuration of 1.5 TB when only DIMM memory modules are used.
POWER9™ systems support up to 4 TB of DIMM system memory per socket by using 128
GB DIMMs. Future DIMM sizes will increase this memory footprint.
From a memory latency perspective, Optane DCPMM memory modules have a higher read
and write latency compared to standard DIMM memory technologies because of the
technology that is implemented to provide data persistence in the DCPMM module. Higher
latencies can affect application performance and must be evaluated when implementing the
Optane solution.
IBM POWER9 vPMEM and tmpfs use DIMM-backed RAM and perform read and write
operations at full DIMM throughput capabilities.
Optane has three memory modes that the DCPMM modules can use:
Memory Mode
In this mode, the DCPMM memory modules are installed along standard DIMM modules
and are used as a regular memory device. One advantage of the use of the DCPMM
modules in this mode is that greater overall memory capacities can be achieved over
standard DIMM modules that are available for x86 systems.
However, enabling the DCPMM modules in this mode puts the regular DIMMs in the
system into a caching function, which makes their capacity invisible to the host operating
system. Therefore, only the capacity of the DCPMM memory can be used by the host
operating system, and the regular DIMM memory capacity is unavailable for operating
system and application use.
App Direct Mode
In this mode, the DCPMM memory modules are used as persistent storage for operating
systems and applications that can use this technology. The DCPMM memory modules are
recognized by the operating system as storage devices and are used to store copies of
persistent disk data, which makes access to that data faster after an operating system or
system restart. The standard DIMMs are used normally as available RAM to the operating
system.
Mixed Mode
This mode is a mixture of the Memory and App Direct modes. When DCPMM modules are
used in this mode, a portion of the capacity of the module is used as memory for the host
operating system, and the remaining capacity is used for persistent storage. However, as
in Memory Mode, any DIMM memory is unavailable for use by the host operating system
and is instead converted into a memory cache subsystem.
The time values are in HH:MM:SS format. The start and stop time information is extracted from the index server trace files of the SAP HANA environment. All tests were run at least three times.
The result values that are listed in Figure 3-22 are average values. It is important to emphasize that the start time improvement factors do not directly correlate to the run time and depend heavily on the amount of data that is inside SAP HANA. Therefore, it is recommended to first validate whether the targeted business application supports SAP HANA Native Storage Extension (NSE), which reduces the amount of memory that is loaded and the overall memory consumption.
Based on older measurements when moving to an FS9100 model, a start time improvement
of a minimum factor of 2 - 3x can be expected. For the PCIe NVMe local disks, 5x start
improvements can be easily achieved.
The use of tmpfs for persistent memory uses the memory that is assigned to the LPAR and
available to the LPAR. Therefore, moving an LPAR from one system to another preserves the
use of the tmpfs persistent memory volumes at the destination system.
vPMEM volumes that are assigned to an LPAR are defined outside the LPAR configuration.
Because of this implementation, LPM operations are not supported for vPMEM-enabled
LPARs.
Before the LPAR migration, the vPMEM device must be removed from the LPAR. Then, a new
vPMEM volume can be created at the destination system to support the application. vPMEM
LPM is intended to be supported in a future firmware release.
When business applications need access to warm data that is NSE enabled, the data is either found directly in the BUFFER CACHE or is loaded only into the BUFFER CACHE. To optimize BUFFER CACHE usage, only the needed parts of the data objects are loaded into this memory area.
SAP HANA NSE can reduce the memory footprint on IBM Power Systems servers, depending on data and workload, by:
Decreasing the Global Allocation Limit of the SAP HANA instances
Minimizing the need for SAP HANA Extension Nodes
All SAP HANA NSE features can be used and changed in real time, without direct effects on application availability.
Two SAP HANA views are available for monitoring. The first view, the M_BUFFER_CACHE_STATISTICS system view, is shown in Example 4-1. It provides information about the buffer cache configuration, the buffer cache state, and the memory usage. In addition, this monitoring view provides information about the quality of the buffer cache, such as the hit ratio and reuse count.
For more information and instructions about enabling or disabling data objects, see the SAP
HANA Native Storage Extension Help Page.
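As an illustration of such a change, assuming a hypothetical table, the load unit of a column store table can be switched to NSE (page loadable) or back to fully in-memory (column loadable) with standard SQL:
hdbsql> ALTER TABLE "MYSCHEMA"."MYTABLE" PAGE LOADABLE CASCADE
hdbsql> ALTER TABLE "MYSCHEMA"."MYTABLE" COLUMN LOADABLE CASCADE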
The Advisor is disabled by default and must be enabled by using an SQL command or SAP HANA Studio. Before enabling the SAP HANA NSE Advisor, it is recommended to clean the access statistics cache.
To get useful results, it is recommended to run the NSE Advisor over several hours while a representative workload is run. The following commands enable or disable the SAP HANA NSE Advisor and clean the access statistics cache:
ALTER SYSTEM CLEAR CACHE ('cs_access_statistics')
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','system') SET ('cs_access_statistics','collection_enabled') = 'true' WITH RECONFIGURE
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','system') SET ('cs_access_statistics','collection_enabled') = 'false' WITH RECONFIGURE
4.2.1 Workflow
To make the SAP HANA database ready for optimal use of the NSE feature, a workflow is available for using the NSE Advisor, which can be followed to perform the correct NSE setup for the recommended data objects.
The SAP HANA NSE Advisor generates more load on the system; therefore, it is recommended to enable the NSE Advisor only while analyzing and verifying the NSE table and system settings.
Note: These results depend on a specific SAP HANA BW/4 system, system setup, and
workload. Results on other SAP solutions that are based on SAP HANA 2.0 can show a
different picture. These results are based on application workload, the hardware
infrastructure, total persistence size, and the relation between cold, warm, and hot data.
The improvements that are described next cannot be used as reference savings.
To get test results that are as realistic as possible, the SAP HANA BW/4 system was tested with two different types of query workloads. To simulate the business workload of users, complex queries were used. To measure the effect on batch workload, massively parallel simple queries were used. Consider the following points:
The simple query test runs a set of different simple queries in parallel over a specific time.
The number of queries is measured over this specific time. The number of parallel queries
that are run is configured such that the overall CPU use increases to nearly 90% usage.
The complex queries are run as single queries, and the measurement is based on the query run time.
Figure 4-3 shows the memory savings in the test SAP HANA BW/4 environment.
The total memory usage in a standard setup without NSE enabled tables of the test setup
amounts to nearly 3.4 TB of main memory. If all recommended tables are switched to NSE to
use the buffer cache, the overall main memory usage is under 1.6 TB. To have stable and
transparent test results, the buffer cache size was not changed during the test executions.
The buffer cache is 400 GB in these test scenarios.
Only when a non-recommended NSE configuration is used (where hot data is enabled for
NSE) does the application performance go down for simple queries. In some cases, if the
storage environment of the SAP HANA persistence is too slow, the application can suffer
significant performance deterioration.
Figure 4-4 shows the results of a test that was performed on a fast storage solution that was
configured with direct attached NVMe devices. The buffer cache was limited to 400 GB. If the
buffer cache size is increased, performance is affected only in the worst-case scenario. Therefore, increasing the SAP HANA buffer cache helps to reduce the performance degradation of the application.
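The buffer cache size itself is controlled through indexserver.ini parameters. As an assumption-based sketch (the section and parameter names, buffer_cache_cs and max_size in MB, should be verified against the SAP documentation for the installed release), increasing the cache could look like this:
hdbsql> ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM')
        SET ('buffer_cache_cs','max_size') = '512000' WITH RECONFIGURE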
To protect the ability of the SAP HANA system to handle its workload, a fast storage subsystem can be
helpful. Figure 4-5 shows the performance effects on different storage subsystem solutions
for the SAP HANA persistence layer.
Figure 4-6 shows the start time of an SAP HANA BW/4 system with a persistence size of 3.2
TB. In this case, a reduction of factor 4 between standard SAP HANA configuration and SAP
HANA with recommended NSE enabling was possible.
The publications that are listed in this section are considered suitable for a more detailed
description of the topics that are covered in this paper.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this
document. Some publications that are referenced in this list might be available in softcopy
only:
IBM Power Systems Security for SAP Applications, REDP-5578
IBM Power Systems Virtualization Operation Management for SAP Applications,
REDP-5579
SAP HANA on IBM Power Systems: High Availability and Disaster Recovery
Implementation Updates, SG24-8432
SAP Landscape Management 3.0 and IBM Power Systems Servers, REDP-5568
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, web docs, drafts, and other materials, at the following website:
ibm.com/redbooks
Online resources
The following websites are also relevant as further information sources:
Guide Finder for SAP NetWeaver and ABAP Platform:
https://help.sap.com/viewer/nwguidefinder
IBM Power Systems rapid cold start for SAP HANA:
https://www.ibm.com/downloads/cas/WQDZWBYJ
SAP Support Portal:
https://support.sap.com/en/index.html
Software Logistics Tools:
https://support.sap.com/en/tools/software-logistics-tools.html
Welcome to the SAP Help Portal:
https://help.sap.com
REDP-5570-01
ISBN 0738459860
Printed in U.S.A.