
SAP HANA Cloud Getting Started Guide


Generated on: 2024-01-12 13:04:21 GMT+0000

SAP HANA Cloud | cloud

PUBLIC

Original content: https://help.sap.com/docs/HANA_CLOUD/db19c7071e5f4101837e23f06e576495?locale=en-US&state=PRODUCTION&version=hanacloud

Warning

This document has been generated from the SAP Help Portal and is an incomplete version of the official SAP product
documentation. The information included in custom documentation may not reflect the arrangement of topics in the SAP Help
Portal, and may be missing important aspects and/or correlations to other topics. For this reason, it is not for productive use.

For more information, please visit https://help.sap.com/docs/disclaimer.


SAP HANA Cloud Getting Started Guide


Get started with SAP HANA Cloud.

About This Documentation


This guide describes how to use the latest version of SAP HANA Cloud.

This guide does not describe the SAP HANA features and capabilities available with SAP HANA Cloud. For more information
about features and capabilities, see the Feature Scope Description for SAP HANA Cloud.

SAP HANA Cloud Documentation


To access the official documentation of SAP HANA Cloud, visit the SAP Help Portal: SAP HANA Cloud.

You can find all guides for the database services available in SAP HANA Cloud on their respective product pages:

SAP HANA Client

SAP HANA Cloud, SAP HANA Database

SAP HANA Cloud, Data Lake

Related Information
Discovery Center Service Catalog

Introduction to SAP HANA Cloud


Get an overview of SAP HANA Cloud.

Overview

Storage Options

Releases and Upgrades

Overview
SAP HANA Cloud provides a single place to access, store, and process all enterprise data in real time. It is a cloud-native
platform that reduces the complexity of multi-cloud or hybrid system landscapes. SAP HANA Cloud provides all of the advanced
SAP HANA technologies for multi-model data processing in-memory or on disk. You can benefit from cloud qualities such as
automatic software updates, elasticity, and low total cost of ownership by using SAP HANA Cloud either as a stand-alone
solution or as an extension to your existing on-premise environment.

SAP HANA Cloud allows you to consume the SAP HANA database from applications running on SAP Business Technology
Platform, as well as from applications running on-premise or in other cloud services using the standard SAP HANA clients.
SAP HANA Cloud provides simplified data access to connect all your information without the need to have all data loaded into a
single storage solution.

If you are familiar with multiple tenant databases in SAP HANA on-premise systems, note that every SAP HANA Cloud, SAP
HANA database instance is equivalent to a single tenant database. For multiple databases, create multiple SAP HANA database
instances. Using SAP HANA Cloud Central or the command-line interface, you can create and manage SAP HANA Cloud
instances in your subaccount.

Developers can bind their applications deployed in the same space to database instances. SAP Business Technology Platform
applications are bound to HDI containers; every application requires a dedicated HDI container. The SAP HANA Deployment
Infrastructure (HDI) provides a service that enables you to deploy database development artifacts to so-called containers. This
service includes a family of consistent design-time artifacts for all key SAP HANA database features, which describe the target
(run-time) state of SAP HANA database artifacts, for example: tables, views, or procedures. These artifacts are modeled,
staged (uploaded), built, and deployed into SAP HANA. Using HDI is not a strict requirement; schemas and database artifacts
can also be created at run time using SQL data definition language in the SQL console. For more information, see the SAP
HANA Cloud Deployment Infrastructure Reference.
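
As an illustration of the design-time approach (a hedged sketch; the file name, table name, and columns below are assumptions, not taken from this guide), an HDI table artifact such as myTable.hdbtable contains DDL-like text without a CREATE statement:

-- myTable.hdbtable (hypothetical design-time artifact; HDI deploys it as a database table)
COLUMN TABLE MY_TABLE (
  ID   INTEGER NOT NULL,
  NAME NVARCHAR(100),
  PRIMARY KEY (ID)
)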

Data lake is an SAP HANA Cloud component composed of data lake Relational Engine – which provides high-performance
analysis for petabyte volumes of relational data – and data lake Files – which provides managed access to structured,
semistructured, and unstructured data stored as files in the data lake.

Data lake is available in different configurations. You can integrate it into an SAP HANA Cloud, SAP HANA database instance, or
you can provision a standalone data lake instance with no SAP HANA database integration. You can also enable or disable the
data lake Relational Engine component when provisioning your data lake instance.

To create and manage SAP HANA Cloud instances, use SAP HANA Cloud Central or the command line interface.

To administer an SAP HANA database, use the SAP HANA cockpit, which provides a range of tools for administration and
monitoring. For more information, see SAP HANA Cockpit.

To query information about an SAP HANA database and view information about your database's catalog objects, use the SAP
HANA database explorer. For more information, see Getting Started With the SAP HANA Database Explorer.

All access to SAP HANA Cloud instances is via secure connections on SQL ports.


Storage Options

SAP HANA Native Storage Extension

SAP HANA native storage extension is a general-purpose, built-in warm data store in SAP HANA that lets you manage less
frequently accessed data without fully loading it into memory. It integrates disk-based database technology with the SAP HANA
in-memory database for an improved cost-to-performance ratio. For more information, see SAP HANA Native Storage
Extension.

 Note
The SAP HANA Native Storage Extension (NSE) feature for warm data storage is enabled by default in SAP HANA Cloud.
Database developers may choose to assign specific tables, columns, or partitions to use NSE. SAP HANA NSE uses a
dedicated in-memory buffer cache to load and unload pages of tables, table partitions, or table columns. The initial buffer
cache size of an SAP HANA Cloud instance is 10% of the instance's memory size. You can change the initial buffer cache size
once the SAP HANA instance has been created. For more information, see SAP HANA NSE Buffer Cache.
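
As a hedged illustration (the table and partition below are placeholders, not objects defined by this guide), a developer could move data between the hot in-memory store and NSE warm storage with ALTER TABLE statements along these lines:

-- Move an entire table (including its columns and partitions) to NSE warm storage
ALTER TABLE hotel.reservation PAGE LOADABLE CASCADE;

-- Move a single partition of a partitioned table to NSE (partition number is hypothetical)
ALTER TABLE hotel.reservation ALTER PARTITION 2 PAGE LOADABLE;

-- Move the table back to the in-memory (hot) column store
ALTER TABLE hotel.reservation COLUMN LOADABLE CASCADE;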

 Example
NSE Sizing Example

SAP HANA Memory: 60 GB
NSE Buffer Cache: 6 GB
SAP HANA in-memory data (compressed): 30 GB - 6 GB (24 GB)
NSE Data Volume Size: 48 GB
Total SAP HANA Database Data Size: 72 GB


Data Lake Files

Data lake Files provides managed access to structured, semistructured, and unstructured data file storage.

Data lake Files uses the concept of multiple file containers. When you create a data lake instance, you get the (default) file
container, and, optionally, the diagnostics file container. The file container is managed by you. The diagnostics file container is
managed by SAP. The diagnostics file container only exists if you enable IQ analytics during data lake instance creation.

Data Lake Relational Engine

The data lake Relational Engine component provides high performance analysis for petabyte volumes of relational data.

Data lake Relational Engine stores structured data. If you need an unstructured data store, use Data Lake Files.

Data lake Relational Engine stores and analyzes large amounts of data. It leverages inexpensive storage options to lower costs,
while maintaining excellent performance and full SQL access to data. Data lake Relational Engine includes elastically scalable
compute to provide high-performance analysis on-demand and to provide cost control during periods of lower load.

Releases and Upgrades


For information about the releases and upgrades for SAP HANA Cloud and its components, see the Releases and Upgrades in
SAP HANA Cloud topic in SAP HANA Cloud Overview.

Related Information
Open the SAP HANA Cockpit
Open the SAP HANA Database Explorer
SAP HANA Cloud, data lake

Creating and Managing SAP HANA Cloud Instances


Use SAP HANA Cloud Central or the command line to create and manage SAP HANA Cloud instances.

Creating SAP HANA Cloud Instances

Managing SAP HANA Cloud Instances

Creating and Managing SAP HANA Cloud Instances Using the CLI

Related Information
SAP HANA Cloud Administration Guide

Creating Tables and Loading Data


Learn how to create tables in the SAP HANA Cloud database and load data from various sources.

You can create different types of tables in the SAP HANA database: local database tables and virtual tables. Local database
tables allow you to import and query data just as in many other databases. Virtual tables point to tables in remote sources. For
more information, see Managing Tables.

In SAP HANA, you use the linked database feature or create virtual tables, which point to remote tables in different data sources, and then
write SQL queries in SAP HANA that use these virtual tables. The SAP HANA query processor optimizes these queries by
executing the relevant part of the query in the target database, returning the results of the query to SAP HANA, and then
completing the operation. Physical data movement is not supported by SAP HANA smart data access.
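
As a hedged sketch (the remote source name and remote object below are placeholders), a virtual table is created against a remote object and then queried like any local table; the query processor pushes eligible work down to the remote database:

-- Create a virtual table pointing at a table in a previously created remote source
CREATE VIRTUAL TABLE hotel.v_city_remote
  AT "MY_REMOTE_SOURCE"."<NULL>"."HOTEL"."CITY";

-- Query the virtual table like a local table
SELECT state, COUNT(*) AS number_of_cities
  FROM hotel.v_city_remote
 GROUP BY state;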

Creating Tables and Loading Data Manually


Learn how to create tables in the SAP HANA Cloud, SAP HANA database and load data manually.
Connecting SAP HANA Cloud, SAP HANA database to Remote Data Sources
Learn how to establish a connection between SAP HANA Cloud, SAP HANA database and a remote source.
Importing and Exporting Data
Learn how to use SQL commands to import and export data directly from Azure Storage, Amazon Simple Storage
Service S3, and AliCloud OSS.

Creating Tables and Loading Data Manually


Learn how to create tables in the SAP HANA Cloud, SAP HANA database and load data manually.

You can create tables in the SAP HANA Cloud, SAP HANA database and import data using the SQL console of the SAP HANA
database explorer.

You can create tables at design-time in SAP Business Application Studio and SAP Web IDE Full-Stack and deploy them through
the SAP HANA Deployment Infrastructure (HDI). For more information, see SAP HANA Cloud Deployment Infrastructure
Reference.

Parent topic: Creating Tables and Loading Data

Related Information
Connecting SAP HANA Cloud, SAP HANA database to Remote Data Sources
Importing and Exporting Data
SAP Business Application Studio
SAP Web IDE

Create a Schema, Tables, and Insert Data Using the SAP HANA Database Explorer
Execute SQL statements to create schemas, tables, and load data by using the SQL console that is included with the SAP HANA
database explorer.

Prerequisites
You must have the required privileges in the SAP HANA database to execute your SQL statements.

Context
To get started with a basic set of demo data, see SAP HANA Cloud SQL Demo Data. To import data from a file stored in your
local file system, see Import Data Into a New or Existing Table.


Procedure
1. Open your SAP HANA database instance in SAP HANA database explorer. For more information, see Open the SAP
HANA Database Explorer.

2. Open an SQL console from the database explorer by right-clicking your database and clicking Open SQL Console.

3. Create a schema by executing the CREATE SCHEMA statement:

 Sample Code
CREATE SCHEMA hotel;

4. Create one or more tables by executing the CREATE TABLE statement:

 Sample Code
CREATE COLUMN TABLE hotel.city (zip CHAR(5) PRIMARY KEY, name CHAR(30) NOT NULL, state CHAR(2) NOT NULL);

5. Insert data into the tables by executing the INSERT statement:

 Sample Code
INSERT INTO hotel.city VALUES ('12203','Albany','NY');
INSERT INTO hotel.city VALUES ('60601','Chicago','IL');
INSERT INTO hotel.city VALUES ('60615','Chicago','IL');

Results
You have successfully created a schema and tables, and imported data into your SAP HANA database. You can now query and
manipulate the data in the SAP HANA database.

Related Information
Execute SQL Statements
SAP HANA Cloud, SAP HANA Database SQL Reference
Data Manipulation Statements

SAP HANA Cloud SQL Demo Data


The SQL demo data represents a basic hotel administration system with information on the hotel's visitors, location, vacancies,
room prices, and so on.

 Sample Code
CREATE SCHEMA hotel;

CREATE COLUMN TABLE hotel.city(


zip CHAR(5) PRIMARY KEY,
name CHAR(30) NOT NULL,
state CHAR(2) NOT NULL
);
CREATE COLUMN TABLE hotel.customer(
cno NUMERIC(4) PRIMARY KEY,
title CHAR(7),
firstname CHAR(20),
name CHAR(40) NOT NULL,
zip CHAR(5),

address CHAR(40) NOT NULL
);
CREATE COLUMN TABLE hotel.hotel(
hno NUMERIC(4) PRIMARY KEY,
name CHAR(50) NOT NULL,
zip CHAR(5),
address CHAR(40) NOT NULL
);
CREATE COLUMN TABLE hotel.room(
hno NUMERIC(4),
type CHAR(6),
free NUMERIC(3),
price NUMERIC(6, 2),
PRIMARY KEY (hno, type)
);
CREATE COLUMN TABLE hotel.reservation(
rno NUMERIC(4) PRIMARY KEY,
cno NUMERIC(4),
hno NUMERIC(4),
type CHAR(6),
arrival DATE NOT NULL,
departure DATE NOT NULL
);

INSERT INTO hotel.city VALUES('12203', 'Albany', 'NY');


INSERT INTO hotel.city VALUES('60601', 'Chicago', 'IL');
INSERT INTO hotel.city VALUES('60615', 'Chicago', 'IL');
INSERT INTO hotel.city VALUES('45211', 'Cincinnati', 'OH');
INSERT INTO hotel.city VALUES('33575', 'Clearwater', 'FL');
INSERT INTO hotel.city VALUES('75243', 'Dallas', 'TX');
INSERT INTO hotel.city VALUES('32018', 'Daytona Beach', 'FL');
INSERT INTO hotel.city VALUES('33441', 'Deerfield Beach', 'FL');
INSERT INTO hotel.city VALUES('48226', 'Detroit', 'MI');
INSERT INTO hotel.city VALUES('90029', 'Hollywood', 'CA');
INSERT INTO hotel.city VALUES('92714', 'Irvine', 'CA');
INSERT INTO hotel.city VALUES('90804', 'Long Beach', 'CA');
INSERT INTO hotel.city VALUES('11788', 'Long Island', 'NY');
INSERT INTO hotel.city VALUES('90018', 'Los Angeles', 'CA');
INSERT INTO hotel.city VALUES('70112', 'New Orleans', 'LA');
INSERT INTO hotel.city VALUES('10580', 'New York', 'NY');
INSERT INTO hotel.city VALUES('10019', 'New York', 'NY');
INSERT INTO hotel.city VALUES('92262', 'Palm Springs', 'CA');
INSERT INTO hotel.city VALUES('97213', 'Portland', 'OR');
INSERT INTO hotel.city VALUES('60018', 'Rosemont', 'IL');
INSERT INTO hotel.city VALUES('95054', 'Santa Clara', 'CA');
INSERT INTO hotel.city VALUES('20903', 'Silver Spring', 'MD');
INSERT INTO hotel.city VALUES('20037', 'Seattle', 'WA');
INSERT INTO hotel.city VALUES('20005', 'Seattle', 'WA');
INSERT INTO hotel.city VALUES('20019', 'Seattle', 'WA');
INSERT INTO hotel.city VALUES('45455', 'San Diego', 'CA');
INSERT INTO hotel.city VALUES('33344', 'Boston', 'MD');
INSERT INTO hotel.city VALUES('88811', 'Springfield', 'WA');
INSERT INTO hotel.city VALUES('15505', 'Twin Peaks', 'MO');
INSERT INTO hotel.city VALUES('77709', 'Gardner', 'MA');

INSERT INTO hotel.customer VALUES(3000, 'Mrs', 'Jenny', 'Porter', '10580', '1340 N. Ash Street, #
INSERT INTO hotel.customer VALUES(3100, 'Mr', 'Peter', 'Brown', '48226', '1001 34th St., APT.3')
INSERT INTO hotel.customer VALUES(3200, 'Company', NULL, 'Datasoft', '90018', '486 Maple St.');
INSERT INTO hotel.customer VALUES(3300, 'Mrs', 'Rose', 'Brian', '75243', '500 Yellowstone Drive,
INSERT INTO hotel.customer VALUES(3400, 'Mrs', 'Mary', 'Griffith', '20005', '3401 Elder Lane');
INSERT INTO hotel.customer VALUES(3500, 'Mr', 'Martin', 'Randolph', '60615', '340 MAIN STREET, #7
INSERT INTO hotel.customer VALUES(3600, 'Mrs', 'Sally', 'Smith', '75243', '250 Curtis Street');
INSERT INTO hotel.customer VALUES(3700, 'Mr', 'Mike', 'Jackson', '45211', '133 BROADWAY APT. 1')
INSERT INTO hotel.customer VALUES(3800, 'Mrs', 'Rita', 'Doe', '97213', '2000 Humboldt St., #6');
INSERT INTO hotel.customer VALUES(3900, 'Mr', 'George', 'Howe', '75243', '111 B Parkway, #23');
INSERT INTO hotel.customer VALUES(4000, 'Mr', 'Frank', 'Miller', '95054', '27 5th St., 76');
INSERT INTO hotel.customer VALUES(4100, 'Mrs', 'Susan', 'Baker', '90018', '200 MAIN STREET, #94'
INSERT INTO hotel.customer VALUES(4200, 'Mr', 'Joseph', 'Peters', '92714', '700 S. Ash St., APT.1
INSERT INTO hotel.customer VALUES(4300, 'Company', NULL, 'TOOLware', '20019', '410 Mariposa St.,
INSERT INTO hotel.customer VALUES(4400, 'Mr', 'Antony', 'Jenkins', '20903', '55 A Parkway, #15')

INSERT INTO hotel.hotel VALUES(10, 'Congress', '20005', '155 Beechwood St.');


INSERT INTO hotel.hotel VALUES(30, 'Regency', '20037', '477 17th Avenue');
INSERT INTO hotel.hotel VALUES(20, 'Long Island', '11788', '1499 Grove Street');

INSERT INTO hotel.hotel VALUES(70, 'Empire State', '12203', '65 Yellowstone Dr.');
INSERT INTO hotel.hotel VALUES(80, 'Midtown', '10019', '12 Barnard St.');
INSERT INTO hotel.hotel VALUES(40, 'Eighth Avenue', '10019', '112 8th Avenue');
INSERT INTO hotel.hotel VALUES(50, 'Lake Michigan', '60601', '354 OAK Terrace');
INSERT INTO hotel.hotel VALUES(60, 'Airport', '60018', '650 C Parkway');
INSERT INTO hotel.hotel VALUES(90, 'Sunshine', '33575', '200 Yellowstone Dr.');
INSERT INTO hotel.hotel VALUES(100, 'Beach', '32018', '1980 34th St.');
INSERT INTO hotel.hotel VALUES(110, 'Atlantic', '33441', '111 78th St.');
INSERT INTO hotel.hotel VALUES(120, 'Long Beach', '90804', '35 Broadway');
INSERT INTO hotel.hotel VALUES(150, 'Indian Horse', '92262', '16 MAIN STREET');
INSERT INTO hotel.hotel VALUES(130, 'Star', '90029', '13 Beechwood Place');
INSERT INTO hotel.hotel VALUES(140, 'River Boat', '70112', '788 MAIN STREET');
INSERT INTO hotel.hotel VALUES(300, 'Ocean Star', '44332', '16 MAIN STREET');
INSERT INTO hotel.hotel VALUES(310, 'Bella Ciente', '77111', '13 Beechwood Place');
INSERT INTO hotel.hotel VALUES(320, 'River Boat', '79872', '788 MAIN STREET');

INSERT INTO hotel.room VALUES(10, 'single', 20, 135.00);


INSERT INTO hotel.room VALUES(10, 'double', 45, 200.00);
INSERT INTO hotel.room VALUES(30, 'single', 12, 45.00);
INSERT INTO hotel.room VALUES(30, 'double', 15, 80.00);
INSERT INTO hotel.room VALUES(20, 'single', 10, 70.00);
INSERT INTO hotel.room VALUES(20, 'double', 13, 100.00);
INSERT INTO hotel.room VALUES(70, 'single', 4, 115.00);
INSERT INTO hotel.room VALUES(70, 'double', 11, 180.00);
INSERT INTO hotel.room VALUES(80, 'single', 15, 90.00);
INSERT INTO hotel.room VALUES(80, 'double', 19, 150.00);
INSERT INTO hotel.room VALUES(80, 'suite', 5, 400.00);
INSERT INTO hotel.room VALUES(40, 'single', 20, 85.00);
INSERT INTO hotel.room VALUES(40, 'double', 35, 140.00);
INSERT INTO hotel.room VALUES(50, 'single', 50, 105.00);
INSERT INTO hotel.room VALUES(50, 'double', 230, 180.00);
INSERT INTO hotel.room VALUES(50, 'suite', 12, 500.00);
INSERT INTO hotel.room VALUES(60, 'single', 10, 120.00);
INSERT INTO hotel.room VALUES(60, 'double', 39, 200.00);
INSERT INTO hotel.room VALUES(60, 'suite', 20, 500.00);
INSERT INTO hotel.room VALUES(90, 'single', 45, 90.00);
INSERT INTO hotel.room VALUES(90, 'double', 145, 150.00);
INSERT INTO hotel.room VALUES(90, 'suite', 60, 300.00);
INSERT INTO hotel.room VALUES(100, 'single', 11, 60.00);
INSERT INTO hotel.room VALUES(100, 'double', 24, 100.00);
INSERT INTO hotel.room VALUES(110, 'single', 2, 70.00);
INSERT INTO hotel.room VALUES(110, 'double', 10, 130.00);
INSERT INTO hotel.room VALUES(120, 'single', 34, 80.00);
INSERT INTO hotel.room VALUES(120, 'double', 78, 140.00);
INSERT INTO hotel.room VALUES(120, 'suite', 55, 350.00);
INSERT INTO hotel.room VALUES(150, 'single', 44, 100.00);
INSERT INTO hotel.room VALUES(150, 'double', 115, 190.00);
INSERT INTO hotel.room VALUES(150, 'suite', 6, 450.00);
INSERT INTO hotel.room VALUES(130, 'single', 89, 160.00);
INSERT INTO hotel.room VALUES(130, 'double', 300, 270.00);
INSERT INTO hotel.room VALUES(130, 'suite', 100, 700.00);
INSERT INTO hotel.room VALUES(140, 'single', 10, 125.00);
INSERT INTO hotel.room VALUES(140, 'double', 9, 200.00);
INSERT INTO hotel.room VALUES(140, 'suite', 78, 600.00);

INSERT INTO hotel.reservation VALUES(100, 3000, 80, 'single', '2004-11-13', '2004-11-15');


INSERT INTO hotel.reservation VALUES(110, 3000, 100, 'double', '2004-12-24', '2005-01-06');
INSERT INTO hotel.reservation VALUES(120, 3200, 50, 'suite', '2004-11-14', '2004-11-18');
INSERT INTO hotel.reservation VALUES(130, 3900, 110, 'single', '2005-02-01', '2005-02-03');
INSERT INTO hotel.reservation VALUES(150, 3600, 70, 'double', '2005-03-14', '2005-03-24');
INSERT INTO hotel.reservation VALUES(140, 4300, 80, 'double', '2004-04-12', '2004-04-30');
INSERT INTO hotel.reservation VALUES(160, 4100, 70, 'single', '2004-04-12', '2004-04-15');
INSERT INTO hotel.reservation VALUES(170, 4400, 150, 'suite', '2004-09-01', '2004-09-03');
INSERT INTO hotel.reservation VALUES(180, 3100, 120, 'double', '2004-12-23', '2005-01-08');
INSERT INTO hotel.reservation VALUES(190, 4300, 140, 'double', '2004-11-14', '2004-11-17');

Connecting SAP HANA Cloud, SAP HANA database to Remote Data Sources

Learn how to establish a connection between SAP HANA Cloud, SAP HANA database and a remote source.

Virtualization and Replication of Data


SAP HANA allows you to access remote data as if the data were stored in local tables. This can be achieved either by creating
virtual tables or by using data replication between the systems. A virtual table points to a table in another database. In a
replication scenario, data is read from remote sources and values are translated into SAP HANA datatype values. Changes can
be batch-loaded or replicated in real time from the remote source to the SAP HANA database.

SAP HANA smart data access (SDA) and SAP HANA Smart Data Integration (SDI) allow you to access remote data through
virtual tables without copying the data into SAP HANA. For more information, see Virtualizing Data from Remote Data Sources.

Remote data access can take longer because data needs to be transferred through the network each time a query is executed.
In certain situations, replicating the remote data to the local system might offer better query performance than accessing the
data in a remote table. Remote tables can be replicated into your SAP HANA database instance using SDA or SDI. For more
information, see Replicating Data from Remote Data Sources.

To replicate data between two SAP HANA Cloud instances, you can use the SDA hanaodbc adapter instead of the SDI
HanaAdapter. For more information about replication with SDA, see Configure Remote Table Replication with the SDA HANA
Adapter.
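
For example, a remote source based on the SDA hanaodbc adapter can be created in SQL (a hedged sketch; the remote source name, endpoint, and credentials are placeholders, not values from this guide):

-- Remote source to another SAP HANA Cloud instance via the SDA hanaodbc adapter
CREATE REMOTE SOURCE MY_HC_SOURCE ADAPTER "hanaodbc"
  CONFIGURATION 'Driver=libodbcHDB.so;ServerNode=<endpoint>:443;encrypt=TRUE;'
  WITH CREDENTIAL TYPE 'PASSWORD'
  USING 'user=<user>;password=<password>';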

SAP HANA smart data access (SDA) vs. SAP HANA Smart Data Integration (SDI)
You can connect an SAP HANA database instance to a number of different remote data sources using SDA and SDI. There are a
few differences between SDA and SDI:

When adding a remote SAP HANA source, the hanaodbc adapter must be selected to use SDA and the HanaAdapter
adapter for SDI. For a list of all supported sources, see SAP Note 2600176.

SDA and SDI support different sets of connections.

The SAP HANA Data Provisioning Agent is required for connections with SDI.

Connections supported by SAP HANA smart data access (SDA):

Cloud remote source – Connect to an Amazon Athena or Google BigQuery cloud database.

SAP HANA Cloud, SAP HANA database – Connect to another SAP HANA Cloud, SAP HANA database instance.

Connections supported by SAP HANA Smart Data Integration (SDI):

On-premise remote sources – Connect to any database, such as Microsoft SQL Server.

Cloud remote source – Connect to a Microsoft Azure cloud database.

SAP HANA on-premise – Connect to another SAP HANA on-premise system.

Virtualizing Data from Remote Data Sources


Learn how to connect an SAP HANA Cloud, SAP HANA database instance to a remote data source and access its data
through virtual tables.
Replicating Data from Remote Data Sources
Learn how to replicate remote data sources in SAP HANA Cloud.
Accessing Data from Remote Data Sources
Learn how to add remote data sources and create virtual tables in SAP HANA Cloud.

Parent topic: Creating Tables and Loading Data

Related Information
Creating Tables and Loading Data Manually
Importing and Exporting Data
SAP Note 2600176

Virtualizing Data from Remote Data Sources


Learn how to connect an SAP HANA Cloud, SAP HANA database instance to a remote data source and access its data through
virtual tables.

Virtualizing Data Using SDA


SAP HANA smart data access (SDA) allows you to access remote data as if the data were stored in local tables in SAP HANA,
without copying the data into SAP HANA.

This capability provides operational and cost benefits and supports the development and deployment of next-generation
analytical applications requiring the ability to access, synthesize, and integrate data from multiple systems in real time.

In SAP HANA, you use the linked database feature or create virtual tables, which point to remote tables in different data sources, and then
write SQL queries in SAP HANA that use these virtual tables. The SAP HANA query processor optimizes these queries by
executing the relevant part of the query in the target database, returning the results of the query to SAP HANA, and then
completing the operation. For more information, see Managing Virtual Tables.

Smart data access supports connections to SAP HANA on-premise databases behind firewalls from SAP HANA Cloud using the
cloud connector. For more information, see Create an SAP HANA On-Premise Remote Source.

Connect SAP HANA Cloud, SAP HANA database to:

Cloud remote source – Connect to an Amazon Athena or Google BigQuery cloud database.

SAP HANA Cloud, SAP HANA database – Connect to another SAP HANA Cloud, SAP HANA database instance.

SAP HANA on-premise – Connect to an SAP HANA on-premise system using the cloud connector.

Virtualizing Data Using SDI


Virtualizing data is also supported by SAP HANA Smart Data Integration (SDI). The SAP HANA Data Provisioning Agent is
required for connections with SDI. Virtual tables can be created for remote sources supported by SDI in the same way as for
SDA. For more information, see Managing Virtual Tables.

Connect SAP HANA Cloud to:

On-premise remote sources – Connect to any database, such as Microsoft SQL Server.

Cloud remote source – Connect to a Microsoft Azure cloud database.

SAP HANA on-premise – Connect to an SAP HANA on-premise system.

Parent topic: Connecting SAP HANA Cloud, SAP HANA database to Remote Data Sources

Related Information
Replicating Data from Remote Data Sources
Accessing Data from Remote Data Sources
Create Virtual Tables

Replicating Data from Remote Data Sources


Learn how to replicate remote data sources in SAP HANA Cloud.

Replicating Data Using SDI


Remote SDI connections to SAP HANA Cloud can be set up so that they are flexible and can switch between virtualization and
replication. This feature is supported by the HanaAdapter adapter. The main cloud and hybrid scenarios that are currently
supported are shown in the following table:

Connect SAP HANA Cloud, SAP HANA database to:

On-premise remote sources – Connect to any database, such as Microsoft SQL Server.

Cloud remote source – Connect to a Microsoft Azure cloud database.

SAP HANA on-premise – Connect to an SAP HANA on-premise system.

For more information about replication with SDI, see Configure Smart Data Integration and Connect to SAP HANA Cloud.

Replicating Data Using SDA


To replicate data between two SAP HANA Cloud instances, you can use the SDA hanaodbc adapter instead of the SDI
HanaAdapter. Since the SDA adapter provides native support for data sources, it does not have to be registered. For this
adapter, you create a remote source to your target database with the adapter name hanaodbc. The SAP HANA Data
Provisioning Agent is not required for connections with SDA. For more information about replication with SDA, see Configure
Remote Table Replication with the SDA HANA Adapter.
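
A hedged SQL sketch of this pattern (object names are placeholders; see the linked topic for the authoritative steps):

-- Virtual table on the source object, created through the hanaodbc remote source
CREATE VIRTUAL TABLE hotel.city_replica
  AT "MY_HC_SOURCE"."<NULL>"."HOTEL"."CITY";

-- Add a replica so the data is physically replicated instead of federated at query time
ALTER VIRTUAL TABLE hotel.city_replica ADD SHARED REPLICA;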

Connect SAP HANA Cloud, SAP HANA database to:

Cloud remote source – Connect to an Amazon Athena or Google BigQuery cloud database.

SAP HANA Cloud, SAP HANA database – Connect to another SAP HANA Cloud, SAP HANA database instance.

SAP HANA on-premise – Connect to an SAP HANA on-premise system using the cloud connector.

Data Replication Technologies in the Extended SAP HANA Landscape



Capability: Trigger-Based Replication
Description: The trigger-based replication method uses the SAP Landscape Transformation (LT) Replication Server component to pass data from the source system to the SAP HANA database target system.
More Information: See the documentation for SAP Landscape Transformation Replication Server on the SAP Help Portal.

Capability: Extraction Transformation Load-Based Replication
Description: Extraction Transformation Load (ETL)-based data replication uses SAP Data Services (also called Data Services) to load relevant business data from SAP ERP to the SAP HANA database. This lets you read the business data on the application layer level.
More Information: See the documentation for SAP Data Services on the SAP Help Portal.

Capability: Log-Based Replication
Description: SAP Replication Server (SRS) moves and synchronizes transactional data including DML and DDL across the enterprise, providing low impact, guaranteed data delivery, real-time business intelligence, and zero operational downtime.
More Information: See the documentation for SAP Replication Server on the SAP Help Portal.

SAP Landscape Transformation


You can connect the SAP Landscape Transformation replication server (SLT) to SAP HANA Cloud. For more information, see
SAP Note 2874749 .

Data Quality Management


The Data Quality Management, microservices for location data (DQMm) cleanse node identifies, parses, validates, and formats
addresses for address cleansing and address geocoding. For more information, see DQMm Cleanse.

Parent topic: Connecting SAP HANA Cloud, SAP HANA database to Remote Data Sources

Related Information
Virtualizing Data from Remote Data Sources
Accessing Data from Remote Data Sources
Replicating Tables from Remote Sources

Accessing Data from Remote Data Sources


Learn how to add remote data sources and create virtual tables in SAP HANA Cloud.

Use the SAP HANA database explorer to add a remote data source. Then, create one or more virtual tables to access its data.

Parent topic: Connecting SAP HANA Cloud, SAP HANA database to Remote Data Sources

Related Information
Virtualizing Data from Remote Data Sources
Replicating Data from Remote Data Sources
Add a Remote Data Source

Create Virtual Tables

Add a Remote Data Source


Use the SAP HANA database explorer to add a remote data source to SAP HANA Cloud.

Prerequisites
You are working in a global account and have added quota for SAP HANA Cloud.

You have the CREATE REMOTE SOURCE system privilege (a grant example follows this list).

The remote data source must be accessible. The Data Provisioning Agent may need to be installed and
configured for the remote source.
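
For example, a user administrator could grant the required system privilege as follows (the user name is a placeholder):

GRANT CREATE REMOTE SOURCE TO MY_DB_USER;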

Connect to a Remote Data Source Using the SAP HANA Database Explorer

Procedure
1. Open your SAP HANA database instance in SAP HANA database explorer. For more information, see Open the SAP
HANA Database Explorer.

2. In the SAP HANA database explorer, right-click the Remote Sources object in your database catalog and click Add
Remote Source.

3. Specify the remote source properties. The Source Location field defaults to indexserver for SDA adapters, and to the
registered SAP HANA Data Provisioning Agent for smart data integration.

 Note
If you registered the SAP HANA Data Provisioning Agent at a remote source, select the adapter from the Adapter
Name drop-down. The registered agent will then become available under Source Location.

4. Specify an adapter version and connection mode. Choose Adapter Properties.

5. Fill in the other required connection property fields, which are marked with an asterisk (*).

6. Specify one of the following credential modes (enter your user name and password as required):

Technical User

All connections to the remote data source share the same credentials.

Secondary Credentials

One set of credentials is used per data source.

At least one set of secondary credentials should exist before creating the remote source.

None

No credentials are required to connect to the remote data source.

7. Choose OK.

Results

The remote data source is connected to your SAP HANA Cloud instance. It is listed in the SAP HANA database explorer under
Catalog > Remote Sources.

Next Steps
You can now create one or more virtual objects to access the data stored in the remote source. For more information, see
Create Virtual Tables.

Remote Source Connection Properties


The connection properties depend on your adapter type and connection mode.

Smart Data Access Connection Properties

Configurations

Smart Data Access Connection Properties - Configurations

Adapter Version – Version of the adapter used to establish the connection. This property cannot be modified.

Connection Mode – Specifies if the connection is established based on the adapter properties or the data source name.

Configuration File – The configuration file for the specified adapter.

Driver – The library name containing the driver for the specified adapter.

Server/ServerNode – The server address of the remote source. For failover, list the failover server name, separated by a
comma. For example, server_name1:30015,failover_server_name1:30015.

Port – The server port number.

SSL mode – Specifies if SSL is disabled (default), or enabled.

Client Certificate – The gRPC client certificate.

Client Private Key – The gRPC client private key.

Custom Certification Authority – The gRPC custom certification authority.

Proxy Host – The gRPC proxy host.

Proxy Authentication – The gRPC proxy authentication.

Database Name – The name of the database you are creating the remote source for.

DML Mode – Specifies if the remote source is readwrite (default), or readonly.

Extra Adapter Properties – Additional connection properties. Choose one of the following:

SAP HANA (Session Connection Information Only):

sessionVariable:<session_variable_name>=?

SAP IQ – Specify the additional properties to complete the remote connection as follows:

ServerName=<iq_computer_name>;CommLinks=tcpip(host=<IQ_host>;port=<IQ_port>)

For example, the additional properties to connect to the demo database would be:

ServerName=<iq_machine_name>_iqdemo;CommLinks=tcpip(host=<iq_machine_name>;port=2638)

SAP ASE (for failover only) – Enables automatic failover for the remote source. Enter:

HASession=1;AlternateServers=<failover_server>:<failover_port_number>

Region – The AWS region for Athena query execution (region for endpoint, for example, ap-northeast-2).

Work Group – The Amazon Athena workgroup, required for setting the output location for query result CSV files and its
encryption option.

 Note
Depending on the adapter, the displayed connection properties may vary.

Credentials

Smart Data Access Connection Properties - Credentials

Credentials Mode – One of: None, Technical User, Secondary Credentials, or SSO (Kerberos, JSON Web Token).

Access Key ID – The access key ID for Amazon Athena.

Secret Access Key – The secret access key for Amazon Athena.

Smart Data Integration Connection Properties

Application Server – The server of the application.

Client – The name of the client.

Host – The name of the computer where the client is located.

Instance Number – The instance number of the client.

Port Number – The port number of the computer where the client is located.

System Object Prefix – The prefix for object names of the specified database system.

 Note
Depending on the adapter, the displayed connection properties may vary.

Create Virtual Tables


Create virtual tables to access data from a remote source.

Prerequisites
You have added a remote source.

You have the CREATE VIRTUAL TABLE object privilege on the remote source (a grant example follows this list).
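
For example (remote source and user names are placeholders), the object privilege can be granted like this:

GRANT CREATE VIRTUAL TABLE ON REMOTE SOURCE MY_REMOTE_SOURCE TO MY_DB_USER;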

Procedure
1. In the SAP HANA database explorer, choose Catalog > Remote Sources.

A list of remote sources appears in the catalog browser item list.

2. Click a remote source from the item list to open the remote source editor.

3. On the Remote Objects tab, locate the remote objects that you would like to add as virtual objects.

4. Select one or more remote objects and choose Create Virtual Object(s).

5. Name your virtual objects.

If you select only one remote object, then either use the default virtual object name or give it a new name. If you select
more than one remote object, then you can either leave the Object Names Prefix field blank so all objects are created
with their default names, or you can specify a prefix that is added to the name of the new virtual objects.

6. Specify the Schema to create the virtual objects in.

7. Choose Create.

Importing and Exporting Data


Learn how to use SQL commands to import and export data directly from Azure Storage, Amazon Simple Storage Service S3,
and AliCloud OSS.

For more information, see Importing and Exporting Data in the SAP HANA Cloud Administration Guide.
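
As a hedged illustration (region, credentials, bucket, and file name are placeholders; see the linked guide for the full set of supported options), a CSV file in Amazon S3 could be loaded into the demo table like this:

IMPORT FROM CSV FILE 's3-<region>://<access_key>:<secret_key>@<bucket_name>/city.csv'
  INTO hotel.city
  WITH FIELD DELIMITED BY ','
       COLUMN LIST IN FIRST ROW
       FAIL ON INVALID DATA;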

Parent topic: Creating Tables and Loading Data


Related Information
Creating Tables and Loading Data Manually
Connecting SAP HANA Cloud, SAP HANA database to Remote Data Sources

Connecting to the SAP HANA Database in SAP HANA Cloud


Learn how to connect clients and applications to the SAP HANA Cloud database.

The following topics describe how to download and install the SAP HANA client on Linux and how to connect to the SAP HANA
Cloud database using JDBC and ODBC.

The SAP HANA client is also supported on UNIX, macOS, and Microsoft Windows. For detailed information, see the SAP HANA
Client Installation and Update Guide.

Connection information for all supported clients, including HDBSQL, Node.js, Python, and many more, can be found in the SAP
HANA Client Interface Programming Reference.

MicroStrategy is a certified SAP partner and provides analytics solutions that can be connected to SAP HANA Cloud through a
client. Other clients for third-party analytics solutions should work fine with SAP HANA Cloud if they are supported for SAP
HANA on-premise systems. Please contact the support department of your particular third-party analytics solution to obtain
compatibility information regarding SAP HANA Cloud.

The SAP HANA client sends TCP keepalive packets on idle connections by default. Make sure that these packets reach SAP
HANA Cloud and are not blocked by any firewall or HTTP proxy that is part of your network.

 Note
The number of simultaneous connections to the SAP HANA database depends on the size of the instance. For instances with
up to 60 vCPUs, 500 simultaneous connections are supported for every vCPU. For larger instances, the supported
connections are capped at 30,000.
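
If you want to see how many connections are currently open against this limit, a count over the standard M_CONNECTIONS monitoring view is one way to check (a hedged example, not taken from this guide):

SELECT COUNT(*) AS open_connections
  FROM M_CONNECTIONS
 WHERE CONNECTION_STATUS IN ('RUNNING', 'IDLE');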

Connect to an SAP HANA on-premise system from your SAP HANA Cloud instance using the cloud connector. For more
information, see Data Access with SAP HANA Cloud.

Download and Install the SAP HANA Client


Use the SAP HANA client to connect to an SAP HANA Cloud database, for example via JDBC or ODBC.
Connect to the SAP HANA Database in SAP HANA Cloud via JDBC
Learn how to connect to the SAP HANA database via JDBC.
Connect to the SAP HANA Database in SAP HANA Cloud via ODBC
Learn how to connect to the SAP HANA database via ODBC.
Binding Applications to an SAP HANA Cloud Instance
Learn how to bind an application in a Cloud Foundry space to an SAP HANA Cloud, SAP HANA database instance in the
same space using the SAP BTP cockpit.

Download and Install the SAP HANA Client


Use the SAP HANA client to connect to an SAP HANA Cloud database, for example via JDBC or ODBC.

Prerequisites
You require the SAPCAR archiving tool to be able to unpack software component archives (*.SAR files), which is the format of
software lifecycle media and tools that you can download from the SAP Software Download Center.

Context
To connect the SAP HANA client to the SAP HANA service, download the SAP HANA CLIENT 2.0 software. Connections to SAP
HANA Cloud require version 2.4.167 or greater. This software is the same as the SAP HANA client software that is part of the
SAP HANA Platform Edition. If you already have a license for the SAP HANA Platform Edition, then you do not need to download
the SAP HANA client separately.

Procedure
1. Navigate to the SAP HANA CLIENT 2.0 archive in the SAP Software Download Center: SUPPORT PACKAGES &
PATCHES > By Alphabetical Index (A-Z) > H > HANA CLOUD CLIENTS > HANA CLOUD CLIENTS 1.0 > DOWNLOADS

2. Download the installation media to an empty directory.

3. Unpack the installation media using the following command:

SAPCAR -xvf IMDB_CLIENT20_<version number>.SAR

4. Start the SAP HANA client installer in the command line:

./hdbinst

5. Follow the instructions displayed by the installer.

6. Set the environment variable LD_LIBRARY_PATH (Linux) or DYLD_LIBRARY_PATH (macOS) to the installation root
location. On Windows, add the installation path to the PATH environment variable.

 Sample Code
export LD_LIBRARY_PATH=<client installation directory>

Results
The SAP HANA client is installed. A log file is available. The log files are stored in the following location:
/var/tmp/hdb_client_<timestamp>.

Task overview: Connecting to the SAP HANA Database in SAP HANA Cloud

Related Information
Connect to the SAP HANA Database in SAP HANA Cloud via JDBC
Connect to the SAP HANA Database in SAP HANA Cloud via ODBC
Binding Applications to an SAP HANA Cloud Instance
SAP Support Portal Home
SAP Software Download Center
SAP HANA Client Installation and Update Guide

Connect to the SAP HANA Database in SAP HANA Cloud via JDBC
Learn how to connect to the SAP HANA database via JDBC.

Prerequisites
You have downloaded and installed the client SAP HANA CLIENT 2.0 (version 2.4.167 or greater), or you have downloaded the JDBC
driver (version 2.4.67 or greater) from the public Maven repository
https://central.sonatype.com/artifact/com.sap.cloud.db.jdbc/ngdbc.

You have downloaded the root certificates from DigiCert. The certificates can be found on the DigiCert website
(https://www.digicert.com/digicert-root-certificates.htm) in both PEM and DER/CRT formats.

DigiCert Global Root G2

DigiCert Global Root CA

 Caution
To avoid issues when commonly used browsers such as Mozilla Firefox and Google Chrome distrust older root
certificates, DigiCert has started updating their first-generation (G1) CA certificates to second-generation
(G2) certificates. For more information, see https://knowledge.digicert.com/general-information/digicert-
root-and-intermediate-ca-certificate-updates-2023, SAP Note 3399573, and SAP Note 3327214.

You have added the root certificates to the Java KeyStore (JKS) on your local machine. Alternatively, you can specify the
certificates in the connection string through the sslTrustStore connection property.

Default Java VM KeyStore

Linux/macOS:

keytool -import -trustcacerts -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass <passwo

Windows:

keytool -import -trustcacerts -keystore "%JAVA_HOME%\jre\lib\security\cacerts" -storepass <pas

Custom Java KeyStore

Linux/macOS:

keytool -keystore /tmp/clientkeystore.jks -genkey -alias client


keytool -import -trustcacerts -keystore /tmp/clientkeystore.jks -storepass <password> -alias <

Windows:

keytool -keystore C:\temp\clientkeystore.jks -genkey -alias client


keytool -import -trustcacerts -keystore c:\temp\clientkeystore.jks -storepass <password> -alia

You have the endpoint and port number, user name, and password for the SAP HANA database instance that you are
connecting to.

 Tip
You can identify the endpoint of your instance on the overview page.

You have added the IP address of your client to the list of allowed connections in the configuration of your SAP HANA
database instance.


Context
Secure JDBC connections to SAP HANA Cloud require the SAP HANA JDBC 2.4.67 driver or greater and at least JVM 8. The SAP
HANA JDBC 2.4.67 driver is included with SAP HANA client version 2.4.167 or greater.

 Note
JDBC uses the TLS implementation provided with the Java VM.

Procedure
1. Create a connection string that includes the required connection parameters. The format is:

"jdbc:sap://<endpoint>:<port>/?encrypt=true";

2. Use this syntax to test your connection:

java -jar ngdbc-<version>.jar -u <user>,<password> -n <endpoint>:<port> -o encrypt=true -o tru

Example

 Sample Code
Default Java VM KeyStore

java -jar ngdbc-<version>.jar -u User1,Password123 -n 12345678-abcd-12ab-34cd-1234abcd.hana.hanac

Custom Java KeyStore

java -jar ngdbc-<version>.jar -u User1,Password123 -n 12345678-abcd-12ab-34cd-1234abcd.hana.hanac

Task overview: Connecting to the SAP HANA Database in SAP HANA Cloud

Related Information
Download and Install the SAP HANA Client
Connect to the SAP HANA Database in SAP HANA Cloud via ODBC
Binding Applications to an SAP HANA Cloud Instance
SAP Note 2769719
Managing SAP HANA Cloud Instances

Connect to the SAP HANA Database in SAP HANA Cloud via ODBC
Learn how to connect to the SAP HANA database via ODBC.

Prerequisites
You have downloaded and installed the client SAP HANA CLIENT 2.0 (version 2.4.167 or greater).

You have downloaded the root certificates from DigiCert. The certificates can be found on the DigiCert website
(https://www.digicert.com/digicert-root-certificates.htm) in both PEM and DER/CRT formats.

DigiCert Global Root G2

DigiCert Global Root CA

 Caution
To avoid issues when commonly used browsers such as Mozilla Firefox and Google Chrome distrust older root
certificates, DigiCert has started updating their first-generation (G1) CA certificates to second-generation
(G2) certificates. For more information, see https://knowledge.digicert.com/general-information/digicert-
root-and-intermediate-ca-certificate-updates-2023, SAP Note 3399573, and SAP Note 3327214.

You have the endpoint and port number, user name, and password for the SAP HANA database instance that you are
connecting to.

 Tip
You can identify the endpoint of your instance on the overview page.

You have added the IP address of your client to the list of allowed connections in the configuration of your SAP HANA
database instance.

Procedure
Create an ODBC data source.

Enable the encrypt connection option.

Specify the endpoint and port number for the SAP HANA database instance.

Example connection strings by platform:

Microsoft Windows:

driver=HDBODBC;serverNode=<endpoint>:<port>;encrypt=Yes;

odbc.ini file for Linux/UNIX/macOS:

[<Server_Name>]
driver=<path>/libodbcHDB.so
serverNode=<endpoint>:<port>
encrypt=Yes
DESCRIPTION=<HANA-ODBC-Data-Source>
sslTrustStore=<certificate string>;

Alternatively, you can point to the certificate file:

sslTrustStore=/<path>/<certificate filename>.pem

Example
Use ODBC on Microsoft Windows to connect to an SAP HANA database instance.

1. Specify the server and port. For example:

12345678-abcd-12ab-34cd-1234abcd.hana.hanacloud.ondemand.com:443

2. In the settings, select Connect Using SSL. If you are using SAP HANA client version 2.4, also select Validate the SSL
certificate.

Use the following connection string to connect with the ODBC driver via TCP/IP:

driver=libodbcHDB.so;serverNode=<endpoint>:<port>;encrypt=Yes;

The odbc.ini file defines ODBC data sources on Linux/UNIX/macOS. User data sources are usually defined in ~/.odbc.ini
(where ~ is the user's home directory). The ODBC driver manager uses the odbc.ini file to find the ODBC driver and provide
connection parameters. SAP HANA-specific connection parameter names are case-sensitive. The following is an example data
source in the odbc.ini file:

[HANADB1]
driver=/usr/sap/hdbclient/libodbcHDB.so
serverNode=<endpoint>:<port>
encrypt=Yes
DESCRIPTION=<description>

Task overview: Connecting to the SAP HANA Database in SAP HANA Cloud

Related Information
Download and Install the SAP HANA Client
Connect to the SAP HANA Database in SAP HANA Cloud via JDBC
Binding Applications to an SAP HANA Cloud Instance
SAP Note 2769719

Binding Applications to an SAP HANA Cloud Instance


Learn how to bind an application in a Cloud Foundry space to an SAP HANA Cloud, SAP HANA database instance in the same
space using the SAP BTP cockpit.

Prerequisites
You have created an SAP HANA database instance.

Context
Services are exposed to applications by injecting access credentials into the application environment by means of service
bindings. Applications are bound to a service instance, which describes the configuration and credentials required to consume a
service. Service instances are managed by a service broker, which must be provided for each service (or for a collection of
services). Applications are bound to an SAP HANA database instance through a schema or an HDI container. Schemas or HDI
containers are set up by assigning the corresponding service plans to your database instance.

The "schema" Service Plan

The schema service plan creates a plain schema, which you need to manage by hand. Consider using this service plan if your
application uses an object-relational (O/R) mapping concept and a framework is available that creates the necessary database resources on
demand.

The "hdi-shared" Service Plan

When you create and bind a service instance with the service plan hdi-shared, an application receives the credentials required
for access to an HDI container, which is basically a database schema that is equipped with additional metadata.

HDI containers ensure isolation, and within an SAP HANA database you can define an arbitrary number of HDI containers. The
same objects can be deployed multiple times into different HDI containers in the same SAP HANA database, for example, to
install several instances of the same software product in the same SAP HANA database. HDI containers are isolated from each
other by means of schema-level access privileges. Cross-container access at the database level is prevented by default, but can
be enabled by explicitly granting the necessary privileges, for example, using synonyms.
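
As a hedged sketch of the cross-container pattern mentioned above (schema, object, and user names are placeholders; in an HDI project this is typically modeled with design-time .hdbgrants and .hdbsynonym files rather than ad hoc SQL):

-- Grant read access on the providing schema to the consuming container's object owner
GRANT SELECT ON SCHEMA "PROVIDER_SCHEMA" TO "CONSUMER_CONTAINER#OO";

-- In the consuming container, create a synonym so the foreign object can be addressed locally
CREATE SYNONYM "CONSUMER_CONTAINER"."CITY" FOR "PROVIDER_SCHEMA"."CITY";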

Database objects (tables, views, procedures, and so on) have an owner: the user who created the object. When the owner of a
database object is deleted, all objects owned by the deleted user are removed from the database, too. In addition, if application
objects are created by end users, the objects are deleted when the end user is deleted, for example when the employee leaves
the organization. HDI ensures that during deployment all database objects are created by a container-specific technical user,
which is never deleted as long as the container exists.

In HDI, database schema content (for example, tables, views, procedures, etc.) is defined in corresponding design-time files as
part of a development project. These definition artifacts are pushed to the platform as part of the HDI Deployer application
@sap/hdi-deploy, which is a Node.js application that is publicly available for use in Cloud Foundry. This deployer application binds
to an SAP HANA service instance and, on startup, creates the set of database objects that correspond to the pushed definition
files, for example: myTable.hdbtable, myView.hdbview, or myProcedure.hdbprocedure.

To bind an application, perform the steps listed below.

1. Set Up an HDI Container (Kyma)


Set up an HDI container in your SAP HANA Cloud instance by creating an instance of the SAP HANA service broker (SAP
HANA Schemas & HDI Containers) with the service plan schema or hdi-shared.

2. Set Up a Schema or an HDI Container (Cloud Foundry)


Set up a schema or an HDI container in your SAP HANA Cloud instance by creating an instance of the SAP HANA service
broker (SAP HANA Schemas & HDI Containers) with the service plan schema or hdi-shared.

Task overview: Connecting to the SAP HANA Database in SAP HANA Cloud

Related Information
Download and Install the SAP HANA Client
Connect to the SAP HANA Database in SAP HANA Cloud via JDBC
Connect to the SAP HANA Database in SAP HANA Cloud via ODBC
Subscribing to the SAP HANA Cloud Administration Tools (Multi-Environment)
SAP HANA Cloud Deployment Infrastructure Reference
Maintaining Multitarget Application Services in Cloud Foundry

Set Up an HDI Container (Kyma)


Set up an HDI container in your SAP HANA Cloud instance by creating an instance of the SAP HANA service broker (SAP HANA
Schemas & HDI Containers) with the service plan schema or hdi-shared.

Prerequisites
You have access to the Kyma runtime service and the Kyma Dashboard. For more information, see Kyma Environment.

You have a global account with SAP BTP and have added the hdi-shared plan under Entitlements > SAP HANA
Schemas & HDI Containers to your subaccount. For more information on how to add space quota plans, see Assign
Quota Plans to Spaces in the SAP BTP documentation.

Context
HDI Containers created in the Kyma Environment can only be managed by using the kubectl CLI or the Kyma dashboard.

Procedure
1. Navigate to your subaccount.

2. Create an SAP HANA database.

For more information, see Creating SAP HANA Cloud Instances.

3. In SAP BTP cockpit, on your subaccount Overview page, click the Kyma Environment tab.

4. Click Link to dashboard to enter the Kyma Environment.

5. Click the Namespaces tab in the navigation pane. Select an existing namespace or click Create Namespace to create a
new namespace.

6. In SAP HANA Cloud Central, map the instance you created to the Kyma environment and enter your Kyma namespace as
the Environment Group. For more information, see Map an SAP HANA Database to another Environment Context.

7. Return to your namespace in the Kyma dashboard.

8. In the Kyma environment navigation pane, choose Service Management > Service Instances and select Create
Service Instance.

9. Enter a name, offering name, and plan name, then click Create. The offering name is hana and the plan name is hdi-shared.

10. Once the service instance is provisioned, click Create Service Binding.

11. Enter a name, select the service instance you created, and click Create.

Results
Your HDI Container has been created. It appears under Service Management > Service Instances.

Task overview: Binding Applications to an SAP HANA Cloud Instance

Next task: Set Up a Schema or an HDI Container (Cloud Foundry)

Related Information
Map an SAP HANA Database to another Environment Context

Set Up a Schema or an HDI Container (Cloud Foundry)


Set up a schema or an HDI container in your SAP HANA Cloud instance by creating an instance of the SAP HANA service broker
(SAP HANA Schemas & HDI Containers) with the service plan schema or hdi-shared.

Prerequisites
You have completed the prerequisites for creating an instance in your SAP HANA Cloud. For more information, see
Subscribing to the SAP HANA Cloud Administration Tools (Multi-Environment).

You have enabled Cloud Foundry in your subaccount.

You are working in an enterprise account and have added the schema or hdi-shared plan under Entitlements > SAP
HANA Schemas & HDI Containers in your subaccount. For more information on how to add space quota plans, see
Assign Quota Plans to Spaces in the SAP Business Technology Platform documentation.

You have the GUID of the SAP HANA Cloud database instance that you are connecting to.


 Tip
You can obtain the GUID of your instance on the instance overview page in SAP HANA Cloud Central. From the
Actions menu choose Copy Instance ID.

Procedure
1. Navigate to your subaccount in the SAP BTP cockpit.

2. In the navigation pane, choose Services > Service Marketplace.

All services available to you appear. The list of services you see in the SAP Service Marketplace is determined by the
services to which you have subscribed.

3. Choose SAP HANA Schemas & HDI Containers.

4. Choose Create Instance.

5. Choose a service plan and enter a name. Then, choose Next.

schema

The schema service plan creates a plain schema, which you need to manage manually or with application code. No
automated deployment or schema-management services are provided. See The “schema” Service Plan.

hdi-shared

When you create and bind a service instance with the service plan hdi-shared, an application receives the
credentials required for access to an HDI container, which is basically a database schema that is equipped with
additional metadata. See The “hdi-shared” Service Plan.

6. (Optional) If you have multiple SAP HANA Cloud database instances, specify the endpoint of the database that will be
the deployment target for your application in JSON format:

{"database_id": "<GUID>"}

 Sample Code
{"database_id": "abcd1234-5678-1234-a1b2-abcdef123456"}

7. (Optional) If you've already deployed an application that you want to bind to the new service instance, choose it from the
list, and then choose Next.

8. Enter a name for your instance and choose Create Instance.

Results
An instance of the service is created and appears in Services > Service Instances.

Task overview: Binding Applications to an SAP HANA Cloud Instance

Previous task: Set Up an HDI Container (Kyma)

Related Information
SAP HANA Service Plans

Bind the Application

Bind your application to your SAP HANA Cloud instance through the SAP HANA service broker instance (SAP HANA Schemas &
HDI Containers).

Prerequisites
The application is deployed. For more information about developing and deploying applications in the Cloud Foundry
environment, see the SAP Business Technology Platform documentation.

Procedure
1. Navigate to your Cloud Foundry space.

2. In the navigation area, choose Applications.

The overview lists all applications deployed in the selected space.

3. Choose an application.

4. In the navigation area, choose Service Bindings, then select Bind Service.

5. On the Choose Service Type tab, select the Service from the catalog radio button and choose Next.

6. On the Choose Service tab, select SAP HANA Schemas & HDI Containers and choose Next.

7. On the Choose Service Plan tab, select Re-use existing instance.

8. Select the service broker instance that you bound to your SAP HANA Cloud instance and choose Next.

9. Choose Finish.

Results
The application is bound to your SAP HANA Cloud instance through the SAP HANA service broker instance (SAP HANA
Schemas & HDI Containers).

Restart the Application


Once you've created the binding, restart your application.

Procedure
1. Navigate to the Cloud Foundry space and choose Applications. Select the Stop icon for your application.

 Note
An application's status influences when a newly bound SAP HANA database becomes effective. If an application is
already running (Started state), it does not have access to the newly bound database until it has been restarted.

2. Once the application status has changed to Stopped, select the Start icon for your application.

Results
You have created a service binding for an SAP HANA database in your Cloud Foundry space.

To unbind a database from an application, choose (Delete) in the Actions column. The application maintains access to the
database until it is restarted.

To change database parameters (for example, to assign a higher memory limit to one of its processes), choose the Change
Quota button on the Overview page.

Developing Database Content for SAP HANA Cloud


Learn about how to create database content for the SAP HANA database.

Calculation Views
A calculation view allows users to define more advanced slices on the data available in the SAP HANA database.

Calculation views are mainly used for analyzing operational data marts or running multidimensional reports on revenue,
profitability, and more. Calculation views consume various combinations of content data (that is, non-metadata) to model a
business use case. You can classify content data as:

Attributes: Descriptive data - such as customer ID, city, and country.

Measures: Quantifiable data - such as revenue, quantity sold, and counters.

Calculation views simulate entities (such as customer, product, sales, and more) and their relationships. Data visualization and
analysis applications such as SAP BusinessObjects Explorer and Microsoft Office-based reporting tools consume these
calculation views and support decision makers in their decision-making process.

You can create calculation views with layers of calculation logic, which include measures sourced from multiple source tables, or
advanced SQL logic, and much more. The data sources in a calculation view can include any combination of tables and
calculation views. You can create joins, unions, projections, and aggregations on data sources.
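
Once deployed, a calculation view is exposed as a column view that can be consumed with plain SQL. The following query is a sketch only; the view name MYPACKAGE::SalesView, the attribute CITY, and the measure REVENUE are hypothetical.

-- Hypothetical query against a deployed calculation view
SELECT "CITY", SUM("REVENUE") AS "TOTAL_REVENUE"
  FROM "MYPACKAGE::SalesView"
 GROUP BY "CITY"
 ORDER BY "TOTAL_REVENUE" DESC;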

You can model calculation views using SAP Business Application Studio and SAP Web IDE Full-Stack. For more information, see
SAP HANA Cloud, SAP HANA Database Modeling Guide for SAP Business Application Studio and SAP HANA Cloud Modeling
Guide for SAP Web IDE Full-Stack.

Using the Machine Learning Libraries (APL and PAL) in the SAP
HANA Cloud, SAP HANA Database
Set up the environment for using the SAP HANA Automated Predictive Library (APL) and SAP HANA Predictive Analysis Library
(PAL) in the SAP HANA Cloud, SAP HANA database. APL and PAL are already installed in the SAP HANA database in SAP HANA
Cloud.
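
If you want to check which function libraries are registered in your database, you can query the AFL function catalog. This is a sketch; the area names AFLPAL and APL_AREA are the names under which PAL and APL functions are typically registered.

-- Lists the AFL areas for PAL and APL, if they are registered
SELECT DISTINCT AREA_NAME
  FROM SYS.AFL_FUNCTIONS
 WHERE AREA_NAME IN ('AFLPAL', 'APL_AREA');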

Enabling the Script Server


The script server is used to execute application function libraries. Both APL and PAL require the script server to be running.

You can enable the script server for new and existing SAP HANA Cloud database instances from the SAP BTP cockpit. See
Create an SAP HANA Database Instance Using SAP HANA Cloud Central and Managing SAP HANA Database Instances.
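
To verify that the script server is active on your instance, you can query the M_SERVICES monitoring view, for example:

-- Returns a row with ACTIVE_STATUS = 'YES' when the script server is running
SELECT SERVICE_NAME, ACTIVE_STATUS
  FROM M_SERVICES
 WHERE SERVICE_NAME = 'scriptserver';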

Setting Up PAL
Grant users the privileges required to work with the SAP HANA Predictive Analysis Library. These are contained in the
AFL__SYS_AFL_AFLPAL_EXECUTE database role. You can assign the role to a user by running the following SQL statement
as the DBADMIN user and replacing <PAL_user> with the appropriate SAP HANA user name:


GRANT AFL__SYS_AFL_AFLPAL_EXECUTE TO <PAL_user>;

For more information, see Getting Started with PAL.
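
To confirm that the role has been granted, you can check the user's effective roles, for example (a minimal check; replace <PAL_user> with the actual user name):

-- Shows the PAL execution role if it is effective for the user
SELECT USER_NAME, ROLE_NAME
  FROM EFFECTIVE_ROLES
 WHERE USER_NAME = '<PAL_user>'
   AND ROLE_NAME = 'AFL__SYS_AFL_AFLPAL_EXECUTE';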

Setting Up APL
APL users

Grant users the privileges required to work with the SAP HANA APL function library. These are contained in the
sap.pa.apl.base.roles::APL_EXECUTE database role. You can assign the role to a user by running the following
SQL statement as the DBADMIN user and replacing <APL_user> with the appropriate SAP HANA user name:

GRANT "sap.pa.apl.base.roles::APL_EXECUTE" TO <APL_user>;

Tracing and auditing

If required, activate the traces and set up auditing. See Activate the Traces and Audit SAP HANA APL.

APL samples

The SAP HANA APL GitHub repository contains sample datasets and scripts. See SAP HANA APL GitHub Repository.

To try out an APL function using a sample script, you must first import the sample datasets into SAP HANA tables. The
dataset schema is named APL_SAMPLES. The examples shown in the function reference use the sample datasets.

For information about how to import data using the SAP HANA database explorer, see Import Data Into a New or
Existing Table.
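
If the APL_SAMPLES schema does not yet exist in your database, you can create it up front and allow your APL user to read it. This is a sketch only; <APL_user> is a placeholder:

-- Create the schema that will hold the APL sample datasets
CREATE SCHEMA APL_SAMPLES;

-- Allow the APL user to read the imported sample tables
GRANT SELECT ON SCHEMA APL_SAMPLES TO <APL_user>;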

SQL and SQLScript Compatibility


Some SQL and SQLScript features available in other versions of SAP HANA are not supported in the SAP HANA database in
SAP HANA Cloud. For example, the SQL clause WITH OVERVIEW is not supported:

CALL <procedure> ... WITH OVERVIEW
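
As a sketch of the supported pattern, simply call the procedure without the WITH OVERVIEW clause; output parameters are returned to the client. The procedure below is hypothetical:

-- Hypothetical procedure with a scalar output parameter
CREATE PROCEDURE get_status (OUT status NVARCHAR(10))
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS
BEGIN
  SELECT 'OK' INTO status FROM DUMMY;
END;

-- Supported: a plain CALL without WITH OVERVIEW; the client retrieves the output value
CALL get_status(?);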

See SQL and SQLScript Compatibility.

Related Information
SAP HANA Cloud Predictive Analysis Library (PAL)
SAP HANA Automated Predictive Library Developer Guide
User Management with the SAP HANA Cloud Administrator DBADMIN

SAP HANA as Geographic Information System for ESRI


Before using the SAP HANA database with ESRI software, create all the necessary spatial reference systems (SRSs).

SAP HANA contains most of the SRSs defined by EPSG and ESRI as predefined SRSs. With the following statement, you create
all predefined SRSs:

CREATE PREDEFINED SPATIAL REFERENCE SYSTEMS;

If you only want to create the SRSs that you need for your ESRI software, use the following statement:


CREATE PREDEFINED SPATIAL REFERENCE SYSTEM IDENTIFIED BY <srs-id>;

 Note
An SRS cannot be changed or deleted as long as it is used by a table column.

Predefined SRSs can change on an SAP HANA upgrade, but all SRSs created from the previous predefined SRSs remain
unchanged.
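
For example, to create only the widely used WGS 84 system (SRS ID 4326) and verify that it is available, you could run the following sketch (skip the CREATE statement if the SRS already exists in your database):

-- Create the predefined WGS 84 spatial reference system (SRS ID 4326)
CREATE PREDEFINED SPATIAL REFERENCE SYSTEM IDENTIFIED BY 4326;

-- Verify that the SRS is now available
SELECT SRS_ID, SRS_NAME
  FROM ST_SPATIAL_REFERENCE_SYSTEMS
 WHERE SRS_ID = 4326;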

Related Information
CREATE PREDEFINED SPATIAL REFERENCE SYSTEMS Statement
CREATE PREDEFINED SPATIAL REFERENCE SYSTEM IDENTIFIED BY "srs-id" Statement

Important Disclaimer for Features in SAP HANA Cloud


Some SAP HANA features and capabilities mentioned in this document may not be applicable in your provisioning scenario.

For information about the capabilities available for your provisioning scenario, refer to the Feature Scope Description for SAP
HANA Cloud.

