Connectivity Service
PUBLIC
2024-05-02
1 Connectivity [page 4]
1.1 Connectivity in the Cloud Foundry Environment [page 8]
    What Is SAP BTP Connectivity? [page 9]
    What's New for Connectivity [page 16]
    Administration [page 58]
    Developing Applications [page 214]
    Security [page 335]
    Monitoring and Troubleshooting [page 335]
    Resilience Recommendations [page 337]
    On-Premise Connectivity in the Kyma Environment [page 339]
1.2 Cloud Connector [page 343]
    Installation [page 349]
    Configuration [page 387]
    Administration [page 631]
    Security [page 709]
    Upgrade [page 721]
    Update the Java VM [page 722]
    Uninstallation [page 724]
    Frequently Asked Questions [page 725]
    REST APIs [page 734]
1.3 Connectivity Proxy for Kubernetes [page 738]
    Concepts [page 739]
    Lifecycle Management [page 770]
    Verification and Testing [page 791]
    Monitoring [page 793]
    Using the Connectivity Proxy [page 794]
    Troubleshooting [page 798]
    Frequently Asked Questions [page 805]
1.4 Transparent Proxy for Kubernetes [page 806]
    Using the Transparent Proxy [page 808]
    Multitenancy [page 822]
    Destination Custom Resource Scope [page 823]
    Integration with SAP BTP Connectivity [page 824]
    Lifecycle Management [page 825]
    Monitoring [page 847]
    Troubleshooting [page 848]
Note
This documentation refers to SAP BTP, Cloud Foundry environment. If you are looking for information
about the Neo environment, see Connectivity for the Neo Environment.
Content
• Overview [page 5]
• Features [page 6]
• Restrictions [page 6]
SAP BTP Connectivity allows SAP BTP applications to securely access remote services that run on the Internet or on-premise.
A typical scenario for connecting your on-premise network to SAP BTP looks like this:
• Your company owns a global account on SAP BTP and one or more subaccounts that are assigned to this
global account.
• Using SAP BTP, you subscribe to or deploy your own applications.
• To connect to these applications from your on-premise network, the Cloud Connector administrator sets
up a secure tunnel to your company's subaccount on SAP BTP.
• The platform ensures that the tunnel can only be used by applications that are assigned to your
subaccount.
• Applications assigned to other (sub)accounts cannot access the tunnel. The tunnel itself is encrypted via Transport Layer Security (TLS), which guarantees connection privacy.
For inbound connections (calling an application or service on SAP BTP from an external source), you can use
Cloud Connector service channels [page 611] (on-premise connections) or the respective API endpoints of
your SAP BTP region (Internet connections).
Restrictions
General [page 6]
Protocols [page 7]
Note
For information about general SAP BTP restrictions, see Prerequisites and Restrictions.
General
Java Connector: To develop a Java Connector (JCo) application for RFC communication, your SDK local runtime must be hosted by a 64-bit JVM on an x86_64 operating system (Microsoft Windows OS, Linux OS, or Mac OS X).

Ports: For Internet connections, you can use any port above 1024. For cloud-to-on-premise connections, there are no port limitations.

Destination Configuration:
• You can use destination configuration files with the extensions .props, .properties, .jks, and .txt, as well as files with no extension.
• If a destination configuration includes a keystore or truststore, it must be stored in a JKS file with the standard .jks extension.
Protocols
For the cloud to on-premise connectivity scenario, the following protocols are currently supported:
HTTP: HTTPS is not needed, since the tunnel used by the Cloud Connector is TLS-encrypted.

RFC: You can communicate with SAP systems down to SAP R/3 release 4.6C. The supported runtime environment is SAP Java Buildpack, version 1.8.0 or higher.

TCP: You can use TCP-based communication for any client that supports SOCKS5 proxies (see the sketch after this list).
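To make the TCP case more concrete, here is a minimal Java sketch of a client that opens a connection through a SOCKS5 proxy. The proxy host/port and the virtual host/port are hypothetical placeholders (in a real setup they come from the Connectivity service binding and the Cloud Connector access control configuration), and the sketch deliberately shows only the generic SOCKS5 client side: the Connectivity service's proxy additionally requires its own token-based authentication handshake, which is not shown here.

Sample Code

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class Socks5TcpSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical values: in a real application, the proxy host/port come from
        // the Connectivity service binding, and the virtual host/port from the
        // Cloud Connector access control configuration.
        String proxyHost = "connectivity-proxy.internal.example";
        int proxyPort = 20004;
        String virtualHost = "virtual-backend-host";
        int virtualPort = 3300;

        // Standard Java SOCKS support. The Connectivity service's SOCKS5 proxy also
        // expects its own (token-based) authentication handshake, which is omitted here.
        Proxy socks5Proxy = new Proxy(Proxy.Type.SOCKS, new InetSocketAddress(proxyHost, proxyPort));
        try (Socket socket = new Socket(socks5Proxy)) {
            // Use an unresolved address so the virtual host name is resolved by the proxy,
            // not by the local DNS.
            socket.connect(InetSocketAddress.createUnresolved(virtualHost, virtualPort));
            OutputStream out = socket.getOutputStream();
            out.write("ping".getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }
}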
Service Channels: Service channels are supported only for the SAP HANA database, see Using Service Channels [page 611].
Related Information
Consuming SAP BTP Connectivity for your application in the Cloud Foundry environment: Overview.
Use SAP BTP Connectivity for your application in the Cloud Foundry environment: available services,
connectivity scenarios, user roles.
SAP BTP Connectivity lets you connect your SAP BTP applications to other Internet resources, or to your
on-premise systems running in isolated networks. It provides an extensive set of features to choose different
connection types and authentication methods. Using its configuration options, you can tailor access exactly to
your needs.
Note
You can use SAP BTP Connectivity for the Cloud Foundry environment and for the Neo environment. This
documentation refers to SAP BTP, Cloud Foundry environment. If you are looking for information about the
Neo environment, see Connectivity for the Neo Environment.
Features
SAP BTP Connectivity provides two services for the Cloud Foundry environment, the Connectivity service and
the Destination service.
The Destination service and the Connectivity service together provide virtually the same functionality that is
included in the Connectivity service of the Neo environment.
In the Cloud Foundry environment, however, this functionality is split into two separate services:
• The Connectivity service provides a connectivity proxy that you can use to access on-premise resources.
• Using the Destination service, you can retrieve and store the technical information about the target
resource (destination) that you need to connect your application to a remote service or system.
You can use both services together as well as separately, depending on the needs of your specific scenario.
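To illustrate how the two services typically work together, here is a minimal, hedged Java sketch of an on-premise call through the connectivity proxy. It assumes the application has already looked up the target (virtual) URL of an on-premise destination via the Destination service, and that the proxy host/port and an OAuth access token for the Connectivity service are available from the service binding; the exact credential key names depend on your binding and service plan, so treat the parameters as placeholders rather than a definitive API.

Sample Code

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

public class OnPremiseCallSketch {

    // Assumed inputs: proxy host/port and the Connectivity access token come from the
    // Connectivity service binding (token obtained from XSUAA with the binding's client
    // credentials); the virtual URL comes from a Destination service lookup or from the
    // Cloud Connector access control configuration.
    public static String callOnPremise(String proxyHost, int proxyPort,
                                       String connectivityAccessToken,
                                       String virtualUrl) throws Exception {
        Proxy connectivityProxy =
                new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort));

        HttpURLConnection connection =
                (HttpURLConnection) new URL(virtualUrl).openConnection(connectivityProxy);
        // Authenticates the application against the connectivity proxy itself; any
        // authentication required by the target system (for example, taken from the
        // destination) goes into separate request headers.
        connection.setRequestProperty("Proxy-Authorization", "Bearer " + connectivityAccessToken);

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            return reader.lines().collect(Collectors.joining("\n"));
        } finally {
            connection.disconnect();
        }
    }
}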
Use Cases
• Use the Connectivity service to connect your application or an SAP HANA database to on-premise
systems:
• Set up on-premise communication via HTTP or RFC for your cloud application.
• Use a service channel to connect to an SAP HANA database on SAP BTP from your on-premise
system, see Configure a Service Channel for an SAP HANA Database.
• Use the Destination service:
• To retrieve technical information about destinations that are required to consume the Connectivity
service (optional), or
• To provide destination information for connecting your Cloud Foundry application to any other Web
application (remote service). This scenario does not require the Connectivity service.
In this document, we refer to two different types of user roles: responsibility roles and technical roles.
Responsibility roles describe the required user groups and their general tasks in the end-to-end setup process.
By configuring technical roles, you can control access to the dedicated cloud management tools by assigning specific permissions to users.
Responsibility Roles
The end-to-end use of the Connectivity service and the Destination service requires these user groups:
• Application operators - are responsible for productive deployment and operation of an application on SAP
BTP. Application operators are also responsible for configuring the remote connections (destination and
trust management) that an application might need, see Administration [page 58].
• Application developers - develop a connectivity-enabled SAP BTP application by consuming the
Connectivity service and/or the Destination service, see Developing Applications [page 214].
• IT administrators - set up the connectivity to SAP BTP in your on-premise network, using the Cloud
Connector [page 343].
Some procedures on SAP BTP can be performed by developers as well as by application operators. Others may include a mix of development and operation tasks. These procedures are labeled with icons for the respective task type.
Technical Roles
To perform connectivity tasks in the Cloud Foundry environment, the following technical roles apply:
Note
To apply the correct technical roles, you must know on which cloud management tools feature set (A or B)
your account is running. For more information on feature sets, see Cloud Management Tools — Feature Set
Overview.
• Level: Subaccount
  Task: View Cloud Connectors connected to a subaccount (cockpit)
  Required role: A Cloud Foundry org role containing the permission readSCCTunnels, for example, the Org Manager role.
  Note: As a prerequisite, a Cloud Foundry org must be available.
• Level: Service instance
  Task: Manage destinations and certificates (all CRUD operations) on service instance level (cockpit)
  Required role: One of several roles, for example, Org Manager. See User and Member Management.
Feature set B provides dedicated roles for specific operations. They can be assigned to custom role
collections, but some of them are also available in default role collections.
Note
To see the Destination editor on subaccount level, you must have at least the Destination Viewer role, or
both the Destination Configuration Viewer and the Destination Certificate Viewer roles.
For the Destination editor on service instance level, the corresponding roles apply: You need at least the
Destination Viewer Instance role, or both the Destination Configuration Instance Viewer and the Destination
Certificate Instance Viewer roles.
• Level: Service instance
  Task: Manage destinations (all CRUD operations) on service instance level (cockpit)
  Required role, one of: Destination Administrator, Destination Configuration Administrator, Destination Administrator Instance, Destination Configuration Instance Administrator, Org Manager, Space Manager, Space Developer.

The following default role collections contain Connectivity-related roles:
• Subaccount Administrator: Cloud Connector Administrator, Destination Administrator
• Subaccount Viewer: Cloud Connector Auditor, Destination Viewer
Note
If a user has access to the Destination service REST API (via service instance binding credentials or a service key), they have full access to the destination and certificate configurations managed by that instance of the Destination service.
For more information, see About Roles in the Cloud Foundry Environment and check the activity Instantiate
and bind services to apps in the linked Cloud Foundry documentation (docs.cloudfoundry.org).
Additionally, applications have access to the REST API of the Destination service instance they are bound
to.
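As a rough illustration of that access path, the sketch below fetches a destination configuration via the Destination service REST API, assuming you already have the values from a service key or binding: the service base URL, the XSUAA token service URL, and the OAuth client credentials. The /destination-configuration/v1/destinations/<name> path reflects the service's documented REST API, but verify it against the API reference for your landscape; the key field names used in the comments are assumptions.

Sample Code

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class DestinationLookupSketch {

    // Assumed inputs, typically taken from a Destination service key or binding:
    //   xsuaaUrl              - the key's token service URL (often the "url" field)
    //   clientId/clientSecret - the key's OAuth client credentials
    //   serviceUri            - the key's "uri" field (Destination service base URL)
    //   destinationName       - name of the destination configuration to read
    public static String findDestination(String xsuaaUrl, String clientId, String clientSecret,
                                         String serviceUri, String destinationName) throws Exception {
        // 1. Obtain an access token from XSUAA via the client credentials grant.
        String accessToken = fetchAccessToken(xsuaaUrl + "/oauth/token", clientId, clientSecret);

        // 2. Read the destination configuration ("find destination" lookup).
        URL url = new URL(serviceUri + "/destination-configuration/v1/destinations/" + destinationName);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestProperty("Authorization", "Bearer " + accessToken);
        return readBody(connection);
    }

    private static String fetchAccessToken(String tokenUrl, String clientId, String clientSecret) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(tokenUrl).openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        String basicAuth = Base64.getEncoder()
                .encodeToString((clientId + ":" + clientSecret).getBytes(StandardCharsets.UTF_8));
        connection.setRequestProperty("Authorization", "Basic " + basicAuth);
        connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream body = connection.getOutputStream()) {
            body.write("grant_type=client_credentials".getBytes(StandardCharsets.UTF_8));
        }
        // Minimal token extraction for the sketch; use a proper JSON parser in real code.
        Matcher matcher = Pattern.compile("\"access_token\"\\s*:\\s*\"([^\"]+)\"").matcher(readBody(connection));
        if (!matcher.find()) {
            throw new IllegalStateException("No access_token in token response");
        }
        return matcher.group(1);
    }

    private static String readBody(HttpURLConnection connection) throws Exception {
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            return reader.lines().collect(Collectors.joining("\n"));
        } finally {
            connection.disconnect();
        }
    }
}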
What's New for Connectivity

Find the latest features, enhancements, and bug fixes for SAP BTP Connectivity.
2021

All entries below apply to the component Connectivity (capability: Integration Suite). Each entry is listed as: Title (environment; type; available as of).
Connectivity Service - Cloud Connector - SAP Cloud PKI (Cloud Foundry; New; 2021-12-02)
SAP Cloud PKI (public key infrastructure) is enabled for technical communication between the Cloud Connector and the Connectivity service.
• The change is transparent for the Cloud Connector: as soon as you renew your subaccount certificate in the Cloud Connector, the newly issued X.509 client certificate will be part of SAP Cloud PKI.
• If you are using Connectivity proxy software components as part of your solution, make sure you use version 2.4.1 or higher. For scenarios with termination in the ingress, version 2.3.1 of the Connectivity proxy is sufficient.

Destination Service - Mail Destinations (Cloud Foundry and Neo; New; 2021-12-02)
You can now configure destinations of type MAIL with any OAuth-based authentication type available for HTTP destinations, including the option for mTLS via X.509 client certificate.
Restriction: The Mail Java API (Neo environment) does not provide the javax.mail.Session object out of the box. It must be configured manually.

Cloud Connector 2.14.0.1 - Enhancements (Cloud Foundry and Neo; New, recommended; 2021-12-02)
Release of Cloud Connector version 2.14.0.1 introduces the following enhancements:
• Cloud Connector can now use SAPMachine 11 as Java runtime. For more information, see Prerequisites.
• Cloud Connector supports Windows Server 2022 as an additional OS version. For more information, see Prerequisites.
• Monitoring was extended to show usage information for service channels (on-premise to cloud scenarios). For more information, see Monitoring.
• An administrator can now configure more connec…

Cloud Connector 2.14.0.1 - Fixes (Cloud Foundry and Neo; Changed; 2021-12-02)
Release of Cloud Connector version 2.14.0.1 provides the following bug fixes:
• Audit log selection was not working correctly if the end of the time interval was before noon. This issue has been fixed.
• If an RFC connection is broken while waiting for data from the ABAP system, the processing engine could get into an inconsistent state, causing wrong processing for succeeding requests sent from the cloud application, which eventually could make the cloud application hang. This issue has been fixed.
• Fixed a race condition that could occur if many requests were sent from the cloud application over the same RFC connection, and if the network from the cloud application to the Cloud Connector was very fast. In such a situation, two threads were processing this single RFC connection, causing an inconsistency that could lead to a NullPointerException in RfcBlock.populateRequestStatistics when trying to access the field <performanceStatistics>.
• When configuring an RFC SNC access control entry while the overall SNC configuration is incomplete, the Cloud Connector now reports the configuration error at runtime instead of falling back to plain RFC, if offered by the backend.
Destination Service - Features (Cloud Foundry; New; 2021-11-18)
The Destinations UI in the cockpit lets you configure an X.509 client certificate for automatic token retrieval when using the relevant OAuth-based authentication types. The AuthenticationHeaderProvider Java client library, part of SAP Java Buildpack, was adapted as well. The REST API already supports it.
Note: The CheckConnection functionality as well as automatic token retrieval are not yet supported.

Destination Service - Features (Neo; New; 2021-11-18)
The Destinations UI in the cockpit lets you configure an X.509 client certificate for automatic token retrieval when using the relevant OAuth-based authentication types. The AuthenticationHeaderProvider Java client library, part of the Neo runtimes, was adapted as well.

Destination Service - Bug Fix (Cloud Foundry; Changed; 2021-11-18)
A change has been applied that improves the stability and availability of the service during startup, for example, in case of a rolling update.

Destination Service - OAuth (Cloud Foundry; New; 2021-11-04)
Destinations with ProxyType set to OnPremise can now be configured with OAuth-based authentication types, both via the BTP cockpit UI and the Destination service REST API.

Connectivity Proxy Version 2.4.1 (Cloud Foundry and Kyma; Announcement; 2021-10-21)
Connectivity proxy version 2.4.1 is now available.
• The Connectivity CA is now automatically downloaded during helm deployment.
• Applies critical preparation for the adoption of SAP Cloud PKI.
• Improved server certificate validation towards remote targets.
• Multiple open source software components reported as vulnerable have been replaced.

Connectivity Service - Bug Fix (Cloud Foundry; Changed; 2021-10-07)
An internal service exception could result in a 502 Bad Gateway error on the client side, which was visible in the Cloud Connector logs during automatic reconnect.

Destination Service - Bug Fixes (Cloud Foundry; Changed; 2021-10-07)
• In some cases, a Destination service instance could not be deleted. The operation now works as expected.
• A performance optimization reduces the overall amount of remote calls to XSUAA when the Destination service is called with a user token. As a result, less load is put on XSUAA, and the Destination service responds faster. This fix contributes to improving overall stability.

SAP Java Buildpack (Cloud Foundry; New; 2021-09-13)
SAP Java Buildpack has been updated from version 1.38.0 to 1.39.0.
• The com.sap.cloud.security.xsuaa API has been updated to version 2.10.5.
• The Connectivity API extension has been updated to version 3.12.0.
Destination Service - Authentication Types (Cloud Foundry; New; 2021-08-26)
When using authentication type OAuth2ClientCredentials, you can choose a tenant to perform automated token retrieval that is different from the tenant used to look up the destination configuration.
Note: You cannot use this feature in combination with a passed user context. In this case, the tenant used to perform automated token retrieval is exclusively determined by the user context.

Destinations - Timeout Properties (Neo; New; 2021-08-26)
You can configure timeout properties in the destination configuration, following a documented naming convention. This feature lets you manage timeouts externally, regardless of the cloud application's lifecycle.

Cloud Connector - Security Fixes (Cloud Foundry and Neo; Changed; 2021-08-12)
Multiple vulnerabilities in the Cloud Connector have been fixed. For more information, see SAP security note 3058553.

Destination Service - Client Certificates (Cloud Foundry; New; 2021-08-12)
When generating an X.509 client certificate (as announced on July 1), you can set a password to protect the private key. If you choose a PKCS12 file format, the keystore is also protected by the same password.

Destination Service - HTTP Headers (Neo; New; 2021-08-12)
As of HttpDestination version 2.14.0, any defined HTTP header in the destination configuration (see HTTP Destinations) is processed at runtime, that is, it is added to the request that is sent to the target server.

Connectivity Service - Bug Fix (Neo; Changed; 2021-08-12)
A few stabilization fixes have been applied in the Connectivity service to handle rare cases in which an abnormal amount of metering data was received by the Cloud Connector. This could cause a partial blockage of the service.

Destination Service - Bug Fix (Cloud Foundry; Changed; 2021-08-12)
A few stabilization fixes have been applied on the REST server-side logic, allowing more efficient parallel request handling under load.

Cloud Connector 2.13.2 - Enhancements (Cloud Foundry and Neo; New; 2021-07-15)
Release of Cloud Connector version 2.13.2 introduces the following improvement:
• The HTML validation of login information has been improved.
Cloud Connector 2.13.2 - Fixes (Cloud Foundry and Neo; Changed; 2021-07-15)
Release of Cloud Connector version 2.13.2 provides the following bug fixes:
• A regression prevented links provided in the login info widget (login screen) from being clicked. This issue has been fixed.
• When rewriting a location header for redirect responses (status codes 30x), the lookup to determine the virtual host that replaces the internal one is now case insensitive.
• When using custom attributes in JWTs (JSON Web Tokens) for principal propagation, the value can now be extracted even if represented as a single-element array.
• A slow network could prevent a successful initial push, caused by an unintended timeout. As a consequence, the configuration on the shadow instance could be incomplete, even though the shadow showed a successful connection. This issue has been fixed.
• CPIC traces can now be turned on and off multiple times without the need to restart.
• Issues with restoring a 2.13.x backup into a fresh installation on Linux have been fixed.

Destination Service (Cloud Foundry; New; 2021-07-01)
• Using the Destination service REST API, you can configure a service-side issued X.509 client certificate as part of the SAP Cloud PKI (public key infrastructure) and formally choose automatic renewal of the certificate.
• The same feature is available in the Destinations editor (see Destinations).
• The SAP Java Buildpack now includes Java APIs as part of a client library for the Destination service. You can use the ConnectivityConfiguration Java API to retrieve destination and certificate configurations, and the AuthenticationHeaderProvider Java API to provide prepared HTTP headers holding authentication tokens for various scenarios. For more information, see Destination Java APIs.
Destination Service - Destinations Editor (Cloud Foundry; New; 2021-06-17)
The Destinations UI (editor) in the cockpit lets you create a certificate configuration entry containing an X.509 client certificate that is part of SAP Cloud PKI (public key infrastructure).

Destination Service - Destinations Editor (Cloud Foundry and Neo; New; 2021-06-17)
The Destinations UI (editor) in the cockpit has introduced a warning message as a reminder that the user's personal password should not be used when configuring the authentication type of a destination, for example, BasicAuthentication or OAuth2Password.

Connectivity Service - Bug Fix (Cloud Foundry and Neo; Changed; 2021-06-17)
In rare cases, the Cloud Connector version shown in the cockpit (Connectivity → Cloud Connectors) got lost. This issue has been fixed.

Destination Service - Bug Fix (Cloud Foundry; Changed; 2021-06-17)
An issue has been resolved which prevented updating a service instance (previously created in the cockpit) via the Cloud Foundry CLI.

Connectivity Service - Principal Propagation (Cloud Foundry; New; 2021-06-03)
You can configure a principal propagation scenario using a user access token issued by the Identity Authentication service (IAS), in addition to the scenarios based on XSUAA.

Destination Service - HTTP Destinations (Cloud Foundry; New; 2021-06-03)
A naming convention has been introduced, specifying how to properly configure HTTP headers and queries in a destination configuration. For more information, see HTTP Destinations.

Connectivity Service - Tunnel Connections (Cloud Foundry and Neo; Changed; 2021-06-03)
On the cloud side, the tunnel connection idle threshold has been increased to better match both older (yet supported) and latest Cloud Connector versions (versions lower than or equal to 2.13). This ensures the internal heartbeat mechanism works properly even in some special cases in which short interruptions have been observed.

Destination Service - Bug Fix (Cloud Foundry; Changed; 2021-06-03)
The service could have experienced delays in case of parallel load, which caused requests to be executed unexpectedly slower on a random basis. This issue has been fixed.

Java Connector 3.1.4.0 - Enhancements (Cloud Foundry and Neo; New; 2021-05-20)
Release of Java Connector (JCo) version 3.1.4.0 introduces the following features and improvements:
• JCo now offers the ABAP server processing time for JCo client scenarios via the method JCoThroughput.getServerTime().
• JCoRepository methods were enhanced to ignore trailing blanks in passed structure, table, and function module names which are supposed to be looked up.
• Performance was improved for setting DATE and TIME datatype fields when using strings as input values.
Java Connector 3.1.4.0 - Bug Fixes (Cloud Foundry and Neo; Changed; 2021-05-20)
Release of Java Connector (JCo) version 3.1.4.0 provides the following bug fixes:
• When invoking remote function modules (RFMs) via the t/q/bgRFC protocol to the same JCoDestination in multiple threads simultaneously, used TIDs, queue names, and unit IDs could have been overwritten and used in the wrong thread context, which might have led to data loss in the target system. For example, IDocs that seemed to have been transferred correctly without an error were not stored in the target system, because the TID contract was broken and several IDocs were erroneously sent at the same time with the same TID although different ones had been specified. This issue has been fixed.
Note: This regression bug was introduced with JCo 3.1.3.

Cockpit: Cloud Connectors View (Cloud Foundry; New; 2021-04-22)
The Cloud Connectors view in the SAP BTP cockpit is now also available if your account is running on cloud management tools feature set B. For more information, see Monitoring (section Monitoring from the Cockpit).

Cloud Connector 2.13.1 - Enhancements (Neo and Cloud Foundry; New; 2021-03-25)
Release of Cloud Connector version 2.13.1 introduces the following features and improvements:
• Additional audit log entries for changing the trace level are available.
• You can open the support log assistant directly from the Log And Trace Files screen. For more information, see Troubleshooting, section Log And Trace Files.
• The dependency on ping checks for connections to the LDAP system, which is used for UI authentication, has been minimized to avoid unnecessary role switches in high availability mode.

Cloud Connector 2.13.1 - Fixes (Neo and Cloud Foundry; Changed; 2021-03-25)
Release of Cloud Connector version 2.13.1 provides the following bug fixes:
• Connecting a subaccount with several hundred access control entries is working again.
• With release 2.13.1, restoring a backup created in version 2.13.0 works properly again for a Cloud Connector running on Linux.
• Backups created in version 2.12.5 and older can be restored properly. Failures on restore led to a non-usable Cloud Connector setup.
• The following high availability issues have been fixed:
  • Improved implementation ensures that a high availability setup does not end up in a shadow/shadow situation. This issue could occur under rare circumstances.
  • Errors could occur if subaccounts have a larger number of access control entries.
  • Network issues could prevent the individual replication of configuration changes.
  • After switching roles, connections can now always be reestablished correctly.
• The connection test for LDAPS access control entries now works correctly.
• A memory leak in the comprised netty library has been fixed by upgrading to a newer version.
• A subaccount display issue has been fixed: in version 2.13.0, subaccounts on eu2.hana.ondemand.com were displayed as belonging to region Europe (Rot) instead of Europe (Frankfurt).
Destination Service - Mobile Service Instances (Cloud Foundry; New; 2021-02-25)
You can create a destination configuration pointing to a mobile service instance, resulting in a fully functional destination configuration, including automatic token retrieval for the respective OAuth flows supported by the mobile service.

Destination Service - Consumption from Kyma or Kubernetes Environments (Kyma and other environments; New; 2021-02-25)
Consumption of the Destination service from the Kyma or Kubernetes environments has been officially documented. For more information, see Create and Bind a Destination Service Instance.

Principal Propagation Authentication - OpenID Connect (Cloud Foundry; New; 2021-02-11)
When sending a user principal via the HTTP header X-user-token, you can use any OpenID Connect-compliant OAuth server and a related OpenID access token for passing the user identity. To enable this feature, you must specify either x_user_token.jwks_uri or x_user_token.jwks as an additional attribute, as described in the respective authentication type.
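As a hedged illustration of that header (not an official sample), the snippet below extends the on-premise call pattern shown earlier in this guide: it assumes the connection is routed through the connectivity proxy, and that oidcAccessToken was issued by an OpenID Connect-compliant server whose keys are registered via the x_user_token.jwks_uri or x_user_token.jwks attribute. All parameter values are placeholders.

Sample Code

import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class PrincipalPropagationSketch {

    // Hypothetical helper: forwards the user identity as an OpenID Connect access
    // token when calling an on-premise system through the connectivity proxy.
    public static HttpURLConnection openWithUserToken(String proxyHost, int proxyPort,
                                                      String connectivityAccessToken,
                                                      String oidcAccessToken,
                                                      String virtualUrl) throws Exception {
        Proxy connectivityProxy =
                new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort));
        HttpURLConnection connection =
                (HttpURLConnection) new URL(virtualUrl).openConnection(connectivityProxy);
        // Authenticates the application against the connectivity proxy.
        connection.setRequestProperty("Proxy-Authorization", "Bearer " + connectivityAccessToken);
        // Passes the user principal as an OpenID Connect access token.
        connection.setRequestProperty("X-user-token", oidcAccessToken);
        return connection;
    }
}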
OAuth - X.509 Client Certificates (Cloud Foundry; New; 2021-02-11)
You can use X.509 client certificates for OAuth flows supported by the respective authentication types, see OAuth with X.509 Client Certificates.

"Find Destination" REST API - Skip Credentials (Cloud Foundry; New; 2021-02-11)
The "Find Destination" REST API endpoint has been enhanced with a new feature enabling the client application to initiate a skip of credentials in the returned response. This parameter is useful especially for OAuth destinations (such as OAuth2 User Token Exchange, OAuth2 JWT Bearer, OAuth2 SAML Bearer Assertion).

Destinations - Authentication Types (Cloud Foundry; Changed; 2021-02-11)
The property SystemUser is deprecated. The cockpit now shows an alert if this feature is still in use, suggesting what to do instead. Alternatives for technical user authentication are Basic Authentication, OAuth2 Client Credentials, or Client Certificate Authentication. See also OAuth SAML Bearer Assertion Authentication.
Note: In general, we recommend that you work on behalf of specific (named) users rather than working with a technical user. To extend an OAuth access token's validity, consider using an OAuth refresh token.

Password Storage API (Neo; Changed; 2021-01-28)
The Password Storage API documentation on SAP API Business Hub has been moved from the deprecated API package to SAP Cloud Platform Credential Store.

Cloud Connector 2.13.0 - Enhancements (Neo and Cloud Foundry; New; 2021-01-14)
Release of Cloud Connector version 2.13.0 introduces the following features and improvements:
• Cloud Connector 2.13 is based on a different runtime container. Up to version 2.12.x, the JavaWeb 1.x runtime based on Tomcat 7 was used. It now switches to JavaWeb 3.x based on Tomcat 8.5. As a consequence, the internal structure has changed and works differently. The upgrade will adjust these changes as much as possible for versions 2.9 and higher.
• Linux on ppc64 little endian (ppc64le) is added as a supported platform for the Cloud Connector. For more information, see Prerequisites.
• A set of new configuration REST APIs has been added. For more information, see Configuration REST APIs.
• An additional screen in the subaccount-specific monitoring provides usage statistics of the various access control entries. Alternatively, you can access the same data using a new monitoring REST API. For more information, see Monitoring.
• For access control entries of type TCP, you can configure a port range instead of a single port. For more information, see Configure Access Control (TCP).
• You can configure a widget that shows information about the Cloud Connector on the login screen. For more information, see Configure Login Screen Information.
• Improved high availability communication supersedes applying SAP note 2915578.
• For new Cloud Connector versions, a notification and alert is shown to help you schedule the update.
• The Cloud Connector supports JSON Web Tokens (JWTs) based on OpenID Connect (OIDC) for principal propagation authentication (Cloud Foundry environment).

Cloud Connector 2.13.0 - Fixes (Neo and Cloud Foundry; Changed; 2021-01-14)
Release of Cloud Connector version 2.13.0 provides the following bug fixes:
• When doing a rollover at midnight, the initial audit log entry for a new file was not created correctly and the audit log checker wrongly assessed such files as corrupted. This issue has been fixed.
• Incorrect host information could be used in audit logs related to access control audit entries. This issue has been fixed.

Cloud Connector - Subaccount Configuration (Cloud Foundry; New; 2021-01-14)
For a subaccount that uses a custom identity provider (IDP), you can choose this IDP for authentication instead of the (default) SAP ID service when configuring the subaccount in the Cloud Connector. For more information, see Use a Custom IDP for Subaccount Configuration.
2020
Java Connector (JCo) - Client Certificates (Neo and Cloud Foundry; New; 2020-12-17)
JCo provides the new property jco.client.tls_client_certificate_logon to support the usage of a TLS client certificate for logging on to an ABAP system via WebSocket RFC. For more information, see User Logon Properties (Cloud Foundry environment) and WebSocket RFC.

HTTP Destinations - Authentication Types (Cloud Foundry; Deprecated; 2020-12-17)
Authentication type SAP Assertion SSO is deprecated. It will soon be removed as a feature from the Destination service. Use Principal Propagation SSO Authentication instead, which is the recommended mechanism for establishing single sign-on (SSO).

HTTP Destinations - Authentication Types (Neo; Deprecated; 2020-12-17)
Authentication type SAP Assertion SSO is deprecated. Use Principal Propagation SSO Authentication instead, which is the recommended mechanism for establishing single sign-on (SSO).

HTTP Destinations - Destination Properties (Cloud Foundry; Announcement; 2020-12-03)
The destination property SystemUser for the authentication types OAuth SAML Bearer Assertion Authentication and SAP Assertion SSO Authentication will be removed soon. More information on timelines and required actions will be published in the release notes at a later stage.

JCo Runtime - Enhancement (Neo and Cloud Foundry; New; 2020-11-05)
JCo Runtime 3.1.3.0 introduces the following enhancement: if the backend is known to be new enough, JCo does not check for the existence of RFC_METADATA_GET, thus avoiding the need to provide additional authorizations for the repository user.

JCo Runtime - Bug Fix (Neo and Cloud Foundry; Changed; 2020-11-05)
JCo Runtime 3.1.3.0 provides the following bug fix: up to JCo 3.1.2, the initial value for fields of type STRING and XSTRING was null. Since the initial value check in ABAP is different, JCo now behaves the same way and uses an empty string and an empty byte array, respectively.
Destination Service - Automatic Token Retrieval (Cloud Foundry; New; 2020-11-05)
The Destination service offers a new feature related to the automatic token retrieval functionality, which lets the destination administrator define HTTP headers and query parameters as additional configuration properties, used at runtime when requesting the token service to obtain an access token. See HTTP Destinations.

Documentation - Principal Propagation Scenarios (Cloud Foundry; Changed; 2020-10-22)
The documentation of principal propagation (user propagation) scenarios provides improved information on the basic concept and guidance on how to set up different scenarios. See Principal Propagation.

Cloud Connector 2.12.5 - Enhancements (Cloud Foundry; New; 2020-10-22)
Release of Cloud Connector version 2.12.5 introduces the following improvements:
• For principal propagation scenarios, custom attributes stored in xs.user.attributes of the JWT (JSON Web Token) are now accessible for the subject pattern. See Configure a Subject Pattern for Principal Propagation.
• Improved resolving of DNS names with multiple IP addresses by adding randomness to the choice of the IP to use. This is relevant for many connectivity endpoints in SAP Cloud Platform, Cloud Foundry environment.

Cloud Connector 2.12.5 - Fixes (Neo and Cloud Foundry; Changed; 2020-10-22)
Release of Cloud Connector version 2.12.5 provides the following bug fixes:
• After actively performing a master-shadow switch for a disaster recovery subaccount, a zombie connection could cause a timeout of all application requests to on-premise systems. This issue has been fixed.
• When refreshing the subaccount certificate in a high availability setup, transferring the changed certificate to the shadow was not immediately triggered, and the updated certificate could get lost. This issue has been fixed.
• If many RFC connections were canceled at the same time, the Cloud Connector could crash in the native layer, causing the process to die. This issue has been fixed.
• The LDAP configuration test now supports all possible configuration parameters.

Connectivity Service - Service Instances - Quota Management (Cloud Foundry; Changed; 2020-10-08)
When using service plan "lite", quota management is no longer required for this service. From any subaccount you can consume the service using service instances without restrictions on the instance count. Previously, access to service plan "lite" was granted via entitlement and quota management of the application runtime. It has now become an integral service offering of SAP Cloud Platform to simplify its usage. See also Create and Bind a Connectivity Service Instance.
Destination Service - Service Instances - Quota Management (Cloud Foundry; Changed; 2020-10-08)
When using service plan "lite", quota management is no longer required for this service. From any subaccount you can consume the service using service instances without restrictions on the instance count. Previously, access to service plan "lite" was granted via entitlement and quota management of the application runtime. It has now become an integral service offering of SAP Cloud Platform to simplify its usage. See also Create and Bind a Destination Service Instance.

SAP Java Buildpack (Cloud Foundry; Changed; 2020-09-24)
The SAP Java Buildpack has been updated from 1.27.3 to 1.28.0.
• TomEE Tomcat has been updated from 7.0.104 to 7.0.105.
Note: The previous activation process for the JCo component is deprecated and will expire after a transition period.

Destination Service - Error Handling (Cloud Foundry; Changed; 2020-09-10)
Error handling has been improved for updating service instances via the Cloud Foundry CLI and the cloud cockpit when providing the configuration JSON data.

Connectivity Service - Bug Fix (Neo; Changed; 2020-09-10)
A synchronization issue has been fixed on the cloud side that in very rare cases could lead to a zombie tunnel from the Cloud Connector to SAP Cloud Platform, which required reconnecting the Cloud Connector.

Destination Service - Bug Fix (Cloud Foundry; Changed; 2020-09-10)
During Check Connection processing of a destination with basic authentication, the Destination service now uses the user credentials for both the HTTP HEAD and HTTP GET requests to verify the connection on HTTP level.

Destination Service - Bug Fix (Cloud Foundry; Changed; 2020-08-13)
Using authentication type OAuth2SAMLBearerAssertion, an issue could occur when adding the user's SAML group attributes into the resulting SAML assertion that is sent to the target token service. This issue has been fixed.

Destination Service REST API - Pagination Feature (Cloud Foundry; Changed; 2020-08-13)
The REST API pagination feature provides improved error handling in case of issues with the pagination, for example, if an invalid page number is provided.

HttpDestination Library - New Version (Neo; New; 2020-07-30)
The HttpDestination v2 library has been officially released in the Maven Central Repository. It enables the usage in Tomcat and TomEE-based runtimes the same way as in the deprecated JavaWeb and Java EE 6 Web Profile runtimes. See also HttpDestination Library.

Destination Service - Bug Fix (Cloud Foundry; Changed; 2020-07-30)
An error handling issue has been fixed in the Destination service, related to the recently introduced SAP Assertion SSO authentication type. If wrong input is provided, you can now see the error properly and recover from it.

Destinations - Authentication Types (Cloud Foundry; New; 2020-07-02)
You can use authentication type OAuth2JWTBearer when configuring a destination. It is a simplified version of the authentication type OAuth2UserTokenExchange and represents the official OAuth grant type for exchanging OAuth tokens. See HTTP Destinations.

Destination Service - HTTP Header (Cloud Foundry; New; 2020-07-02)
The Destination service provides a prepared HTTP header that simplifies application and service development. See HTTP Destinations (code samples).

Destination Service - Bug Fix (Cloud Foundry; Changed; 2020-07-02)
A concurrency issue in the Destination service, related to parallel auth token retrieval in the token cache functionality, could result in partial request failures. This issue has been fixed.
HTTP Destinations - Authentication Types (Cloud Foundry; New; 2020-06-18)
The Cloud Foundry environment supports SAP Assertion SSO as an authentication type for configuring destinations in the Destination service. See HTTP Destinations.

Destination Service REST API (Cloud Foundry; New; 2020-06-04)
The "Find Destination" REST API now includes the scopes of the automatically retrieved access token in the response that is returned to the caller. See "Find Destination" Response Structure.

Destinations for Service Instances (Cloud Foundry; New; 2020-06-04)
For subscription-based scenarios, you can use an automated procedure to create a destination that points to your service instance. See Managing Destinations.

Connectivity Service - Bug Fix (Neo and Cloud Foundry; Changed; 2020-05-21)
In rare cases, establishing a secure tunnel between the Cloud Connector (version 2.12.3 or older) and the Connectivity service could cause an issue that requires manually disconnecting and reconnecting the Cloud Connector. This issue has been fixed. The fix requires Cloud Connector version 2.12.4 or higher.

Cloud Connector 2.12.4 - Features (Neo and Cloud Foundry; New; 2020-05-07)
Release of Cloud Connector version 2.12.4 introduces the following features and enhancements:
• You can activate the SSL trace in the Cloud Connector administration UI also for the shadow instance.

Cloud Connector 2.12.4 - Fixes (Neo and Cloud Foundry; Changed; 2020-05-07)
Release of Cloud Connector version 2.12.4 provides the following bug fixes:
• You can edit and delete domain mappings in the Cloud Connector administration UI correctly.
• The REST API no longer returns an empty configuration.
• REST API DELETE operations do not require setting a content-type application/json to function properly.
• If more than 2000 audit log entries match a selection, redefining the search and getting a shorter list now works as expected.
• A potential leak of HTTP backend connections has been closed.

Connectivity Service - Bug Fix (Cloud Foundry; Changed; 2020-03-26)
A fix has been applied in the Connectivity service internal load balancers, enabling the sending of TCP keep-alive packets on client and server side. This change mainly affects SOCKS5-based communication scenarios.

Destination Service - Service Instances (Cloud Foundry; New; 2020-03-26)
You can create a service instance specifying an update policy. This allows you to avoid name conflicts with existing destinations. See Create and Bind a Destination Service Instance.

Cockpit - Destination Management (Cloud Foundry; New; 2020-03-12)
The Destinations editor in the cockpit is available for accounts running on cloud management tools feature set B. See Managing Destinations.

Connectivity Service - Bug Fix (Neo; Changed; 2020-03-12)
When creating or editing a destination with authentication type OAuth2ClientCredentials in the cockpit, the parameter Audience could not be added as an additional property. This issue has been fixed.
Cloud Connector 2.12.3 - Features (Neo and Cloud Foundry; New; 2020-02-27)
Release of Cloud Connector version 2.12.3 introduces the following features and enhancements:
• When using the SAP JVM as runtime, the thread dump includes additional information about currently executed RFC function modules.
• The hardware monitor includes a Java heap history, showing the usage in the last 24 hours.
• If you are using the file scc_daemon_extension.sh to extend the daemon in a Linux installation, the content is included in the initialization section of the daemon. This lets you make custom extensions to the daemon that survive an upgrade. See Installation on Linux OS, section Installer Scenario.

Cloud Connector 2.12.3 - Fixes (Neo and Cloud Foundry; Changed; 2020-02-27)
Release of Cloud Connector version 2.12.3 provides the following bug fixes:
• When switching roles between master and shadow instance in a high availability setup, the switch is no longer blocked by active RFC function module invocations.
• A fix in the backend HTTP connection handling prevents issues when the backend tries to send the HTTP response before completely reading the HTTP request.
• When sending large amounts of data to an on-premise system, and using RFC with a network that provides large bandwidth, the Cloud Connector could fail with the error message "Received invalid block with negative size". This issue has been fixed.
• The Cloud Connector admin UI now shows the correct user information for installed Cloud Connector instances in the About window.
• Fixes in the context of disaster recovery:
  • The location ID is now handled properly when setting it after adding the recovery subaccount.
  • Application trust settings and application-specific connections are applied in the disaster case.
  • Principal propagation settings are applied in the disaster case.

JCo Runtime - WebSocket RFC (Neo and Cloud Foundry; New; 2020-02-13)
The JCo runtime in SAP Cloud Platform lets you use WebSocket RFC (RFC over Internet) with ABAP servers as of S/4HANA (on-premise) version 1909. In the RFC destination configuration, this is reflected by new configuration properties and by the option to choose between different proxy types.

Connectivity Service for Trial Accounts - Bug Fix (Cloud Foundry; Changed; 2020-01-30)
The Connectivity service is operational again for trial accounts. A change in the Cloud Foundry Core component caused the service not to be accessible by applications hosted in Diego cells that are dedicated for trial usage in a separate VPC (virtual private cloud) account. This issue has been fixed.
2019
Cloud Connector 2.12.2 - Features (Neo and Cloud Foundry; New; 2019-12-05)
Release of Cloud Connector version 2.12.2 introduces the following features and enhancements:
• You can turn on the TLS trace from the Cloud Connector administration UI instead of modifying the props.ini file on OS level. See Troubleshooting.
• The status of the used subaccount certificate is shown on the Subaccount overview page of the Cloud Connector administration UI, in addition to expiring certificates shown in the Alerting view. See Establish Connections to SAP Cloud Platform.

Cloud Connector 2.12.2 - Fixes (Neo and Cloud Foundry; Changed; 2019-12-05)
Release of Cloud Connector version 2.12.2 provides the following bug fixes:
• Subject values for certificates requiring escaping are treated correctly.
• Establishing a connection to the master is now possible when being logged on to the shadow with a user that has a space in its name.
• Performance statistics could show too long total execution times. This issue has been fixed.
• IP address changes for the Connectivity service hosts are recognized properly.
• The Cloud Connector could crash on Windows when trying to enable the payload trace with the 4-eyes principle without the required user permissions. This issue has been fixed.

Connectivity Service - Bug Fix (Cloud Foundry; Changed; 2019-11-21)
Applications sending a significant amount of data payload during OAuth authorization processing could cause an out-of-memory error on the Connectivity service side. This issue has been fixed.

Region Europe (Frankfurt) - Change of Connectivity Service Hosts (Neo; Announcement; 2019-10-03)
The following IP addresses of the Connectivity service hosts for region Europe/Frankfurt (eu2.hana.ondemand.com) will change on 26 October 2019:
• connectivitynotification.eu2.hana.ondemand.com: from 157.133.70.140 (current) to 157.133.206.143 (new)
• connectivitycertsigning.eu2.hana.ondemand.com: from 157.133.70.132 (current) to 157.133.205.174 (new)
• connectivitytunnel.eu2.hana.ondemand.com: from 157.133.70.141 (current) to 157.133.205.233 (new)

Destination Service - Connection Check (Cloud Foundry; Changed; 2019-09-26)
Using the Destinations editor in the cockpit, you can check connections also for on-premise destinations. See Check the Availability of a Destination.

Cloud Connector - Java Runtime (Neo and Cloud Foundry; Announcement; 2019-09-13)
The support for using the Cloud Connector with Java runtime version 7 will end on December 31, 2019. Any Cloud Connector version released after that date may contain Java byte code requiring at least a JVM 8.

Cloud Connector 2.12.1 - Features (Neo and Cloud Foundry; New; 2019-08-15)
Release of Cloud Connector version 2.12.1 introduces the following features and enhancements:
• Subject Alternative Names are separated from the subject definition and provide enhanced configuration options. You can configure complex values easily when creating a certificate signing request. See Exchange UI Certificates in the Administration UI.
• In a high availability setup, the master instance detection no longer switches automatically if the configuration between the two instances is inconsistent.
• Disaster recovery switch back to the main subaccount is periodically checked (if not successful) every 6 hours.
• Communication to on-premise systems supports SNI (Server Name Indication).
Con- Inte- Neo Cloud Release of Cloud Connector version 2.12.1 provides the following Chang 2019-0
nectiv- gration Cloud Connec- bug fixes: ed 8-15
ity Suite Foun- tor 2.12.1
- Fixes
• The communication between master and shadow instance
dry
no longer ends up in unusable clients that show 403 results
due to CSRF (Cross-Site Request Forgery) failures, which
could cause undesired role switches.
• When restoring a backup, the administrator password check
works with all LDAP servers.
• The LDAP configuration test utility properly supports secure
communication.
• The Refresh Subaccount Certificate dialog is no longer
hanging when the refresh action fails due to some authenti-
cation or authorization issue.
Con- Inte- Cloud Destina- You can use the scope destination attribute for the OAuth- New 2019-0
nectiv- gration Foun- tion based authentication types OAuth2ClientCredentials, 8-15
ity Suite dry Service - OAuth2UserTokenExchange and
Scope OAuth2SAMLBearerAssertion. This additional attribute
Attribute provides flexibility on destination configuration level, letting you
for specify what scopes are selected when the OAuth access token
OAuth- is automatically retrieved by the service.
based
Authenti- See HTTP Destinations.
cation
Types
Connectivity | Integration Suite | Neo | JCo Runtime for SAP Cloud Platform - Features | New | 2019-07-18
• Additional APIs have been added to JCoBackgroundUnitAttributes. See API documentation for details.
• If a structure or table contains only char-like fields, new APIs let you read or modify all of them at once for the structure or the current table row. See API documentation of JCoTable and JCoStructure.

Connectivity | Integration Suite | Neo | JCo Runtime for SAP Cloud Platform - Fixes | Changed | 2019-07-18
• qRFC and tRFC requests sent to an ABAP system by JCo can be monitored again by AIF.
• Structure fields of type STRING are no longer truncated if there is a white space at the end of the field.

Connectivity | Integration Suite | Cloud Foundry | Connectivity Service - JCo Multitenancy | New | 2019-06-20
The Connectivity service supports multitenancy for JCo applications. This feature requires a runtime environment with SAP Java Buildpack version 1.9.0 or higher. See Scenario: Multitenancy for JCo Applications (Advanced).

Connectivity | Integration Suite | Cloud Foundry | Cloud Cockpit - Cloud Connector View | New | 2019-04-25
The Cloud Connector view is available also for Cloud Foundry regions. It lets you see which Cloud Connectors are connected to a subaccount.

Connectivity | Integration Suite | Neo, Cloud Foundry | Cloud Connector 2.12 - Features | New | 2019-04-25
Release of Cloud Connector version 2.12 introduces the following features and enhancements:
• The administration UI is now accessible not only with an administrator role, but also with a display and a support role. See Configure Named Cloud Connector Users and Use LDAP for Authentication.
• For HTTP access control entries, you can
  • allow a protocol upgrade, e.g. to WebSockets, for exposed resources. See Limit the Accessible Services for HTTP(S).
  • define which host (virtual or internal) is sent in the host header. See Expose Intranet Systems, Step 8.
• A disaster recovery subaccount in disaster recovery mode can be converted into a standard subaccount, if a disaster recovery region replaces the original region permanently. See Convert a Disaster Recovery Subaccount into a Standard Subaccount.
• A service channel overview lets you check at a glance which server ports are used by a Cloud Connector installation. See Service Channels: Port Overview.
• Important subaccount configuration can be exported, and imported into another subaccount. See Copy a Subaccount Configuration.
• An LDAP authentication configuration check lets you analyze and fix configuration issues before activating the LDAP authentication. See Use LDAP for Authentication.
• You can use different user roles to access the Cloud Connector configuration REST APIs. See Configuration REST APIs.
• REST APIs for shadow instance configuration have been added. See Shadow Instance Configuration.
• You can define scenarios for resources. Such a scenario can be exported, and imported into other hosts. See Configure Accessible Resources.
Connectivity | Integration Suite | Neo, Cloud Foundry | Cloud Connector 2.12 - Fixes | Changed | 2019-04-25
Release of Cloud Connector version 2.12 provides the following bug fixes:
• The SAN (subjectAlternativeName) usage in certificates can be defined in a better way and is stored correctly in the certificate. See Exchange UI Certificates in the Administration UI.
• IllegalArgumentException does not occur any more in HTTP processing, if the backend closes a connection and data are streamed.
• DNS caching is now recognized in reconnect situations if the IP of a DNS entry has changed.
• SNC with load balancing now works correctly for RFC SNC-based access control entries.
• A master-master situation is also recognized if, at startup of the former master instance, the new master (the former shadow instance) is not reachable.
• Solution management model generation works correctly for a shadow instance.
• The daemon is started properly on SLES 12 standard installations at system startup.

Connectivity | Integration Suite | Cloud Foundry | Destination Service - Authentication Types | New | 2019-04-11
Authentication type OAuth2SAMLBearerAssertion provides two different types of Token Service URL:
• Dedicated: used in the context of a single tenant, or
• Common: used in the context of multiple tenants.
For type Common, the tenant subdomain is automatically set to the target Token Service URL.

Connectivity | Integration Suite | Neo, Cloud Foundry | Connectivity Service - Fix | Changed | 2019-04-11
When an on-premise system closed a connection that uses an RFC or SOCKS5 proxy, the Connectivity service kept the connection to the cloud application alive. This issue has been fixed. The connection is now always closed right after sending the response.

Connectivity | Integration Suite | Cloud Foundry | Connectivity Service - Protocols | New | 2019-03-14
The Connectivity service supports TCP connections to on-premise systems, exposing a SOCKS5 proxy to cloud applications. This feature follows the concept of binding the credentials of a Connectivity service instance. See Using the TCP Protocol for Cloud Applications.

Connectivity | Integration Suite | Neo | Connectivity Service - Fix | Changed | 2019-03-14
After receiving an on-premise system response with HTTP header Connection: close, the Connectivity service kept the HTTP connection to the cloud application alive. This issue has been fixed. The connection is now always closed right after sending the response.

Connectivity | Integration Suite | Neo | Cloud Connector - Certificate Update | Announcement | 2019-02-28
For the Connectivity service (Neo environment), a new, region-specific certificate authority (X.509 certificate) is being introduced. If you use the Cloud Connector for on-premise connections to the Neo environment, you must import the new certificate authority into your trust configuration.

Connectivity | Integration Suite | Cloud Foundry | Destination Service - Authentication Types | New | 2019-02-14
The new authentication type OAuth2UserTokenExchange lets your applications use an automated exchange of user access tokens when accessing other applications or services. The feature supports single-tenant and multi-tenant scenarios. See OAuth User Token Exchange Authentication.

Connectivity | Integration Suite | Neo | RFC - Stateful Sequences | Changed | 2019-01-31
You can make a stateful sequence of function module invocations work across several request/response cycles. See Invoking ABAP Function Modules via RFC.

Connectivity | Integration Suite | Neo, Cloud Foundry | Cloud Connector 2.11.3 | Changed | 2019-01-15
A security note for Cloud Connector version 2.11.3 has been issued. See SAP note 2696233.

Connectivity | Integration Suite | Cloud Foundry | Protocols - RFC Communication | New | 2019-01-17
You can use the RFC protocol to set up communication with on-premise ABAP systems for applications in the Cloud Foundry environment. This feature requires a runtime environment with SAP Java Buildpack version 1.8.0 or higher. See Invoking ABAP Function Modules via RFC.

Connectivity | Integration Suite | Cloud Foundry | Destinations - Renew Certificates | New | 2019-01-17
A button in the Destinations editor lets you update the validity period of an X.509 certificate. See Set up Trust Between Systems.
2018
Each entry below is listed as: Component | Capability | Technical Environment | Title | Type | Available as of, followed by its description.
Connectivity | Integration | Neo | Connectivity Service - Performance | Changed | 2018-12-20
A change in the SAP Cloud Platform Connectivity service improves performance of data upload (on-premise to cloud) and data download (cloud to on-premise) up to 4 times and 15-30 times respectively.

Connectivity | Integration | Neo | Connectivity Service - Resilience | Changed | 2018-12-20
The Connectivity service has a better protection against zombie connections, which improves resilience and overall availability for the cloud applications consuming it.

Connectivity | Integration | Neo | Password Storage Service | New | 2018-12-06
A Password Storage REST API is available in the SAP API Business Hub, see Password Storage (Neo Environment).

Connectivity | Integration | Neo | Destination Configuration Service | New | 2018-12-06
A Destination Configuration service REST API is available in the SAP API Business Hub.

Connectivity | Integration | Cloud Foundry | Destination Service | New | 2018-12-06
A Destination service REST API is available in the SAP API Business Hub.

Connectivity | Integration | Neo | JCo Runtime for SAP Cloud Platform - Fixes | Changed | 2018-12-06
• When using JCoRecord.fromJSON() for a structure parameter, the data is now always sent to the backend system. Also, you do not need to append the number of provided rows for table parameters before parsing the JSON document any more.
• Depending on the configuration of certain JCo properties, an internally managed connection pool could throw a JCoException (error group JCO_ERROR_RESOURCE). In a thread waiting for a free connection from this pool, an error message then erroneously reported that the pool was exhausted. This error situation could occur if the used destination was not configured with the property jco.destination.max_get_client_time set to 0 and the destination's jco.destination.peak_limit value was set higher than the jco.destination.pool_capacity. This issue has been fixed.

Connectivity | Integration | Neo | JCo Runtime for SAP Cloud Platform - Features | Changed | 2018-12-06
Support of the RFC fast serialization. Depending on the exchanged parameter and data types, the performance improvements for RFC communication can reach multiple factors. See SAP note 2372888 (prerequisites) and Parameters Influencing Communication Behavior (JCo configuration in SAP Cloud Platform).

Connectivity | Integration | Neo | JCo Runtime for SAP Cloud Platform - Information | Changed | 2018-12-06
Local runtimes on Windows must install the VS 2013 redistributables for x64, instead of VS 2010.
Connectivity | Integration | Neo, Cloud Foundry | Cloud Connector Fixes | Changed | 2018-12-06
Release of Cloud Connector 2.11.3:
• An issue in RFC communication could cause the trace entry com.sap.scc.jni.CpicCommunicationException: no SAP ErrInfo available when the network is slow. This issue has been fixed.
• The Windows service no longer runs in error 1067 when stopped by an administrator.
• In previous releases, the connection between a shadow and a master instance occasionally failed at startup and produced an empty error message. This issue has been fixed.
• The Cloud Connector does not cache Kerberos tokens in the protocol handler any more, as they are one-time tokens and cannot be reused.
• For HTTP access control entries, you can configure resources containing a # character.

Connectivity | Integration | Neo, Cloud Foundry | Cloud Connector Enhancements | Changed | 2018-12-06
Release of Cloud Connector 2.11.3:
• If the user sapadm exists on a system, the installation on Linux assigns it to the sccgroup, which is a prerequisite for solution management integration to work properly, see Configure Solution Management Integration [page 622].
• Restoring a backup has been improved. See Configuration Backup [page 626].
• The HTTP session store size has been reduced. You can handle higher loads with a given heap size.
• Cipher suite configuration has been improved. Also, there is a new security status entry for cipher suites, see Recommendations for Secure Setup [page 382].

Connectivity | Integration | Neo | HTTP Destinations | Changed | 2018-10-11
The OAuth2 Client Credentials grant type is supported by the Destinations editor in the SAP Cloud Platform cockpit as well as by the client Java APIs ConnectivityConfiguration, AuthenticationHeaderProvider and HttpDestination, available in SAP Cloud Platform Neo runtimes.

Connectivity | Integration | Cloud Foundry | User Propagation | Changed | 2018-09-27
The Connectivity service supports the SaaS application subscription flow and can be declared as a dependency in the get dependencies subscription callback, also via MTA (multi-target)-bundled applications.

Connectivity | Integration | Neo, Cloud Foundry | Cloud Connector 2.11.2 | Changed | 2018-08-16
Release of Cloud Connector 2.11.2:
• SNC configuration now provides the value of the environment variable SECUDIR, which you need for the usage of the SAP Cryptographic Library (SAPCRYPTOLIB). See Initial Configuration (RFC).
• On Linux, the RPM (Red Hat Package Manager) now ensures that the configuration of the interaction with the SAP Host Agent (used for the Solution Manager integration) is adjusted. See Configure Solution Management Integration.
• The Cloud Connector shadow instance now provides a configuration option for the connection and request timeout that may occur during health check against the master instance. See Master and Shadow Administration.

Connectivity | Integration | Neo, Cloud Foundry | Cloud Connector 2.11.2 | Changed | 2018-08-16
Fixes of Cloud Connector 2.11.2:
• In a high availability setup, the switch from the master instance to the shadow instance occasionally caused communication errors towards on-premise systems. This issue has now been fixed.
• You can now import multiple certificates with the same subject to the trust store. Details about expiration date and issuer are displayed in the tool tip. See Set Up Trust, section Trust Store.
• You can now configure also the MOC (Multiple Origin Composition) OData service paths as resources.
• The Location header is now adjusted correctly according to your access control settings in case of a redirect.
• Principal propagation now also works with SAML assertions that contain an empty attribute element.
• SAP Cloud Platform applications occasionally got an HTTP 500 (internal server error) response when an HTTP connection was closed. The applications are now always informed properly.

Connectivity | Integration | Neo | HttpDestination Library | Changed | 2018-08-16
The SAP HttpDestination library (available in the SDK and cloud runtime "Java EE 6 Web Profile") now creates Apache HttpClient instances which work with strict SNI (Server Name Indication) servers. Use cases with strict SNI configuration on the server side will no longer get the error message Failure reason: "peer not authenticated", that was raised either at runtime or while performing a connection test via the SAP Cloud Platform cockpit Destinations editor (Check Connection function).
New
The destination service (Beta) is available in the Cloud Foundry environment. See Consuming the Destination Service
[page 243].
Enhancement
Cloud Connector
• The URLs of HTTP requests can now be longer than 4096 bytes.
• SAP Solution Manager can be integrated with one click of a button if the host agent is installed on a Cloud
Connector machine. See the Solution Management section in Monitoring [page 664].
• The limitation that only 100 subaccounts could be managed with the administration UI has been removed. See
Managing Subaccounts [page 404].
Fix
Cloud Connector
• The regression of 2.10.0 has been fixed, as principal propagation now works for RFC.
• The cloud user store works with group names that contain a backslash (\) or a slash (/).
• Proxy challenges for NT LAN Manager (NTLM) authentication are ignored in favor of Basic authentication.
• The back-end connection monitor works when using a JVM 7 as a runtime of Cloud Connector.
Enhancement
Cloud Connector
Fix
Cloud Connector
• There is no longer a bottleneck that could lengthen the processing times of requests to exposed back-end systems after many hours under high load when using principal propagation, connection pooling, and many concurrent sessions.
• Session management no longer terminates active sessions early in principal propagation scenarios.
• On Windows 10, hardware metering in virtualized environments shows hard disk and CPU data.
New
In case the remote server supports only TLS 1.2, use this property to ensure that your scenario will work. As TLS 1.2 is more secure than TLS 1.1 (the default version used by HTTP destinations), consider switching to TLS 1.2.
Enhancement
The release of SAP Cloud Platform Cloud Connector 2.9.1 includes the following improvements:
• UI renovations based on collected customer feedback. The changes include rounding off rough edges, fixes of wrong or odd behaviors, and adjustments of controls. For example, in some places tables were replaced by sap.ui.table.Table for a better experience with many entries.
• You can trigger the creation of a thread dump from the Log and Trace Files view.
• The connection monitor graphic for idle connections was made easier to understand.
Fix
• When configuring authentication for LDAP, the alternate host settings are no longer ignored.
• The email configuration for alerts now correctly processes the user and password for access to the email server.
• Some servers used to fail to process HTTP requests when using the HTTP proxy approach (HTTP Proxy for
On-Premise Connectivity) on the SAP Cloud Platform side.
• A bottleneck was removed that could lengthen the processing times of requests to exposed back-end systems
under high load when using principal propagation.
• The Cloud Connector accepts passwords that contain the '§' character when using authentication-mode password.
Enhancement
Update of JCo runtime for SAP Cloud Platform. See Connectivity [page 4].
• 2016
• 2015
• 2014
• 2013
1.1.3 Administration
Manage destinations and authentication for applications in the Cloud Foundry environment.
Task | Description
• Managing Destinations [page 59]: Manage HTTP destinations for Cloud Foundry applications in the SAP BTP cockpit.
• HTTP Destinations [page 86]: You can choose from a broad range of authentication types for HTTP destinations, to meet the requirements of your specific communication scenario.
• RFC Destinations [page 157]: Use RFC destinations to communicate with an on-premise ABAP system via the RFC protocol.
• Principal Propagation [page 168]: Use principal propagation (user propagation) to securely forward cloud users to a back-end system (single sign-on).
• Set up Trust Between Systems [page 172]: Download and configure X.509 certificates as a prerequisite for user propagation from the Cloud Foundry environment to the Neo environment or to a remote system outside SAP BTP, like S/4HANA Cloud, C4C, SuccessFactors, and others.
• Multitenancy in the Connectivity Service [page 203]: Manage destinations for multitenancy-enabled applications that require a connection to a remote service or on-premise application.
• Create and Bind a Connectivity Service Instance [page 206]: To use the Connectivity service in your application, you must first create and bind an instance of the service.
• Create and Bind a Destination Service Instance [page 209]: To use the Destination service in your application, you must first create and bind an instance of the service.
• Destination Fragments [page 213]: Use destination fragments to override and/or extend destination properties as part of the “Find Destination” call.
To manage destinations for your application, choose the procedure that best fits your requirements.
There are various ways to manage destinations. Each method is characterized by different prerequisites and
limitations. Before choosing a method, you should evaluate them and decide which one is the most appropriate
for your particular scenario. The following table compares the available methods:
• Using the Destinations Editor in the Cockpit [page 60]
Use the Destinations editor in the SAP BTP cockpit to configure HTTP, RFC or mail destinations in the Cloud
Foundry environment.
The Destinations editor lets you manage destinations on subaccount or service instance level.
• Connect your Cloud Foundry application to the Internet (via HTTP), as well as to an on-premise system
(via HTTP or RFC).
• Send and retrieve e-mails, configuring a mail destination.
• Create a destination for subscription-based scenarios, pointing to your service instance. For more
information, see Destinations Pointing to Service Instances [page 71].
Prerequisites
Restrictions
• A destination name must be unique for the current application. It must contain only alphanumeric
characters, underscores, and dashes. The maximum length is 200 characters.
• The currently supported destination types are HTTP, RFC and MAIL.
• HTTP Destinations [page 86] - provide data communication via the HTTP protocol and are used for
both Internet and on-premise connections.
• RFC Destinations [page 157] - make connections to ABAP on-premise systems via RFC protocol using
the Java Connector (JCo) as API.
• Mail destinations - specify an e-mail provider for sending and retrieving e-mails.
Tasks
Related Information
Access the Destinations Editor in the SAP BTP cockpit to create and manage destinations in the Cloud Foundry
environment.
• Subaccount level
• Service instance level
On subaccount level, you can specify a destination for the entire subaccount, defining the used communication
protocol and more properties, like authentication method, proxy type and URL.
On service instance level, you can reuse this destination for a specific space and adjust the URL if required. You
can also create a new destination only on service instance level that is specific to the selected service instance
and its assigned applications.
Prerequisites
Procedure
1. In the cockpit, select your Global Account and your subaccount name from the Subaccount menu in the
breadcrumbs.
Note
To perform these steps, you must have created a Destination service instance in your space, see Create and
Bind a Destination Service Instance [page 209]. On service instance level, you can set destinations only for
Destination service instances.
1. In the cockpit, choose your Global Account from the Region Overview and select a Subaccount.
Note
5. On the Destinations screen, you can create new destinations or edit existing ones.
See also section Create and Bind a Service Instance from the Cockpit in Create and Bind a Destination Service
Instance [page 209].
Related Information
Use the Destinations editor in the SAP BTP cockpit to configure destinations from scratch.
Configuring destinations from scratch provides the complete set of editing functions. While this requires
deeper technical knowledge of the scenario and the required connection configuration, it is the most flexible
procedure and lets you create any type of supported destination.
To Create Destinations from a Template [page 71], in contrast, you do not need this deeper knowledge for configuration, but editing options are limited by the available templates.
Related Information
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Note
In section Destination Configuration, do not change the default tab Blank Template, unless you want
to create a destination for a specific service instance in a subscription-based scenario. For more
information, see Destinations Pointing to Service Instances [page 71].
Note
7. From the <Authentication> dropdown box, select the authentication type you need for the connection.
Note
If you set an HTTPS destination, you need to also add a Trust Store. For more information, see Use
Destination Certificates [page 76].
8. (Optional) If you are using more than one Cloud Connector for your subaccount, you must enter the
<Location ID> of the target Cloud Connector.
See also Managing Subaccounts [page 404] (section Procedure, step 4).
9. (Optional) You can enter additional properties.
a. In the Additional Properties panel, choose New Property.
b. Enter a key (name) or choose one from the dropdown menu and specify a value for the property. You
can add as many properties as you need.
Note
For a detailed description of specific properties for SAP Business Application Studio (formerly known
as SAP Web IDE), see Connecting to External Systems.
Related Information
How to create RFC destinations in the Destinations editor (SAP BTP cockpit).
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Procedure
Note
In section Destination Configuration, do not change the default tab Blank Template. Tab Service
Instance only applies for HTTP destinations.
Note
Using <Proxy Type> Internet, you can connect your application to any target service that is exposed to the Internet. <Proxy Type> OnPremise requires the Cloud Connector to access resources within your on-premise network.
Direct Connections
For a detailed description of RFC-specific properties (JCo properties), see RFC Destinations [page 157].
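For orientation only, a minimal RFC destination might look like the following sketch. The virtual host, system number, client, and credentials shown here are placeholders and depend on your Cloud Connector access control configuration; they are not prescribed values.
Name=myRfcDestination
Type=RFC
ProxyType=OnPremise
jco.client.ashost=virtualhost
jco.client.sysnr=00
jco.client.client=100
User=<technical-user>
Password=<password>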
Related Information
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Procedure
Note
In section Destination Configuration, do not change the default tab Blank Template. Tab Service
Instance only applies for HTTP destinations.
Note
To access a mail server located in your own network (via Cloud Connector), choose OnPremise. To
access an external mail server, choose Internet.
6. Choose the Authentication type to be used for the destination and enter the required parameters. For a
detailed parameter description, see Configuring Authentication [page 87].
7. Enter the additional property mail.smtp.host to specify the address of the target mail server.
a. In the Additional Properties panel, choose New Property.
b. Choose mail.smtp.host from the dropdown menu and specify a value for the property.
8. (Optional) You can enter more additional properties.
a. In the Additional Properties panel, choose New Property.
b. Enter a key (name) or choose one from the dropdown menu and specify a value for the property. You
can add as many properties as you need. Each key of an additional property must start with "mail.".
c. To remove a property, choose the Remove button next to it.
9. When you are done, choose Save.
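As an illustration (the mail server host, port, and credentials below are placeholders, not prescribed values), a mail destination created with this procedure could contain properties such as:
Name=myMailDestination
Type=MAIL
ProxyType=Internet
Authentication=BasicAuthentication
User=<technical-user>
Password=<password>
mail.smtp.host=smtp.example.com
mail.smtp.port=587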
Find configuration examples for HTTP and RFC destinations in the Cloud Foundry environment, using different
authentication types.
Content
User → jco.client.user
Password → jco.client.passwd
Note
For security reasons, do not use these additional properties but use the corresponding main properties'
fields.
Related Information
Use a template to configure destinations with scenario-specific input data in the SAP BTP cockpit.
If you want to create several destinations for a common scenario, you can use a template that provides
the scenario-specific input data. The Destination service uses the template to configure the destinations
accordingly.
Create a destination for subscription-based scenarios that points to your service instance.
Note
This feature is applicable for a selected set of the most commonly used services (from a Destination
service perspective). If you would like to use this feature for a service which is not yet supported, let us
know by opening a support ticket, see Connectivity Support [page 876].
In the meantime, you can follow the steps described in Create Destinations from Scratch [page 63].
Usually, in the Cloud Foundry environment, you consume service instances by binding them to your
applications. However, in subscription-based scenarios this is not always possible. If you have purchased a
subscription to an SaaS application that runs in a provider's subaccount, you cannot bind your service instance
to this application.
If you create such a destination from scratch, you must provide a service key for your instance, look up the
credentials, and enter these values in the newly created destination.
Using the Destinations Pointing to Service Instances template, you only have to select the corresponding
service instance.
Note
Prerequisites
• You have a service instance which you want to make accessible to applications you are subscribed to.
• You have the Space Developer role in the space where this service instance resides.
• You have logged in to the cockpit and opened the Destinations editor on subaccount level. See Access the
Destinations Editor [page 61].
Procedure
Result
You have a destination pointing to your service instance. If you delete this service instance or its service key, the
destination stops working.
How to check the availability of a destination in the Destinations editor (SAP BTP cockpit).
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Context
You can use the Check Connection button in the Destinations editor of the cockpit to verify if the URL
configured for an HTTP Destination is reachable and if the connection to the specified system is possible.
Note
For each destination, the check button is available in the destination detail view and in the destination overview
list (icon Check availability of destination connection in section Actions).
Note
The check does not guarantee that the target system or service is operational. It only verifies if a
connection is possible.
This check is supported for destinations with <Proxy Type> Internet and OnPremise:
Restriction
• Backend status could not be determined. Cause: The Cloud Connector version is less than 2.7.1. Solution: Upgrade the Cloud Connector to version 2.7.1 or higher.
• Backend is not available in the list of defined system mappings in Cloud Connector. Cause: The Cloud Connector is not configured properly. Solution: Check the basic Cloud Connector configuration steps: Initial Configuration [page 388].
• Resource is not accessible in Cloud Connector or backend is not reachable. Cause: The Cloud Connector is not configured properly. Solution: Check the basic Cloud Connector configuration steps: Initial Configuration [page 388].
• Backend is not reachable from Cloud Connector. Cause: Cloud Connector configuration is ok but the backend is not reachable. Solution: Check the backend (server) availability.
Prerequisites
You have previously created or imported an HTTP destination in the Destinations editor of the cockpit.
1. In the Destinations editor, go to the existing destination which you want to clone.
Related Information
How to edit and delete destinations in the Destinations editor (SAP BTP cockpit).
Prerequisites
You have previously created or imported an HTTP destination in the Destinations editor of the cockpit.
Procedure
• Edit a destination:
Tip
For complete consistency, we recommend that you first stop your application, then apply your
destination changes, and then start again the application. Also, bear in mind that these steps will
cause application downtime.
• Delete a destination:
To remove an existing destination, choose the button. The changes will take effect in up to five minutes.
Maintain trust store and key store certificates in the Destinations editor (SAP BTP cockpit).
Prerequisites
You have logged on to the cockpit and opened the Destinations editor. For more information, see Access the
Destinations Editor [page 61].
Context
Caution
Uploaded certificates are accessible via the REST APIs, including any private data they may contain.
You can upload, add and delete certificates for your connectivity destinations.
Procedure
Upload Certificates
1. Choose Certificates.
2. Choose Upload Certificate.
Note
You can upload a certificate during creation or editing of a destination, by clicking the Upload and Delete
Certificates link.
Caution
Certificates added through the Upload Certificate option cannot be renewed automatically.
1. Choose Certificates.
2. Choose Generate Certificate.
3. In the pop-up window, enter certificate name and type. Optionally, you can enter certificate CN and certificate validity. Additionally, you can select the Enable automatic renewal checkbox to renew the certificate automatically.
More Information
Prerequisites
Note
The Destinations editor allows importing destination files with extension .props, .properties, .jks,
and .txt, as well as files with no extension. Destination files must be encoded in ISO 8859-1 character
encoding.
Procedure
• If the configuration file contains valid data, it is displayed in the Destinations editor with no errors. The
Save button is enabled so that you can successfully save the imported destination.
• If the configuration file contains invalid properties or values, error messages are displayed in red under the relevant fields in the Destinations editor, prompting you to correct them accordingly.
Related Information
Export destinations from the Destinations editor in the SAP BTP cockpit to back up or reuse a destination configuration.
Prerequisites
Procedure
• If the destination does not contain client certificate authentication, it is saved as a single configuration
file.
• If the destination provides client certificate data, it is saved as an archive, which contains the main
configuration file and a JKS file.
Related Information
Destination service REST API specification for the SAP Cloud Foundry environment.
The Destination service provides a REST API that you can use to read and manage resources like destinations,
certificates and destination fragments on all available levels. This API is documented in the SAP Business
Accelerator Hub .
It shows all available endpoints, the supported operations, parameters, possible error cases and related
status codes, etc. You can also execute requests using the credentials (for example, the service key) of your
Destination service instance, see Create and Bind a Destination Service Instance [page 209].
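As a rough sketch (the URI and destination name are placeholders, and the exact endpoint details are documented in the SAP Business Accelerator Hub), a "find destination" request on subaccount level could look like this, where <uri> and the access token are obtained from your Destination service instance's service key:
GET <uri>/destination-configuration/v1/destinations/<destination-name>
Authorization: Bearer <access token retrieved from the service key's token endpoint>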
Use the multitarget-application (MTA) descriptor to manage destinations for complex deployments.
Content
Concept
When modeling a multitarget application (MTA), you can create and update destinations from your MTA
descriptor.
For more information on MTA, see Multitarget Applications in the Cloud Foundry Environment.
Content Deployment
When modeling MTAs, you can configure content deployments (for more information, see Deploying Content
with Generic Application Content Deployment). The Destinations service supports such content deployments,
which lets you create or update destinations by modeling them in the MTA descriptor. Other operations, like
deleting a destination, are not supported by this method.
Parameters
The parameters of the content deployment have the following structure:
content:
  subaccount:
    existing_destinations_policy: <policy> # optional, default value is "fail". See "Existing destinations policy" for more details
    destinations:
      - <destination descriptor>
  instance:
    existing_destinations_policy: <policy>
    destinations:
      - <destination descriptor>
Note
Both the subaccount and instance sections are optional. They can both be present at the same time, or
only one of them. They define the level on which the resulting destination is created.
The existing_destinations_policy setting allows you to control what happens if a destination with the
same name already exists. The possible values are:
• fail: Treat it as an error situation and fail the deployment. This is the default value of the setting.
• ignore: Keep what is currently saved in the Destination service, and skip deployment for this destination.
• update: Override what is currently saved in the Destination service.
Modeling Options
The destinations section represents an array of destination descriptors. Each of these array elements is
converted to a destination and saved in the service on the respective level, based on the existing destination
policy. The following options are available for modeling a destination descriptor via content deployment. They
can be combined:
As a result, a destination is created, based on the properties in the referenced service key.
Note
This function is equivalent to the Destinations Pointing to Service Instances [page 71] template.
This feature is applicable for a selected set of the most commonly used services (from a Destination
service perspective). If you would like to use this feature for a service which is not yet supported, let us
know by opening a support ticket, see Connectivity Support [page 876].
In the meantime, you can follow the steps described in Create Destinations from Scratch [page 63].
Applicable for:
• Service bindings of type "x509" which contain both a public and private key in the service key credentials
• Service bindings of type "clientsecret"
Applicable for:
• Service bindings of type "x509" which contain only the public key in the service key credentials
As a result, a destination is created with the token service configuration based on the properties in the
referenced service key, while the URL will be the one specified when modeling the destination.
Applicable for:
• Service bindings of type "x509" which contain both a public and private key in the service key credentials
• Service bindings of type "clientsecret"
Applicable for:
• Service bindings of type "x509" which contain only the public key in the service key credentials
_schema-version: "3.2"
ID: example
version: 0.0.1
modules:
  - name: myapp
    path: ./myapp
    type: javascript.nodejs
    requires:
      - name: xsuaa_service
    provides:
      - name: myapp-route
        properties:
          url: ${default-url} #generated during deployment
  - name: destination-content
    type: com.sap.application.content
    requires:
      - name: xsuaa_service
        parameters:
          service-key:
            name: xsuaa_service-key
      - name: destination-service
        parameters:
          content-target: true
      - name: myapp-route
    build-parameters:
      no-source: true
    parameters:
      content:
        subaccount:
          existing_destinations_policy: update
          destinations:
            - Name: myappOauth
              URL: ~{myapp-route/url}
              Authentication: OAuth2ClientCredentials
              TokenServiceInstanceName: xsuaa_service
              TokenServiceKeyName: xsuaa_service-key
              myAdditionalProp: myValue
            - Name: workflowOauthJwtBearer
              Authentication: OAuth2JWTBearer
              ServiceInstanceName: workflow_service
              ServiceKeyName: workflow_service-key
        instance:
          existing_destinations_policy: update
          destinations:
            - Name: workflowBasicAuthentication
              Authentication: BasicAuthentication
              ServiceInstanceName: workflow_service
              ServiceKeyName: workflow_service-key
              myAdditionalProp: myValue
resources:
  - name: xsuaa_service
    type: org.cloudfoundry.managed-service
    parameters:
      service: xsuaa
      service-name: xsuaa_service
      service-plan: application
      config:
        xsappname: "myApp"
  - name: workflow_service
    type: org.cloudfoundry.managed-service
    parameters:
      service: workflow
      service-name: workflow_service
      service-plan: lite
  - name: destination-service
The MTA descriptor lets you create service instances and provide a JSON configuration for this operation. You
can use this functionality to create a Destination service instance with a JSON, and include the required data to
create or update destinations.
For more details, see Use a Config.JSON to Create or Update a Destination Service Instance [page 211].
Use a JSON to create or update a destination when creating a Destination service instance.
When creating or updating a service instance of the Destination service, you can provide a JSON object
with various configurations. One of the sections of this JSON lets you create or update destinations. Other
operations, like deleting a destination, are not supported by this method.
For more information, see Use a Config.JSON to Create or Update a Destination Service Instance [page 211].
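A hypothetical configuration JSON passed when creating the service instance might look like the sketch below; the init_data structure and the destination values are illustrative assumptions, and the referenced section remains the authoritative description of the format.
{
  "init_data": {
    "subaccount": {
      "existing_destinations_policy": "update",
      "destinations": [
        {
          "Name": "myNewDestination",
          "Type": "HTTP",
          "URL": "https://example.com",
          "Authentication": "NoAuthentication",
          "ProxyType": "Internet"
        }
      ]
    }
  }
}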
Find information about HTTP destinations for Internet and on-premise connections (Cloud Foundry
environment).
Destination Levels
The runtime tries to resolve a destination in the order: Subaccount Level → Service Instance Level.
In subscription-based scenarios, it is not always possible to consume a service instance by binding it to your
application. In this case, you must create a destination pointing to your service instance. For more information,
see Destinations Pointing to Service Instances [page 71].
Proxy Types
• Internet - The application can connect to an external REST or SOAP service on the Internet.
• OnPremise - The application can connect to an on-premise backend system through the Cloud Connector.
• PrivateLink - Selected SAP BTP services can establish a private connection with selected services in your
own IaaS provider accounts.
For more information, see What Is SAP Private Link Service?.
The proxy type used for a destination is specified by the destination property ProxyType. The default value is
Internet.
If you work in your local development environment behind a proxy server and want to use a service from the
Internet, you need to configure your proxy settings on JVM level. To do this, proceed as follows:
1. On the Servers view, double-click the added server and choose Overview to open the editor.
2. Click the Open Launch Configuration link.
3. Choose the (x)=Arguments tab page.
4. In the VM Arguments box, add the following row:
-Dhttp.proxyHost=yourproxyHost -Dhttp.proxyPort=yourProxyPort
-Dhttps.proxyHost=yourproxyHost -Dhttps.proxyPort=yourProxyPort
5. Choose OK.
6. Start or restart your SAP HANA Cloud local runtime.
Configuring Authentication
When creating an HTTP destination, you can use different authentication types for access control:
By default, the Destination service does not use URL-associated queries and header parameters.
For most authentication types however, you can add them as custom parameters to the URL of a destination.
Tip
You can use tools like the transparent proxy for Kubernetes to process those attributes automatically at
runtime.
For more information, see Using the Transparent Proxy [page 808].
When using the Destinations editor for destination configuration, you can add these parameters as additional
properties:
Replace HEADER_KEY and QUERY_KEY with the name of the headers or query parameters and set the
respective values.
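For example, following the URL.headers.<key> and URL.queries.<key> naming convention (the key and value shown are purely illustrative):
URL.queries.language=EN
URL.headers.apiKey=<my-api-key>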
In the example above, the destination has a language query parameter with the value EN and an apiKey
header with the value <my-api-key>.
Those additional headers and query parameters are added for every communication with the given destination.
Related Information
Create and configure a Server Certificate destination for an application in the Cloud Foundry environment.
Context
The server certificate validation is applicable to all destinations with proxy type Internet and PrivateLink
that use the HTTPS protocol.
Note
TLS 1.2 became the default TLS version of HTTP destinations. If an HTTP destination is consumed by a Java application, the change will be effective after a restart. All HTTP destinations that use the HTTPS protocol and have ProxyType=Internet can be affected. Previous TLS versions can be used by configuring an additional property TLSVersion=TLSv1.0 or TLSVersion=TLSv1.1.
Properties
Property Description
TLSVersion Optional property. Can be used to specify the preferred TLS version to be used by
the current destination. Since TLS 1.2 is not enabled by default on the older java
versions this property can be used to configure TLS 1.2 in case this is required by
the server configured in this destination. It is usable only in HTTP destinations.
Example: TLSVersion=TLSv1.2 .
TrustStoreLocation Path to the keystore file which contains trusted certificates (Certificate Authorities) for authentication against a remote client.
1. When used in local environment: the relative path to the keystore file. The root path is the server's location on the file system.
2. When used in cloud environment: the name of the keystore file.
To find the allowed keystore file formats, see Use Destination Certificates [page 76].
Note: If the TrustStoreLocation property is not specified, the JDK trust store is used as a default trust store for the destination.
TrustStorePassword Password for the JKS trust store file. This property is mandatory in case
TrustStoreLocation is used.
TrustAll If this property is set to TRUE in the destination, the server certificate will not be
checked for SSL connections. It is intended for test scenarios only, and should
not be used in production (since the SSL server certificate is not checked, the
server is not authenticated). The possible values are TRUE or FALSE; the default
value is FALSE (that is, if the property is not present at all).
HostnameVerifier Optional property. It has two values: Strict and BrowserCompatible. This
property specifies how the server hostname matches the names stored inside
the server's X.509 certificate. This verifying process is only applied if TLS or SSL
protocols are used and is not applied if the TrustAll property is specified. The
default value (used if no value is explicitly specified) is Strict.
Note
You can upload trust store JKS files using the same command as for uploading destination configuration
property files. You only need to specify the JKS file instead of the destination configuration file.
Note
Connections to remote services which require Java Cryptography Extension (JCE) unlimited strength
jurisdiction policy are not supported.
Configuration
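A minimal sketch of a destination using the properties described above; all names and values are placeholders, and the trust store file is assumed to have been uploaded as described in Use Destination Certificates [page 76]:
Name=secureInternetDestination
Type=HTTP
URL=https://myservice.example.com
ProxyType=Internet
Authentication=NoAuthentication
TLSVersion=TLSv1.2
TrustStoreLocation=myTrustStore.jks
TrustStorePassword=<password>
HostnameVerifier=Strict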
Forward the identity of a cloud user from a Cloud Foundry application to a backend system via HTTP to enable
single sign-on (SSO).
Context
A PrincipalPropagation destination enables single sign-on (SSO) by forwarding the identity of a cloud user to
the Cloud Connector, and from there to the target on-premise system. In this way, the cloud user's identity can
be provided without manual logon.
Note
Configuration Steps
You can create and configure a PrincipalPropagation destination by using the properties listed below, and
deploy it on SAP BTP. For more information, see Managing Destinations [page 59].
Properties
Property Description
Sample Code
{
...
"URL.headers.<header-key-1>" :
"<header-value-1>",
...
"URL.headers.<header-key-N>":
"<header-value-N>",
}
Note
This is a naming convention. As the call to the
target endpoint is performed on the client side, the
service only provides the configured properties. The
expectation for the client-side processing logic is to
parse and use them. If you are using higher-level
libraries and tools, please check if they support this
convention.
Sample Code
{
...
"URL.queries.<query-key-1>" :
"<query-value-1>",
...
"URL.queries.<query-key-N>":
"<query-value-N>",
}
Note
This is a naming convention. As the call to the
target endpoint is performed on the client side, the
service only provides the configured properties. The
expectation for the client-side processing logic is to
parse and use them. If you are using higher-level
libraries and tools, please check if they support this
convention.
Example
Name=OnPremiseDestination
Type=HTTP
URL= http://virtualhost:80
Authentication=PrincipalPropagation
ProxyType=OnPremise
Related Information
Create and configure an OAuth SAML Bearer Assertion destination for an application in the Cloud Foundry
environment.
Context
You can call an OAuth2-protected remote system/API and propagate a user ID to the remote system by using
the OAuth2SAMLBearerAssertion authentication type. The Destination service provides functionality for
automatic token retrieval and caching, by automating the construction and sending of the SAML assertion.
This simplifies application development, leaving you with only constructing the request to the remote system
by providing the token, which is fetched for you by the Destination service. For more information, see User
Propagation via SAML 2.0 Bearer Assertion Flow [page 251].
The table below lists the destination properties for OAuth2SAMLBearerAssertion authentication type. You
can find the values for these properties in the provider-specific documentation of OAuth-protected services.
Usually, only a subset of the optional properties is required by a particular service provider.
Property Description
Required
tokenServiceURL The URL of the token service, against which the token exchange is performed. Depending on the Token Service URL type, this property is interpreted in different ways during the automatic token retrieval:
• https://authentication.us10.hana.ondemand.com/oauth/token → https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
• https://{tenant}.authentication.us10.hana.ondemand.com/oauth/token → https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
• https://authentication.myauthserver.com/tenant/{tenant}/oauth/token → https://authentication.myauthserver.com/tenant/mytenant/oauth/token
• https://oauth.{tenant}.myauthserver.com/token → https://oauth.mytenant.myauthserver.com/token
(Deprecated) SystemUser User to be used when requesting an access token from the
OAuth authorization server. If this property is not specified,
the currently logged-in user is used.
Caution
This property is deprecated and will be removed soon.
We recommend that you work on behalf of specific
(named) users instead of working with a technical user.
Additional
nameQualifier Security domain of the user for which access token is re-
quested.
assertionRecipient Recipient of the SAML assertion. If not set, the token service
URL will be the assertion's recipient.
Sample Code
{
  ...
  "tokenServiceURL.headers.<header-key-1>": "<header-value-1>",
  ...
  "tokenServiceURL.headers.<header-key-N>": "<header-value-N>",
}
tokenServiceURL.ConnectionTimeoutInSeconds Defines the connection timeout for the token service retrieval. The minimum value allowed is 0, the maximum is 60 seconds. If the value exceeds the allowed number, the default value (10 seconds) is used.
tokenServiceURL.SocketReadTimeoutInSeconds Defines the read timeout for the token service retrieval. The minimum value allowed is 0, the maximum is 600 seconds. If the value exceeds the allowed number, the default value (10 seconds) is used.
Sample Code
{
  ...
  "tokenServiceURL.queries.<query-key-1>": "<query-value-1>",
  ...
  "tokenServiceURL.queries.<query-key-N>": "<query-value-N>",
}
Sample Code
{
  ...
  "tokenService.body.<param-key-1>": "<param-value-1>",
  ...
  "tokenService.body.<param-key-N>": "<param-value-N>",
}
Sample Code
{
...
"URL.headers.<header-key-1>" :
"<header-value-1>",
...
"URL.headers.<header-key-N>":
"<header-value-N>",
}
Note
This is a naming convention. As the call to the target
endpoint is performed on the client side, the service
only provides the configured properties. The expecta-
tion for the client-side processing logic is to parse and
use them. If you are using higher-level libraries and
tools, please check if they support this convention.
Sample Code
{
...
"URL.queries.<query-key-1>" :
"<query-value-1>",
...
"URL.queries.<query-key-N>":
"<query-value-N>",
}
Note
This is a naming convention. As the call to the target
endpoint is performed on the client side, the service
only provides the configured properties. The expecta-
tion for the client-side processing logic is to parse and
use them. If you are using higher-level libraries and
tools, please check if they support this convention.
x_user_token.jwks_uri URI of the JSON web key set, containing the signing keys which are used to validate the JWT provided in the X-User-Token header.
Restriction
If the value is a private endpoint (for example, localhost),
the Destination service will not be able to perform the
verification of the X-User-Token header when using the
"Find Destination" API.
skipUserAttributesPrefixInSAMLAttributes If set to true, any additional attributes taken from the OAuth server's user information endpoint, under the user_attributes section, will be added to the assertion without the prefix that the Destination service would usually add to them. For more information, see User Propagation via SAML 2.0 Bearer Assertion Flow [page 251].
The connectivity destination below provides HTTP access to the OData API of the SuccessFactors Jam.
URL=https://demo.sapjam.com/OData/OData.svc
Name=sap_jam_odata
ProxyType=Internet
Type=HTTP
Authentication=OAuth2SAMLBearerAssertion
tokenServiceURL=https://demo.sapjam.com/api/v1/auth/token
clientKey=<unique_generated_string>
audience=cubetree.com
nameQualifier=www.successfactors.com
apiKey=<apiKey>
The response for "find destination" contains an authTokens object in the format given below. For more
information on the fields in authTokens, see "Find Destination" Response Structure [page 259].
Sample Code
"authTokens": [
{
"type": "Bearer",
"value": "eyJhbGciOiJSUzI1NiIsInR5cC...",
"http_header": {
"key":"Authorization",
"value":"Bearer eyJhbGciOiJSUzI1NiIsInR5cC..."
}
}
]
Related Information
Find details about client authentication types for HTTP destinations in the Cloud Foundry environment.
This section lists the supported client authentication types and the relevant supported properties.
No Authentication
This authentication type is used for destinations that refer to a service on the Internet, an on-premise system,
or a Private Link endpoint that does not require authentication. The relevant property value is:
Authentication=NoAuthentication
Note
When a destination is using HTTPS protocol to connect to a Web resource, the JDK truststore is used as
truststore for the destination.
Basic Authentication
Used for destinations that refer to a service on the Internet, an on-premise system, or a Private Link endpoint
that requires basic authentication. The relevant property value is:
Authentication=BasicAuthentication
Caution
Do not use your own personal credentials in the <User> and <Password> fields. Always use a technical
user instead.
Property Description
Preemptive If this property is not set or is set to TRUE (that is, the default behavior is to use
preemptive sending), the authentication token is sent preemptively. Otherwise,
it relies on the challenge from the server (401 HTTP code). The default value
(used if no value is explicitly specified) is TRUE. For more information about
preemptiveness, see http://tools.ietf.org/html/rfc2617#section-3.3 .
When a destination is using the HTTPS protocol to connect to a Web resource, the JDK truststore is used as
truststore for the destination.
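For illustration, a destination using basic authentication might be configured as follows; the URL and credentials are placeholders, and, as noted above, you should always use a technical user:
Name=myBasicAuthDestination
Type=HTTP
URL=https://myservice.example.com
ProxyType=Internet
Authentication=BasicAuthentication
User=<technical-user>
Password=<password>
Preemptive=TRUE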
Note
The response for "find destination" contains an authTokens object in the format given below. For more
information on the fields in authTokens, see "Find Destination" Response Structure [page 259].
Sample Code
"authTokens": [
{
"type": "Basic",
"value": "dGVzdDpwYXNzMTIzNDU=",
"http_header": {
"key":"Authorization",
"value":"Basic dGVzdDpwYXNzMTIzNDU="
}
}
]
Used for destinations that refer to a service on the Internet or a Private Link endpoint. The relevant property
value is:
Authentication=ClientCertificateAuthentication
KeyStore.Source Optional. Specifies the storage location of the certificate to be used by the client.
Supported values are:
If the property is not set, the key store is managed by the Destination service
(default).
KeyStoreLocation The name of the key store file that contains the client certificate(s) for client
certificate authentication against a remote server. This property is optional if
KeyStore.Source is set to ClientProvided.
KeyStorePassword Password for the key store file specified by KeyStoreLocation. This
property is optional if KeyStoreLocation is used in combination with
KeyStore.Source, and KeyStore.Source is set to ClientProvided.
Note
You can upload KeyStore JKS files using the same command as for uploading destination configuration property files. You only need to specify the JKS file instead of the destination configuration file.
Configuration
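A minimal sketch of a client certificate destination; the key store file name and password are placeholders, and the example assumes the key store is managed by the Destination service (KeyStore.Source not set):
Name=myClientCertDestination
Type=HTTP
URL=https://myservice.example.com
ProxyType=Internet
Authentication=ClientCertificateAuthentication
KeyStoreLocation=myKeyStore.jks
KeyStorePassword=<password>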
Related Information
SAP BTP lets applications use the OAuth client credentials flow for consuming OAuth-protected resources.
The client credentials are used to request an access token from an OAuth authorization server.
The retrieved access token is cached and automatically renewed. When a token is about to expire, a new token is created shortly before the expiration of the old one.
Configuration Steps
You can create and configure an OAuth2ClientCredentials destination using the properties listed below,
and deploy it on SAP BTP. To create and configure a destination, follow the steps described in Managing
Destinations [page 59].
Properties
The table below lists the destination properties required for the OAuth2ClientCredentials authentication type.
Property Description
Required
Type Destination type. Use HTTP as value for all HTTP(S) destina-
tions.
tokenServiceURL URL of the token service, against which token retrieval is performed. Depending on the tokenServiceURLType, this property is interpreted in different ways during automatic token retrieval:
• https://authentication.us10.hana.ondemand.com/oauth/token → https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
• https://{tenant}.authentication.us10.hana.ondemand.com/oauth/token → https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
• https://authentication.myauthserver.com/tenant/{tenant}/oauth/token → https://authentication.myauthserver.com/tenant/mytenant/oauth/token
• https://oauth.{tenant}.myauthserver.com/token → https://oauth.mytenant.myauthserver.com/token
Additional
Sample Code
{
...
"tokenServiceURL.headers.<header-key-1>" : "<header-value-1>",
...
"tokenServiceURL.headers.<header-key-N>": "<header-value-N>",
}
tokenServiceURL.ConnectionTimeoutInSeconds Defines the connection timeout for the token service retrieval. The minimum value allowed is 0, the maximum is 60 seconds. If the value exceeds the allowed number, the default value (10 seconds) is used.
tokenServiceURL.SocketReadTimeoutInSeconds Defines the read timeout for the token service retrieval. The minimum value allowed is 0, the maximum is 600 seconds. If the value exceeds the allowed number, the default value (10 seconds) is used.
Sample Code
{
...
"tokenServiceURL.queries.<query-key-1>" : "<query-value-1>",
...
"tokenServiceURL.queries.<query-key-N>": "<query-value-N>",
}
Sample Code
{
...
"tokenService.body.<param-key-1>" : "<param-value-1>",
...
"tokenService.body.<param-key-N>": "<param-value-N>",
}
Note
If set to false, but tokenServiceUser /
tokenServicePassword are set,
tokenServiceUser / tokenServicePassword
are taken with priority.
• "urn:ietf:params:oauth:client-assertion-type:saml2-
bearer" => indicating a SAML Bearer assertion.
• "urn:ietf:params:oauth:client-assertion-type:jwt-bearer"
=> indicating a JWT Bearer token.
Sample Code
{
...
"URL.headers.<header-key-1>" : "<header-value-1>",
...
"URL.headers.<header-key-N>": "<header-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
Sample Code
{
...
"URL.queries.<query-key-1>" : "<query-value-1>",
...
"URL.queries.<query-key-N>": "<query-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
Note
When the OAuth authorization server is called, the caller side trusts the server based on the trust settings of the destination. See Server Certificate Authentication [page 89].
Sample Code
URL=https://api.{landscape-domain}/desired-service-path
Name=sapOAuthCC
ProxyType=Internet
Type=HTTP
Sample Code
URL=https://demo.sapjam.com/OData/OData.svc
Name=sap_jam_odata
ProxyType=Internet
Type=HTTP
Authentication=OAuth2ClientCredentials
tokenServiceURL=http://demo.sapjam.com/api/v1/auth/token
tokenServiceUser=tokenserviceuser
tokenServicePassword=pass
clientId=clientId
clientSecret=secret
The response for "find destination" contains an authTokens object in the format given below. For more
information on the fields in authTokens, see "Find Destination" Response Structure [page 259].
Sample Code
"authTokens": [
{
"type": "Bearer",
"value": "eyJhbGciOiJSUzI1NiIsInR5cC...",
"http_header": {
"key":"Authorization",
"value":"Bearer eyJhbGciOiJSUzI1NiIsInR5cC..."
}
}
]
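The authTokens entry can be applied directly as a request header when calling the destination's target URL. The snippet below is only a minimal sketch: it assumes the application has already read the target URL and the http_header key/value pair from the "find destination" response (JSON parsing is omitted), and it uses only the standard java.net.http client.
Sample Code
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CallTargetWithAuthToken {

    // Values assumed to be extracted from the "find destination" response:
    // destinationConfiguration.URL and authTokens[0].http_header
    static String targetUrl = "https://demo.sapjam.com/OData/OData.svc";
    static String headerKey = "Authorization";
    static String headerValue = "Bearer eyJhbGciOiJSUzI1NiIsInR5cC...";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Attach the prepared authorization header exactly as returned by the service
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(targetUrl))
                .header(headerKey, headerValue)
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}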
Learn about the OAuth2UserTokenExchange authentication type for HTTP destinations in the Cloud Foundry
environment: use cases, supported properties and ways to retrieve an access token in an automated way.
Content
Overview
When a user is logged into an application that needs to call another application and pass the user context, the
caller application must perform a user token exchange.
The user token exchange is a sequence of steps during which the initial user token is handed over to the
authorization server and, in exchange, another access token is returned.
The calling application first receives a refresh token out of which the actual user access token is created. The
resulting user access token contains the user and tenant context as well as technical access metadata, like
scopes, that are required for accessing the target application.
Using the OAuth2UserTokenExchange authentication type, the Destination service performs all these steps
automatically, which lets you simplify your application development in the Cloud Foundry environment.
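For illustration, the sketch below shows one way an application might call the "find destination" endpoint so that the Destination service can perform the token exchange. It is a hedged example, not the official client library: the service URL, destination name, and REST path are assumptions based on a typical Destination service binding (see Destination Service REST API [page 80]), the access token for the service instance is assumed to be fetched already, and the user context is handed over in the X-User-Token header.
Sample Code
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FindDestinationWithUserToken {

    public static void main(String[] args) throws Exception {
        // Assumptions (hypothetical values), normally taken from the Destination service binding
        String destinationServiceUri = "https://destination-configuration.cfapps.eu10.hana.ondemand.com";
        String destinationName = "MyUserTokenExchangeDestination";
        String serviceAccessToken = "<token for the Destination service instance>";
        String userJwt = "<JWT of the logged-in business user>";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(destinationServiceUri
                        + "/destination-configuration/v1/destinations/" + destinationName))
                // Authenticates the application against the Destination service
                .header("Authorization", "Bearer " + serviceAccessToken)
                // Hands over the user context so the service can perform the token exchange
                .header("X-User-Token", userJwt)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON response contains the destination properties and the authTokens array
        System.out.println(response.body());
    }
}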
Properties
To configure a destination of this type, you must specify all the required properties. You can create destinations
of this type via the cloud cockpit (Access the Destinations Editor [page 61]) or the Destination Service REST
API [page 80].
The following table shows the required properties along with their semantics.
Required
Token Service URL (tokenServiceURL): The URL of the token service, against which the token exchange is performed. Depending on the Token Service URL Type, this property is interpreted in different ways during the automatic token retrieval. Examples of interpreting the token service URL for the token service URL type Common, if the call to the Destination service is on behalf of a subaccount subdomain with value mytenant:
• https://authentication.us10.hana.ondemand.com/oauth/token → https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
• https://{tenant}.authentication.us10.hana.ondemand.com/oauth/token → https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
• https://authentication.myauthserver.com/tenant/{tenant}/oauth/token → https://authentication.myauthserver.com/tenant/mytenant/oauth/token
• https://oauth.{tenant}.myauthserver.com/token → https://oauth.mytenant.myauthserver.com/token
Name (Name): Name of the destination. Must be unique for the destination level.
Client Secret (clientSecret): OAuth 2.0 client secret to be used for the user access token exchange.
Client ID (clientId): OAuth 2.0 client ID to be used for the user access token exchange.
Proxy Type (ProxyType): You can only use proxy type Internet or OnPremise. If OnPremise is used, the OAuth server must be accessed through the Cloud Connector.
Token Service URL Type (tokenServiceURLType):
• Choose Dedicated if the token service URL serves only a single tenant.
• Choose Common if the token service URL serves multiple tenants.
Optional
Additional
scope The value of the OAuth 2.0 scope parameter expressed as a list of space-delimited,
case-sensitive strings.
Sample Code
{
...
"tokenServiceURL.headers.<header-key-1>" :
"<header-value-1>",
...
"tokenServiceURL.headers.<header-key-N>":
"<header-value-N>",
}
Sample Code
{
...
"tokenServiceURL.queries.<query-key-1>" :
"<query-value-1>",
...
"tokenServiceURL.queries.<query-key-N>":
"<query-value-N>",
}
tokenService.body.<param-key> A static key prefix used as a namespace grouping of parameters which are sent as part of the token request to the token service during token retrieval. For each request, a tokenService.body prefix must be added to the parameter key, separated by dot-delimiter. For example:
Sample Code
{
...
"tokenService.body.<param-key-1>" : "<param-value-1>",
...
"tokenService.body.<param-key-N>": "<param-value-N>",
}
URL.headers.<header-key> A static key prefix used as a namespace grouping of the URL's HTTP headers whose values will be sent to the target endpoint. For each HTTP header's key, you must add a URL.headers prefix separated by dot-delimiter. For example:
Sample Code
{
...
"URL.headers.<header-key-1>" : "<header-value-1>",
...
"URL.headers.<header-key-N>": "<header-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
URL.queries.<query-key> A static key prefix used as a namespace grouping of the URL's query parameters whose values will be sent to the target endpoint. For each query parameter's key, you must add a URL.queries prefix separated by dot-delimiter. For example:
Sample Code
{
...
"URL.queries.<query-key-1>" : "<query-value-1>",
...
"URL.queries.<query-key-N>": "<query-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
tokenService.KeyStoreLocation Contains the name of the certificate configuration to be used. This property is required when using client certificates for authentication. See OAuth with X.509 Client Certificates [page 133].
tokenService.KeyStorePassword Contains the password for the certificate configuration (if one is needed) when using client certificates for authentication. See OAuth with X.509 Client Certificates [page 133].
tokenService.addClientCredentialsInBody Specifies whether the client credentials should be placed in the request body of the token request, rather than the Authorization header. Default is true.
x_user_token.jwks Base64-encoded JSON web key set, containing the signing keys which are used to validate the JWT provided in the X-User-Token header.
x_user_token.jwks_uri URI of the JSON web key set, containing the signing keys which are used to validate the JWT provided in the X-User-Token header.
Restriction
If the value is a private endpoint (for example, localhost), the Destination service
will not be able to perform the verification of the X-User-Token header when
using the "Find Destination" API.
The response for "find destination" contains an authTokens object in the format given below. For more
information on the fields in authTokens, see "Find Destination" Response Structure [page 259].
Sample Code
"authTokens": [
{
"type": "Bearer",
"value": "eyJhbGciOiJSUzI1NiIsInR5cC...",
"http_header": {
"key":"Authorization",
"value":"Bearer eyJhbGciOiJSUzI1NiIsInR5cC..."
}
}
]
Related Information
Learn about the OAuth password authentication type for HTTP destinations in the Cloud Foundry environment:
use cases, supported properties and examples.
Content
Overview
SAP BTP provides support for applications to use the OAuth password grant flow for consuming OAuth-
protected resources.
The client credentials as well as the user name and password are used to request an access token from an
OAuth server, referred to as token service below. Access token retrieval is performed automatically by the
Destination service when using the "find destination" REST endpoint.
Properties
The table below lists the destination properties needed for the OAuth2Password authentication type.
Caution
Do not use your own personal credentials in the <User> and <Password> fields. Always use a technical
user instead.
Property Description
Required
Name Destination name. It must be the same as the destination name you use for the configuration tools,
that is, the console client and Destinations editor (cockpit).
ProxyType You can only use proxy type Internet or OnPremise. If OnPremise is used, the OAuth server
must be accessed through the Cloud Connector.
Additional
scope Value of the OAuth 2.0 scope parameter, expressed as a list of space-delimited, case-sensitive
strings.
tokenServiceURL.headers.<header-key> Static key prefix used as a namespace grouping of the tokenServiceURL's HTTP headers. Its values will be sent to the token service during token retrieval. For each HTTP header's key you must add a 'tokenServiceURL.headers' prefix separated by dot delimiter. For example:
Sample Code
{
...
"tokenServiceURL.headers.<header-key-1>" : "<header-value-1>",
...
"tokenServiceURL.headers.<header-key-N>": "<header-value-N>",
}
tokenServiceURL.ConnectionTimeoutInSeconds Defines the connection timeout for the token service retrieval. The minimum value allowed is 0, the maximum is 60 seconds. If the value exceeds the allowed number, the default value (10 seconds) is used.
tokenServiceURL.SocketReadTimeoutInSeconds Defines the read timeout for the token service retrieval. The minimum value allowed is 0, the maximum is 600 seconds. If the value exceeds the allowed number, the default value (10 seconds) is used.
tokenServiceURL.queries.<query-key> Static key prefix used as a namespace grouping of the tokenServiceURL's query parameters. Its values will be sent to the token service during token retrieval. For each query parameter's key you must add a 'tokenServiceURL.queries' prefix separated by dot delimiter. For example:
Sample Code
{
...
"tokenServiceURL.queries.<query-key-1>" : "<query-value-1>",
...
"tokenServiceURL.queries.<query-key-N>": "<query-value-N>",
}
tokenService.body.<param-key> A static key prefix used as a namespace grouping of parameters which are sent as part of the token request to the token service during token retrieval. For each request, a tokenService.body prefix must be added to the parameter key, separated by dot-delimiter. For example:
Sample Code
{
...
"tokenService.body.<param-key-1>" : "<param-value-1>",
...
"tokenService.body.<param-key-N>": "<param-value-N>",
}
tokenService.KeyStoreLocation Contains the name of the certificate configuration to be used. This property is required when using client certificates for authentication. See OAuth with X.509 Client Certificates [page 133].
tokenService.KeyStorePassword Contains the password for the certificate configuration (if one is needed) when using client certificates for authentication. See OAuth with X.509 Client Certificates [page 133].
tokenService.addClientCredentialsInBody Specifies whether the client credentials should be placed in the request body of the token request, rather than the Authorization header. Default is true.
clientAssertion.destinationName Name of the destination that provides client assertions when using the client assertion authentication mechanism. Must be on the same subaccount or service instance as this destination. This is used in case of automated client assertion fetching by the service.
For more information, see Client Assertion with Automated Assertion Fetching by the Service [page 154].
URL.headers.<header-key> Static key prefix used as a namespace grouping of the URL's HTTP headers whose values will be sent to the target endpoint. For each HTTP header's key, you must add a URL.headers prefix separated by dot-delimiter. For example:
Sample Code
{
...
"URL.headers.<header-key-1>" : "<header-value-1>",
...
"URL.headers.<header-key-N>": "<header-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
URL.queries.<query-key> Static key prefix used as a namespace grouping of the URL's query parameters whose values will be sent to the target endpoint. For each query parameter's key, you must add a URL.queries prefix separated by dot-delimiter. For example:
Sample Code
{
...
"URL.queries.<query-key-1>" : "<query-value-1>",
...
"URL.queries.<query-key-N>": "<query-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
Note
When the OAuth server is called, the caller side trusts the server based on the trust settings of the
destination. For more information, see Server Certificate Authentication [page 89].
Sample Code
{
"Name": "SapOAuthPassGrant",
"Type": "HTTP",
"URL": "https://myapp.cfapps.sap.hana.ondemand.com/mypath",
"ProxyType": "Internet",
"Authentication": "OAuth2Password",
"clientId": "my-client-id",
"clientSecret": "my-client-pass",
"User": "my-username",
"Password": "my-password",
"tokenServiceURL": "https://authentication.sap.hana.ondemand.com/oauth/
token"
}
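As an alternative to the cockpit, a destination like the one above can also be created programmatically. The following is a minimal sketch, not the authoritative procedure: the host is a hypothetical placeholder for the value from the Destination service binding, the path /destination-configuration/v1/subaccountDestinations is assumed for subaccount-level destinations (verify against Destination Service REST API [page 80]), and the access token for the service instance is assumed to be fetched already.
Sample Code
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateOAuth2PasswordDestination {

    public static void main(String[] args) throws Exception {
        // Assumptions (hypothetical values), normally taken from the Destination service binding
        String destinationServiceUri = "https://destination-configuration.cfapps.eu10.hana.ondemand.com";
        String serviceAccessToken = "<token for the Destination service instance>";

        // Destination configuration as shown in the sample above
        String destinationJson = """
            {
              "Name": "SapOAuthPassGrant",
              "Type": "HTTP",
              "URL": "https://myapp.cfapps.sap.hana.ondemand.com/mypath",
              "ProxyType": "Internet",
              "Authentication": "OAuth2Password",
              "clientId": "my-client-id",
              "clientSecret": "my-client-pass",
              "User": "my-username",
              "Password": "my-password",
              "tokenServiceURL": "https://authentication.sap.hana.ondemand.com/oauth/token"
            }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(destinationServiceUri
                        + "/destination-configuration/v1/subaccountDestinations"))
                .header("Authorization", "Bearer " + serviceAccessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(destinationJson))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}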
The response for "find destination" contains an authTokens object in the format given below. For more
information on the fields in authTokens, see "Find Destination" Response Structure [page 259].
Sample Code
"authTokens": [
{
"type": "Bearer",
"value": "eyJhbGciOiJSUzI1NiIsInR5cC...",
"http_header": {
"key":"Authorization",
"value":"Bearer eyJhbGciOiJSUzI1NiIsInR5cC..."
}
}
]
Learn about the OAuth JWT bearer authentication type for HTTP destinations in the Cloud Foundry
environment: use cases, supported properties and examples.
Overview
To allow an application to call another application, passing the user context, and fetch resources, the caller application must pass an access token. In this authorization flow, the initial user token is passed to the OAuth server as input data. This process is performed automatically by the Destination service, which helps simplify application development: you only have to construct the right request to the target URL, using the outcome (another access token) of the service-side automation.
Properties
To configure a destination of this authentication type, you must specify all the required properties. You can do
this via SAP BTP cockpit (see Create HTTP Destinations [page 63]), or using the Destination Service REST API
[page 80]. The following table shows the properties along with their semantics.
Field/Parameter
(Cockpit) JSON Key Description
Required
Client ID (clientId): OAuth 2.0 client ID to be used for the user access token exchange.
Client Secret (clientSecret): OAuth 2.0 client secret to be used for the user access token exchange.
Name (Name): Name of the destination. Must be unique for the destination level.
Proxy Type (ProxyType): You can only use proxy type Internet or OnPremise. If OnPremise is used, the OAuth server must be accessed through the Cloud Connector.
Token Service URL (tokenServiceURL): The URL of the token service, against which the token exchange is performed. Depending on the Token Service URL Type, this property is interpreted in different ways during the automatic token retrieval. Examples of interpreting the token service URL for the token service URL type Common, if the call to the Destination service is on behalf of a subaccount subdomain with value mytenant:
• https://authentication.us10.hana.ondemand.com/oauth/token → https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
• https://{tenant}.authentication.us10.hana.ondemand.com/oauth/token → https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
• https://authentication.myauthserver.com/tenant/{tenant}/oauth/token → https://authentication.myauthserver.com/tenant/mytenant/oauth/token
• https://oauth.{tenant}.myauthserver.com/token → https://oauth.mytenant.myauthserver.com/token
Token Service URL Type (tokenServiceURLType):
• Choose Dedicated if the token service URL serves only a single tenant.
• Choose Common if the token service URL serves multiple tenants.
Optional
Additional
scope The value of the OAuth 2.0 scope parameter, expressed as a list of space-delimited,
case-sensitive strings.
Sample Code
{
...
"tokenServiceURL.headers.<header-key-1>" :
"<header-value-1>",
...
"tokenServiceURL.headers.<header-key-N>":
"<header-value-N>",
}
tokenServiceURL.ConnectionTimeoutInSeconds Defines the connection timeout for the token service retrieval. The minimum value allowed is 0, the maximum is 60 seconds. If the value exceeds the allowed number, the default value (10 seconds) is used.
tokenServiceURL.SocketReadTimeoutInSeconds Defines the read timeout for the token service retrieval. The minimum value allowed is 0, the maximum is 600 seconds. If the value exceeds the allowed number, the default value (10 seconds) is used.
Sample Code
{
...
"tokenServiceURL.queries.<query-key-1>" :
"<query-value-1>",
...
"tokenServiceURL.queries.<query-key-N>":
"<query-value-N>",
}
tokenService.body.<param-key> A static key prefix used as a namespace grouping of parameters which are sent as part of the token request to the token service during token retrieval. For each request, a tokenService.body prefix must be added to the parameter key, separated by dot-delimiter. For example:
Sample Code
{
...
"tokenService.body.<param-key-1>" : "<param-value-1>",
...
"tokenService.body.<param-key-N>": "<param-value-N>",
}
x_user_token.jwks Base64-encoded JSON web key set, containing the signing keys which are used to validate the JWT provided in the X-User-Token header.
x_user_token.jwks_uri URI of the JSON web key set, containing the signing keys which are used to validate the JWT provided in the X-User-Token header.
Restriction
If the value is a private endpoint (for example, localhost), the Destination service
will not be able to perform the verification of the X-User-Token header when
using the "Find Destination" API.
URL.headers.<header-key> A static key prefix used as a namespace grouping of the URL's HTTP headers whose values will be sent to the target endpoint. For each HTTP header's key, you must add a URL.headers prefix separated by dot-delimiter. For example:
Sample Code
{
...
"URL.headers.<header-key-1>" : "<header-value-1>",
...
"URL.headers.<header-key-N>": "<header-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
URL.queries.<query-key> A static key prefix used as a namespace grouping of the URL's query parameters whose values will be sent to the target endpoint. For each query parameter's key, you must add a URL.queries prefix separated by dot-delimiter. For example:
Sample Code
{
...
"URL.queries.<query-key-1>" : "<query-value-1>",
...
"URL.queries.<query-key-N>": "<query-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
tokenService.KeyStoreLocation Contains the name of the certificate configuration to be used. This property is required when using client certificates for authentication. See OAuth with X.509 Client Certificates [page 133].
tokenService.KeyStorePassword Contains the password for the certificate configuration (if one is needed) when using client certificates for authentication. See OAuth with X.509 Client Certificates [page 133].
tokenService.addClientCredentialsInBody Specifies whether the client credentials should be placed in the request body of the token request, rather than the Authorization header. Default is true.
The response for "find destination" contains an authTokens object in the format given below. For more
information on the fields in authTokens, see "Find Destination" Response Structure [page 259].
Sample Code
"authTokens": [
{
"type": "Bearer",
"value": "eyJhbGciOiJSUzI1NiIsInR5cC...",
"http_header": {
"key":"Authorization",
"value":"Bearer eyJhbGciOiJSUzI1NiIsInR5cC..."
}
}
]
Create and configure an SAML Assertion destination for an application in the Cloud Foundry environment.
Context
The Destination service lets you generate SAML assertions as per the SAML 2.0 specification. You can retrieve a generated SAML assertion from the Destination service by using the SAMLAssertion authentication type.
Properties
The table below lists the destination properties for the SAMLAssertion authentication type.
Property Description
Required
Additional
userIdSource When this property is set, the user ID in the NameId tag of the generated SAML assertion is determined in accordance with the value of this attribute. For more information, see User Propagation via SAML 2.0 Bearer Assertion Flow [page 251].
x_user_token.jwks_uri URI of the JSON web key set, containing the signing keys which are used to validate the JWT provided in the X-User-Token header.
Restriction
If the value is a private endpoint (for example, localhost),
the Destination service will not be able to perform the
verification of the X-User-Token header when using the
"Find Destination" API.
Sample Code
{
...
"URL.headers.<header-key-1>" : "<header-value-1>",
...
"URL.headers.<header-key-N>": "<header-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
Sample Code
{
...
"URL.queries.<query-key-1>" : "<query-value-1>",
...
"URL.queries.<query-key-N>": "<query-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
skipUserAttributesPrefixInSAMLAttributes If set to true, any additional attributes taken from the OAuth server's user information endpoint, under the user_attributes section, will be added to the assertion without the prefix that the Destination service would usually add to them. For more information, see User Propagation via SAML 2.0 Bearer Assertion Flow [page 251].
Example
The connectivity destination below provides HTTP access to the OData API of an SAP S/4HANA Cloud system.
Name=destinationSamlAssertion
Type=HTTP
URL=https://myXXXXXX-api.s4hana.ondemand.com
Authentication=SAMLAssertion
ProxyType=Internet
audience=https://myXXXXXX.s4hana.ondemand.com
authnContextClassRef=urn:oasis:names:tc:SAML:2.0:ac:classes:X509
The response for "find destination" contains an authTokens object in the format given below. For more
information on the fields in authTokens, see "Find Destination" Response Structure [page 259].
"authTokens": [
{
"type": "SAML2.0",
"value": "PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZ...",
"http_header": {
"key":"Authorization",
"value":"SAML2.0 PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZ..."
}
}
]
Related Information
Use an X.509 certificate instead of a secret to authenticate against the authentication server.
To perform mutual TLS, you can use an X.509 client certificate instead of a client secret when connecting to the authorization server. To do so, you must create a certificate configuration containing a valid X.509 client certificate or a keystore, and link it to the destination configuration using these properties:
Property Description
tokenService.KeyStoreLocation Contains the name of the certificate configuration to be used.
tokenService.KeyStorePassword Contains the password for the certificate configuration (if one is needed).
Caution
Mutual TLS with an X.509 client certificate is performed only if the tokenService.KeyStoreLocation property is set in the destination configuration. Otherwise, the client secret is used.
Create and configure an OAuth refresh token destination for an application in the Cloud Foundry environment.
Overview
SAP BTP provides support for applications to use the OAuth2 refresh token flow for consuming OAuth-
protected resources.
Refresh tokens are a common way to maintain a certain level of access without requiring credentials to get a new access token. They have a longer validity than access tokens and can be used to fetch brand-new access tokens without performing the original flow again.
The client credentials and a refresh token are used to request an access token from an OAuth server, referred to below as token service. This is performed automatically by the Destination service when using the "Find a destination" REST endpoint.
Properties
The table below lists the destination properties for the OAuth refresh token authentication type.
Required
Additional
Sample Code
{
...
"tokenServiceURL.headers.<header-key-1>" : "<header-value-1>",
...
"tokenServiceURL.headers.<header-key-N>": "<header-value-N>",
}
Sample Code
{
...
"tokenServiceURL.queries.<query-key-1>" : "<query-value-1>",
...
"tokenServiceURL.queries.<query-key-N>": "<query-value-N>",
}
Sample Code
{
...
"tokenService.body.<param-key-1>" : "<param-value-1>",
...
"tokenService.body.<param-key-N>": "<param-value-N>",
}
Sample Code
{
...
"URL.headers.<header-key-1>" : "<header-value-1>",
...
"URL.headers.<header-key-N>": "<header-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
tokenServiceURL.ConnectionTimeoutInSeconds Defines the connection timeout for the token service retrieval. The minimum value allowed is 0, the maximum is 60 seconds. If the value exceeds the allowed number, the default value (10 seconds) is used.
tokenServiceURL.SocketReadTimeoutInSeconds Defines the read timeout for the token service retrieval. The minimum value allowed is 0, the maximum is 600 seconds. If the value exceeds the allowed number, the default value (10 seconds) is used.
Sample Code
{
...
"URL.queries.<query-key-1>" : "<query-value-1>",
...
"URL.queries.<query-key-N>": "<query-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
Note
When the OAuth server is called, the caller side trusts the server based on the trust settings of the
destination. For more information, see Server Certificate Authentication [page 89].
Sample Code
{
"Name": "SapOAuthPassGrant",
"Type": "HTTP",
"URL": "https://myapp.cfapps.sap.hana.ondemand.com/mypath",
"ProxyType": "Internet",
"Authentication": "OAuth2RefreshToken",
"clientId": "my-client-id",
"clientSecret": "my-client-pass",
"tokenServiceURL": "https://authentication.sap.hana.ondemand.com/oauth/
token"
}
The response for Find Destination will contain an authTokens object in the format given below. For more
information on the fields in authTokens, see "Find Destination" Response Structure [page 259].
Sample Code
"authTokens": [
{
"type": "Bearer",
"value": "eyJhbGciOiJSUzI1NiIsInR5cC...",
"http_header": {
"key":"Authorization",
"value":"Bearer eyJhbGciOiJSUzI1NiIsInR5cC..."
}
}
]
Create and configure an OAuth Authorization Code destination for an application in the Cloud Foundry
environment.
Overview
The OAuth Authorization Code flow is a standard mechanism for business user login. It is a two-step procedure. In the first step, business users authenticate themselves against an authorization server, which grants them an authorization code. In the second step, the authorization code is exchanged for an access token at a token service. Applications can use this flow to access OAuth-protected resources.
Restriction
This authentication type is not yet available for destination configuration via the cockpit.
Properties
The table below lists the destination properties for the OAuth2AuthorizationCode authentication type.
Property Description
Required
Additional
Sample Code
{
...
"tokenServiceURL.headers.<header-key-1>" : "<header-value-1>",
...
"tokenServiceURL.headers.<header-key-N>": "<header-value-N>",
}
tokenServiceURL.ConnectionTimeoutInSeconds Defines the connection timeout for the token service retrieval. The minimum value allowed is 0, the maximum is 60 seconds. If the value exceeds the allowed number, the default value (10 seconds) is used.
tokenServiceURL.SocketReadTimeoutInSeconds Defines the read timeout for the token service retrieval. The minimum value allowed is 0, the maximum is 600 seconds. If the value exceeds the allowed number, the default value (10 seconds) is used.
Sample Code
{
...
"tokenServiceURL.queries.<query-key-1>" : "<query-value-1>",
...
"tokenServiceURL.queries.<query-key-N>": "<query-value-N>",
}
Sample Code
{
...
"tokenService.body.<param-key-1>" : "<param-value-1>",
...
"tokenService.body.<param-key-N>": "<param-value-N>",
}
Sample Code
{
...
"URL.headers.<header-key-1>" : "<header-value-1>",
...
"URL.headers.<header-key-N>": "<header-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
Sample Code
{
...
"URL.queries.<query-key-1>" : "<query-value-1>",
...
"URL.queries.<query-key-N>": "<query-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
Note
When the OAuth server is called, the caller side trusts the server based on the trust settings of the
destination. For more information, see Server Certificate Authentication [page 89].
Sample Code
{
"Name": "SapOAuth2AuthorizationCodeDestination",
"Type": "HTTP",
"URL": "https://myapp.cfapps.sap.hana.ondemand.com/mypath",
"ProxyType": "Internet",
"Authentication": "OAuth2AuthorizationCode",
"clientId": "my-client-id",
"clientSecret": "my-client-pass",
"tokenServiceURL": "https://authentication.sap.hana.ondemand.com/oauth/
token"
}
When calling the destination, X-code is a required header parameter. X-redirect-uri and X-code-verifier are optional header parameters. They depend on the call for the authorization code fetch: if a redirect URI was specified in that call, the same redirect URI must be used as the value for the X-redirect-uri header. If a code challenge was presented in the authorization code fetch request, a code verifier must be given as the value for the X-code-verifier header.
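To illustrate the paragraph above, the sketch below shows one way an application might pass these headers when reading the destination through the "find destination" endpoint. It is a hedged example, not the official client: the service URL and REST path are assumptions based on a typical Destination service binding (see Destination Service REST API [page 80]), while the header names X-code, X-redirect-uri, and X-code-verifier are taken from the description above.
Sample Code
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FindAuthorizationCodeDestination {

    public static void main(String[] args) throws Exception {
        // Assumptions (hypothetical values), normally taken from the Destination service binding
        String destinationServiceUri = "https://destination-configuration.cfapps.eu10.hana.ondemand.com";
        String serviceAccessToken = "<token for the Destination service instance>";
        String authorizationCode = "<code returned by the authorization server>";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(destinationServiceUri
                        + "/destination-configuration/v1/destinations/SapOAuth2AuthorizationCodeDestination"))
                .header("Authorization", "Bearer " + serviceAccessToken)
                // Required: the authorization code obtained in the first step of the flow
                .header("X-code", authorizationCode)
                // Optional: only needed if a redirect URI / code challenge was used for the code fetch
                // .header("X-redirect-uri", "<redirect URI used in the authorization request>")
                // .header("X-code-verifier", "<PKCE code verifier>")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}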
The response for Find Destination will contain an authTokens object in the format given below. For more
information on the fields in authTokens, see "Find Destination" Response Structure [page 259].
Sample Code
"authTokens": [
{
"type": "Bearer",
"value": "eyJhbGciOiJSUzI1NiIsInR5cC...",
"http_header": {
"key":"Authorization",
"value":"Bearer eyJhbGciOiJSUzI1NiIsInR5cC..."
}
}
]
Learn about the OAuth2TechnicalUserPropagation authentication type for HTTP destinations in the Cloud
Foundry environment: use cases, supported properties and examples.
SAP BTP supports the propagation of technical users from the cloud application towards on-premise systems.
In the Destination service, an access token representing the technical user is retrieved, which can then be sent
in a header to the Connectivity service. This is similar to principal propagation, but in this case, a technical user
is propagated instead of a business user.
The retrieval of the access token performs the OAuth 2.0 client credentials flow, according to the token service
configurations in the destination. The token service is called from the Internet, not from the Cloud Connector.
Note
The retrieved access token is cached for the duration of its validity.
Restriction
This authentication type is not yet available for destination configuration via the cockpit.
Properties
The table below lists the destination properties for the OAuth2TechnicalUserPropagation authentication type.
Property Description
Required
Remember
The token service is not accessed through the Cloud
Connector, but through the Internet.
tokenServiceURL The URL of the token service, against which the token exchange is performed. Depending on the Token Service URL Type, this property is interpreted in different ways during the automatic token retrieval:
• https://authentication.us10.hana.ondemand.com/oauth/token → https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
• https://{tenant}.authentication.us10.hana.ondemand.com/oauth/token → https://mytenant.authentication.us10.hana.ondemand.com/oauth/token
• https://authentication.myauthserver.com/tenant/{tenant}/oauth/token → https://authentication.myauthserver.com/tenant/mytenant/oauth/token
• https://oauth.{tenant}.myauthserver.com/token → https://oauth.mytenant.myauthserver.com/token
tokenServiceUser User for basic authentication to the OAuth server (if required).
Additional
Sample Code
{
...
"tokenServiceURL.headers.<header-key-1>" : "<header-value-1>",
...
"tokenServiceURL.headers.<header-key-N>": "<header-value-N>",
}
tokenServiceURL.ConnectionTimeoutInSeconds Defines the connection timeout for the token service retrieval. The minimum value allowed is 0, the maximum is 60 seconds. If the value exceeds the allowed number, the default value (10 seconds) is used.
tokenServiceURL.SocketReadTimeoutInSeconds Defines the read timeout for the token service retrieval. The minimum value allowed is 0, the maximum is 600 seconds. If the value exceeds the allowed number, the default value (10 seconds) is used.
Sample Code
{
...
"tokenServiceURL.queries.<query-key-1>" : "<query-value-1>",
...
"tokenServiceURL.queries.<query-key-N>": "<query-value-N>",
}
Sample Code
{
...
"tokenService.body.<param-key-1>" : "<param-value-1>",
...
"tokenService.body.<param-key-N>": "<param-value-N>",
}
Note
If set to false, but tokenServiceUser /
tokenServicePassword are also set,
tokenServiceUser / tokenServicePassword
will be taken with priority.
Sample Code
{
...
"URL.headers.<header-key-1>" : "<header-value-1>",
...
"URL.headers.<header-key-N>": "<header-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
Sample Code
{
...
"URL.queries.<query-key-1>" : "<query-value-1>",
...
"URL.queries.<query-key-N>": "<query-value-N>",
}
Note
This is a naming convention. As the call to the target endpoint is performed on the client side, the service only provides the configured properties. The expectation for the client-side processing logic is to parse and use them. If you are using higher-level libraries and tools, please check if they support this convention.
Note
When the OAuth authorization server is called, the caller side trusts the server based on the trust settings of the destination. For more information, see Server Certificate Authentication [page 89].
Caution
When using the OAuth service of the SAP BTP Neo environment (https://api.{landscape-domain}/oauth2/apitoken/v1?grant_type=client_credentials or oauthasservices.{landscape-domain}/oauth2/apitoken/v1?grant_type=client_credentials) as tokenServiceURL, or any other OAuth token service which accepts client credentials only as Authorization header, you must also set the clientId and clientSecret values as the tokenServiceUser and tokenServicePassword properties.
Sample Code
Name=technical-user-example
The response for Find Destination will contain an authTokens object in the format given below. For more
information on the fields in authTokens, see "Find Destination" Response Structure [page 259].
Sample Code
"authTokens": [
{
"type": "Bearer",
"value": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiI...",
"http_header": {
"key":"SAP-Connectivity-Technical-Authentication",
"value":"Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiI..."
}
}
]
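The returned http_header can then be attached to the request that is sent to the on-premise system through the Connectivity service. The following is only a sketch under assumptions: the proxy host and port are hypothetical placeholders for the values provided by the Connectivity service binding, the virtual host is an example entry from the Cloud Connector access control, and the Proxy-Authorization token handling for the Connectivity service is omitted.
Sample Code
import java.net.InetSocketAddress;
import java.net.ProxySelector;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CallOnPremiseWithTechnicalUser {

    public static void main(String[] args) throws Exception {
        // Assumptions (hypothetical values), normally taken from the Connectivity service binding
        String proxyHost = "connectivityproxy.internal";
        int proxyPort = 20003;

        // Header key and value as returned in authTokens[0].http_header above
        String headerKey = "SAP-Connectivity-Technical-Authentication";
        String headerValue = "Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiI...";

        // Route the call through the Connectivity service HTTP proxy
        HttpClient client = HttpClient.newBuilder()
                .proxy(ProxySelector.of(new InetSocketAddress(proxyHost, proxyPort)))
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                // Virtual host and port as exposed in the Cloud Connector access control (example value)
                .uri(URI.create("http://virtualhost:1234/some/path"))
                .header(headerKey, headerValue)
                // A Proxy-Authorization header with a Connectivity service token is also required; omitted here
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}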
Replace client secrets with client assertions in OAuth flows for destinations in the Cloud Foundry environment.
Client assertion is one of the possible client authentication mechanisms when requesting an access token
from an authorization server. When using client assertion, you must provide the client_assertion and
client_assertion_type parameters in the request to an authorization server that supports client assertion
for client authentication.
The Destination service supports client assertion as a client authentication mechanism. It sends the client assertion to the authorization server whenever requesting an OAuth token. The assertion can either be generated externally and provided to the Destination service, or it can be retrieved by the Destination service via an additional destination.
For more information, see Provide Client Assertion Properties as Headers [page 153] and Client Assertion with
Automated Assertion Fetching by the Service [page 154].
You must ensure that the target authorization server supports client assertion as an authentication mechanism and accepts the provided client assertions after validation. This might require additional setup in the authorization server and the assertion provider.
Provide client assertion properties as headers when using client assertion with OAuth flows for a destination.
You can provide the following headers to use client assertion authentication:
Examples:
• "urn:ietf:params:oauth:client-as-
sertion-type:saml2-bearer" => in-
dicating a SAML Bearer assertion.
• "urn:ietf:params:oauth:client-as-
sertion-type:jwt-bearer" => indi-
cating a JWT Bearer token.
Sample Code
Name=sap_Destination
Type=HTTP
URL= https://xxxx.example.com
ProxyType=Internet
Authentication=OAuth2ClientCredentials
clientId=clientId
tokenServiceURL=https://authserver.example.com/oauth/token/
To use the client assertion mechanism in the Find Destination API, you must add two mandatory headers as
shown in the following example:
Sample Code
Use the Find Destination API to fetch client assertions automatically when using client assertion with OAuth
flows for a destination.
The Find Destination API lets you fetch client assertions automatically and then use them for retrieving tokens
from authorization servers that accept client assertions as a client authentication mechanism.
To apply this mechanism, you must use the following destination configurations:
• Destination that provides the client assertion - with specified token service that issues client assertions
• Destination that uses the client assertion - with specified token service that uses client assertion as a client
authentication mechanism
Caution
Only the OAuth2ClientCredentials authentication type is allowed for destinations that provide client assertions. The following authentication types are supported for destinations that use client assertions: OAuth2Password, OAuth2ClientCredentials, OAuth2AuthorizationCode, OAuth2TechnicalUserPropagation.
The client assertion type must be defined as a property in the destination that provides the client assertion:
• "urn:ietf:params:oauth:client-as-
sertion-type:saml2-bearer" => in-
dicating a SAML Bearer Assertion.
• "urn:ietf:params:oauth:client-as-
sertion-type:jwt-bearer" => indi-
cating a JWT Bearer Token.
The destination that provides the client assertion can be specified in a property of the destination that uses
client assertions:
Caution
Use a Header to Specify the Destination that Provides the Client Assertion
Alternatively, the destination that provides the client assertion can also be specified in a header in the Find
Destination API.
If specified, this header overrides the property clientAssertion.destinationName in the destination that
uses client assertions.
Caution
Sample Code
Name=Provides_Client_Assertion_Destination
Type=HTTP
URL= https://xxxx.example.com
ProxyType=Internet
Authentication=OAuth2ClientCredentials
clientId=clientId
clientSecret=secret1234
tokenServiceURL=https://authserver1.example.com/oauth/token/
clientAssertion.type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer
Sample Code
Name=Uses_Client_Assertion_Destination
Type=HTTP
URL= https://xxxx.example.com
ProxyType=Internet
Authentication=OAuth2ClientCredentials
clientId=clientId
tokenServiceURL=https://authserver2.example.com/oauth/token/
clientAssertion.destinationName=Provides_Client_Assertion_Destination
curl call:
Sample Code
RFC destinations provide the configuration required for communication with an on-premise ABAP system via
Remote Function Call. The RFC destination data is used by the Java Connector (JCo) version that is available
within SAP BTP to establish and manage the connection.
The RFC destination specific configuration in SAP BTP consists of properties arranged in groups, as described
below. The supported set of properties is a subset of the standard JCo properties in arbitrary environments.
The configuration data is divided into the following groups:
The minimal configuration contains user logon properties and information identifying the target host. This
means you must provide at least a set of properties containing this information.
Example
Name=SalesSystem
Type=RFC
jco.client.client=000
jco.client.lang=EN
jco.client.user=consultant
jco.client.passwd=<password>
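Once deployed, an application can consume such an RFC destination through JCo by its name. The snippet below is a minimal sketch using the JCo API that is provided with the SAP Java buildpack; the function module shown (BAPI_COMPANYCODE_GETLIST) is only an illustrative assumption, not part of the destination configuration above.
Sample Code
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;

public class CallSalesSystem {

    public static void main(String[] args) throws JCoException {
        // Looks up the RFC destination configured above by its name
        JCoDestination destination = JCoDestinationManager.getDestination("SalesSystem");

        // Retrieves the function metadata from the repository of the target system
        // (function module name is an illustrative assumption)
        JCoFunction function = destination.getRepository().getFunction("BAPI_COMPANYCODE_GETLIST");
        if (function == null) {
            throw new RuntimeException("Function module not found in the target system");
        }

        // Executes the remote function call via the configured destination
        function.execute(destination);
        System.out.println(function.getTableParameterList().getTable("COMPANYCODE_LIST").getNumRows());
    }
}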
Related Information
JCo properties that cover different types of user credentials, as well as the ABAP system client and the logon
language.
Property Description
Note
When working with the Destinations editor in the cock-
pit, enter the value in the <User> field. Do not enter it as
additional property.
Note
When working with the Destinations editor in the cock-
pit, enter the value in the <Alias User> field. Do not
enter it as additional property.
Note
Passwords in systems of SAP NetWeaver releases lower than 7.0 are case-insensitive and can be only eight characters long. For releases 7.0 and higher, passwords are case-sensitive with a maximum length of 40.
Note
When working with the Destinations editor in the cock-
pit, enter this password in the <Password> field. Do not
enter it as additional property.
Note
When working with the Destinations editor in the cock-
pit, the <User>, <Alias User> and <Password> fields
are hidden when setting the property to 1.
WebSocket RFC
Note
For PrincipalPropagation, you should configure the properties jco.destination.repository.user and jco.destination.repository.passwd instead, since there are special permissions needed (for metadata lookup in the back end) that not all business application users might have.
Learn about the JCo properties you can use to configure pooling in an RFC destination.
Overview
This group of JCo properties covers different settings for the behavior of the destination's connection pool. All
properties are optional.
Property Description
Note
Turning on this check has performance impact for stateless communication. This is due to an additional low-level ping to the server, which takes a certain amount of time for non-corrupted connections, depending on latency.
Pooling Details
• Each destination is associated with a connection factory and, if the pooling feature is used, with a
connection pool.
• Initially, the destination's connection pool is empty, and the JCo runtime does not preallocate any
connection. The first connection will be created when the first function module invocation is performed.
The peak_limit property describes how many connections can be created simultaneously, if applications
allocate connections in different sessions at the same time. A connection is allocated either when a
JCo properties that allow you to define the behavior of the repository that dynamically retrieves function
module metadata.
All properties below are optional. Alternatively, you can create the metadata in the application code, using the
metadata factory methods within the JCo class, to avoid additional round-trips to the on-premise system.
Property Description
Note
When working with the Destinations editor in the cock-
pit, enter the value in the <Repository User> field. Do
not enter it as additional property.
Note
When working with the Destinations editor in the
cockpit, enter this password in the <Repository
Password> field. Do not enter it as additional property.
Learn about the JCo properties you can use to configure the target system information in an RFC destination (Cloud Foundry environment).
Note
This documentation refers to SAP BTP, Cloud Foundry environment. If you are looking for information
about the Neo environment, see Target System Configuration (Neo environment).
Content
Overview
Note
When using a WebSocket connection, the target ABAP system must be exposed to the Internet.
Depending on the configuration you use, different properties are mandatory or optional.
The field <Proxy Type> lets you choose between Internet and OnPremise. When choosing OnPremise, the RFC communication is routed over a Cloud Connector that is connected to the subaccount. When choosing Internet, the RFC communication is done over a WebSocket connection.
Direct Connection
To use a direct connection (connection without load balancing) to an application server over Cloud Connector,
you must set the value for <Proxy Type> to OnPremise.
Property Description
jco.client.sysnr Represents the so-called "system number" and has two digits. It identifies the logical port on which the application server is listening for incoming requests. For configurations on SAP BTP, the property must match a virtual port entry in the Cloud Connector Access Control configuration.
Note
The virtual port in the above access control entry must
be named sapgw<##>, where <##> is the value of
sysnr.
To use load balancing to a system over Cloud Connector, you must set the value for <Proxy Type> to
OnPremise.
Note
The virtual port in the above access control entry must
be named sapms<###>, where <###> is the value of
r3name.
WebSocket Connection
To use a direct connection over WebSocket, you must set the value for <Proxy Type> to Internet.
Prerequisites
• Your target system is an ABAP server as of S/4HANA (on-premise) version 1909, or a cloud ABAP system.
• Your SAP Java buildpack version is at least 1.26.0.
Note
We recommend that you do not use value 1 ("trust
all") in productive scenarios, but only for demo/test
purposes.
<Trust Store Location> If you don't want to use the default JDK trust store (option Use default JDK truststore is unchecked), you must enter a <Trust Store Location>. This field indicates the path to the JKS file which contains trusted certificates (Certificate Authorities) for authentication against a remote client.
1. When used in a local environment: the relative path to the JKS file. The root path is the server's location on the file system.
2. When used in a cloud environment: the name of the JKS file.
Note
If the <Trust Store Location> is not specified, the JDK trust store is used as a default trust store for the destination.
<Trust Store Password> Password for the JKS trust store file. This field is mandatory
if <Trust Store Location> is used.
Note
You can upload trust store JKS files using the same command as for uploading destination configuration
property files. You only need to specify the JKS file instead of the destination configuration file.
Note
Connections to remote services which require Java Cryptography Extension (JCE) unlimited strength
jurisdiction policy are not supported.
JCo properties that allow you to control the connection to an ABAP system.
Property Description
jco.client.trace Defines whether protocol traces are created. Valid values are
1 (trace is on) and 0 (trace is off). The default value is 0.
jco.client.codepage Declares the 4-digit SAP codepage that is used when initiating the connection to the backend. The default value is 1100 (comparable to iso-8859-1). It is important to provide this property if the password that is used contains characters that cannot be represented in 1100.
Note
When working with the Destinations editor in the cock-
pit, enter the Cloud Connector location ID in the
<Location ID> field. Do not enter it as additional
property.
Enable single sign-on (SSO) by forwarding the identity of cloud users to a remote system or service (Cloud
Foundry environment).
The Connectivity and Destination services let you forward the identity of a cloud user to a remote system. This
process is called principal propagation (also known as user propagation or user principal propagation). It uses a
JSON Web token (JWT) as exchange format for the user information.
Two scenarios are supported: Cloud to on-premise (using the Connectivity service) and cloud to cloud (using
the Destination service).
• Scenario: Cloud to On-Premise [page 168]: The user is propagated from a cloud application to an on-
premise system using a destination configuration with authentication type PrincipalPropagation.
Note
This scenario requires the Cloud Connector to connect to your on-premise system.
• Scenario: Cloud to Cloud [page 170]: The user is propagated from a cloud application
to another remote (cloud) system using a destination configuration with authentication type
OAuth2SAMLBearerAssertion.
Forward the identity of cloud users from the Cloud Foundry environment to on-premise systems using principal
propagation.
Concept
Optionally, you can configure and use a destination configuration by setting the authentication type as
PrincipalPropagation. For more information, see Managing Destinations [page 59].
Note
This scenario is only applicable if the on-premise system is exposed to the cloud via the Cloud Connector.
1. A user logs in to the cloud application. Its identity is established by an identity provider (this can be the
default IdP for the subaccount or another trusted IdP).
2. The cloud application then uses a user exchange token (or a designated secondary header) to propagate
the user to the Connectivity service. See also Configure Principal Propagation via User Exchange Token
[page 227].
• Optionally, the application may use the Destination service to externalize the connection configuration
that points to the target on-premise system. See also Consuming the Destination Service [page 243].
• If you are using RFC as communication protocol with the SAP Java Buildpack, this step is already done by the Java Connector (JCo).
Forward the identity of cloud users from the Cloud Foundry environment to remote systems on the Internet,
enabling single sign-on (SSO).
Concept
The Destination service provides a secure way of forwarding the identity of a cloud user to another remote
system or service using a destination configuration with authentication type OAuth2SAMLBearerAssertion.
This enables the cloud application to consume OAuth-protected APIs exposed by the target remote system.
1. A user logs in to the cloud application. Its identity is established by an identity provider (this can be the
default IdP for the subaccount or another trusted IdP).
2. When the application retrieves an OAuthSAMLBearer destination, the user is made available to the
Destination Service by means of a user exchange JWT. The service then wraps the user identity in a SAML
assertion, signs it with the subaccount's private key and sends it to the specified OAuth token service.
3. The OAuth token service accepts the SAML assertion and returns an OAuth access token. In turn, the
Destination service returns both the destination and the access token to the requesting application.
4. The application uses the destination properties and the access token to consume the remote API.
You can set up user propagation for connections to applications in different cloud systems or environments.
Download and configure X.509 certificates as a prerequisite for user propagation from the Cloud Foundry
environment.
Setting up a trust scenario for user propagation requires the exchange of public keys and certificates between
the affected systems, as well as the respective trust configuration within these systems. This enables you to
use an HTTP destination with authentication type OAuth2SAMLBearerAssertion for the communication.
A trust scenario can include user propagation from the Cloud Foundry environment to another SAP BTP environment, to another Cloud Foundry subaccount, or to a remote system outside SAP BTP, such as S/4HANA Cloud, C4C, SuccessFactors, and others.
Set Up a Certificate
1. Download and save locally the identifying X.509 certificate of the subaccount in the Cloud Foundry environment.
2. Configure the downloaded X.509 certificate in the target system to which you want to propagate the user.
If the X.509 certificate validity is about to expire, you can renew the certificate and extend its validity by another 2 years:
1. Choose the Download Trust button and save locally the X.509 certificate that identifies this subaccount.
2. Configure the renewed X.509 certificate in the target system to which you want to propagate the user.
Rotate Certificates
You can rotate the identifying X.509 certificate of the subaccount. Rotation is done by creating a passive X.509
certificate for the subaccount, configuring it in the target system to which you want to propagate the user, and
rotating it with the active one. After rotation is performed, the active X.509 certificate becomes passive and the
passive one active.
Note
The passive X.509 certificate and the certificate rotation can be managed only via the Destination service
REST API. For more information, see Destination Service REST API [page 80].
Procedure
Related Information
Configure user propagation (single sign-on), using OAuth communication from the SAP BTP Cloud Foundry
environment to S/4HANA Cloud. As OAuth mechanism, you use the OAuth 2.0 SAML Bearer Assertion Flow.
Steps
Scenario
As a customer, you own an SAP BTP global account and have created at least one subaccount therein. Within
the subaccount, you have deployed a Web application. Authentication against the Web application is based on a
trusted identity provider (IdP) that you need to configure for the subaccount.
On the S/4HANA Cloud side, you own an S/4HANA ABAP tenant. Authentication against the S/4HANA ABAP
tenant is based on the trusted IdP which is always your Identity Authentication Service (IAS) tenant. Typically,
you will configure this S/4HANA Cloud Identity tenant to forward authentication requests to your corporate
IdP.
• You have an S/4HANA Cloud tenant and a user with the following business catalogs assigned:
SAP_BCR_CORE_EXT Extensibility
• You have administrator permission for the configured S/4HANA Cloud IAS tenant.
• You have a subaccount and PaaS tenant in the SAP BTP Cloud Foundry environment.
Next Step
Perform these steps to set up user propagation between S/4HANA Cloud and the SAP BTP Cloud Foundry
environment.
Tasks
1. Configure Single Sign-On between S/4HANA Cloud and the Cloud Foundry Organization on SAP BTP [page
176]
2. Configure OAuth Communication [page 176]
3. Configure Communication Settings in S/4HANA Cloud [page 177]
4. Configure Communication Settings in SAP BTP [page 180]
5. Consume the Destination and Execute the Scenario [page 182]
Configure Single Sign-On between S/4HANA Cloud and the Cloud Foundry
Organization on SAP BTP
To configure SSO with S/4HANA, you must configure trust between the S/4HANA IAS tenant and the Cloud Foundry organization. See Manually Establish Trust and Federation Between SAP Authorization and Trust Management Service and Identity Authentication.
Download the certificate from your Cloud Foundry subaccount on SAP BTP.
1. From the SAP BTP cockpit, choose your global account in the Cloud Foundry environment.
2. Choose or create a subaccount, and from your left-side subaccount menu, go to Connectivity > Destinations.
3. Press the Download Trust button.
4. Enter the host name. This is your Cloud Foundry region, for example: cf.eu10.hana.ondemand.com
for Europe (Frankfurt).
Note
5. In the Outbound Services section, go to Launch SAP Web IDE and uncheck the Active checkbox of the
field <Service Status>.
1. From the SAP BTP cockpit, choose your global account in the Cloud Foundry environment.
2. Choose your subaccount, and from the left-side subaccount menu, go to Connectivity > Destinations.
3. Press the New Destination button.
4. Enter the following parameters for your destination:
• Type: HTTP
• Authentication: OAuth2SAMLBearerAssertion
Note
This URL does not contain my300117-api, but only my300117.
• Client Key: The name of the communication user you have in the SAP S/4HANA ABAP tenant, e.g. VIKTOR.
• Token Service URL: For this field, you need the part of the URL before /sap/... that you copied before from Communication Arrangements service URL/service interface: https://my300117-api.s4hana.ondemand.com/sap/bc/sec/oauth2/token
• Token Service User: The same user as for the Client Key parameter.
• System User: This parameter is not used, leave the field empty.
• authnContextClassRef: urn:oasis:names:tc:SAML:2.0:ac:classes:X509
To perform the scenario and execute the request from the source application towards the target application,
proceed as follows:
1. Decide on where the user identity will be located when calling the Destination service. For details, see
User Propagation via SAML 2.0 Bearer Assertion Flow [page 251]. This will determine how exactly you will
perform step 2.
2. Execute a "find destination" request from the source application to the Destination service. For details, see
Consuming the Destination Service [page 243] and the REST API documentation .
3. From the Destination service response, extract the access token and URL, and construct your request to the target application, as sketched in the example below. See "Find Destination" Response Structure [page 259] for details on the structure of the response from the Destination service.
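For illustration only, a minimal sketch of steps 2 and 3, assuming the org.json library and that the body of the "find destination" response is available as a String (findDestinationResponseBody is a hypothetical variable name):
import java.net.HttpURLConnection;
import java.net.URL;
import org.json.JSONObject;
...
// Parse the "find destination" response; field names follow the documented response structure.
JSONObject response = new JSONObject(findDestinationResponseBody);
String targetUrl = response.getJSONObject("destinationConfiguration").getString("URL");
String accessToken = response.getJSONArray("authTokens").getJSONObject(0).getString("value");
// Call the target application with the retrieved access token.
HttpURLConnection connection = (HttpURLConnection) new URL(targetUrl).openConnection();
connection.setRequestProperty("Authorization", "Bearer " + accessToken);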
Configure user propagation from the SAP BTP Cloud Foundry environment to SAP SuccessFactors.
Steps
Create and Consume a Destination for the Cloud Foundry Application [page 186]
Scenario
• From an application in the SAP BTP Cloud Foundry environment, you want to consume OData APIs
exposed by SuccessFactors modules.
• To enable single sign-on, you want to propagate the identity of the application's logged-in user to
SuccessFactors.
Prerequisites
Concept Overview
A user logs in to the Cloud Foundry application. Its identity is established by an Identity Provider (IdP). This
could be the default IdP for the Cloud Foundry subaccount or a trusted IdP, for example the SuccessFactors
IdP.
When the application retrieves an OAuth2SAMLBearer destination, the user is made available to the Cloud
Foundry Destination service by means of a user exchange token, represented by a JSON Web Token (JWT).
The service then wraps the user identity in a SAML assertion, signs it with the Cloud Foundry subaccount
private key and sends it to the token endpoint of the SuccessFactors OAuth server.
To accept the SAML assertion and return an access token, SuccessFactors must trust the Cloud Foundry subaccount's public key. You can achieve this by providing the Cloud Foundry subaccount X.509 certificate when creating the OAuth client in SuccessFactors.
Users that are propagated from the Cloud Foundry application are verified by the SuccessFactors OAuth server before being granted access tokens. This means that users who do not exist in the SuccessFactors user store are rejected.
Next Steps
Create an OAuth client in SuccessFactors for user propagation from the SAP BTP Cloud Foundry environment.
3. Press the Register Client Application button on the right. In the <Application Name> field, provide
some arbitrary descriptive name for the client. For <Application URL>, enter the Cloud Foundry API
endpoint of the client SAP BTP subaccount, followed by the subaccount GUID, for example https://
api.cf.stagingaws.hanavlab.ondemand.com/17d146c3-bc6c-4424-8360-7d56ee73bd32. This
information is available in the cloud cockpit under subaccount details:
4. In the field <X.509 Certificate>, paste the certificate that you downloaded in step 1.
Next Step
• Create and Consume a Destination for the Cloud Foundry Application [page 186]
Create and consume an OAuth2SAMLBearerAssertion destination for your Cloud Foundry application.
1. In the cloud cockpit, navigate to your Cloud Foundry subaccount and from the left-side subaccount menu, choose Connectivity > Destinations. Choose New Destination and enter a name. Then provide the following settings:
To perform the scenario and execute the request from the source application towards the target application,
proceed as follows:
1. Decide on where the user identity will be located when calling the Destination service. For details, see
User Propagation via SAML 2.0 Bearer Assertion Flow [page 251]. This will determine how exactly you will
perform step 2.
2. Execute a "find destination" request from the source application to the Destination service. For details, see
Consuming the Destination Service [page 243] and the REST API documentation .
3. From the Destination service response, extract the access token and URL, and construct your request to
the target application. See "Find Destination" Response Structure [page 259] for details on the structure of
the response from the Destination service.
Propagate the identity of a user between Cloud Foundry applications that are located in different subaccounts
or regions.
Steps
Scenario
Prerequisites
• You have two applications (application 1 and application 2) deployed in Cloud Foundry spaces in different
subaccounts in the same region or even in different regions.
• You have an instance of the Destination service bound to application 1.
• You have a user JWT (JSON Web Token) in application 1 where the call to application 2 is performed.
The identity of a user logged in to application 1 is established by an identity provider (IdP) of the respective
subaccount (subaccount 1).
Note
You can use the default IdP for the Cloud Foundry subaccount or a custom-configured IdP.
When the application retrieves an OAuthSAMLBearer destination, the user is made available to the Cloud
Foundry Destination service by means of a user exchange JWT. See also User Propagation via SAML 2.0 Bearer
Assertion Flow [page 251].
The service then wraps the user identity in a SAML assertion, signs it with subaccount 1's private key (which
is part of the special key pair for the subaccount, maintained by the Destination service) and sends it to the
authentication endpoint of subaccount 2, which hosts application 2.
To make the authentication endpoint accept the SAML assertion and return an access token, you must set up
a trust relationship between the two subaccounts, by using subaccount 1's public key. You can achieve this by
assembling the SAML IdP metadata, using subaccount 1's public key and setting up a new trust configuration
for subaccount 2, which is based on that metadata.
This way, users propagated from application 1 can be verified by subaccount 2's IdP before granting them
access tokens with their respective scopes in the context of subaccount 2.
The authentication endpoint accepts the SAML assertion and returns an OAuth access token. In turn,
the Destination service returns both the destination configuration and the access token to the requesting
application (application 1). Application 1 then uses the destination properties and the access token to call
application 2.
Procedure
1. Download the X.509 certificate of subaccount 1. For instructions, see Set up Trust Between Systems [page
172]. The content of the file is shown as:
Sample Code
<ns3:EntityDescriptor
ID="cfapps.${S1_LANDSCAPE_DOMAIN}/${S1_SUBACCOUNT_ID}"
entityID="cfapps.${S1_LANDSCAPE_DOMAIN}/${S1_SUBACCOUNT_ID}"
Note
Additionally, you must add users to this new trust configuration and assign appropriate scopes to them.
Sample Code
• Name: Choose any name for your destination. You will use this name to request the destination from the Destination service.
• Type: HTTP
• Authentication: OAuth2SAMLBearerAssertion
• Audience: ${S2_AUDIENCE}
• Token Service User: The clientid of the XSUAA instance in subaccount 2. Can be acquired via a binding or service key.
• authnContextClassRef: urn:oasis:names:tc:SAML:2.0:ac:classes:PreviousSession
Additional Properties
• nameIdFormat: urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
Example
7. Choose Save.
1. Decide on where the user identity will be located when calling the Destination service. For details, see
User Propagation via SAML 2.0 Bearer Assertion Flow [page 251]. This will determine how exactly you will
perform step 2.
2. Execute a "find destination" request from application 1 to the Destination service. For details, see
Consuming the Destination Service [page 243] and the REST API documentation .
3. From the Destination service response, extract the access token and URL, and construct your request to
application 2. See "Find Destination" Response Structure [page 259] for details on the structure of the
response from the Destination service.
Propagate the identity of a user from a Cloud Foundry application to a Neo application.
Steps
1. Configure a Local Service Provider for the Neo Subaccount [page 198]
2. Establish Trust between Cloud Foundry and Neo Subaccounts [page 199]
3. Create an OAuth Client for the Neo Application [page 200]
4. Create an OAuth2SAMLBearerAssertion Destination for the Cloud Foundry Application [page 201]
5. Consume the Destination and Execute the Scenario [page 203]
Scenario
Prerequisites
Concept
The identity of a user logged in to the Cloud Foundry application is established by an identity provider (IdP).
Note
You can use the default IdP of the Cloud Foundry subaccount or any trusted IdP, for example, the Neo
subaccount IdP.
When the application retrieves an OAuthSAMLBearer destination, the user is made available to Cloud Foundry
Destination service by means of a user exchange JWT (JSON Web Token).
The service then wraps the user identity in a SAML assertion, signs it with the Cloud Foundry subaccount
private key and sends it to the token endpoint of the OAuth service for the Neo application.
To make the Neo application accept the SAML assertion, you must set up a trust relationship between the Neo
subaccount and the Cloud Foundry subaccount public key. You can achieve this by adding the Cloud Foundry
subaccount X.509 certificate as trusted IdP in the Neo subaccount. Thus, the Cloud Foundry application starts
acting as an IdP and any users propagated by it are accepted by the Neo application, even users that do not
exist in the IdP.
The OAuth service accepts the SAML assertion and returns an OAuth access token. In turn, the Destination
service returns both the destination and the access token to the requesting application. The application then
uses the destination properties and the access token to consume the remote API.
Procedure
1. In the cockpit, navigate to your Neo subaccount, choose Security > Trust from the left menu, and go to the Local Service Provider tab on the right. For <Configuration Type>, select Custom and choose Generate Key Pair.
2. Save the configuration.
IMPORTANT: When you choose Custom for the Local Service Provider type, the default IdP (SAP ID service)
will no longer be available. If your scenario requires login to the SAP ID service as well, you can safely skip
this step and leave the default settings for the Local Service Provider.
In the <Signing Certificate> field, paste the X.509 certificate you downloaded in step 1. Make sure
you remove the BEGIN CERTIFICATE and END CERTIFICATE strings. Then check Only for IDP-Initiated SSO
and save the configuration:
Note
Make sure you remember the secret, because it will not be visible later.
6. <Redirect URI> is irrelevant for the OAuth SAML Bearer Assertion flow, so you can provide any URL in
the Cloud Foundry application.
1. In the cockpit, navigate to the Cloud Foundry subaccount, choose Connectivity > Destinations from the left menu, select the Client tab, and press New Destination.
2. Enter a <Name> for the destination, then provide:
• <URL>: the URL of the Neo application/API you want to consume.
• <Authentication>: OAuth2SAMLBearerAssertion
• <Client Key>: the ID of the OAuth client for the Neo application
• <Token Service URL>: can be taken from the Branding tab in the Neo subaccount (choose Security > OAuth from the left menu):
• <Token Service User>: again the ID of the OAuth client for the Neo application.
• <Token Service Password>: the OAuth client secret.
• authnContextClassRef: urn:oasis:names:tc:SAML:2.0:ac:classes:PreviousSession
• nameIdFormat:
• urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified, if the user ID is propagated to
the Neo application, or
• urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress, if the user email is propagated
to the Neo application.
To perform the scenario and execute the request from the source application towards the target application,
proceed as follows:
1. Decide on where the user identity will be located when calling the Destination service. For details, see
User Propagation via SAML 2.0 Bearer Assertion Flow [page 251]. This will determine how exactly you will
perform step 2.
2. Execute a "find destination" request from the source application to the Destination service. For details, see
Consuming the Destination Service [page 243] and the REST API documentation .
3. From the Destination service response, extract the access token and URL, and construct your request to
the target application. See "Find Destination" Response Structure [page 259] for details on the structure of
the response from the Destination service.
Use multitenancy for Cloud Foundry applications that require a connection to a remote service or on-premise application.
Endpoint Configuration
Applications that require a connection to a remote service can use the Connectivity service to configure HTTP or RFC endpoints. In a provider-managed application, such an endpoint can be defined either once by the application provider (Provider-Specific Destination [page 204]) or by each application subscriber (Subscriber-Specific Destination [page 205]).
Note
This connectivity type is fully applicable also for on-demand to on-premise connectivity.
Destination Levels
You can configure destinations simultaneously on two levels: subaccount and service instance. This means that
it is possible to have one and the same destination on more than one configuration level. For more information,
see Managing Destinations [page 59].
• Service instance level: Lookup via a particular service instance (in the provider or subscriber subaccount associated with this service instance).
When the application accesses the destination at runtime, the Connectivity service does the following:
Provider-Specific Destination
Subscriber-Specific Destination
Related Information
To use the Connectivity service in your application, you need an instance of the service.
Prerequisites
When using service plan “lite”, quota management is no longer required for this service. From any subaccount
you can consume the service using service instances without restrictions on the instance count.
Previously, access to service plan “lite” was granted via entitlement and quota management of the application runtime. It has now become an integral service offering of SAP BTP to simplify its usage. See also
Entitlements and Quotas.
Procedure
You have two options for creating a service instance – using the CLI or using the SAP BTP cockpit:
• Create and Bind a Service Instance from the CLI [page 206]
• Example [page 207]
• Result [page 208]
• Create and Bind a Service Instance from the Cockpit [page 207]
• Result [page 208]
Use the following CLI commands to create a service instance and bind it to an application:
1. cf marketplace
2. cf create-service connectivity <service-plan> <service-name>
3. cf bind-service <app-name> <service-name> [-c <config json>]
To bind an instance of the Connectivity service "lite" plan to application "myapp", use the following commands on the Cloud Foundry command line:
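A minimal sketch (the instance name conn-lite is just an example; any name can be used):
cf create-service connectivity lite conn-lite
cf bind-service myapp conn-lite
After the binding is created, restage or restart the application so that the new environment variables become visible to it.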
Assuming that you have already deployed your application to the platform, follow these steps to create a
service instance and bind it to an application:
7. On the next page of the wizard, select the Create new instance radio button.
8. Leave <Plan> lite selected and choose Next.
Result
When the binding is created, the application gets the corresponding connectivity credentials in its environment
variables:
Sample Code
"VCAP_SERVICES": {
"connectivity": [
{
"credentials": {
"onpremise_proxy_host": "10.0.85.1",
"onpremise_proxy_port": "20003",
"onpremise_proxy_http_port": "20003",
"clientid": "sb-connectivity-app",
"clientsecret": "KXqObiN6d9gLA4cS2rOVAahPCX0=",
"token_service_url": "<token_service_url>",
},
"label": "connectivity",
"name": "conn-lite",
"plan": "default",
"provider": null,
"syslog_drain_url": null,
"tags": [
"connectivity",
"conn",
"connsvc"
],
"volume_mounts": []
}
],
Note
X.509 Bindings
Older instances of the service do not support the X.509 credentials binding type, and an attempt to create one will result in an error. To overcome this, update the service instance so that it picks up the X.509 support.
The service supports X.509 bindings as described in Retrieving Access Tokens with Mutual Transport Layer Security (mTLS). This lets you choose whether your binding / service key will use a client secret, an X.509 certificate generated by SAP, or an X.509 certificate provided by you. To do so, add a config JSON when creating your binding:
{
"xsuaa":
{
<X.509 properties>
},
<other config parameters>
}
• Parameters for X.509 Certificates Managed by SAP Authorization and Trust Management Service
• Parameters for Self-Managed X.509 Certificates
Note
By default, without providing a config JSON, a binding with client secret will be created.
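For example, a config JSON that requests an SAP-managed X.509 certificate might look like the following sketch. The parameter names under xsuaa are assumptions based on the SAP Authorization and Trust Management service binding parameters and should be verified there:
{
  "xsuaa": {
    "credential-type": "x509",
    "x509": {
      "key-length": 2048,
      "validity": 7,
      "validity-type": "DAYS"
    }
  }
}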
To use the Destination service in your application, you need an instance of the service.
Concept
To consume the Destination service, you must provide the appropriate credentials through a service instance and a service binding/service key. The Destination service is publicly visible and cross-consumable from all environments.
In all environments, the Destination service lets you pass a configuration JSON during instance creation or update to modify some of the default settings of the instance and/or provide some content to be created during the operation.
The detailed structure of the configuration JSON is described in Use a Config.JSON to Create or Update a
Destination Service Instance [page 211].
Troubleshooting
• A provided destination or certificate with the same name already exists in this subaccount. Solution: set
policies on subaccount level to update or ignore in case of conflicts.
"existing_destinations_policy": "update|fail|ignore"
"existing_certificates_policy: "fail|ignore"
• A client input error occurred. Solution: check the input, apply correction and try again.
• An internal server error occurred. In this case, please try again later or report a support incident.
Note
Older instances of the service do not support the X.509 credentials binding type, and an attempt to create one will result in an error. To overcome this, update the service instance so that it picks up the X.509 support.
The service supports X.509 bindings as described in Retrieving Access Tokens with Mutual Transport Layer Security (mTLS). This lets you choose whether your binding / service key will use a client secret, an X.509 certificate generated by SAP, or an X.509 certificate provided by you. To do so, add a config JSON when creating your binding:
{
"xsuaa":
{
<X.509 properties>
},
<other config parameters>
}
• Parameters for X.509 Certificates Managed by SAP Authorization and Trust Management Service
• Parameters for Self-Managed X.509 Certificates
Note
By default, without providing a config JSON, a binding with client secret will be created.
Configure specific parameters in a config.json file to create or update a Destination service instance.
When creating or updating a Destination service instance, you can configure the following settings, which are part of the config.json input file (see Open Service Broker API), via either the SAP BTP cockpit or the Cloud Foundry command line interface (CLI):
Sample Code
{
"HTML5Runtime_enabled" : true,
"init_data" : {
"subaccount" : {
"existing_destinations_policy": "update|fail|ignore",
"existing_certificates_policy": "update|fail|ignore",
"destinations" : [
{
...
}
],
"certificates" : [
{
...
}
]
},
"instance" : {
"existing_destinations_policy": "update|fail|ignore",
"existing_certificates_policy": "update|fail|ignore",
"destinations" : [
{
...
}
],
"certificates" : [
{
...
}
]
}
}
}
Related Information
To create a backup of your destination configurations, choose the procedure Export Destinations [page 80].
Subscribe to the SAP Alert Notification service to receive notifications and alerts for the Destination service on
SAP BTP, Cloud Foundry environment.
You can subscribe for notifications and alerts for different Destination service events via SAP Alert Notification
Service for SAP BTP.
For more information, see What Is SAP Alert Notification Service for SAP BTP?.
For details on Destination service events, see SAP BTP Destination Service Events.
Destination fragments are objects used to override and extend destination properties through the Destination
service REST API.
You can use destination fragments to override and/or extend destination properties as a result of the “Find a destination” REST API request. Destination fragments are key-value-based objects which contain a name and additional configurable properties.
• FragmentName: Name of the fragment. Must be unique for the level on which it is stored/maintained.
Restriction
Managing destination fragments for your application is supported only by the Destination service REST API.
This API is documented in the SAP Business Accelerator Hub .
Related Information
Consume the Connectivity service and the Destination service from an application in the Cloud Foundry
environment.
• Consuming the Connectivity Service [page 214]: Connect your Cloud Foundry application to an on-premise system.
• Consuming the Destination Service [page 243]: Retrieve and store externalized technical information about the destination that is required to consume a target remote service from your application.
• Invoking ABAP Function Modules via RFC [page 277]: Call a remote-enabled function module (RFM) in an on-premise or cloud ABAP server from your Cloud Foundry application, using the RFC protocol.
Note
To use the Connectivity service with a protocol other than HTTP, see
Tasks
Developer 218]
2. Provide the Destination Information [page 218]
3. Set up the HTTP Proxy for On-Premise Connectivity
[page 219]
Overview
Using the Connectivity service, you can connect your Cloud Foundry application to an on-premise system
through the Cloud Connector. To achieve this, you must provide the required information about the target
system (destination), and set up an HTTP proxy that lets your application access the on-premise system.
Prerequisites
• You must be a Global Account member to connect through the Connectivity service with the Cloud
Connector. See Add Members to Your Global Account.
Security administrators (who must be either global account members or Cloud Foundry organization/space members) can also do this. See Managing Security Administrators in Your Subaccount [Feature Set A].
Note
To connect a Cloud Connector to your subaccount, you must currently be a Security Administrator.
• You have installed and configured a Cloud Connector in your on-premise landscape for the scenario you want to use. See Installation [page 349] and Configuration [page 387].
Note
Currently, the only supported protocol for connecting the Cloud Foundry environment to an on-premise
system is HTTP. HTTPS is not needed, since the tunnel used by the Cloud Connector is TLS-encrypted.
Caution
There is a limit of 8192 bytes for the size of the HTTP lines (for example, request line or header) that you
send via the Connectivity service. If this limit is exceeded, you receive an HTTP error of type 4xx. This issue
is usually caused by the size of the path + query string of the request.
Basic Steps
To consume the Connectivity service from your Cloud Foundry application, perform the following basic steps:
Consuming the Connectivity service requires credentials from the xsuaa and Connectivity service instances
which are bound to the application. By binding the application to service instances of the xsuaa and
Connectivity service as described in the prerequisites, these credentials become part of the environment
variables of the application. You can access them as follows:
Sample Code
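A minimal sketch, assuming the org.json library is available on the classpath:
import org.json.JSONArray;
import org.json.JSONObject;
...
// Read the environment variable that Cloud Foundry injects into the bound application.
JSONObject vcapServices = new JSONObject(System.getenv("VCAP_SERVICES"));
// "connectivity" is the label of the bound service; use "xsuaa" for the xsuaa credentials.
JSONArray jsonArr = vcapServices.getJSONArray("connectivity");
// With a single bound instance, the credentials are in the first array element.
JSONObject credentials = jsonArr.getJSONObject(0).getJSONObject("credentials");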
Note
If you have multiple instances of the same service bound to the application, you must perform additional
filtering to extract the correct credential from jsonArr. You must go through the elements of jsonArr and
find the one matching the correct instance name.
This code stores a JSON object in the credentials variable. Additional parsing is required to extract the value for
a specific key.
Note
We refer to the result of the above code block as connectivityCredentials, when called for
connectivity, and xsuaaCredentials for xsuaa.
• The endpoint in the Cloud Connector (virtual host and virtual port) and accessible URL paths on it
(destinations). See Configure Access Control (HTTP) [page 459].
• The required authentication type for the on-premise system. See HTTP Destinations [page 86].
• Depending on the authentication type, you may need a username and password for accessing the on-
premise system. For more details, see Client Authentication Types for HTTP Destinations [page 101].
• (Optional) You can use a location ID. For more details, see section Specify a Cloud Connector Location ID [page 221].
We recommend that you use the Destination service (see Consuming the Destination Service [page 243]) to
procure this information. However, using the Destination service is optional. You can also provide (look up) this
information in another appropriate way.
Proxy Setup
The Connectivity service provides a standard HTTP proxy for on-premise connectivity that is accessible by any
application. Proxy host and port are available as the environment variables <onpremise_proxy_host> and
<onpremise_proxy_http_port>. You can set up the on-premise HTTP proxy like this:
Sample Code
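A minimal sketch, reusing the connectivityCredentials object introduced above (the target URL is a placeholder for a virtual host configured in the Cloud Connector):
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
...
// Read proxy host and port from the Connectivity service credentials.
String proxyHost = connectivityCredentials.getString("onpremise_proxy_host");
int proxyPort = Integer.parseInt(connectivityCredentials.getString("onpremise_proxy_http_port"));
// Route the HTTP request through the on-premise HTTP proxy.
Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort));
URL url = new URL("http://virtualhost:1234/my/resource");
HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection(proxy);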
Authorization
To make calls to on-premise services configured in the Cloud Connector through the HTTP proxy, you must authorize at the HTTP proxy. For this, the OAuth client credentials flow is used: applications must create an OAuth access token using the parameters clientid and clientsecret that are provided by the Connectivity service in the environment, as shown in the example code below. When the application has retrieved the access token, it must pass the token to the Connectivity proxy using the Proxy-Authorization header.
• org.springframework.security:spring-security-oauth2-core
• org.springframework.security:spring-security-oauth2-client
Sample Code
import org.springframework.security.authentication.AbstractAuthenticationToken;
import org.springframework.security.oauth2.client.ClientCredentialsOAuth2AuthorizedClientProvider;
import org.springframework.security.oauth2.client.OAuth2AuthorizationContext;
import org.springframework.security.oauth2.client.OAuth2AuthorizedClientProvider;
import org.springframework.security.oauth2.client.registration.ClientRegistration;
import org.springframework.security.oauth2.core.AuthorizationGrantType;
import org.springframework.security.oauth2.core.OAuth2AccessToken;
...
OAuth2AuthorizationContext xsuaaContext =
    OAuth2AuthorizationContext.withClientRegistration(clientRegistration)
        .principal(new AbstractAuthenticationToken(null) {
            @Override
            public Object getPrincipal() {
                return null;
            }
            @Override
            public Object getCredentials() {
                return null;
            }
            @Override
            public String getName() {
                // There is no principal in the client credentials authorization grant,
                // but a non-empty name is still required.
                return "dummyPrincipalName";
            }
        }).build();
Note
Depending on the required authentication type for the desired on-premise resource, you may have to set an
additional header in your request. This header provides the required information for the authentication process
against the on-premise resource. See Authentication to the On-Premise System [page 223].
This is an advanced option when using more than one Cloud Connector for a subaccount. For more information on how to set the location ID in the Cloud Connector, see Managing Subaccounts [page 404], step 4 in section Subaccount Dashboard.
You can connect multiple Cloud Connectors to a subaccount if their location ID is different. Using
the header SAP-Connectivity-SCC-Location_ID you can specify the Cloud Connector over which the
connection should be opened. If this header is not specified, the connection is opened to the Cloud Connector
that is connected without any location ID. For example:
Sample Code
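A minimal sketch (the location ID value is a placeholder):
// Open the connection via the Cloud Connector that is registered with this location ID.
urlConnection.setRequestProperty("SAP-Connectivity-SCC-Location_ID", "<your-location-id>");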
To consume the Connectivity service from a SaaS application in a multitenant way, the only requirement is that the SaaS application returns the Connectivity service as a dependent service in its dependencies list.
For more information about the subscription flow, see Develop the Multitenant Business Application.
Related Information
Procedure
For each authentication type, you must provide specific information in the request to the virtual host:
Note
For the principal propagation scenario, the SAP-Connectivity-Authentication header is only required
if you do not use the user exchange token flow, see Configure Principal Propagation via User Exchange
Token [page 227].
Applications must propagate the user JWT token (userToken) using the SAP-Connectivity-
Authentication header. This is required for the Connectivity service to open a tunnel to the subaccount
for which a configuration is made in the Cloud Connector. The following example shows you how to do this
using the Spring framework:
Sample Code
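A minimal sketch using Spring's RestTemplate (the RestTemplate must additionally be configured to use the on-premise HTTP proxy, and accessToken/userToken are assumed to hold the proxy access token and the user JWT):
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.web.client.RestTemplate;
...
HttpHeaders headers = new HttpHeaders();
// Access token for the HTTP proxy (client credentials flow, see above).
headers.set("Proxy-Authorization", "Bearer " + accessToken);
// User JWT that is propagated to the on-premise system.
headers.set("SAP-Connectivity-Authentication", "Bearer " + userToken);
RestTemplate restTemplate = new RestTemplate();
restTemplate.exchange("http://virtualhost:1234/my/resource", HttpMethod.GET,
        new HttpEntity<>(headers), String.class);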
No Authentication
If the on-premise system does not need to identify the user, you should use this authentication type. It requires
no additional information to be passed with the request.
Principal Propagation
When you open the application router to access your cloud application, you are prompted to log in. Doing so
means that the cloud application now knows your identity. Principal propagation forwards this identity via the
Cloud Connector to the on-premise system. This information is then used to grant access without additional
input from the user. To achieve this, you do not need to send any additional information from your application,
but you must set up the Cloud Connector for principal propagation. See Configuring Principal Propagation
[page 419].
Note
As of Cloud Connector version 2.15, consumers of the Connectivity service can propagate technical users
from the cloud application towards the on-premise systems. To make use of this feature, provide a JWT
(usually obtained via client_credentials OAuth flow), representing the technical user, via the SAP-
Connectivity-Technical-Authentication header. This is similar to principal propagation, but in this
case, a technical user is propagated instead of a business user.
urlConnection.setRequestProperty("SAP-Connectivity-Technical-Authentication",
"Bearer " + technicalUserToken);
Note
Basic Authentication
If the on-premise system requires a username and password to grant access, the cloud application must provide them using the Authorization header. The following example shows how to do this:
Sample Code
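A minimal sketch (user and password are placeholders for the credentials of the on-premise system):
import java.nio.charset.StandardCharsets;
import java.util.Base64;
...
String user = "<on-premise-user>";
String password = "<on-premise-password>";
// Basic authentication: Base64-encode "user:password" and send it in the Authorization header.
String encoded = Base64.getEncoder().encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
urlConnection.setRequestProperty("Authorization", "Basic " + encoded);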
Related Information
Configure Principal Propagation via Corporate IdP Embedded Token [page 225]
Configure Principal Propagation via User Exchange Token [page 227]
Configure Principal Propagation via IAS Token [page 231]
Configure Principal Propagation via OIDC Token [page 232]
Configure a corporate IdP embedded token for principal propagation (user propagation) from your Cloud
Foundry application to an on-premise system.
Tasks
Operator
Developer
For a Cloud Foundry application that uses the Connectivity service, you want the currently logged-in user to be
propagated to an on-premise system via token from your trusted corporate IdP.
Prerequisites
Solutions
You have two options to implement the user propagation via embedded corporate IdP token:
1. Recommended. The application sends one header containing the user exchange token to the Connectivity
proxy:
• The application sends one HTTP header Proxy-Authorization.
For more information on how to obtain a user exchange access token with an embedded IAS corporate
IdP token, see SAP Authorization and Trust Management Service in the Cloud Foundry Environment and
Generate the Authentication Token [page 229].
2. The application sends two headers to the Connectivity proxy:
• The application sends two HTTP headers: SAP-Connectivity-Authentication and Proxy-
Authorization.
• The embedded corporate token which contains the user details must be present in the user token
provided via SAP-Connectivity-Authentication.
• The access token is provided via Proxy-Authorization.
The Cloud Connector validates the token, extracts the available user data, and enables further processing
through a configured subject pattern for the resulting short-lived X.509 client certificate.
Configure a user exchange token for principal propagation (user propagation) from your Cloud Foundry
application to an on-premise system.
Tasks
Scenario
For a Cloud Foundry application that uses the Connectivity service, you want the currently logged-in user to be
propagated to an on-premise system. For more information, see Principal Propagation [page 168].
Note
When performing a token exchange, the resulting token may lose some of its fields.
Solutions
1. Recommended: The application sends one header containing the user exchange token to the Connectivity
proxy:
This option is described in detail in Generate the Authentication Token [page 229] below.
2. The application sends two headers to the Connectivity proxy:
Note
To propagate a user to an on-premise system, you must call the Connectivity proxy using a special JWT (JSON
Web token). This token is obtained by exchanging a valid user token following the OAuth2 JWT Bearer grant
type .
Note
When using this type of exchange, some of the attributes inside the original user token may not be present
in the created JWT.
• Example: Obtaining a User Token Following the JWT Bearer Grant Type [page 229]
• Example: Calling the Connectivity Proxy with the Exchanged User Token [page 231]
Example: Obtaining a User Token Following the JWT Bearer Grant Type
Request:
Sample Code
Response:
Sample Code
The JWT in access_token, also referred to as user exchange token, now contains the user details. It is used to
consume the Connectivity service.
For a Java application, you can use a library that implements the user exchange OAuth flow. Here is an example of how the userExchangeAccessToken can be obtained using the XSUAA Token Client and Token Flow API :
Caution
<dependency>
<groupId>com.sap.cloud.security.xsuaa</groupId>
<artifactId>token-client</artifactId>
<version><latest version (e.g.: 2.7.7)></version>
</dependency>
Remember
The XSUAA Token Client library works with multiple HTTP client libraries. Make sure you have one as a Maven dependency.
Sample Code
// use the XSUAA client library to ease the implementation of the user token exchange flow
XsuaaTokenFlows tokenFlows = new XsuaaTokenFlows(
        new DefaultOAuth2TokenService(),
        new XsuaaDefaultEndpoints(xsUaaUri.toString()),
        new ClientCredentials(connectivityServiceClientId, connectivityServiceClientSecret));
String userExchangeAccessToken =
        tokenFlows.userTokenFlow().token(<jwtToken_to_exchange>).execute().getAccessToken();
For more information about caching, see also XSUAA Token Client and Token Flow API - Cache .
After obtaining the userExchangeAccessToken, you can use it to consume the Connectivity service.
Example: Calling the Connectivity Proxy with the Exchanged User Token
As a prerequisite to this step, you must configure the Connectivity proxy to be used by your client, see Set up
the HTTP Proxy for On-Premise Connectivity [page 219].
Once the application has retrieved the user exchange token, it must pass the token to the Connectivity proxy
via the Proxy-Authorization header. In this example, we use urlConnection as a client.
Sample Code
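A minimal sketch (urlConnection is the client configured to use the on-premise HTTP proxy, as shown earlier):
// Pass the exchanged user token to the Connectivity proxy.
urlConnection.setRequestProperty("Proxy-Authorization", "Bearer " + userExchangeAccessToken);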
Note
Sample Code
For a Cloud Foundry application that uses the Connectivity service, you want the currently logged-in user to be
propagated via the Cloud Connector to an on-premise system. This user is represented in the cloud application
by an IAS token.
For more information, see Principal Propagation [page 168] and Getting Started with the Identity Service of
SAP BTP.
Prerequisites
Solution
More Information
Identity Authentication
Configure an OpenID-Connect (OIDC) token for principal propagation (user propagation) from your Cloud
Foundry application to an on-premise system.
Developer
Scenario
For a Cloud Foundry application that uses the Connectivity service, you want the currently logged-in user to be
propagated via the Cloud Connector to an on-premise system. This user is represented in the cloud application
by an OIDC token. For more information, see Principal Propagation [page 168] and the OIDC specification .
Solution
The Cloud Connector supports principal propagation for OIDC tokens. If on the cloud application side the user
is represented by an OIDC token, the application can send the user principal to the Connectivity service (thus
reaching the Cloud Connector), using the SAP-Connectivity-Authentication HTTP header.
The Cloud Connector validates the token, extracts the available user data, and enables further processing
through a configured subject pattern for the resulting short-lived X.509 client certificate.
By default, the user principal is identified by one of the following JWT (JSON web token) attributes:
• user_name
• email
This list specifies the priority (in descending order from top to bottom) for the default value of ${name} in the
subject pattern of the X.509 client certificate. If a token has more than one of the above claims, the value of
${name} is extracted from the claim with the highest priority by default.
For the example token below, the default value of ${name} is [email protected]:
Sample Code
{
"aud": "111111111111-2222-3333-444444444444",
"iss": "https://issuer.com",
"exp": 2091269073,
"iat": 1601901108,
"jti": "111222333444555666777888999000",
"sub": "test",
"email": "[email protected]"
}
The Cloud Connector administrator can control the exact value to be used as user principal for the subject CN
of the X.509 client certificate by configuring a subject pattern. For more information, see Configure Subject
Patterns for Principal Propagation [page 443].
Access on-premise systems from a Cloud Foundry application via TCP-based protocols, using a SOCKS5 Proxy.
Content
Concept
The SOCKS5 proxy host and port are accessible through the environment variables, which are generated
after binding an application to a Connectivity service instance. For more information, see Consuming the
Connectivity Service [page 214].
You can access the host under onpremise_proxy_host, and the port through
onpremise_socks5_proxy_port, obtained from the Connectivity service instance.
Authentication to the SOCKS5 proxy is mandatory. It involves the usage of a JWT (JSON Web token) access
token (for more information, see IETF RFC 7519 ). The JWT can be retrieved through the client_id and
client_secret, obtained from the Connectivity service instance. For more information, see Set up the HTTP
Proxy for On-Premise Connectivity [page 219], section Authorization.
The value of the SOCKS5 protocol authentication method is defined as 0x80 (defined as X'80' in IETF, refer to
the official specification SOCKS Protocol Version 5 ). This value should be sent as part of the authentication
method's negotiation request (known as Initial Request in SOCKS5). The server then confirms with a response
containing its decimal representation (either 128 or -128, depending on the client implementation).
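A minimal sketch of the initial (method negotiation) request, assuming outputStream and inputStream are the streams of the socket connected to the SOCKS5 proxy:
// Version 5, one offered method, custom authentication method 0x80 (JWT-based).
outputStream.write(new byte[] { 0x05, 0x01, (byte) 0x80 });
outputStream.flush();
// The server answers with two bytes: the version (0x05) and the selected method.
// For method 0x80, the second byte reads as -128 (or 128 unsigned) in Java.
byte[] initialResponse = new byte[2];
inputStream.read(initialResponse);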
After a successful SOCKS5 Initial Request, the authentication procedure follows the standard SOCKS5
authentication sub-procedure, that is SOCKS5 Authentication Request. The request bytes, in sequence, should
look like this:
Bytes Description
The Cloud Connector location ID identifies Cloud Connector instances that are deployed in various locations of
a customer's premises and connected to the same subaccount. Since the location ID is an optional property,
you should include it in the request only if it has already been configured in the Cloud Connector. For more
information, see Set up Connection Parameters and HTTPS Proxy [page 391] (Step 4).
If not set in the Cloud Connector, the byte representing the length of the location ID in the Authentication
Request should have the value 0, without including any value for the Cloud Connector location ID
(sccLocationId).
Example
The following code snippet demonstrates an example based on the Apache Http Client library and Java
code, which represents a way to replace the standard socket used in the Apache HTTP client with one that is
responsible for authenticating with the Connectivity SOCKS5 proxy:
Sample Code
@Override
public void connect(SocketAddress endpoint, int timeout) throws IOException {
    super.connect(getProxyAddress(), timeout);
    executeSOCKS5InitialRequest(outputStream); // 1. Negotiate the authentication method
    executeSOCKS5AuthenticationRequest(outputStream); // 2. Negotiate authentication sub-version and send the JWT (and optionally the Cloud Connector location ID)
    executeSOCKS5ConnectRequest(outputStream, (InetSocketAddress) endpoint); // 3. Initiate connection to target on-premise backend system
}
Sample Code
assertServerInitialResponse();
}
byteArraysStream.write(ByteBuffer.allocate(4).putInt(jwtToken.getBytes().length).array());
byteArraysStream.write(jwtToken.getBytes());
byteArraysStream.write(ByteBuffer.allocate(1).put((byte) sccLocationId.getBytes().length).array());
byteArraysStream.write(sccLocationId.getBytes());
return byteArraysStream.toByteArray();
} finally {
byteArraysStream.close();
}
}
assertAuthenticationResponse();
}
In version 4.2.6 of the Apache HTTP client, the class responsible for connecting the socket is DefaultClientConnectionOperator. By extending this class and replacing the standard socket with the one from the complete example code below (a custom java.net.Socket implementation), you can handle the SOCKS5 authentication with method ID 0x80. It is based on a JWT and supports the Cloud Connector location ID.
Sample Code
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketAddress;
import java.net.SocketException;
import java.nio.ByteBuffer;
@Override
public void connect(SocketAddress endpoint, int timeout) throws IOException {
    super.connect(getProxyAddress(), timeout);
    executeSOCKS5InitialRequest(outputStream);
    executeSOCKS5AuthenticationRequest(outputStream);
    executeSOCKS5ConnectRequest(outputStream, (InetSocketAddress) endpoint);
}
assertServerInitialResponse();
}
assertAuthenticationResponse();
}
byteArraysStream.write(ByteBuffer.allocate(4).putInt(jwtToken.getBytes().length).array());
byteArraysStream.write(jwtToken.getBytes());
byteArraysStream.write(ByteBuffer.allocate(1).put((byte) sccLocationId.getBytes().length).array());
byteArraysStream.write(sccLocationId.getBytes());
return byteArraysStream.toByteArray();
} finally {
byteArraysStream.close();
}
}
assertConnectCommandResponse();
}
byteArraysStream.write(SOCKS5_COMMAND_ADDRESS_TYPE_DOMAIN_BYTE);
byteArraysStream.write(ByteBuffer.allocate(1).put((byte) host.getBytes().length).array());
byteArraysStream.write(host.getBytes());
}
byteArraysStream.write(ByteBuffer.allocate(2).putShort((short) port).array());
return byteArraysStream.toByteArray();
} finally {
byteArraysStream.close();
}
}
readRemainingCommandResponseBytes(inputStream);
}
String commandConnectStatusTranslation;
switch (commandConnectStatus) {
case 1:
commandConnectStatusTranslation = "FAILURE";
break;
case 2:
commandConnectStatusTranslation = "FORBIDDEN";
break;
case 3:
commandConnectStatusTranslation = "NETWORK_UNREACHABLE";
break;
case 4:
commandConnectStatusTranslation = "HOST_UNREACHABLE";
return parsedHostName;
}
Troubleshooting
If the handshake with the SOCKS5 proxy server fails, a SOCKS5 protocol error is returned, see IETF RFC
1928 . The table below shows the most common errors and their root cause in the scenario you use:
Retrieve and store externalized technical information about the destination to consume a target remote service
from your Cloud Foundry application.
Tasks
Developer 245]
2. Generate a JSON Web Token (JWT) [page 246]
3. Call the Destination Service [page 247]
Overview
The Destination service lets you find the destination information that is required to access a remote service or
system from your Cloud Foundry application.
• For the connection to an on-premise system, you can optionally use this service, together with (i.e. in
addition to) the Connectivity service, see Consuming the Connectivity Service [page 214].
• For the connection to any other Web application (remote service), you can use the Destination service
without the Connectivity service.
Consuming the Destination Service includes user authorization via a JSON Web Token (JWT) that is provided
by the xsuaa service.
Prerequisites
• To manage destinations and certificates on service instance level (all CRUD operations), you must be
assigned to one of the following roles: OrgManager, SpaceManager or SpaceDeveloper.
Note
The role SpaceAuditor has only Read permission for destinations and certificates.
• To consume the Destination service from an application, you must create a service instance and bind it to
the application. See Create and Bind a Destination Service Instance [page 209].
• To generate the required JSON Web Token (JWT), you must bind the application to an instance of the
xsuaa service using the service plan 'application'. The xsuaa service instance acts as an OAuth 2.0 client
Steps
To consume the Destination service from your application, perform the following basic steps:
The Destination service stores its credentials in the environment variables. To consume the service, you require
the following information:
• The value of clientid, clientsecret and uri from the Destination service credentials.
• From the CLI, the following command lists the environment variables of <app-name>:
cf env <app-name>
• From within the application, the service credential can be accessed as described in Consuming the
Connectivity Service [page 214].
Note
Your application must create an OAuth client using the attributes clientid and clientsecret, which are
provided by the Destination service instance. Then, you must retrieve a new JWT from UAA and pass it in the
Authorization HTTP header.
Java:
For a Java application, you can use a library that implements the client credentials OAuth flow. Here is an
example of how the clientCredentialsTokenFlow can be obtained using the XSUAA Token Client and
Token Flow API :
Caution
<dependency>
<groupId>com.sap.cloud.security.xsuaa</groupId>
<artifactId>token-client</artifactId>
<version><latest version (e.g.: 2.7.7)></version>
</dependency>
The XSUAA Token Client library works with multiple HTTP client libraries. Make sure you have one as a Maven dependency.
Sample Code
// use the XSUAA client library to ease the implementation of the client credentials token flow
XsuaaTokenFlows tokenFlows = new XsuaaTokenFlows(
        new DefaultOAuth2TokenService(),
        new XsuaaDefaultEndpoints(xsUaaUri.toString()),
        new ClientCredentials(clientid, clientsecret));
String jwtToken = tokenFlows.clientCredentialsTokenFlow().execute().getAccessToken();
For more information about caching, see also XSUAA Token Client and Token Flow API - Cache .
cURL:
Sample Code
curl -X POST \
<xsuaa-url>/oauth/token \
-H 'authorization: Basic <<clientid>:<clientsecret> encoded with Base64>' \
-H 'content-type: application/x-www-form-urlencoded' \
-d 'client_id=<clientid>&grant_type=client_credentials'
When calling the Destination service, use the uri attribute, provided in VCAP_SERVICES, to build the request
URLs.
Read a Destination by only Specifying its Name ("Find Destination") [page 248]
This lets you simply provide the name of the destination, and the service searches for it. First, the service searches the destinations that are associated with the service instance. If none of the destinations match the requested name, the service searches the destinations that are associated with the subaccount.
• Path: /destination-configuration/v1/destinations/<destination-name>
• Example of a call (cURL):
Sample Code
curl "<uri>/destination-configuration/v1/destinations/<destination-name>" \
-X GET \
-H "Authorization: Bearer <jwtToken>"
• Example of a response (this is a destination found when going through the subaccount destinations):
Sample Code
{
"owner":
{
"SubaccountId":<id>,
"InstanceId":null
},
"destinationConfiguration":
{
"Name": "demo-internet-destination",
"URL": "http://www.google.com",
"ProxyType": "Internet",
"Type": "HTTP",
"Authentication": "NoAuthentication"
}
}
Note
The response from this type of call contains not only the configuration of the requested destination, but
also some additional data. See "Find Destination" Response Structure [page 259].
This lets you retrieve the configurations of a destination that is defined within a subaccount, by providing the
name of the destination.
• Path: /destination-configuration/v1/subaccountDestinations/<destination-name>
Sample Code
curl "<uri>/destination-configuration/v1/subaccountDestinations/
<destination name>" \
-X GET \
-H "Authorization: Bearer <jwtToken>"
• Example of a response:
Sample Code
{
"Name": "demo-internet-destination",
"URL": "http://www.google.com",
"ProxyType": "Internet",
"Type": "HTTP",
"Authentication": "NoAuthentication"
}
This lets you retrieve the configurations of all destinations that are defined within a subaccount.
• Path: /destination-configuration/v1/subaccountDestinations
Sample Code
curl "<uri>/destination-configuration/v1/subaccountDestinations" \
-X GET \
-H "Authorization: Bearer <jwtToken>"
• Example of a response:
Sample Code
[
  {
    "Name": "demo-onpremise-destination1",
    "URL": "http://virtualhost:1234",
    "ProxyType": "OnPremise",
    "Type": "HTTP",
    "Authentication": "NoAuthentication"
  },
  {
    "Name": "demo-onpremise-destination2",
    "URL": "http://virtualhost:4321",
    "ProxyType": "OnPremise",
    "Type": "HTTP",
    "Authentication": "BasicAuthentication",
    "User": "myname123",
    "Password": "123456"
  }
]
Response Codes
When calling the Destination service, you may get the following response codes:
The JSON object that serves as the response of a successful request (value of the
destinationConfiguration property for "Find destination") can have different attributes, depending on
the authentication type and proxy type of the corresponding destination. See HTTP Destinations [page 86].
Related Information
User Propagation via SAML 2.0 Bearer Assertion Flow [page 251]
Destination Service REST API [page 80]
Exchanging User JWTs via OAuth2UserTokenExchange Destinations [page 263]
Use Cases [page 280]
Multitenancy in the Destination Service [page 265]
Destination Java APIs [page 267]
Extending Destinations with Fragments [page 273]
Learn about the process for automatic token retrieval, using the OAuth2SAMLBearerAssertion
authentication type for HTTP destinations.
Tasks
Prerequisites
• You have configured an OAuth2SAMLBearerAssertion destination. See OAuth SAML Bearer Assertion
Authentication [page 94].
• Unless using the destination property SystemUser, the user’s identity should be represented by a JSON
Web token (JWT).
Caution
The SystemUser property is deprecated and will be removed soon. We recommend that you work on
behalf of specific (named) users instead of working with a technical user.
As an alternative for technical user communication, we strongly recommend that you use one of these
authentication types:
• Basic Authentication (see Client Authentication Types for HTTP Destinations [page 101])
• Client Certificate Authentication (see Client Authentication Types for HTTP Destinations [page
101])
• OAuth Client Credentials Authentication [page 104]
To extend an OAuth access token's validity, consider using an OAuth refresh token.
Though not a strict requirement, you will likely need a user JWT to get the relevant information. See SAP Authorization and Trust Management Service.
• If you are using custom user attributes to determine the user, the JWT representing the user (that is
passed to the Destination service) must have the user_attributes scope.
For an OAuth2SAMLBearerAssertion destination, you can use the automated token retrieval functionality
that is available via the "find destination" endpoint. See Destination Service REST API [page 80].
There are currently three sources that can provide the propagated user ID. They are prioritized, meaning that
the lookup always starts from the top-priority source and goes down the list. If the propagated user ID is not
found at a given level, the next level is checked. If it is not found on any level, the operation fails.
Find the available sources in the list below, in order of their priority.
• Field in the JWT: In this case, the Destination service looks for the user ID as a field in the provided JWT. When you make the HTTP call to the Destination service, you must provide the Authorization header. The value must be a JWT in its encoded form (see RFC 7519).
• Custom User Attribute: Like Field in the JWT, this source uses the Authorization header. In this case, its value is used to retrieve the custom user attributes from the identity provider (XSUAA). One of those attributes can be used as the propagated user ID. The access token from the header must be a user JWT with the user_attributes scope. Otherwise, the custom attributes cannot be retrieved and the operation results in an error.
You can read additional user attributes from the identity provider (XSUAA), and propagate them as SAML
attributes in the generated SAML bearer assertion.
These attributes are similar to the ones returned by the Cloud Foundry UAA user info endpoint. However, they
may differ depending on the XSUAA behavior, and are specific to the identity provider you use.
For more details about these attributes and possible values, see https://docs.cloudfoundry.org/api/uaa/
version/74.15.0/index.html#user-info .
Note
• All root elements, except for user_attributes, are added "as is", that is, the attribute name and
value are displayed as seen in the source (user info response structure).
• Elements under the user_attribute key are parsed and added as attributes prefixed
with 'user_attributes.'. For example, having {"user_attributes": { "my_param":
"my_value" }} will result in an attribute called 'user_attributes.my_param' with value
'my_value' in the SAML assertion. If you want to avoid this user_attributes. prefix, you can set the
skipUserAttributesPrefixInSAMLAttributes additional property of the destination to true. If
you do so, the above example will result in an attribute called my_param with value my_value in the
SAML assertion.
In addition to identity provider (XSUAA) user info attributes, there are some more attributes which are read
from the passed JWT. They are located via predefined JsonPath expressions and cannot be controlled by the
end user:
• $.['xs.system.attributes']['xs.saml.groups']
• $.['user_attributes']['xs.saml.groups']
Note
The 'xs.saml.groups' attribute, read from the passed JWT, is renamed to 'Groups' in the resulting
SAML assertion. See also Settings for Default SAML Federation Attributes of Identity Providers for Business
Users.
Scenarios
Refer to the list below to find the JWT requirements for a specific scenario:
• Using the SystemUser property of an OAuth2SAMLBearerAssertion destination that is maintained in the subscriber subaccount and used by the provider application: the token is retrieved via the client credentials of the Destination service instance (bound to the application), using the subscriber's tenant-specific Token Service URL.
• Using the SystemUser property of an OAuth2SAMLBearerAssertion destination that is maintained in the same subaccount where the application is deployed: the token is retrieved via the client credentials of the Destination service instance (bound to the application), using the provider's tenant-specific Token Service URL.
• Propagating a business user principal, using an OAuth2SAMLBearerAssertion destination maintained in the subscriber subaccount where the application is deployed (the business user is represented by a JWT that was issued by the subscriber): the JWT, previously retrieved from the application, is exchanged for another user access token via the client credentials of the Destination service instance (bound to the application), using the subscriber's tenant-specific Token Service URL.
• Propagating a business user principal, using an OAuth2SAMLBearerAssertion destination maintained in the same subaccount where the application is deployed (the business user is represented by a JWT that was issued by the provider): the JWT, previously retrieved by the application, is exchanged for another user access token via the client credentials of the Destination service instance (bound to the application), using the provider's tenant-specific Token Service URL.
Destination service REST API specification for the SAP Cloud Foundry environment.
The Destination service provides a REST API that you can use to read and manage resources like destinations,
certificates and destination fragments on all available levels. This API is documented in the SAP Business
Accelerator Hub .
It shows all available endpoints, the supported operations, parameters, possible error cases and related
status codes, etc. You can also execute requests using the credentials (for example, the service key) of your
Destination service instance, see Create and Bind a Destination Service Instance [page 209].
Prerequisites and steps to get access to the Destination service REST API.
Prerequisites
To call the Destination service REST API, you must have a Destination service instance inside your subaccount.
If you:
• don't have such an instance, follow the next step to get it up and running.
• have such an instance, you can skip the next step and go to Get Credentials [page 257].
To create a Destination service instance inside your subaccount, see Creating Service Instances for instructions
on creating service instances in the BTP cockpit or from the Cloud Foundry CLI.
Note
When creating a Destination service instance, you can refer to the following yaml segment for the basic
information of the instance.
• Plan: lite
Currently, the Destination service offers only this plan.
• Runtime Environment: Cloud Foundry
Cloud Foundry must be enabled for your subaccount.
• Space: <space_name>
Choose the space in which the Destination service instance will reside.
• Instance Name: <instance_name>
Enter a name of your choice here for the instance.
The yaml segment above contains Cloud Foundry-specific information. Other configurations are possible,
for instance when running on Kyma. The important part is to deploy your instance in the target runtime and
get the service key / binding credentials to access the Destination service.
2. Get Credentials
To access the Destination service REST API, you need an access token. To generate this token, you need
credentials contained in a service key of the Destination service instance.
If you don't have any service keys in your Destination service instance, see Creating Service Keys about
creating a service key for an instance in the BTP cockpit or from the Cloud Foundry CLI.
Once you have a service key for your Destination service instance, you open it and extract the following
information:
• clientid: "<value_to_extract>"
The client ID that will be used for authentication in the next step.
• clientsecret: "<value_to_extract>"
The client secret that will be used for authentication in the next step.
• url: "<value_to_extract>"
The authentication endpoint from which you get an access token for the Destination service.
• uri: "<value_to_extract>"
The URL of the Destination service.
Note
For demonstration purposes, we are using the clientid and clientsecret for authentication. We
recommend using mTLS with an X.509 client certificate instead, because it is a more secure way of
authenticating.
3. Get an Access Token
In this step, you will get an access token from XSUAA which you can then use to authenticate towards the
Destination service REST API. For this step, you must use the values you extracted for clientid, clientsecret and
url from the previous step.
curl -X POST \
"<url>/oauth/token" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=client_credentials" --data-urlencode
"client_id=<client_id>" --data-urlencode "client_secret=<client_secret>"
where <url>, <client_id>, and <client_secret> are the values you extracted from the service key in the previous step.
The access token is shown in the access_token key in the response JSON. Make sure you save it because you
will need it in the next step.
Now that you have an access token for the Destination service, you can call one of the Destination service REST
API endpoints. To get the full list of available endpoints in the Destination service REST API, see the Destination
Service REST API reference on SAP Business Accelerator Hub.
Sample Code
curl -X GET \
"<uri>/destination-configuration/v1/<endpoint>" \
-H "Authorization: Bearer <access_token>"
where <uri> is the Destination service URL from the service key, <endpoint> is the REST API endpoint you want to call, and <access_token> is the token you obtained in the previous step.
If, for example, you want to make a GET call towards the /subaccountDestinations endpoint, the call would look
like:
Sample Code
curl -X GET \
"<uri>/destination-configuration/v1/subaccountDestinations" \
-H "Authorization: Bearer <access_token>"
[
{
"Name": "no-authentication-destination",
"Type": "HTTP",
"URL": "https://sap.com",
"Authentication": "NoAuthentication",
"ProxyType": "Internet"
},
{
"Name": "basic-authentication-destination",
"Type": "HTTP",
"URL": "https://sap.com",
"Authentication": "BasicAuthentication",
"ProxyType": "Internet",
"User": "my-user",
"Password": "my-password"
}
]
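The same two calls can also be executed from application code. The following Java sketch assumes Java 11+ (java.net.http), reads the service key values from hypothetical environment variables, and extracts the access_token with a simple regular expression instead of a proper JSON parser:
Sample Code
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ListSubaccountDestinations {

    public static void main(String[] args) throws Exception {
        // Service key values (clientid, clientsecret, url, uri), provided via environment
        // variables here; the variable names are an assumption of this sketch.
        String clientId = System.getenv("DEST_CLIENTID");
        String clientSecret = System.getenv("DEST_CLIENTSECRET");
        String xsuaaUrl = System.getenv("DEST_URL");
        String serviceUri = System.getenv("DEST_URI");

        HttpClient http = HttpClient.newHttpClient();

        // Get an access token from XSUAA using the client credentials grant.
        String form = "grant_type=client_credentials"
                + "&client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                + "&client_secret=" + URLEncoder.encode(clientSecret, StandardCharsets.UTF_8);
        HttpRequest tokenRequest = HttpRequest.newBuilder()
                .uri(URI.create(xsuaaUrl + "/oauth/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        String tokenResponse = http.send(tokenRequest, HttpResponse.BodyHandlers.ofString()).body();

        // Naive extraction of the access_token value; use a JSON library in real code.
        Matcher m = Pattern.compile("\"access_token\"\\s*:\\s*\"([^\"]+)\"").matcher(tokenResponse);
        if (!m.find()) {
            throw new IllegalStateException("No access_token in token response");
        }
        String accessToken = m.group(1);

        // Call the subaccountDestinations endpoint with the access token.
        HttpRequest listRequest = HttpRequest.newBuilder()
                .uri(URI.create(serviceUri + "/destination-configuration/v1/subaccountDestinations"))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();
        System.out.println(http.send(listRequest, HttpResponse.BodyHandlers.ofString()).body());
    }
}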
Overview of data that are returned by the Destination service for the call type "find destination".
Response Structure
When you use the "find destination" call (read a destination by only specifying its name), the structure of the
response includes four parts:
Each of these parts is represented in the JSON object as a key-value pair and their values are JSON objects, see
Example [page 262].
Destination Owner
• Key: owner
Sample Code
"owner": {
"SubaccountId": "9acf4877-5a3d-43d2-b67d-7516efe15b11",
"InstanceId": null
}
Destination Configuration
• Key: destinationConfiguration
The JSON object that represents the value of this property contains the actual properties of the
destination. To learn more about the available properties, see HTTP Destinations [page 86].
If a destination fragment was specified in the “Find Destination” call, the value of this property is
represented by the JSON object that contains the actual properties of the destination, merged with the
JSON object that contains the properties of the destination fragment.
For more information on how to extend a destination with a destination fragment, see Extending
Destinations with Fragments [page 273].
• Example:
Sample Code
"destinationConfiguration": {
"Name": "TestBasic",
"Type": "HTTP",
"URL": "http://sap.com",
"Authentication": "BasicAuthentication",
"ProxyType": "OnPremise",
"User": "test",
"Password": "pass12345"
}
Authentication Tokens
This property is only applicable to destinations that use the following authentication types:
BasicAuthentication, OAuth2SAMLBearerAssertion, OAuth2ClientCredentials, OAuthUserTokenExchange,
OAuth2JWTBearer, OAuth2Password, SAMLAssertion, OAuth2RefreshToken, OAuth2AuthorizationCode,
OAuth2TechnicalUserPropagation.
Restriction
The section will contain an error if the tokenServiceUrl is a private endpoint (like localhost) and
ProxyType is Internet, as the automation cannot be performed on the server side in this case.
Note
This section will not be present if automatic token retrieval is skipped by setting the query parameter
$skipTokenRetrieval=true.
• Key: authTokens
The JSON array that represents the value of this property contains tokens that are required for
authentication. These tokens are represented by JSON objects with these properties (expect more new
properties to be added in the future):
• type: the type of the token.
• value: the actual token.
• http_header: JSON object containing the prepared token in the correct format. The <key> field
contains the key of the HTTP header. The <value> field contains the value of the header.
• expires_in (only in OAuth2 destinations): The lifetime in seconds of the access token. For example,
the value "3600" denotes that the access token will expire in one hour from the time the response was
generated.
• error (optional): if the retrieval of the token fails, the value of both type and value is an empty string
and this property shows an error message, explaining the problem.
• scope (optional) (only in OAuth2 destinations): The scopes issued with the token. The value of
the scope parameter is expressed as a list of space-delimited strings. For example, read write
execute.
• refresh_token (optional) (only in OAuth2 destinations): A refresh token, returned by the OAuth
service. It can be used to renew the access token via OAuth Refresh Token Authentication [page 134].
• Example:
Sample Code
"authTokens": [
{
"type": "Basic",
"value": "dGVzdDpwYXNzMTIzNDU=",
"http_header": {
"key":"Authorization",
"value":"Basic dGVzdDpwYXNzMTIzNDU="
}
}
]
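The http_header entry is prepared so that it can be copied directly onto an outgoing request. A minimal Java fragment illustrating this, assuming the "find destination" response has been parsed with the org.json library into findDestinationResponse and that connection is an existing HttpURLConnection (both are assumptions of this sketch):
Sample Code
// take the prepared header from the first authTokens entry and set it on the outgoing request
// (requires org.json.JSONObject on the classpath)
JSONObject httpHeader = findDestinationResponse
        .getJSONArray("authTokens")
        .getJSONObject(0)
        .getJSONObject("http_header");
connection.setRequestProperty(httpHeader.getString("key"), httpHeader.getString("value"));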
Certificates
Note
This property is only applicable to destinations that use mTLS for token retrieval and automatic token
retrieval is skipped, or destinations with the following authentication types: ClientCertificateAuthentication,
OAuth2SAMLBearerAssertion (when default JDK trust store is not used).
• Key: certificates
The JSON array that represents the value of this property contains the certificates, specified in the
destination configuration. These certificates are represented by JSON objects with these properties
(expect more new properties to be added in the future):
• type
• content: the encoded content of the certificate.
• name: the name of the certificate, as specified in the destination configuration.
• Example:
Sample Code
"certificates": [
{
"Name": "keystore.jks",
"Content": "<value>"
"Type": "CERTIFICATE"
},
{
"Name": "truststore.jks",
"Content": "<value>"
"Type": "CERTIFICATE"
}
]
Example
Sample Code
{
    "owner": {
        "SubaccountId": "9acf4877-5a3d-43d2-b67d-7516efe15b11",
        "InstanceId": null
    },
    "destinationConfiguration": {
        "Name": "TestBasic",
        "Type": "HTTP",
        "URL": "http://sap.com",
        "Authentication": "BasicAuthentication",
        "ProxyType": "OnPremise",
        "User": "test",
        "Password": "pass12345"
    },
    "authTokens": [
        {
            "type": "Basic",
            "value": "dGVzdDpwYXNzMTIzNDU=",
            "http_header": {
                "key": "Authorization",
                "value": "Basic dGVzdDpwYXNzMTIzNDU="
            }
        }
    ]
}
Automatic token retrieval using the OAuth2UserTokenExchange authentication type for HTTP destinations.
Content
Prerequisites
You have configured an OAuth2UserTokenExchange destination. See OAuth User Token Exchange
Authentication [page 112].
The token to be exchanged must have the uaa.user scope to enable the token exchange. See SAP
Authorization and Trust Management Service for more details.
For destinations of authentication type OAuth2UserTokenExchange, you can use the automated token
retrieval functionality via the "find destination" endpoint, see Call the Destination Service [page 247].
If you provide the user token exchange header with the request to the Destination service and its value is not
empty, it is used instead of the Authorization header to specify the user and the tenant subdomain. It will be
the token for which token exchange is performed.
• The header value must be a user JWT (JSON Web token) in encoded form, see RFC 7519 .
• If the user token exchange header is not provided with the request to the Destination Service or it is
provided, but its value is empty, the token from the Authorization header is used instead. In this
case, the JWT in the Authorization header must be a user JWT in encoded form, otherwise the token
exchange does not work.
For information about the response structure of this request, see "Find Destination" Response Structure [page
259].
Scenarios
To achieve specific token exchange goals, you can use the following headers and values when calling the
Destination service:
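The full header table is not part of this extract. As one illustration, the Java sketch below performs a "find destination" call with a technical access token in the Authorization header and the business user's JWT in the user token exchange header. The header name X-user-token, the endpoint path, and the variable names are assumptions of this sketch; check the REST API specification for the exact header names in your landscape.
Sample Code
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UserTokenExchangeCall {

    public static void main(String[] args) throws Exception {
        // Placeholder environment variable names (assumptions of this sketch).
        String serviceUri = System.getenv("DEST_URI");         // Destination service URI
        String serviceToken = System.getenv("SERVICE_TOKEN");  // client-credentials token for the service
        String userJwt = System.getenv("USER_JWT");            // encoded user JWT to be exchanged

        // Hypothetical name of an OAuth2UserTokenExchange destination.
        String destinationName = "MyUserTokenExchangeDestination";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(serviceUri + "/destination-configuration/v1/destinations/" + destinationName))
                // technical access token for the Destination service
                .header("Authorization", "Bearer " + serviceToken)
                // user token exchange header: identifies the user and tenant for the exchange
                .header("X-user-token", userJwt)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}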
Concept
When developing a provider application (SaaS application) that consumes the Destination service, you can
choose between the following destination levels:
Note
The term level is used here to represent an area or visibility scope. The higher the level, the broader the
visibility scope.
If you, as an application provider, want to create a destination that is used at runtime only by the subscriber and
that should be visible and accessible only to the subscriber, you can create a subscription-level destination for
each subscriber subaccount (tenant):
1. Retrieve an OAuth token from the subscriber token service URL, using the OAuth client credentials from the
Destination service instance (see the sketch after these steps).
2. Use the retrieved token from step 1 to create (POST) a subscription-level destination in the Destination
service, see Destinations on service instance (subscription) level in the REST API specification .
1. Retrieve an OAuth token from the subscriber token service URL, using the OAuth client credentials from the
Destination service instance (see the sketch after these steps).
2. Use the token from step 1 to retrieve (GET) the destination stored on subscription level from the
Destination service via:
• Find a destination in the REST API specification
• Destinations on service instance (subscription) level in the REST API specification
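A minimal Java sketch of the token retrieval used in both flows is shown below, assuming Java 11+ (java.net.http). The environment variable names are hypothetical, and the subscriber's tenant-specific token service URL is assumed to be derived from the url value of the service key by switching to the subscriber's subdomain. The returned token can then be used for the POST or GET calls on subscription level described in the steps above.
Sample Code
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class SubscriberTenantToken {

    public static void main(String[] args) throws Exception {
        // Client credentials of the Destination service instance bound to the application
        // (placeholder environment variable names).
        String clientId = System.getenv("DEST_CLIENTID");
        String clientSecret = System.getenv("DEST_CLIENTSECRET");

        // Assumption: the subscriber's tenant-specific token service URL, for example
        // https://<subscriber-subdomain>.authentication.<region>.hana.ondemand.com/oauth/token
        String subscriberTokenUrl = System.getenv("SUBSCRIBER_TOKEN_URL");

        String form = "grant_type=client_credentials"
                + "&client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                + "&client_secret=" + URLEncoder.encode(clientSecret, StandardCharsets.UTF_8);

        HttpRequest tokenRequest = HttpRequest.newBuilder()
                .uri(URI.create(subscriberTokenUrl))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        // The returned access token is tenant-specific and authorizes the subscription-level
        // destination calls described above.
        String tokenResponse = HttpClient.newHttpClient()
                .send(tokenRequest, HttpResponse.BodyHandlers.ofString())
                .body();
        System.out.println(tokenResponse);
    }
}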
Use Destination service Java APIs to optimize application development in the Cloud Foundry environment.
When running your cloud application with SAP Java Buildpack, you can use the following Java APIs to optimize
the application development:
<dependency>
<groupId>com.sap.cloud.connectivity.apiext</groupId>
<artifactId>com.sap.cloud.connectivity.apiext</artifactId>
<version>${connectivity-apiext.version}</version>
<scope>provided</scope>
</dependency>
For more information on SAP Java Buildpack, see Developing Java in the Cloud Foundry Environment.
Overview
The ConnectivityConfiguration API is visible by default from the web applications hosted on SAP Java
Buildpack. You can access it via a JNDI lookup.
Besides managing destination configurations, you can also allow your applications to use their own managed
HTTP clients. The ConnectivityConfiguration API provides you with direct access to the destination
configurations of your applications.
Procedure
1. To consume a connectivity configuration using JNDI, you must define the ConnectivityConfiguration
API as a resource in the context.xml file.
Example of a ConnectivityConfiguration resource named connectivityConfiguration, which is
described in the context.xml file:
src/main/webapp/META-INF/context.xml
Sample Code
<Context>
  <Resource name="connectivityConfiguration" auth="Container"
            type="com.sap.core.connectivity.api.configuration.ConnectivityConfiguration"
            factory="com.sap.core.connectivity.api.jndi.ServiceObjectFactory"/>
</Context>
2. You also need to enable Connectivity ApiExt with an environment variable and bind the appropriate
services in manifest.yml:
manifest.yml
Sample Code
applications:
- ...
env:
USE_CONNECTIVITY_APIEXT: true
services:
- xsuaa-instance
- destination-instance
- connectivity-instance
...
3. In your servlet code, you can look up the ConnectivityConfiguration API from the JNDI registry as
follows:
Sample Code
import javax.naming.Context;
import javax.naming.InitialContext;
import
com.sap.core.connectivity.api.configuration.ConnectivityConfiguration;
...
Sample Code
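The lookup and retrieval code itself is not included in this extract. A minimal sketch, assuming the resource name connectivityConfiguration from the context.xml above and a hypothetical destination name myDestination (exception handling and the import of com.sap.core.connectivity.api.configuration.DestinationConfiguration omitted for brevity):
// JNDI lookup of the ConnectivityConfiguration API
Context ctx = new InitialContext();
ConnectivityConfiguration configuration =
        (ConnectivityConfiguration) ctx.lookup("java:comp/env/connectivityConfiguration");

// read the configuration of a destination by name ("myDestination" is a hypothetical example)
DestinationConfiguration destConfiguration = configuration.getConfiguration("myDestination");
if (destConfiguration != null) {
    String url = destConfiguration.getProperty("URL");
    // work with further destination properties as needed
}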
Note
If you have two destinations with the same name, for example, one configured on subaccount level and
the other on service instance/subscription level, the getConfiguration() method will return the
destination on instance/subscription level.
1. Instance/subscription level
2. Subaccount level
5. If a trust store and a key store are defined in the corresponding destination, you can access them by using
the methods getKeyStore and getTrustStore.
Sample Code
// create an SSLContext from the trust store and key store of the destination
TrustManagerFactory tmf =
    TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
tmf.init(trustStore);
KeyManagerFactory keyManagerFactory =
    KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
// the key store password (char[]) is an assumption; use the password configured for your key store
keyManagerFactory.init(keyStore, keyStorePassword);
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(keyManagerFactory.getKeyManagers(), tmf.getTrustManagers(), null);
Use the AuthenticationHeaderProvider API for applications in the Cloud Foundry environment to retrieve
prepared authentication headers that are ready to be used towards a remote target system.
Overview
The AuthenticationHeaderProvider API is visible by default from the web applications hosted on SAP
Java Buildpack. You can access it via a JNDI lookup.
This API lets your applications use their own managed HTTP clients, as it provides them with automated
authentication token retrieval, making it easy to implement single sign-on (SSO) towards the remote target.
Procedure
1. To consume the authentication header provider API using JNDI, you need to define the
AuthenticationHeaderProvider API as a resource in the context.xml file.
Example of an AuthenticationHeaderProvider resource named myAuthHeaderProvider, which is
described in the context.xml file:
Sample Code
<Context>
  <Resource name="myAuthHeaderProvider" auth="Container"
            type="com.sap.core.connectivity.api.authentication.AuthenticationHeaderProvider"
            factory="com.sap.core.connectivity.api.jndi.ServiceObjectFactory"/>
</Context>
Sample Code
import javax.naming.Context;
import javax.naming.InitialContext;
import
com.sap.core.connectivity.api.authentication.AuthenticationHeaderProvider;
...
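As above, the lookup code that follows these imports is not part of this extract; a minimal sketch, assuming the resource name myAuthHeaderProvider from the context.xml above (exception handling omitted):
Sample Code
// JNDI lookup of the AuthenticationHeaderProvider API
Context ctx = new InitialContext();
AuthenticationHeaderProvider authHeaderProvider =
        (AuthenticationHeaderProvider) ctx.lookup("java:comp/env/myAuthHeaderProvider");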
The Connectivity service supports a mechanism to enable SSO using the so-called principal propagation
authentication type of a destination configuration. To propagate the logged-in user, the application can use the
AuthenticationHeaderProvider API to retrieve a prepared HTTP header, which it then embeds in the HTTP
request to the on-premise system.
Prerequisites
To connect to on-premise systems and perform single sign-on, you must bind a Connectivity service instance
to the cloud application.
References
Example
Sample Code
The Destination service provides support for applications to use the SAML Bearer assertion flow for consuming
OAuth-protected resources. In this way, applications do not need to deal with some of the complexities of
OAuth and can reuse user data from existing identity providers.
Users are authenticated by using a SAML assertion against the configured and trusted OAuth token service.
The SAML assertion is then used to request an access token from an OAuth token service. This access token is
returned by the API and can be injected in the HTTP request to access the remote OAuth-protected resources
via SSO.
Tip
The access tokens are cached by AuthenticationHeaderProvider and are auto-updated: when a
token is about to expire, a new token is created shortly before the expiration of the old one.
The AuthenticationHeaderProvider API provides the following method to retrieve such headers:
List<AuthenticationHeader>
getOAuth2SAMLBearerAssertionHeaders(DestinationConfiguration
destinationConfiguration);
For more information, see also Principal Propagation SSO Authentication for HTTP [page 92].
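A short sketch of how this method might be used together with the ConnectivityConfiguration API described earlier. The destination name is hypothetical, AuthenticationHeader is assumed to expose getName() and getValue(), and exception handling as well as the imports of java.util.List, java.net.URL, and java.net.HttpURLConnection are omitted:
Sample Code
// read the configuration of an OAuth2SAMLBearerAssertion destination (hypothetical name)
DestinationConfiguration samlDestination = configuration.getConfiguration("mySamlBearerDestination");

// retrieve the prepared authentication headers (the access token is cached and auto-updated)
List<AuthenticationHeader> headers =
        authHeaderProvider.getOAuth2SAMLBearerAssertionHeaders(samlDestination);

// add the headers to your own HTTP request towards the OAuth-protected resource
HttpURLConnection connection =
        (HttpURLConnection) new URL(samlDestination.getProperty("URL")).openConnection();
for (AuthenticationHeader header : headers) {
    connection.setRequestProperty(header.getName(), header.getValue());
}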
Client Credentials
The Destination service provides support for applications to use the OAuth client credentials flow for
consuming OAuth-protected resources.
The client credentials are used to request an access token from an OAuth token service.
The access tokens are cached by AuthenticationHeaderProvider and are auto-updated: when a
token is about to expire, a new token is created shortly before the expiration of the old one.
The AuthenticationHeaderProvider API provides a corresponding method to retrieve such headers as well.
Related Information
Use the “Find Destination” API to extend your destination with a destination fragment.
This extension involves merging the JSON object of the destination properties, together with the JSON object
of the fragment properties.
Note
If any fragment property uses the same key name as a destination property, the combined JSON object will
use the value of the fragment property.
The combined JSON of the destination and fragment will be returned in the response body as the value of
the “destinationConfiguration” property. Additionally, this combined JSON will be used for retrieving tokens
from authorization servers, if applicable.
Note
The combination of the destination JSON and the fragment JSON happens in the context of the “Find
Destination” request. The actual destination stored in Destination service is not modified.
Note
For more information on how to create and manage resources like destinations and destination fragments,
see SAP Business Accelerator Hub .
Caution
Only one X-Fragment-Name header may be present in the “Find Destination” request.
Destination:
Sample Code
Name=destination
Type=HTTP
URL=https://xxxx.example.com
ProxyType=Internet
Authentication=NoAuthentication
Destination Fragment:
Sample Code
FragmentName=fragment
example-property=example-value
Sample Code
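The request that produces the response below is a "Find Destination" call that passes the fragment name in the X-Fragment-Name header. A Java sketch, reusing the assumptions of the earlier REST examples (serviceUri and accessToken already obtained, destination name destination and fragment name fragment as configured above):
// "Find Destination" call, extended with the destination fragment "fragment"
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(serviceUri + "/destination-configuration/v1/destinations/destination"))
        .header("Authorization", "Bearer " + accessToken)
        .header("X-Fragment-Name", "fragment")   // only one X-Fragment-Name header is allowed
        .GET()
        .build();
HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
System.out.println(response.body());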
Response:
Sample Code
{
"owner": {
"SubaccountId": <subaccount_id>,
"InstanceId": null
},
"destinationConfiguration": {
"Name": "destination",
"Type": "HTTP",
"URL": "https://xxxx.example.com",
"Authentication": "NoAuthentication",
"ProxyType": "Internet",
"FragmentName": "fragment",
"example-property": "example-value"
}
}
Destination:
Sample Code
Name=destination
Type=HTTP
URL=https://xxxx.example.com
ProxyType=Internet
Authentication=OAuth2ClientCredentials
clientId=clientId
clientSecret=secret1234
tokenServiceURL=https://authserver1.example.com/oauth/token/
Destination Fragment:
FragmentName=fragment
clientId=clientId-2
clientSecret=secret2345
tokenServiceURL=https://authserver2.example.com/oauth/token/
The request is the same as in the previous example: a "Find Destination" call for the destination named destination with the X-Fragment-Name: fragment header.
Response:
Sample Code
{
"owner": {
"SubaccountId": <subaccount_id>,
"InstanceId": null
},
"destinationConfiguration": {
"Name": "destination",
"Type": "HTTP",
"URL": "https://xxxx.example.com",
"Authentication": "OAuth2ClientCredentials",
"ProxyType": "Internet",
"FragmentName": "fragment",
"clientId": "clientId-2",
"clientSecret": "secret2345",
"tokenServiceURL": "https://authserver2.example.com/oauth/token/"
},
"authTokens": [
{
"type": "Bearer",
"value": "eyJhbGciOiJSUzI1NiIsInR5cC...",
"http_header": {
"key":"Authorization",
"value":"Bearer eyJhbGciOiJSUzI1NiIsInR5cC..."
}
}
]
}
Call a remote-enabled function module in an on-premise or cloud ABAP server from your Cloud Foundry
application, using the RFC protocol.
Find the tasks and prerequisites that are required to consume an ABAP function module via RFC, using the
Java Connector (JCo) API as a built-in feature of SAP BTP.
Tasks
Operator
Prerequisites
Before you can use RFC communication for an SAP BTP application, you must configure:
About JCo
To learn in detail about the SAP JCo API, see the JCo 3.0 documentation on SAP Support Portal .
Note
• Architecture: CPIC is only used in the last mile from your Cloud Connector to an on-premise ABAP
backend. From SAP BTP to the Cloud Connector, TLS-protected communication is used.
• Installation: SAP BTP runtimes already include all required artifacts.
• Customizing and Integration: On SAP BTP, the integration is already done by the runtime. You can
concentrate on your business application logic.
• Server Programming: The programming model of JCo on SAP BTP does not include server-side RFC
communication.
• IDoc Support for External Java Applications: Currently, there is no IDocLibrary for JCo available on
SAP BTP.
• For connections to an on-premise ABAP backend, you have downloaded the Cloud Connector installation
archive from SAP Development Tools for Eclipse and connected the Cloud Connector to your subaccount.
• You have downloaded and set up your Eclipse IDE and the Eclipse Tools for Cloud Foundry .
• You have downloaded the Cloud Foundry CLI, see Tools.
You can call a service from a fenced customer network using an application that consumes a remote-enabled
function module.
Invoking function modules via RFC is enabled by a JCo API that is comparable to the one available in SAP
NetWeaver Application Server Java (version 7.10), and in JCo standalone 3.0. If you are an experienced JCo
developer, you can easily develop a Web application using JCo: you simply consume the APIs like you do in
other Java environments. Restrictions that apply in the cloud environment are mentioned in the Restrictions
section below.
Find a sample Web application in Invoke ABAP Function Modules in On-Premise ABAP Systems [page 280].
Restrictions
Note
You must use the Tomcat or TomEE runtime offered by the buildpack to make JCo work correctly. You
cannot bring a container of your own.
• Your application must not bundle the JCo 3.1 standalone Java archives or the native library. JCo is already
embedded properly in the buildpack.
• JCoServer functionality cannot be used within SAP BTP.
• Environment embedding, such as offered by JCo standalone 3.1, is not possible. This is, however, similar
to SAP NetWeaver AS Java.
• A stateful sequence of function module invocations must be done in a single HTTP request/response
cycle.
• Logon authentication only supports user/password credentials (basic authentication) and principal
propagation. See Authentication to the On-Premise System [page 223].
• The supported set of configuration properties is restricted. For details, see RFC Destinations [page 157].
Find instructions for typical RFC end-to-end scenarios that use the Connectivity service and/or the Destination
service (Cloud Foundry environment).
• Invoke ABAP Function Modules in On-Premise ABAP Systems [page 280]: Call a function module in an on-premise ABAP system via RFC, using a sample Web application (Cloud Foundry environment).
• Invoke ABAP Function Modules in Cloud ABAP Systems [page 305]: Call a function module in a cloud ABAP system via RFC, using a sample Web application (Cloud Foundry environment).
• Multitenancy for JCo Applications (Advanced) [page 323]: Learn about the required steps to make your Cloud Foundry JCo application tenant-aware.
• Configure Principal Propagation for RFC [page 334]: Enable single sign-on (SSO) via RFC by forwarding the identity of cloud users from the Cloud Foundry environment to an on-premise system.
Call a function module in an on-premise ABAP system via RFC, using a sample Web application (Cloud Foundry
environment).
This scenario performs a remote function call to invoke an ABAP function module, by using the Connectivity
service and the Destination service in the Cloud Foundry environment, as well as a Cloud Foundry application
router.
Tasks
Developer
Developer
Operator
Scenario Overview
Control Flow for Using the Java Connector (JCo) with Basic Authentication
Note
AppRouter is only required if you want to use multitenancy or perform user-specific service calls. In all
other cases, JCo uses cloud-security-xsuaa-integration with ClientCredentialFlow.
2. Redirect to XSUAA for login. JSON Web Token (JWT1) is sent to AppRouter and cached there.
3. AppRouter calls Web app and sends JWT1 with credentials.
4. Buildpack/XSUAA interaction:
1. Buildpack requests JWT2 to access the Destination service instance (JCo call).
2. Buildpack requests JWT3 to access the Connectivity service instance.
5. Buildpack requests destination configuration (JWT2).
6. Buildpack sends request to the Connectivity service instance (with JWT3 and Authorization Header).
7. Connectivity service forwards request to the Cloud Connector.
8. Cloud Connector sends request to on-premise system.
Since token exchanges are handled by the buildpack, you must only create and bind the service instances, see
Create and Bind Service Instances [page 289].
Different user roles are involved in the cloud to on-premise connectivity end-to-end scenario. The particular
steps for the relevant roles are described below:
IT Administrator
Application Developer
1. Installs Eclipse IDE, the Cloud Foundry Plugin for Eclipse and the Cloud Foundry CLI.
2. Develops a Java EE application using the JCo APIs.
3. Configures connectivity destinations via the SAP BTP cockpit.
4. Deploys and tests the Java EE application on SAP BTP.
Account Operator
Deploys Web applications, creates application routers, creates and binds service instances, conducts tests.
Scenario steps:
• You have downloaded the Cloud Connector installation archive from SAP Development Tools for Eclipse
and connected the Cloud Connector to your subaccount.
• You have downloaded and set up your Eclipse IDE and the Eclipse Tools for Cloud Foundry .
• You have downloaded the Cloud Foundry CLI, see Tools.
Next Steps
Related Information
Steps
1. In the Project Explorer view, right-click on the project jco-demo and choose Configure Convert to
Maven Project .
2. In the dialog window, leave the default settings unchanged and choose Finish.
3. Open the pom.xml file and include the following dependency:
<dependencies>
<dependency>
<groupId>com.sap.cloud</groupId>
<artifactId>neo-java-web-api</artifactId>
<version>[3.71.8,4.0.0)</version>
<scope>provided</scope>
</dependency>
</dependencies>
1. From the jco_demo project node, choose New Servlet in the context menu.
2. Enter com.sap.demo.jco as the <package> and ConnectivityRFCExampleJava as the <Class name>.
Choose Next.
3. Choose Finish to create the servlet and open it in the Java editor.
4. Replace the entire servlet class to make use of the JCo API. The JCo API is visible by default for cloud
applications. You do not need to add it explicitly to the application class path.
Sample Code
package com.sap.demo.jco;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.conn.jco.AbapException;
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoParameterList;
import com.sap.conn.jco.JCoRepository;
/**
 * Sample application that uses the Connectivity service. In particular, it is
 * making use of the capability to invoke a function module in an ABAP system
 * via RFC.
 *
 * Note: The JCo APIs are available under <code>com.sap.conn.jco</code>.
 */
@WebServlet("/ConnectivityRFCExample/*")
public class ConnectivityRFCExample extends HttpServlet {
    private static final long serialVersionUID = 1L;
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter responseWriter = response.getWriter();
        try {
            // The class declaration, destination access and function lookup are reconstructed
            // here, as the original listing is truncated at this point; the destination name
            // JCoDemoSystem matches the destination configured later in this scenario.
            // access the RFC destination "JCoDemoSystem"
            JCoDestination destination = JCoDestinationManager.getDestination("JCoDemoSystem");
            // make an invocation of STFC_CONNECTION in the backend
            JCoRepository repository = destination.getRepository();
            JCoFunction stfcConnection = repository.getFunction("STFC_CONNECTION");
JCoParameterList imports =
stfcConnection.getImportParameterList();
imports.setValue("REQUTEXT", "SAP BTP Connectivity runs with
JCo");
stfcConnection.execute(destination);
JCoParameterList exports =
stfcConnection.getExportParameterList();
String echotext = exports.getString("ECHOTEXT");
String resptext = exports.getString("RESPTEXT");
response.addHeader("Content-type", "text/html");
responseWriter.println("<html><body>");
responseWriter.println("<h1>Executed STFC_CONNECTION in system
JCoDemoSystem</h1>");
responseWriter.println("<p>Export parameter ECHOTEXT of
STFC_CONNECTION:<br>");
responseWriter.println(echotext);
responseWriter.println("<p>Export parameter RESPTEXT of
STFC_CONNECTION:<br>");
responseWriter.println(resptext);
responseWriter.println("</body></html>");
} catch (AbapException ae) {
// just for completeness: As this function module does not
have an exception
// in its signature, this exception cannot occur. But you
should always
// take care of AbapExceptions
} catch (JCoException e) {
response.addHeader("Content-type", "text/html");
responseWriter.println("<html><body>");
responseWriter
.println("<h1>Exception occurred while executing
STFC_CONNECTION in system JCoDemoSystem</h1>");
responseWriter.println("<pre>");
e.printStackTrace(responseWriter);
responseWriter.println("</pre>");
responseWriter.println("</body></html>");
}
}
}
5. Save the Java editor and make sure that the project compiles without errors.
You must create and bind several service instances before you can use your application.
Procedure
Connectivity Service
Destination Service
Note
The instance name must match the one defined in the manifest file.
Sample Code
{
"xsappname" : "jco-demo-p1234",
"tenant-mode": "dedicated",
"scopes": [
{
"name": "$XSAPPNAME.all",
"description": "all"
}
],
"role-templates": [
{
"name": "all",
"description": "all",
"scope-references": [
"$XSAPPNAME.all"
]
}
]
}
Next Steps
Deploy your Cloud Foundry application to call an ABAP function module via RFC.
Prerequisites
You have created and bound the required service instances, see Create and Bind Service Instances [page 289].
Procedure
1. To deploy your Web application, you can use the following two alternative procedures:
• Deploying from the Eclipse IDE
• Deploying from the CLI, see Developing Java in the Cloud Foundry Environment
2. In the following, we publish it with the CLI.
3. To do this, create a manifest.yml file. The key parameter is USE_JCO: true, which must be set to
include JCo into the buildpack during deployment.
Note
JCo supports the usage of X.509 secrets for communication with the Destination/Connectivity service.
If you want to use it, you must specify the binding accordingly.
For more information, see Binding Parameters of SAP Authorization and Trust Management Service.
manifest.yml
Sample Code
---
applications:
- name: jco-demo-p1234
buildpacks:
- sap_java_buildpack
env:
USE_JCO: true
# This is necessary only if more than one instance is bound
xsuaa_connectivity_instance_name: "xsuaa_jco"
connectivity_instance_name: "connectivity_jco"
destination_instance_name: "destination_jco"
Caution
The client libraries (java-security, spring-xsuaa, and container security api for node.js as of version
3.0.6) have been updated. When using these libraries, setting the parameter SAP_JWT_TRUST_ACL has
become obsolete. This update comes with a change regarding scopes:
• For a business application A calling an application B, it is now mandatory that application B grants
at least one scope to the calling business application A.
• Business application A must accept these granted scopes or authorities as part of the application
security descriptor.
For more information, see Application Security Descriptor Configuration Syntax, specifically the
sections Referencing the Application and Authorities.
Note
If you have more than one instance of those three services bound to your application, you must specify
which one JCo should use with the respective env parameters:
• xsuaa_connectivity_instance_name
• connectivity_instance_name
• destination_instance_name.
Next Steps
Configure a role that enables your user to access your Web application.
To add and assign roles, navigate to the subaccount view of the cloud cockpit and choose Security Role
Collections .
7. You should now be able to click Assign Role Collection. Choose role collection all and assign it.
Next Steps
For authentication purposes, configure and deploy an application router for your test application.
Note
AppRouter is only required if you want to use multitenancy or perform user-specific service calls. In all
other cases, JCo uses cloud-security-xsuaa-integration with ClientCredentialFlow.
1. To set up an application router, follow the steps in Application Router or use the demo file approuter.zip
(download).
2. For deployment, you need a manifest file, similar to this one:
Sample Code
---
applications:
- name: approuter-jco-demo-p1234
path: ./
buildpacks:
- nodejs_buildpack
memory: 120M
routes:
- route: approuter-jco-demo-p1234.cfapps.eu10.hana.ondemand.com
env:
NODE_TLS_REJECT_UNAUTHORIZED: 0
destinations: >
[
{"name":"dest-to-example", "url" :"https://jco-
demo-p1234.cfapps.eu10.hana.ondemand.com/ConnectivityRFCExample",
"forwardAuthToken": true }
]
services:
- xsuaa_jco
Note
• The routes and destination URLs need to fit your test application.
• In this example, we already bound our XSUAA instance to the application router. Alternatively, you
could also do this via the cloud cockpit.
Note
<dependency>
<groupId>com.sap.cloud.security</groupId>
<artifactId>java-security</artifactId>
</dependency>
or any of its dependencies such as java-api with scope compile directly or transitively with any
other jar.
If you are using an Application Router and it is mandatory for you to call JCo APIs from a different thread than
the one which is executing your servlet function, make sure the thread-local information of the
cloud-security-xsuaa-integration API, used by JCo internally, is set again within your newly created thread.
<dependency>
<groupId>com.sap.cloud.security</groupId>
<artifactId>java-api</artifactId>
<version>2.7.7</version>
<scope>provided</scope>
</dependency>
Adjust your code from the step Develop a Sample Web Application [page 284] in the following way:
Sample Code
package com.sap.demo.jco;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.cloud.security.token.SecurityContext;
import com.sap.cloud.security.token.Token;
import com.sap.conn.jco.AbapException;
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoParameterList;
import com.sap.conn.jco.JCoRepository;
/**
*
* Sample application that uses the connectivity service. In particular, it is
* making use of the capability to invoke a function module in an ABAP system
* via RFC
*
* Note: The JCo APIs are available under <code>com.sap.conn.jco</code>.
*/
@WebServlet("/ConnectivityRFCExample/*")
public class ConnectivityRFCExample extends HttpServlet {
private static final long serialVersionUID = 1L;
protected void doGet(HttpServletRequest request, HttpServletResponse
response)
throws ServletException, IOException {
PrintWriter responseWriter = response.getWriter();
// access the token from the thread which is executing the servlet
Token token = SecurityContext.getToken();
Thread runThread = new Thread(() -> {
// set the information in the newly created thread
SecurityContext.setToken(token);
try {
// access the RFC Destination "JCoDemoSystem"
JCoDestination destination =
JCoDestinationManager.getDestination("JCoDemoSystem_Normal");
// make an invocation of STFC_CONNECTION in the backend
Note
If you want to use thread pools, make sure that in your thread pool implementation this information is
set correctly in the thread which is about to be (re)used, and removed as soon as the thread is put back into
the pool.
Configure an RFC destination on SAP BTP that you can use in your Web application to call the on-premise
ABAP system.
To configure the destination, you must use a virtual application server host name (abapserver.hana.cloud)
and a virtual system number (42) that you will expose later in the Cloud Connector. Alternatively, you could use
a load balancing configuration with a message server host and a system ID.
Name=JCoDemoSystem
Type=RFC
jco.client.ashost=abapserver.hana.cloud
jco.client.sysnr=42
jco.destination.proxy_type=OnPremise
jco.client.user=<DEMOUSER>
jco.client.passwd=<Password>
jco.client.client=000
jco.client.lang=EN
jco.destination.pool_capacity=5
4. This means that the Cloud Connector denied opening a connection to this system. As a next step, you
must configure the system in your installed Cloud Connector.
Next Steps
Related Information
Configure the system mapping and the function module in the Cloud Connector.
Steps
The Cloud Connector only allows access to trusted backend systems. To configure this, follow the steps below:
Note
The values must match with the ones of the destination configuration in the cloud cockpit.
Example:
6. Summary (example):
4. Call again the URL that references the cloud application in the Web browser. The application should now
throw a different exception:
5. This means that the Cloud Connector denied invoking STFC_CONNECTION in this system. As a final step,
you must provide access to this function module.
The Cloud Connector only allows access to explicitly allowed resources (which, in an RFC scenario, are defined
on the basis of function module names). To configure the function module, follow the steps below:
1. Optional: In the Cloud Connector administration UI, you can check under Monitor Audit whether
access has been denied:
Denying access for user DEMOUSER to resource STFC_CONNECTION on system
abapserver.hana.cloud:sapgw42 [connectionId=609399452]
2. In the Cloud Connector administration UI, choose again Cloud To On-Premise from your Subaccount menu,
and go to tab Access Control.
3. For the specified internal system referring to abapserver.hana.cloud, add a new resource. To do this, select
the system in the table.
4. Add a new function name under the list of exposed resources. In section Resources Accessible On
abapserver.hana.cloud:sapgw42, choose the Add button and specify STFC_CONNECTION as accessible
resource. Make sure that you have selected the Exact Name option to only expose this specific function module.
Monitor the state and logs of your Web application deployed on SAP BTP, using the Application Logging
service.
For this purpose, create an instance of the Application Logging service (as you did for the Destination and
Connectivity service) and bind it to your application, see Create and Bind Service Instances [page 289].
To activate JCo logging, set the following property in the env section of your manifest file:
Now you can see and open the logs in the cloud cockpit or in the Kibana Dashboard in the tab Logs, if you are
within your application.
Call a function module in a cloud ABAP system via RFC, using a sample Web application (Cloud Foundry
environment).
This scenario performs a remote function call to invoke an ABAP function module, by using the Destination
service in the Cloud Foundry environment, as well as a Cloud Foundry application router.
Tasks
Developer
Developer
Operator
Operator
Scenario Overview
Control Flow for Using the Java Connector (JCo) with Basic Authentication
Note
AppRouter is only required if you want to use multitenancy or perform user-specific service calls. In all
other cases, JCo uses cloud-security-xsuaa-integration with ClientCredentialFlow.
2. Redirect to XSUAA for login. JSON Web Token (JWT1) is sent to AppRouter and cached there.
3. AppRouter calls Web app and sends JWT1 with credentials.
4. Buildpack/XSUAA interaction: Buildpack requests JWT2 to access the Destination service instance (JCo
call).
5. Buildpack requests destination configuration (JWT2).
6. Buildpack sends request to the cloud ABAP system (with JWT2 and Authorization Header).
Since token exchanges are handled by the buildpack, you must only create and bind the service instances, see
Create and Bind Service Instances [page 289].
Used Values
Different user roles are involved in the cloud-to-cloud connectivity scenario. The particular steps for the
relevant roles are described below:
Application Developer
1. Installs Eclipse IDE, the Cloud Foundry Plugin for Eclipse and the Cloud Foundry CLI.
2. Develops a Java EE application using the JCo APIs.
3. Configures connectivity destinations via the SAP BTP cockpit.
4. Deploys and tests the Java EE application on SAP BTP.
Deploys Web applications, creates application routers, creates and binds service instances, conducts tests.
Scenario steps:
Installation Prerequisites
• You have downloaded and set up your Eclipse IDE and the Eclipse Tools for Cloud Foundry .
• You have downloaded the Cloud Foundry CLI, see Tools.
Next Steps
Related Information
1. In the Project Explorer view, right-click on the project jco-demo and choose Configure Convert to
Maven Project .
2. In the dialog window, leave the default settings unchanged and choose Finish.
3. Open the pom.xml file and include the following dependency:
<dependencies>
<dependency>
<groupId>com.sap.cloud</groupId>
<artifactId>neo-java-web-api</artifactId>
<version>[3.71.8,4.0.0)</version>
<scope>provided</scope>
</dependency>
</dependencies>
1. From the jco_demo project node, choose New Servlet in the context menu.
2. Enter com.sap.demo.jco as the <package> and ConnectivityRFCExampleJava as the <Class
name>. Choose Next.
3. Choose Finish to create the servlet and open it in the Java editor.
4. Replace the entire servlet class to make use of the JCo API. The JCo API is visible by default for cloud
applications. You do not need to add it explicitly to the application class path.
Sample Code
package com.sap.demo.jco;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.conn.jco.AbapException;
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoParameterList;
import com.sap.conn.jco.JCoRepository;
/**
 * Sample application that makes use of the capability to invoke a
 * function module in an ABAP system via RFC.
 *
 * Note: The JCo APIs are available under <code>com.sap.conn.jco</code>.
 */
@WebServlet("/ConnectivityRFCExample/*")
public class ConnectivityRFCExample extends HttpServlet {
    private static final long serialVersionUID = 1L;
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter responseWriter = response.getWriter();
        try {
            // The class declaration, destination access and function lookup are reconstructed
            // here, as the original listing is truncated at this point; JCoDemoSystem is the
            // destination configured later in this scenario.
            JCoDestination destination = JCoDestinationManager.getDestination("JCoDemoSystem");
            JCoRepository repository = destination.getRepository();
            JCoFunction stfcConnection = repository.getFunction("STFC_CONNECTION");
JCoParameterList imports =
stfcConnection.getImportParameterList();
imports.setValue("REQUTEXT", "SAP BTP Connectivity runs with
JCo");
stfcConnection.execute(destination);
JCoParameterList exports =
stfcConnection.getExportParameterList();
String echotext = exports.getString("ECHOTEXT");
String resptext = exports.getString("RESPTEXT");
response.addHeader("Content-type", "text/html");
responseWriter.println("<html><body>");
responseWriter.println("<h1>Executed STFC_CONNECTION in system
JCoDemoSystem</h1>");
responseWriter.println("<p>Export parameter ECHOTEXT of
STFC_CONNECTION:<br>");
responseWriter.println(echotext);
responseWriter.println("<p>Export parameter RESPTEXT of
STFC_CONNECTION:<br>");
responseWriter.println(resptext);
responseWriter.println("</body></html>");
} catch (AbapException ae) {
// just for completeness: As this function module does not
have an exception
// in its signature, this exception cannot occur. But you
should always
// take care of AbapExceptions
} catch (JCoException e) {
response.addHeader("Content-type", "text/html");
responseWriter.println("<html><body>");
responseWriter
.println("<h1>Exception occurred while executing
STFC_CONNECTION in system JCoDemoSystem</h1>");
responseWriter.println("<pre>");
e.printStackTrace(responseWriter);
responseWriter.println("</pre>");
responseWriter.println("</body></html>");
}
}
}
5. Save the Java editor and make sure that the project compiles without errors.
You must create and bind several service instances before you can use your application.
Procedure
Note
The instance name must match the one defined in the manifest file.
Sample Code
{
"xsappname" : "jco-demo-p1234",
"tenant-mode": "dedicated",
"scopes": [
{
"name": "$XSAPPNAME.all",
"description": "all"
}
],
"role-templates": [
{
"name": "all",
"description": "all",
"scope-references": [
"$XSAPPNAME.all"
]
}
]
}
Deploy your Cloud Foundry application to call an ABAP function module via RFC.
Prerequisites
You have created and bound the required service instances, see Create and Bind Service Instances [page 289].
Procedure
1. To deploy your Web application, you can use the following two alternative procedures:
• Deploying from the Eclipse IDE
• Deploying from the CLI, see Developing Java in the Cloud Foundry Environment
2. In the following, we publish it with the CLI.
3. To do this, create a manifest.yml file. The key parameter is USE_JCO: true, which must be set to
include JCo into the buildpack during deployment.
Note
JCo supports the usage of X.509 secrets for communication with the Destination/Connectivity service.
If you want to use it, you must specify the binding accordingly.
For more information, see Binding Parameters of SAP Authorization and Trust Management Service.
manifest.yml
Sample Code
---
applications:
- name: jco-demo-p1234
buildpacks:
- sap_java_buildpack
env:
USE_JCO: true
Caution
The client libraries (java-security, spring-xsuaa, and container security api for node.js as of version
3.0.6) have been updated. When using these libraries, setting the parameter SAP_JWT_TRUST_ACL has
become obsolete. This update comes with a change regarding scopes:
• For a business application A calling an application B, it is now mandatory that application B grants
at least one scope to the calling business application A.
• Business application A must accept these granted scopes or authorities as part of the application
security descriptor.
For more information, see Application Security Descriptor Configuration Syntax, specifically the
sections Referencing the Application and Authorities.
Note
If you have more than one instance of those three services bound to your application, you must specify
which one JCo should use with the respective env parameters:
• xsuaa_connectivity_instance_name
• connectivity_instance_name
• destination_instance_name.
Next Steps
Configure a role that enables your user to access your Web application.
To add and assign roles, navigate to the subaccount view of the cloud cockpit and choose Security Role
Collections .
7. You should now be able to click Assign Role Collection. Choose role collection all and assign it.
Next Steps
Related Information
For authentication purposes, configure and deploy an application router for your test application.
Note
AppRouter is only required if you want to use multitenancy or perform user-specific service calls. In all
other cases, JCo uses cloud-security-xsuaa-integration with ClientCredentialFlow.
1. To set up an application router, follow the steps in Application Router or use the demo file approuter.zip
(download).
2. For deployment, you need a manifest file, similar to this one:
Sample Code
---
applications:
- name: approuter-jco-demo-p1234
path: ./
buildpacks:
- nodejs_buildpack
memory: 120M
routes:
- route: approuter-jco-demo-p1234.cfapps.eu10.hana.ondemand.com
env:
NODE_TLS_REJECT_UNAUTHORIZED: 0
destinations: >
[
{"name":"dest-to-example", "url" :"https://jco-
demo-p1234.cfapps.eu10.hana.ondemand.com/ConnectivityRFCExample",
"forwardAuthToken": true }
]
services:
- xsuaa_jco
Note
• The routes and destination URLs need to fit your test application.
• In this example, we already bound our XSUAA instance to the application router. Alternatively, you
could also do this via the cloud cockpit.
5. When choosing the application route, you are requested to login. Provide the credentials known by the IdP
you configured in Roles & Trust.
6. After successful login, you are routed to the test application which is then executed.
7. If the application issues an exception, saying that the JCoDemoSystem destination has not yet been
specified, you must configure the JCoDemoSystem destination first.
Note
<dependency>
<groupId>com.sap.cloud.security</groupId>
<artifactId>java-security</artifactId>
</dependency>
or any of its dependencies such as java-api with scope compile directly or transitively with any other jar.
If you are using an Application Router and it is mandatory for you to call JCo APIs from a different thread than
the one which is executing your servlet function, make sure the thread-local information of the
cloud-security-xsuaa-integration API, used by JCo internally, is set again within your newly created thread.
<dependency>
<groupId>com.sap.cloud.security</groupId>
<artifactId>java-api</artifactId>
<version>2.7.7</version>
<scope>provided</scope>
</dependency>
Adjust your code from the step Develop a Sample Web Application [page 308] in the following way:
Sample Code
package com.sap.demo.jco;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.cloud.security.token.SecurityContext;
import com.sap.cloud.security.token.Token;
import com.sap.conn.jco.AbapException;
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoParameterList;
import com.sap.conn.jco.JCoRepository;
/**
*
* Sample application that uses the connectivity service. In particular, it is
* making use of the capability to invoke a function module in an ABAP system
* via RFC
*
* Note: The JCo APIs are available under <code>com.sap.conn.jco</code>.
*/
@WebServlet("/ConnectivityRFCExample/*")
public class ConnectivityRFCExample extends HttpServlet {
private static final long serialVersionUID = 1L;
protected void doGet(HttpServletRequest request, HttpServletResponse
response)
throws ServletException, IOException {
PrintWriter responseWriter = response.getWriter();
// access the token from the thread which is executing the servlet
Token token = SecurityContext.getToken();
Thread runThread = new Thread(() -> {
// set the information in the newly created thread
SecurityContext.setToken(token);
try {
// access the RFC Destination "JCoDemoSystem"
JCoDestination destination = JCoDestinationManager.getDestination("JCoDemoSystem_Normal");
// make an invocation of STFC_CONNECTION in the backend
// Note: the remainder of the original listing is truncated in this excerpt; the following
// lines are a minimal completion of the example.
JCoRepository repo = destination.getRepository();
JCoFunction stfcConnection = repo.getFunction("STFC_CONNECTION");
JCoParameterList imports = stfcConnection.getImportParameterList();
imports.setValue("REQUTEXT", "SAP BTP Connectivity runs with JCo");
stfcConnection.execute(destination);
JCoParameterList exports = stfcConnection.getExportParameterList();
responseWriter.println("STFC_CONNECTION returned ECHOTEXT: " + exports.getString("ECHOTEXT"));
} catch (AbapException ae) {
// the function module raised an A-, E-, or X-message
responseWriter.println(ae);
} catch (JCoException je) {
responseWriter.println(je);
}
});
runThread.start();
try {
// wait for the worker thread before completing the servlet response
runThread.join();
} catch (InterruptedException ie) {
Thread.currentThread().interrupt();
}
}
}
Note
If you want to use thread pools, make sure that in your thread pool implementation this information is set correctly in the thread which is about to be (re)used, and removed as soon as the thread is put back into the pool.
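The following sketch (not part of the official documentation) illustrates this for a simple thread pool: the token is captured in the servlet thread, set in the pooled thread before the JCo call, and removed again afterwards. It assumes the SecurityContext class used above; the clearToken method is an assumption based on the java-security library and should be verified against the library version you use.
Sample Code
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import com.sap.cloud.security.token.SecurityContext;
import com.sap.cloud.security.token.Token;

public final class TokenAwareExecutor {
    private final ExecutorService delegate = Executors.newFixedThreadPool(4);

    public void submitWithToken(Runnable jcoTask) {
        // capture the token in the calling (servlet) thread
        Token token = SecurityContext.getToken();
        delegate.submit(() -> {
            // set the information in the (re)used pool thread before running JCo code
            SecurityContext.setToken(token);
            try {
                jcoTask.run();
            } finally {
                // remove it again before the thread is put back into the pool
                // (clearToken is assumed to be available in the used security library version)
                SecurityContext.clearToken();
            }
        });
    }
}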
Configure an RFC destination on SAP BTP that you can use in your Web application to call the cloud ABAP
system.
To configure the destination, you must use a WebSocket application server host name
(<id>.abap.eu10.hana.ondemand.com) and a WebSocket port (443).
Name=JCoDemoSystem
Type=RFC
jco.client.wshost= <id>.abap.eu10.hana.ondemand.com
jco.client.wsport=443
jco.client.alias_user=<DEMOUSER>
jco.client.passwd=<Password>
jco.client.client=100
jco.client.lang=EN
jco.destination.pool_capacity=5
jco.destination.proxy_type=Internet
Next Steps
Related Information
Monitor the state and logs of your Web application deployed on SAP BTP, using the Application Logging
service.
For this purpose, create an instance of the Application Logging service (as you did for the Destination service)
and bind it to your application, see Create and Bind Service Instances [page 289].
To activate JCo logging, set the following property in the env section of your manifest file:
You can then see and open the logs in the cloud cockpit (on the Logs tab of your application) or in the Kibana dashboard.
For error analysis, you can activate more detailed JCo debug tracing:
Learn about the required steps to make your Cloud Foundry JCo application tenant-aware.
Using this procedure, you can enable the sample JCo application created in Invoke ABAP Function Modules in
On-Premise ABAP Systems [page 280] or Invoke ABAP Function Modules in Cloud ABAP Systems [page 305],
for multitenancy.
Steps
Prerequisites
• Your runtime environment uses SAP Java Buildpack version 1.9.0 or higher.
• You have successfully completed one of these procedures:
Invoke ABAP Function Modules in On-Premise ABAP Systems [page 280]
Invoke ABAP Function Modules in Cloud ABAP Systems [page 305]
• You have created a second subaccount (in the same global account) that you use to subscribe to your application.
Sample Code
manifest.yml
---
applications:
- name: approuter-jco-demo-p42424242
path: ./
buildpacks:
- nodejs_buildpack
memory: 120M
routes:
- route: approuter-jco-demo-p42424242.cfapps.eu10.hana.ondemand.com
env:
TENANT_HOST_PATTERN: "^(.*).approuter-jco-demo-
p42424242.cfapps.eu10.hana.ondemand.com"
NODE_TLS_REJECT_UNAUTHORIZED: 0
destinations: >
[
{"name":"dest-to-example", "url" :"https://jco-
demo-p42424242.cfapps.eu10.hana.ondemand.com/ConnectivityRFCExample",
"forwardAuthToken": true }
]
services:
- xsuaa_jco
To call the XSUAA in a tenant-aware way, you must adjust the configuration JSON file. The tenant mode must
now have the value "shared". Also, you must allow calling the previously defined REST APIs (callbacks).
Sample Code
{
"xsappname" : "jco-demo-p42424242",
"tenant-mode": "shared",
"scopes": [
{
"name":"$XSAPPNAME.Callback",
"description":"With this scope set, the callbacks for tenant
onboarding, offboarding and getDependencies can be called.",
"grant-as-authority-to-apps":[
"$XSAPPNAME(application,sap-provisioning,tenant-onboarding)"
]
},
{
"name": "$XSAPPNAME.access",
"description": "app access"
},
{
"name": "uaa.user",
"description": "uaa.user"
}
],
"role-templates": [
{
"name":"MultitenancyCallbackRoleTemplate",
"description":"Call callback-services of applications",
"scope-references":[
"$XSAPPNAME.Callback"
]
},
{
"name": "UAAaccess",
"description": "UAA user access",
"scope-references": [
"uaa.user",
"$XSAPPNAME.access"
]
}
]
}
Add Roles
1. In the cloud cockpit, navigate to the subaccount view and go to the Role Collections tab under Security (see step Configure Roles and Trust [page 294] from the previous procedure).
2. Click on the role collection name.
3. Choose Add Role.
4. In the popup window, select the demo application as <Application Identifier>.
First, to make the application subscribable, it must provide at least the following REST APIs (callbacks):
• getDependencies: returns the services the application depends on (GET /callback/v1.0/dependencies)
• onSubscription: called when a tenant subscribes or unsubscribes (PUT or DELETE /callback/v1.0/tenants/{tenantId})
In our sample application, we implement a new servlet for each of these APIs.
Sample Code
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.8.5</version>
</dependency>
<dependency>
<groupId>com.unboundid.components</groupId>
<artifactId>json</artifactId>
<version>1.0.0</version>
</dependency>
<dependency>
<groupId>javax.ws.rs</groupId>
<artifactId>javax.ws.rs-api</artifactId>
<version>2.1.1</version>
</dependency>
GET Dependencies
The current JCo dependencies are the Connectivity and Destination service. Thus, the GET API must return
information about these two services:
Sample Code
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.json.JSONException;
import org.json.JSONObject;
import com.google.gson.Gson;
import javax.ws.rs.core.MediaType;
@WebServlet("/callback/v1.0/dependencies")
public class GetDependencyServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
// Note: the original method body is truncated in this excerpt. It assembles a JSON
// array describing the Connectivity and Destination service dependencies and writes
// it to the response.
response.setStatus(200);
response.setContentType(MediaType.APPLICATION_JSON);
}
}
DependantServiceDto.java
EnvironmentVariableAccessor.java
import java.text.MessageFormat;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
/**
* Methods for extracting configurations from the environment variables
*/
public final class EnvironmentVariableAccessor
{
public static final String BEARER_WITH_TRAILING_SPACE="Bearer ";
private static final String VCAP_SERVICES=System.getenv("VCAP_SERVICES");
private static final String VCAP_SERVICES_CREDENTIALS="credentials";
private static final String VCAP_SERVICES_NAME="name";
private static final String
PROP_XSUAA_CONNECTIVITY_INSTANCE_NAME="xsuaa_connectivity_instance_name";
private static final String DEFAULT_XSUAA_CONNECTIVITY_INSTANCE_NAME="conn-
xsuaa";
private EnvironmentVariableAccessor()
{
}
/**
* Returns service credentials for a given service from VCAP_SERVICES
*
* @see <a href= "https://docs.run.pivotal.io/devguide/deploy-apps/
environment-variable.html#VCAP-SERVICES">VCAP_SERVICES</a>
*/
public static JSONObject getServiceCredentials(String serviceName) throws JSONException
{
return new JSONObject(VCAP_SERVICES).getJSONArray(serviceName).getJSONObject(0).getJSONObject(VCAP_SERVICES_CREDENTIALS);
}
/**
* Returns service credentials for a given service instance from VCAP_SERVICES
*
* @see <a href="https://docs.run.pivotal.io/devguide/deploy-apps/environment-variable.html#VCAP-SERVICES">VCAP_SERVICES</a>
*/
public static JSONObject getServiceCredentials(String serviceName, String serviceInstanceName) throws JSONException
{
// Note: the original method body is truncated in this excerpt; the following is a minimal
// completion that selects the instance by its "name" attribute.
JSONArray services = new JSONObject(VCAP_SERVICES).getJSONArray(serviceName);
for (int i = 0; i < services.length(); i++)
{
JSONObject service = services.getJSONObject(i);
if (serviceInstanceName.equals(service.getString(VCAP_SERVICES_NAME)))
{
return service.getJSONObject(VCAP_SERVICES_CREDENTIALS);
}
}
throw new JSONException("No instance named " + serviceInstanceName + " found for service " + serviceName);
}
}
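For illustration only (this usage snippet is not part of the original listing), the helper could be used to read the credentials of the bound Destination service instance from VCAP_SERVICES. The service name "destination" and the attribute names clientid and uri are the usual ones for a Destination service binding and should be verified in your own environment.
Sample Code
import org.json.JSONObject;

public class ReadCredentialsExample {
    public static void main(String[] args) {
        // read the credentials block of the first bound Destination service instance
        JSONObject credentials = EnvironmentVariableAccessor.getServiceCredentials("destination");
        String clientId = credentials.getString("clientid");
        String serviceUrl = credentials.getString("uri");
        System.out.println("Destination service URL: " + serviceUrl + ", client ID: " + clientId);
    }
}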
This API is called whenever a tenant subscribes. In our example, we simply read the received JSON and return the tenant-aware URL of the application router, which points to our application. If a tenant wants to unsubscribe, DELETE currently does nothing.
Sample Code
SubscribeServlet
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.gson.Gson;
// Note: the class declaration and the doPut method of the original listing are truncated in this
// excerpt. doPut reads the subscription payload (PayloadDataDto) with Gson and returns the
// tenant-specific application router URL derived from the subscribed subdomain. The servlet path
// below is taken from the onSubscription callback URL registered with the SaaS Provisioning service.
@WebServlet("/callback/v1.0/tenants/*")
public class SubscribeServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
@Override
protected void doDelete(HttpServletRequest req, HttpServletResponse resp)
throws ServletException, IOException {
// unsubscribing a tenant currently does nothing
super.doDelete(req, resp);
}
}
PayloadDataDto.java
import java.util.Map;
public class PayloadDataDto {
private String subscriptionAppName;
private String subscriptionAppId;
private String subscribedTenantId;
private String subscribedSubdomain;
private String subscriptionAppPlan;
private long subscriptionAppAmount;
private String[] dependantServiceInstanceAppIds = null;
private Map<String, String> additionalInformation;
public PayloadDataDto() {}
public PayloadDataDto(String subscriptionAppName, String subscriptionAppId,
String subscribedTenantId, String subscribedSubdomain, String
subscriptionAppPlan,
Map<String, String> additionalInformation) {
this.subscriptionAppName = subscriptionAppName;
this.subscriptionAppId = subscriptionAppId;
this.subscribedTenantId = subscribedTenantId;
this.subscribedSubdomain = subscribedSubdomain;
this.subscriptionAppPlan = subscriptionAppPlan;
this.additionalInformation = additionalInformation;
}
public String getSubscriptionAppName() {
return subscriptionAppName;
}
public void setSubscriptionAppName(String subscriptionAppName) {
this.subscriptionAppName = subscriptionAppName;
}
public String getSubscriptionAppId() {
return subscriptionAppId;
}
public void setSubscriptionAppId(String subscriptionAppId) {
this.subscriptionAppId = subscriptionAppId;
}
public String getSubscribedTenantId() {
return subscribedTenantId;
}
public void setSubscribedTenantId(String subscribedTenantId) {
this.subscribedTenantId = subscribedTenantId;
}
// further getters and setters (subscribedSubdomain, subscriptionAppPlan, subscriptionAppAmount,
// dependantServiceInstanceAppIds, additionalInformation) are omitted in this excerpt
}
For the subscription of other tenants, your application must have a bound SaaS Provisioning service instance. You can create and bind it using the cockpit:
Sample Code
{
"xsappname" : "jco-demo-p42424242",
"appName" : "JCo-Demo",
"appUrls": {
"getDependencies": "https://jco-demo-
p42424242.cfapps.eu10.hana.ondemand.com/callback/v1.0/dependencies",
"onSubscription" : "https://jco-demo-
p42424242.cfapps.eu10.hana.ondemand.com/callback/v1.0/tenants/{tenantId}"
}
}
5. Choose Next, and select the sample application jco-demo-p42424242 in the drop-down menu to assign the SaaS Provisioning service to it.
6. Choose Next, insert an instance name, for example, saas_jco, and confirm the creation by choosing Finish.
1. To subscribe the new application from a different subaccount, go to Subscriptions in the cockpit:
4. If the subscription was successful, your window should look like this:
1. To call the application with a new tenant, you must create a new route (URL). In the cockpit, choose
Routes New Route :
Enable single sign-on (SSO) via RFC by forwarding the identity of cloud users from the Cloud Foundry
environment to an on-premise system.
You have set up the Cloud Connector and the relevant backend system for principal propagation. For more information, see Configuring Principal Propagation [page 419].
Procedure
1.1.5 Security
Enable single sign-on by forwarding the identity of cloud users to a remote system or service: Principal Propagation [page 168]
Set up and run the Cloud Connector according to the highest security standards: Security Guidelines [page 717]
More Information
SAP BTP Security Recommendations collects information that lets you secure the configuration and operation
of SAP BTP services in your landscape.
If you encounter an issue with this service, we recommend that you follow the procedure below:
For more information about selected platform incidents, see Root Cause Analyses.
In the SAP Support Portal, check the Guided Answers section for SAP BTP. There, you can find solutions for general SAP BTP issues as well as for specific services.
You can report an incident or error through the SAP Support Portal.
To find the relevant component for your SAP BTP Connectivity incident, see Connectivity Support [page 876] (section SAP Support Information).
More Information
Monitor the Cloud Connector from the SAP BTP cockpit and from the Cloud Connector administration UI: Monitoring [page 664]
Troubleshoot connection problems and view different types of logs and traces in the Cloud Connector: Troubleshooting [page 700]
Detailed support information for SAP Connectivity service and the Cloud Connector: Connectivity Support [page 876]
While SAP strives to ensure the highest possible availability of the provided services, true resilience is a two-way collaboration between client and server. It is therefore important to implement applications that use the Connectivity and/or Destination services (or any other services, for that matter) in a resilient way. This page gives suggestions on measures you can take to endure short disruptions, network issues, slowdowns, or other abnormal situations that might arise. By doing this, these applications can withstand such disruptions, ensuring business continuity in the face of underlying issues in the platform.
Note
When you use client libraries like the BTP security library, the Cloud SDK, and so on, many of these recommendations may already be covered. However, we recommend that you double-check these features, as they might require additional configuration.
Caching
Caching is an important pillar of resilience. It is the act of storing data (acquired from an external resource or as a result of a heavy computation) for future use. Cached data must only be kept for a reasonable timespan to avoid the risk of it becoming invalid or outdated. Caching lets you reduce the number of risky or heavy operations that go over the network to an external resource. This reduces the risk of failure and additionally improves the performance of the application.
Note
Be careful what kind of data you cache and for how long. It is especially important to consider the handling
of sensitive data (personal data, security objects, and so on).
Caution
Make sure you limit the size of the caches to avoid memory issues. There are different options when the
maximum number of entities in the cache is reached. For example, you can drop the oldest entity from the
cache in favor of the new one.
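As a minimal sketch (not taken from any SAP library), a size-bounded cache can be built on java.util.LinkedHashMap, which evicts the eldest entry once the configured maximum is exceeded:
Sample Code
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public final class BoundedCache<K, V> {
    private final Map<K, V> entries;

    public BoundedCache(int maxEntries) {
        // access-order = true turns the map into a simple LRU structure
        this.entries = Collections.synchronizedMap(new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                // drop the oldest entry in favor of the new one
                return size() > maxEntries;
            }
        });
    }

    public V get(K key) {
        return entries.get(key);
    }

    public void put(K key, V value) {
        entries.put(key, value);
    }
}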
Token Caching
Caching access tokens for the services is highly recommended for the period of their validity. Access tokens for the services include the timestamp at which the token expires (as per the JWT specification), so tokens can be reused at any time before that timestamp. Keep in mind that tokens are issued for a specific client and in the context of a specific tenant; take this into account when designing the caching mechanism.
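A possible shape of such a cache is sketched below. This is illustrative only: the TokenFetcher abstraction and the safety margin are assumptions, not an SAP API; the expiration is taken from the token response.
Sample Code
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class TokenCache {
    public interface TokenFetcher {
        FetchedToken fetch(String clientId, String tenantSubdomain);
    }
    public record FetchedToken(String value, long expiresInSeconds) {}
    private record CachedToken(String value, Instant expiresAt) {}

    private static final long SAFETY_MARGIN_SECONDS = 60;
    private final Map<String, CachedToken> tokens = new ConcurrentHashMap<>();

    public String getToken(String clientId, String tenantSubdomain, TokenFetcher fetcher) {
        // tokens are issued per client and per tenant, so both are part of the cache key
        String key = clientId + "@" + tenantSubdomain;
        CachedToken cached = tokens.get(key);
        if (cached != null && Instant.now().isBefore(cached.expiresAt().minusSeconds(SAFETY_MARGIN_SECONDS))) {
            return cached.value();
        }
        // fetch a new token (for example from XSUAA); expiresInSeconds comes from the token response
        FetchedToken fresh = fetcher.fetch(clientId, tenantSubdomain);
        tokens.put(key, new CachedToken(fresh.value(), Instant.now().plusSeconds(fresh.expiresInSeconds())));
        return fresh.value();
    }
}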
Destination/Certificate Caching
If you do not expect time-critical changes in the destination configuration, you can set the caching time even higher (for example, an hour). Also, we recommend that you refresh the cache on demand instead of calling the Destination service every 3-5 minutes.
By caching, applications can withstand temporary issues with the service or the network. As a further resilience measure, we also recommend that you keep using the cached value in cases where the cache has expired but retrieval of the updated entities does not succeed.
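The following sketch (illustrative, with a generic loader standing in for the actual Destination service call) combines an expiry time with a fallback to the stale value when a refresh fails:
Sample Code
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public final class StaleTolerantCache<V> {
    private record Entry<T>(T value, Instant fetchedAt) {}

    private final Map<String, Entry<V>> entries = new ConcurrentHashMap<>();
    private final Duration timeToLive;

    public StaleTolerantCache(Duration timeToLive) {
        this.timeToLive = timeToLive;
    }

    public V get(String key, Supplier<V> loader) {
        Entry<V> cached = entries.get(key);
        if (cached != null && Instant.now().isBefore(cached.fetchedAt().plus(timeToLive))) {
            return cached.value();                       // still considered fresh
        }
        try {
            V fresh = loader.get();                      // e.g. a call to the Destination service
            entries.put(key, new Entry<>(fresh, Instant.now()));
            return fresh;
        } catch (RuntimeException e) {
            if (cached != null) {
                return cached.value();                   // refresh failed: fall back to the stale value
            }
            throw e;
        }
    }
}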
Retry Logic
When requests fail (whether it is the token call, the service call, or the business call), it is always a good idea to retry the call within a reasonable timeframe. It also makes sense to have some delay between retries, for example, at least 100 milliseconds, and you can even increase this delay with each subsequent retry. By doing this, you can handle short intermittent issues without business impact.
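A minimal sketch of such a retry loop follows (illustrative; the attempt count and delays are example values). For example, calling it with three attempts and an initial delay of 100 ms results in pauses of 100 ms and 200 ms between attempts.
Sample Code
import java.util.concurrent.Callable;

public final class Retry {
    public static <T> T withRetry(Callable<T> call, int maxAttempts, long initialDelayMillis) throws Exception {
        long delay = initialDelayMillis;               // e.g. start with at least 100 ms
        Exception lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                lastFailure = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2;                        // increase the delay with each subsequent retry
                }
            }
        }
        throw lastFailure;
    }
}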
Pagination
The APIs of the Destination service that return multiple entities support pagination. We highly recommend that you make use of pagination if you use these APIs and have a high number of entities (more than 100). By doing this, you ease the pressure on the service and also make processing lighter on the client side, as the list can be handled in batches. See more about pagination in the documentation of the Destination Service REST API [page 80].
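As an illustrative sketch, destinations can be read page by page as shown below. The $page and $pageSize query parameters are the pagination parameters described in the Destination Service REST API documentation; serviceUrl and accessToken are placeholders you must provide, and the stop condition here simply checks for an empty result.
Sample Code
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;

public final class PagedDestinationReader {
    public static List<String> readAllPages(String serviceUrl, String accessToken) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        List<String> pages = new ArrayList<>();
        int pageSize = 100;
        for (int page = 1; ; page++) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(serviceUrl + "/destination-configuration/v1/subaccountDestinations"
                            + "?$page=" + page + "&$pageSize=" + pageSize))
                    .header("Authorization", "Bearer " + accessToken)
                    .GET()
                    .build();
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            if (body == null || body.isBlank() || "[]".equals(body.trim())) {
                break;                                 // no further entries
            }
            pages.add(body);                           // process each batch separately
        }
        return pages;
    }
}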
Timeouts
Connect and read timeouts help you keep your application resources from getting stuck in abnormally long
processing. In some cases, one attempt to get data might get stuck, but abandoning it and trying again might
immediately succeed.
We recommend that you use appropriate connect and read timeouts when communicating with the services. The connect timeout should be on the low end, a couple of seconds. The read timeout depends on the semantics of the call being made: a call to the Destination service REST API does not require a high read timeout; around half a minute is sufficient, and the same is true for token retrieval calls. For business calls, including calls to on-premise systems via the Connectivity service, the timeout depends on the type of call and is heavily scenario-specific.
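A sketch using the standard Java HTTP client with the suggested values (a few seconds to connect, about half a minute for a Destination service or token call); the concrete numbers are examples, not prescriptions:
Sample Code
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public final class TimeoutExample {
    public static String call(String url) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(3))      // connect timeout: keep it low
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .timeout(Duration.ofSeconds(30))            // read/request timeout: call-specific
                .GET()
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}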
Circuit Breaker
The circuit-breaker architecture pattern can be a powerful tool for handling a misbehaving dependency. It reduces the effort spent by your application on communicating with a service that is identified as not working or unstable, while waiting for the dependency to come back online.
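Libraries such as Resilience4j provide ready-made circuit breakers; the following minimal sketch only illustrates the idea (the threshold and cool-down values are examples):
Sample Code
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.Callable;

public final class CircuitBreaker {
    private final int failureThreshold;
    private final Duration openDuration;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    public CircuitBreaker(int failureThreshold, Duration openDuration) {
        this.failureThreshold = failureThreshold;
        this.openDuration = openDuration;
    }

    public synchronized <T> T call(Callable<T> action) throws Exception {
        if (openedAt != null) {
            if (Instant.now().isBefore(openedAt.plus(openDuration))) {
                // circuit is open: fail fast instead of calling the unstable dependency
                throw new IllegalStateException("Circuit is open - dependency currently treated as unavailable");
            }
            openedAt = null;                  // cool-down elapsed: allow a trial call (half-open)
        }
        try {
            T result = action.call();
            consecutiveFailures = 0;          // success closes the circuit again
            return result;
        } catch (Exception e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now();     // too many consecutive failures: open the circuit
            }
            throw e;
        }
    }
}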
The connectivity-proxy Kyma module enables secure tunneling between the Kyma environment and on-
premise systems. It supports both on-premise-to-cloud and cloud-to-on-premise scenarios. It requires the
btp-operator module and Istio sidecar proxy injection for proper functionality.
Prerequisites
• The service plan "connectivity_proxy" of the "connectivity" service is assigned to your subaccount as an
entitlement.
For more information, see Configure Entitlements and Quotas for Subaccounts.
Note
For subaccounts created after February 15, 2024, this entitlement is assigned automatically.
Context
The connectivity-proxy Kyma module installs the components needed to establish a secure tunnel between the Kyma environment and the systems in your on-premise network, exposed via Cloud Connector [page 343]. The connectivity proxy is a standard Kyma module. You can enable the module as described in Add and Delete a Kyma Module.
The Kubernetes resource that facilitates this installation in the background is a ConnectivityProxy
resource. This is a custom resource, defined by the module. It holds the configuration for the connectivity
proxy components.
Note
Not all configuration options of the connectivity proxy are supported by the Kyma module.
Do not create additional resources of type ConnectivityProxy. Only one connectivity proxy installation
per cluster is supported.
Result
The module will be enabled and will result in an installation of the Connectivity Proxy for Kubernetes [page 738]
and its supporting workloads.
As part of this installation, a ServiceInstance and a ServiceBinding resource will be created in the
cluster. These are BTP Operator resources which result in the creation of a Connectivity service instance
with service plan connectivity_proxy, and in a service binding in your subaccount. This is needed to pair the
connectivity proxy with the SAP BTP Connectivity service.
Caution
Do not interact with the created service instance and service binding, as they are only meant for usage by
the connectivity proxy. Any modification may cause a loss of functionality.
The Ingress endpoint is propagated transparently for cloud-to-on-premise scenarios. For on-premise-to-cloud
connections, you must look it up manually. You can find it in the created Gateway resource of the kyma-system
namespace.
Capabilities
If no settings are modified, the following specifics and features are enabled in the connectivity-proxy Kyma module:
Limitations
The following features of the connectivity proxy are not available via the connectivity-proxy Kyma module:
As part of the installation, a special ConfigMap will be created in the kyma-system namespace, called
connectivity-proxy-info. It contains the host of the connectivity proxy and all its proxy ports.
For more information, see Using the Connectivity Proxy [page 794].
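For illustration, a Java workload in the cluster could route an HTTP call to an on-premise virtual host through the connectivity proxy as sketched below. The proxy host and port shown are assumptions; read the actual values from the connectivity-proxy-info ConfigMap. The virtual host virtual-host.example:8080 is a placeholder for an entry in your Cloud Connector access control.
Sample Code
import java.net.InetSocketAddress;
import java.net.ProxySelector;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public final class OnPremiseCallExample {
    public static void main(String[] args) throws Exception {
        // host and port are assumed values - take the real ones from the connectivity-proxy-info ConfigMap
        HttpClient client = HttpClient.newBuilder()
                .proxy(ProxySelector.of(new InetSocketAddress(
                        "connectivity-proxy.kyma-system.svc.cluster.local", 20003)))
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                // virtual host as exposed in the Cloud Connector access control (placeholder)
                .uri(URI.create("http://virtual-host.example:8080/"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Response status: " + response.statusCode());
    }
}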
Caution
Every workload using the connectivity proxy to call the on-premise system must have the Istio sidecar
proxy injection enabled. Otherwise, the connectivity proxy does not work correctly.
Learn more about the Cloud Connector: features, scenarios and setup.
Note
This documentation refers to SAP BTP, Cloud Foundry environment. If you are looking for information
about the Neo environment, see Connectivity for the Neo Environment.
Context
The Cloud Connector:
• Lets you use the features that are required for business-critical enterprise scenarios.
• Recovers broken connections automatically.
• Provides audit logging of inbound traffic and configuration changes.
• Can be run in a high-availability setup.
Caution
The Cloud Connector must not be used to connect to products other than SAP BTP or S/4HANA Cloud.
Advantages
Compared to the approach of opening ports in the firewall and using reverse proxies in the DMZ to establish
access to on-premise systems, the Cloud Connector offers the following benefits:
• You don't need to configure the on-premise firewall to allow external access from SAP BTP to internal
systems. For allowed outbound connections, no modifications are required.
• The Cloud Connector supports HTTP as well as additional protocols. For example, the RFC protocol
supports native access to ABAP systems by invoking function modules.
• You can use the Cloud Connector to connect on-premise databases or BI tools to SAP HANA databases in
the cloud.
Basic Scenarios
Note
This section refers to the Cloud Connector installation in a standard on-premise network. Find setup
options for other system environments in Extended Scenarios [page 348].
1. Install and configure the Cloud Connector: Installation [page 349], Initial Configuration [page 388],
Managing Subaccounts [page 404]
2. Access HANA databases on SAP BTP: Configure a Service Channel for an SAP HANA Database
3. Connect on-premise database or BI tools to a HANA database on SAP BTP: Connect DB Tools to SAP
HANA via Service Channels [page 612]
Extended Scenarios
Besides the standard setup: SAP BTP - Cloud Connector - on-premise system/network, you can also use
the Cloud Connector to connect SAP BTP applications to other cloud-based environments, as long as they
are operated in a way that is comparable to an on-premise network from a functional perspective. This is
particularly true for infrastructure (IaaS) hosting solutions.
Here's an overview of all environments in which you can or cannot set up the Cloud Connector:
Can be set up in:
• Customer on-premise network (see Basic Scenarios [page 346]): SAP ERP, SAP S/4HANA
• SAP hosting: SAP HANA Enterprise Cloud (HEC)
• Third-party IaaS providers (hosting): Amazon Web Services (AWS), Microsoft Azure, Google Cloud
Cannot be set up in:
• SAP SaaS solutions: SAP SuccessFactors, SAP Concur, SAP Ariba
Note
Within extended scenarios that allow a Cloud Connector setup, special procedures may apply for configuration. If so, they are mentioned in the corresponding configuration steps.
Basic Tasks
The following steps are required to connect the Cloud Connector to your SAP BTP subaccount:
Follow the SAP BTP Release Notes to stay informed about Cloud Connector and Connectivity updates.
Related Information
1.2.1 Installation
On Microsoft Windows and Linux, two installation modes are available: a portable version and an
installer version. On Mac OS X, only the portable version is available.
• Portable version: can be installed easily, by extracting a compressed archive into an empty directory. It
does not require administrator or root privileges for the installation, and you can run multiple instances on
the same host.
Restrictions:
• You cannot run it in the background as a Windows Service or Linux daemon (with automatic start
capabilities at boot time).
• The portable version does not support an automatic upgrade procedure. To update a portable
installation, you must delete the current one, extract the new version, and then re-do the configuration.
• Portable versions are meant for non-productive scenarios only.
• The environment variable JAVA_HOME is relevant when starting the instance, and therefore must be set
properly.
Note
We strongly recommend that you use the installer version for a productive setup.
Caution
Tomcat is a well-known and well-documented server, whose configuration can be adapted to one's
own needs. However, we strongly recommend that you do not modify its configuration files like
conf\server.xml with a text editor, because the Cloud Connector is making assumptions about the
content of those files.
If you still want to do custom modifications, you do so at your own risk. In this case, we can no longer
guarantee that the Cloud Connector keeps working as expected. For any issues that can be traced back to
such changes, we cannot provide support, in particular, if such changes cause trouble during upgrade to a
newer version.
Prerequisites
• There are some general prerequisites you must fulfill to successfully install the Cloud Connector, see
Prerequisites [page 351].
• For OS-specific requirements and procedures, see section Tasks below.
Tasks
Related Information
Content
Connectivity Restrictions [page 351]: General information about SAP BTP and connectivity restrictions.
JDKs [page 352]: Java Development Kit (JDK) versions that you can use.
Product Availability Matrix [page 353]: Availability of operating systems/versions for specific Cloud Connector versions.
Network [page 354]: Required Internet connection to SAP BTP hosts per region.
Note
For additional system requirements, see also System Requirements [page 367].
Connectivity Restrictions
For general information about SAP BTP restrictions, see Prerequisites and Restrictions.
For specific information about all Connectivity restrictions, see Connectivity: Restrictions [page 6].
Hardware
CPU: minimum: single core 3 GHz, x86-64 architecture compatible; recommended: dual core 2 GHz, x86-64 architecture compatible
Memory (RAM): minimum: 2 GB; recommended: 4 GB
Software
• You have downloaded the Cloud Connector installation archive from SAP Development Tools for Eclipse.
• A full JDK must be installed. Lightweight JRE installations are not sufficient. You can download a fitting
up-to-date SAP JVM from SAP Development Tools for Eclipse.
Caution
Do not use Apache Portable Runtime (APR) on the system on which you use the Cloud Connector.
If you cannot avoid this restriction and want to use APR at your own risk, you must manually adapt the
server.xml configuration file in directory <scc_installation_folder>/conf. To do so, follow the
steps in HTTPS port configuration for APR.
JDKs
SUSE Linux Enterprise Server 12, Red Hat Enterprise Linux 7: x86_64, 2.5.1 and higher
SUSE Linux Enterprise Server 12, SUSE Linux Enterprise Server 15, Red Hat Enterprise Linux 7, Red Hat Enterprise Linux 8: ppc64le, 2.13.0 and higher
Windows 11, Red Hat Enterprise Linux 9: x86_64, 2.15.0 and higher
You must have Internet connection at least to the following Connectivity service hosts (depending on the
region), to which you can connect your Cloud Connector. All connections to the hosts are TLS-based and
connect to port 443.
Remember
For some solutions of the BTP portfolio, you must include additional hosts to set up an on-premise connectivity scenario with the Cloud Connector. This applies, for example, to: SAP Data Intelligence, SAP HANA Cloud, Business Application Studio, and SAP Build Apps. Check the respective solution documentation for details.
Note
For general information on IP ranges per region, see Regions (Cloud Foundry and ABAP environment) or
Regions and Hosts Available for the Neo Environment. Find detailed information about the region status
and planned network updates on Platform Updates and Notifications.
Note
In the Cloud Foundry environment, IPs are controlled by the respective IaaS provider - Amazon Web Services (AWS),
Microsoft Azure (Azure), or Google Cloud. IPs may change due to network updates on the provider side. Any planned
changes will be announced at least 4 weeks before they take effect. See also Regions.
Caution
Additional IP addresses were added to this region (valid after March 31, 2024): 18.159.31.22, 3.69.186.98, 3.77.195.119
connectivitytunnel.cf.eu10-002.hana.ondemand.com: 3.64.227.236, 3.126.229.22, 18.193.180.19 (additional IP addresses, valid after March 31, 2024: 18.153.123.11, 3.121.37.195, 3.73.215.90)
connectivitytunnel.cf.eu10-003.hana.ondemand.com: 3.127.77.3, 3.64.196.58, 18.156.151.247 (additional IP addresses, valid after March 31, 2024: 18.197.252.154, 3.79.137.29, 52.58.93.50)
connectivitytunnel.cf.eu10-004.hana.ondemand.com: 3.65.185.47, 3.70.38.218, 18.196.206.8 (additional IP addresses, valid after March 31, 2024: 3.73.109.100, 3.73.8.210, 52.59.18.183)
Additional IP addresses (valid after March 31, 2024): 3.66.26.249, 3.72.216.204, 3.74.99.245
Additional IP addresses (valid after March 31, 2024): 18.213.242.208, 3.214.110.153, 34.205.56.51
connectivitytunnel.cf.us10-001.hana.ondemand.com: 3.220.114.17, 3.227.182.44, 52.86.131.53 (additional IP addresses, valid after March 31, 2024: 44.218.82.203, 44.219.57.163, 50.16.106.103)
connectivitytunnel.cf.us10-002.hana.ondemand.com: 34.202.68.0, 54.234.152.59, 107.20.66.86 (additional IP addresses, valid after March 31, 2024: 3.214.116.95, 54.144.230.36, 54.226.37.104)
connectivitytunnel.cf.br10.hana.ondemand.com: 18.229.91.150, 52.67.135.4, 54.232.179.204 (additional IP addresses, valid after March 31, 2024: 18.228.53.198, 52.67.149.240, 54.94.179.209)
Additional IP addresses (valid after March 31, 2024): 18.178.155.134, 57.180.140.5, 57.180.145.179
Additional IP addresses (valid after March 31, 2024): 13.55.188.95, 3.105.212.249, 3.106.45.106
Additional IP addresses (valid after March 31, 2024): 13.229.158.122, 18.140.228.217, 52.74.215.89
connectivitytunnel.cf.ap12.hana.ondemand.com: 3.35.255.45, 3.35.106.215, 3.35.215.12 (additional IP addresses, valid after March 31, 2024: 13.209.236.215, 43.201.194.105, 43.202.204.5)
connectivitycertsigning.cf.ap21.hana.ondemand.com: 20.184.61.122
connectivitytunnel.cf.ap21.hana.ondemand.com (cf.ap21.hana.ondemand.com): 20.184.61.122
Additional IP addresses (valid after March 31, 2024): 15.157.88.166, 3.98.202.222, 52.60.210.33
ABAP Environment
Note
To enable the full scenario for the SAP BTP ABAP environment, the DNS names below are needed in addition to the IPs
of the Connectivity service mentioned above. If you can only configure allowlists for IP addresses in your firewall, please
note that these addresses may change at any time and that you need to validate the values with a DNS service of your
choice regularly.
For more information, see Regions and API Endpoints for the ABAP Environment.
Neo Environment
connectivitytunnel.hana.ondemand.com (eu1.hana.ondemand.com): 155.56.210.84
connectivitytunnel.eu2.hana.ondemand.com: 157.133.205.233
connectivitytunnel.us1.hana.ondemand.com: 130.214.181.134
connectivitycertsigning.us3.hana.ondemand.com (us3.hana.ondemand.com): 130.214.33.92
connectivitytunnel.us3.hana.ondemand.com: 130.214.33.119
connectivitycertsigning.ap1.hana.ondemand.com (ap1.hana.ondemand.com): 157.133.97.27
connectivitytunnel.ap1.hana.ondemand.com: 157.133.97.46
connectivitycertsigning.jp1.hana.ondemand.com (jp1.hana.ondemand.com): 130.214.112.212
connectivitytunnel.jp1.hana.ondemand.com: 130.214.112.164
connectivitytunnel.ca1.hana.ondemand.com: 130.214.174.236
connectivitycertsigning.br1.hana.ondemand.com (br1.hana.ondemand.com): 130.214.96.195
connectivitytunnel.br1.hana.ondemand.com: 130.214.96.173
connectivitytunnel.ae1.hana.ondemand.com: 130.214.80.182
connectivitytunnel.sa1.hana.ondemand.com: 130.214.209.191
Note
In the Cloud Foundry environment, IPs are controlled by the respective IaaS provider (AWS, Azure, or Google Cloud).
IPs may change due to network updates on the provider side. Any planned changes will be announced several weeks
before they take effect. See also Regions.
connectivitytunnel.cf.eu10.hana.ondemand.com: 3.124.222.77, 3.122.209.241, 3.124.208.223
connectivitytunnel.cf.us10.hana.ondemand.com: 52.23.189.23, 52.4.101.240, 52.23.1.211
Note
If you install the Cloud Connector in a network segment that is isolated from the backend systems, make
sure the exposed hosts and ports are still reachable and open them in the firewall that protects them:
• for HTTP, the ports you chose for the HTTP/S server.
• for LDAP, the port of the LDAP server.
• for RFC, it depends on whether you use an SAProuter or not and whether load balancing is used:
• if you use an SAProuter, it is typically configured as visible in the network of the Cloud Connector, and the corresponding routtab exposes all the systems that should be used.
• without SAProuter, you must open the application server hosts and the corresponding gateway
ports (33##, 48##). When using load balancing for the connection, you must also open the
message server host and port.
Note
For more information about the used ABAP server ports, see: Ports of SAP NetWeaver Application Server
ABAP.
For more information about additional IP addresses for SAP Business Application Studio, see SAP Business
Application Studio Inbound IP Addresses.
Additional system requirements for installing and running the Cloud Connector.
Supported Browsers
The browsers you can use for the Cloud Connector Administration UI are the same as those currently
supported by SAPUI5. See: Browser and Platform Support.
The minimum free disk space required to download and install a new Cloud Connector server is as follows:
• Size of downloaded Cloud Connector installation file (ZIP, TAR, MSI files): 50 MB
• Newly installed Cloud Connector server: 70 MB
• Total: 120 MB as a minimum
The Cloud Connector writes configuration files, audit log files and trace files at runtime. We recommend that
you reserve between 1 and 20 GB of disk space for those files.
Trace and log files are written to <scc_dir>/log/ within the Cloud Connector root directory.
The scc_core.trc file contains traces in general; communication payload traces are stored in tunnel_traffic_*.trc and snc_traffic_*.trc. These files may be used by SAP Support to analyze potential issues. The default trace level is Information, where the amount of written data is generally only a few KB per day. You can turn off these traces to save disk space. However, we recommend that you don't turn off this trace completely, but leave it at the default settings to allow root cause analysis if an issue occurs.
Note
Regularly back up or delete written trace files to clean up the used disk space.
To be compliant with the regulatory requirements of your organization and the regional laws, the audit log files
must be persisted for a certain period of time for traceability purposes. Therefore, we recommend that you
back up the audit log files regularly from the Cloud Connector file system and keep the backup for the length of
time required.
Related Information
A customer network is usually divided into multiple network zones or subnetworks according to the security
level of the contained components. For example, the DMZ that contains and exposes the external-facing
services of an organization to an untrusted network, usually the Internet, and there are one or more other
network zones which contain the components and services provided in the company’s intranet.
You can generally choose the network zone in which to set up the Cloud Connector. It only needs:
• Internet access to the SAP BTP region host, either directly or via HTTPS proxy.
• Direct access to the internal systems it provides access to, which means that there is transparent
connectivity between the Cloud Connector and the internal system.
The Cloud Connector can be set up either in the DMZ and operated centrally by the IT department, or set up in
the intranet and operated by the appropriate line of business.
Note
The internal network must allow access to the required ports; the specific configuration depends on the
firewall software used.
The default ports are 80 for HTTP and 443 for HTTPS. For RFC communication, you need to open a gateway port (default: 33+<instance number>) and an arbitrary message server port. For a connection to a HANA database (on SAP BTP) via JDBC, you need to open an arbitrary outbound port in your network.
Mail (SMTP) communication is not supported.
When installing a Cloud Connector, the first thing you need to decide is the sizing of the installation.
This section gives some basic guidance what to consider for this decision. The provided information includes
the shadow instance, which should always be added in productive setups. See also Install a Failover Instance
for High Availability [page 655].
Note
The following recommendations are based on current experiences. However, they are only a rule of thumb
since the actual performance strongly depends on the specific environment. The overall performance of a
Cloud Connector is impacted by many factors (number of hosted subaccounts, bandwidth, latency to the
attached regions, network routers in the corporate network, used JVM, and others).
Restrictions
Note
Currently, you cannot perform horizontal scaling directly. However, you can distribute the load statically by operating multiple Cloud Connector installations with different location IDs for all involved subaccounts. In this scenario, you can use multiple destinations with virtually the same configuration, except for the location ID. See also Managing Subaccounts [page 404], step 4. Alternatively, each of the Cloud Connector instances can host its own list of subaccounts, without any overlap between the respective lists. This way, you can handle more load if a single installation risks being overloaded.
Related Information
How to choose the right sizing for your Cloud Connector installation.
Regarding the hardware, we recommend that you use different setups for master and shadow: one dedicated machine for the master, another one for the shadow. Usually, a shadow instance takes over the master role only for a limited period of time. If the master instance is available again after a downtime, we recommend that you switch back to the actual master.
Note
The sizing recommendations refer to the overall load across all subaccounts that are connected via the
Cloud Connector. This means that you need to accumulate the expected load of all subaccounts and
should not only calculate separately per subaccount (taking the one with the highest load as basis).
Related Information
Learn more about the basic criteria for the sizing of your Cloud Connector master instance.
For the master setup, keep in mind the expected load for communication between the SAP BTP and on-
premise systems. The setups listed below differ in a mostly qualitative manner, without hard limits for each of
them.
Note
The mentioned sizes are considered as minimal configuration, larger ones are always ok. In general, the
more applications, application instances, and subaccounts are connected, the more competition will exist
for the limited resources on the machine.
Particularly the heap size is critical. If you size it too low for the load passing the Cloud Connector, at
some point the Java Virtual Machine will execute full GCs (garbage collections) more frequently, blocking
the processing of the Cloud Connector completely for multiple seconds, which massively slows down overall
performance. If you experience such situations regularly, you should increase the heap size in the Cloud
Connector UI (choose Configuration Advanced JVM ). See also Configure the Java VM [page 625].
Note
You should use the same value for both <initial heap size> and <maximum heap size>.
The shadow installation is typically not used in standard situations and hence does not need the same sizing,
assuming that the time span in which it takes over the master role is limited.
Note
The shadow only acts as master, for example, during an upgrade or when an abnormal situation occurs on
the master machine, and either the Cloud Connector or the full machine on OS level needs to be restarted.
Choose the right connection configuration options to improve the performance of the Cloud Connector.
This section provides detailed information how you can adjust the configuration to improve overall
performance. This is typically relevant for an M or L installation (see Hardware Setup [page 369]). For S
installations, the default configuration will probably be sufficient to handle the traffic.
• You can configure the number of physical connections through the Cloud Connector UI. See also Configure
Advanced Connectivity [page 623].
• In versions prior to 2.11, you have to modify the configuration files with an editor and restart the Cloud
Connector to activate the changes.
In general, the Cloud Connector tunnel is multiplexing multiple virtual connections over a single physical
connection. Thus, a single connection can handle a considerable amount of traffic. However, increasing the
maximum number of physical connections allows you to make use of the full available bandwidth and to
minimize latency effects.
If the bandwidth limit of your network is reached, adding additional connections doesn't increase the throughput, but will only consume more resources.
Note
Different network access parameters may impact and limit your configuration options: if the access to an external network is a 1 MB line with an added latency of 50 ms, you will not be able to achieve the same data volumes as with a 10 GB line with an added latency of less than 1 ms. However, even if the line is good, for example 10 GB, but with an added latency of 100 ms, the performance might still be poor.
Related Information
Configure the physical connections for on-demand to on-premise calls in the Cloud Connector.
Adjusting the number of physical connections for this direction is possible both globally in the Cloud Connector
UI ( Configuration Advanced ), and for individual communication partners on cloud side ( On-Demand
To On-Premise Applications ).
Connections are established for each defined and connected subaccount. The current number of opened
connections is visible in the Cloud Connector UI via <Subaccount> Cloud Connections .
The global default is 1 physical connection per connected subaccount. This value is used across all
subaccounts hosted by the Cloud Connector instance and applies for all communication partners.
In general, the default should be sufficient for applications with low traffic. If you expect medium traffic for most
applications, it may be useful to set the default value to 2.
Note
An exact traffic forecast is difficult to achieve. It requires a deep understanding of the use case and of
the possible future load generated by different applications. For this reason, we recommend that you
focus on subsequent configuration adjustments, using the Cloud Connector monitoring tools to recognize
bottlenecks in time, and adjust Cloud Connector configuration accordingly.
In addition to the number of connections, you can configure the number of <Tunnel Worker Threads>.
This value should be at least equal to the maximum of all individual application tunnel connections in all
subaccounts, to have at least 1 thread available for each connection that can process incoming requests and
outgoing responses.
The value for <Protocol Processor Worker Threads> is mainly relevant if RFC is used as protocol. Since
its communication model towards the ABAP system is a blocking one, each thread can handle only one call at a
time and cannot be shared. Hence, you should provide 1 thread per 5 concurrent RFC requests.
Note
The longer the RFC execution time in the backend, the more threads you should provide. Threads can be
reused only after the response of a call was returned to SAP BTP.
Configure the number of physical connections for a Cloud Connector service channel.
Service channels let you configure the number of physical connections to the communication partner on cloud
side, see Using Service Channels [page 611]. The default is 1. This value is used as well in versions prior to
Cloud Connector 2.11, which did not offer a configuration option for each service channel. You should define the
number of connections depending on the expected number of clients and, with lower priority, depending on the
size of the exchanged messages.
If there is only a single RFC client for an S/4HANA Cloud channel or only a single HANA client for a HANA DB
on SAP BTP side, increasing the number doesn't help, as each virtual connection is assigned to one physical
connection. The following simple rule lets you define the required number of connections per service channel:
Example
For a HANA system on SAP BTP, data is replicated using 18 concurrent clients in the on-premise network. On average, about 5 of those clients are regularly sending 600k. For the number of clients, you should use 2 physical connections; for the 5 clients sending larger amounts, add an additional 3, which sums up to 5 connections.
You can choose between a simple portable variant of the Cloud Connector and the MSI-based installer.
The installer is the generally recommended version that you can use for both developer and productive
scenarios. It lets you, for example, register the Cloud Connector as a Windows service and this way
automatically start it after machine reboot.
Tip
If you are a developer, you might want to use the portable variant as you can run the Cloud Connector
after a simple unzip (archive extraction). You might want to use it also if you cannot perform a full
installation due to lack of permissions, or if you want to use multiple versions of the Cloud Connector
simultaneously on the same machine.
Prerequisites
• You have one of the supported 64-bit operating systems. For more information, see Product Availability
Matrix [page 353].
• You have downloaded either the portable variant as ZIP archive for Windows, or the MSI installer
from the SAP Development Tools for Eclipse page.
• You must install the Microsoft Visual C++ 2019 runtime libraries (download vc_redist.x64.exe). For more information, see Microsoft Visual C++ Redistributable latest supported downloads.
• A supported Java version must be installed. For more information, see JDKs [page 352].
If you want to use SAP JVM, you can download it from the SAP Development Tools for Eclipse page.
• When using the portable variant, the environment variable <JAVA_HOME> must be set to the Java
installation directory, so that the bin subdirectory can be found. Alternatively, you can add the relevant
bin subdirectory to the <PATH> variable.
Portable Scenario
1. Extract the <sapcc-<version>-windows-x64.zip> ZIP file to an arbitrary directory on your local file
system.
2. Set the environment variable JAVA_HOME to the installation directory of the JDK that you want to use to
run the Cloud Connector. Alternatively, you can add the bin subdirectory of the JDK installation directory
to the PATH environment variable.
3. Go to the Cloud Connector installation directory and start it using the go.bat batch file.
4. Continue with the Next Steps section.
Note
The Cloud Connector is not started as a service when using the portable variant, and hence will not
automatically start after a reboot of your system. Also, the portable version does not support the automatic
upgrade procedure.sapcc-<version>-windows-x64.msi
Note
The Cloud Connector is started as a Windows service in the productive use case. Therefore, installation
requires administration permissions. After installation, manage this service under Control Panel
Administrative Tools Services . The service name is Cloud Connector (formerly named Cloud
Connector 2.0). Make sure the service is executed with a user that has limited privileges. Typically,
privileges allowed for service users are defined by your company policy. Adjust the folder and file
permissions to be manageable by only this user and system administrators.
On Windows, the file scc_service.log is created and used by the Microsoft MSI installer (during Cloud
Connector installation), and by the scchost.exe executable, which registers and runs the Windows service if
you install the Cloud Connector as a Windows background job.
This log file is only needed if a problem occurs during Cloud Connector installation, or during creation and start
of the Windows service, in which the Cloud Connector is running. You can find the file in the log folder of your
Cloud Connector installation directory.
After installation, the Cloud Connector is registered as a Windows service that is configured to be started
automatically after a system reboot. You can start and stop the service via shortcuts on the desktop ("Start
Cloud Connector" and "Stop Cloud Connector"), or by using the Windows Services manager and look for the
service SAP Cloud Connector.
Access the Cloud Connector administration UI at https://localhost:<port>, where the default port is 8443 (but
this port might have been modified during the installation).
1. Open a browser and enter: https://<hostname>:8443. <hostname> is the host name of the machine
on which you have installed the Cloud Connector. If you access the Cloud Connector locally from the same
machine, you can simply enter localhost.
2. Continue with the initial configuration of the Cloud Connector, see Initial Configuration [page 388].
Related Information
Context
You can choose between a simple portable variant of the Cloud Connector and the RPM-based installer.
The installer is the generally recommended version that you can use for both the developer and the
productive scenario. It registers, for example, the Cloud Connector as a daemon service and this way
automatically starts it after machine reboot.
Tip
If you are a developer, you might want to use the portable variant as you can run the Cloud Connector
after a simple "tar -xzof" execution. You also might want to use it if you cannot perform a full installation
due to missing permissions for the operating system, or if you want to use multiple versions of the Cloud
Connector simultaneously on the same machine.
Prerequisites
• You have one of the supported 64-bit operating systems. For more information, see Product Availability
Matrix [page 353].
• The supported platforms are x64 and ppc64le, represented below by the variable <platform>. Variable
<arch> is x86_64 or ppc64le respectively.
• You have downloaded either the portable variant as tar.gz archive for Linux or the RPM installer
contained in the ZIP for Linux, from SAP Development Tools for Eclipse.
rpm -i sapjvm-<version>-linux-<platform>.rpm
If you want to check the JVM version installed on your system, use the following command:
When installing it using the RPM package, the Cloud Connector will detect it and use it for its runtime.
• When using the tar.gz archive, the environment variable <JAVA_HOME> must be set to the Java
installation directory, so that the bin subdirectory can be found. Alternatively, you can add the Java
installation's bin subdirectory to the <PATH> variable.
Portable Scenario
1. Extract the tar.gz file to an arbitrary directory on your local file system using the following command:
Note
If you use the parameter "o", the extracted files are assigned to the user ID and the group ID of the user
who has unpacked the archive. This is the default behavior for users other than the root user.
2. Go to this directory and start the Cloud Connector using the go.sh script.
3. Continue with the Next Steps section.
Note
In this case, the Cloud Connector is not started as a daemon, and therefore will not automatically start after
a reboot of your system. Also, the portable version does not support the automatic upgrade procedure.
Installer Scenario
unzip sapcc-<version>-linux-<platform>.zip
2. Go to this directory and install the extracted RPM using the following command. You can perform this step
only as a root user.
rpm -i com.sap.scc-ui-<version>.<arch>.rpm
Caution
When adjusting the Cloud Connector installation (for example, restoring a backup), make sure the RPM
package management is synchronized with such changes. If you simply replace files that do not fit to the
information stored in the package management, lifecycle operations (such as upgrade or uninstallation)
might fail with errors. Also, the Cloud Connector might get into unrecoverable state.
Example: After a file system restore, the system files represent Cloud Connector 2.3.0 but the RPM
package management "believes" that version 2.4.3 is installed. In this case, commands like rpm -U and
rpm -e do not work as expected. Furthermore, avoid using the --force parameter as it may lead to an
unpredictable state with two versions being installed concurrently, which is not supported.
When using SNC for encrypting RFC communication, it might be required to provide some settings, for
example, environment variables that must be visible for the Cloud Connector process. To achieve this, you
must store a file named scc_daemon_extension.sh in the installation directory of the Cloud Connector
(/opt/sap/scc), containing all commands needed for initialization without a shebang.
Sample Code
export SECUDIR=/path/to/psefile
Note
Make sure JAVA_HOME is set to the JVM used (in the shell where the command is executed).
After installation via RPM manager, the Cloud Connector process is started automatically and registered as a
daemon process, which ensures the automatic restart of the Cloud Connector after a system reboot.
Next Steps
1. Open a browser and enter: https://<hostname>:8443. <hostname> is the host name of the machine
on which you installed the Cloud Connector.
If you access the Cloud Connector locally from the same machine, you can simply enter localhost.
2. Continue with the initial configuration of the Cloud Connector, see Initial Configuration [page 388].
Related Information
Prerequisites
Note
Apple macOS is not supported for productive scenarios. The developer version described below must not
be used as productive version.
Caution
There are two different Cloud Connector portable versions available for running on Apple macOS with
native support either for Apple M1/M2 CPUs based on the aarch64 architecture, or with native support
for INTEL x86 64-bit CPUs based on the x64 architecture. Make sure you download and use the Cloud
Connector version in combination with a JVM version, which both match your used hardware CPU
architecture.
• You have one of the supported 64-bit operating systems. For more information, see Product Availability
Matrix [page 353].
• The supported platforms are aarch64 and x64, represented below by the variable <platform>.
Procedure
1. Extract the tar.gz file to an arbitrary directory on your local file system using the following command:
2. Go to this directory and start Cloud Connector using the go.sh script.
3. Continue with the Next Steps section.
Note
The Cloud Connector is not started as a daemon, and therefore will not automatically start after a reboot of your system. Also, the macOS version of the Cloud Connector does not support the automatic upgrade procedure.
Next Steps
1. Open a browser and enter: https://<hostname>:8443. <hostname> is the host name of the machine
on which you installed the Cloud Connector.
If you access the Cloud Connector locally from the same machine, you can simply enter localhost.
2. Continue with the initial configuration of the Cloud Connector, see Initial Configuration [page 388].
Related Information
For the Connectivity service and the Cloud Connector, you should apply the following guidelines to guarantee
the highest level of security for these components.
From the Connector menu, choose Security Status to access an overview showing potential security risks and
the recommended actions.
The General Security Status addresses security topics that are subaccount-independent.
• Choose any of the Actions icons in the corresponding line to navigate to the UI area that deals with that
particular topic and view or edit details.
Note
Navigation is not possible for the last item in the list (Service User).
• The service user is specific to the Windows operating system (see Installation on Microsoft Windows OS
[page 375] for details) and is only visible when running the Cloud Connector on Windows. It cannot be
accessed or edited through the UI. If the service user was set up properly, choose Edit and check the
corresponding checkbox.
The Subaccount-Specific Security Status lists security-related information for each and every subaccount.
Note
The security status only serves as a reminder to address security issues and shows if your installation
complies with all recommended security settings.
Upon installation, the Cloud Connector provides an initial user name and password for the administration UI,
and forces the user (Administrator) to change the password. You must change the password immediately
after installation.
The connector itself does not check the strength of the password. You should select a strong password that
cannot be guessed easily.
Note
To enforce your company's password policy, we recommend that you configure the Administration UI to
use an LDAP server for authorizing access to the UI.
The default user store is a local file store. It allows only one user, and only the Administrator role for this user.
Using an LDAP server as user store lets you create various users to access the UI, and assign different roles to
them. For more information on available roles, see Use LDAP for User Administration [page 637].
The Cloud Connector administration UI can be accessed remotely via HTTPS. The connector uses a standard
X.509 self-signed certificate as SSL server certificate. You can exchange this certificate with a specific
certificate that is trusted by your company. See Exchange UI Certificates in the Administration UI [page 632].
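If you want to check which certificate the administration UI currently presents (for example, before and after exchanging it), one possible way is to inspect it with OpenSSL; the host name is a placeholder:
Sample Code
openssl s_client -connect <hostname>:8443 -servername <hostname> </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates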
Note
Since browsers usually do not resolve localhost to the host name, while the certificate is usually issued
for the host name, you might get a certificate warning. In this case, simply skip the warning message.
OS-Level Access
The Cloud Connector is a security-critical component that handles the external access to systems of an
isolated network, comparable to a reverse proxy. We therefore recommend that you restrict the access to the
operating system on which the Cloud Connector is installed to the minimal set of users who would administrate
the Cloud Connector. This minimizes the risk of unauthorized users getting access to credentials, such as
certificates stored in the secure storage of the Cloud Connector.
We also recommend that you use the machine to operate only the Cloud Connector and no other systems.
Administrator Privileges
To log on to the Cloud Connector administration UI, the Administrator user of the connector is not required to
have an operating system (OS) user on the machine on which the connector is running. This allows the
OS administrator to be distinguished from the Cloud Connector administrator. To make an initial connection
between the connector and a particular SAP BTP subaccount, you need an SAP BTP user with the required
permissions for the related subaccount. We recommend that you separate these roles/duties (that means, you
have separate users for Cloud Connector administrator and SAP BTP).
We recommend that only a small number of users are granted access to the machine as root users.
Hard drive encryption for machines with a Cloud Connector installation ensures that the Cloud Connector
configuration data cannot be read by unauthorized users, even if they obtain access to the hard drive.
Supported Protocols
Currently, the protocols HTTP, HTTPS, RFC, RFC with SNC, LDAP, LDAPS, TCP, and TCP over TLS are
supported for connections between the SAP BTP and on-premise systems when the Cloud Connector and
the Connectivity service are used. The whole route from the application virtual machine in the cloud to the
Cloud Connector is always SSL-encrypted.
The route from the connector to the back-end system can be TLS-encrypted or SNC-encrypted. See Configure
Access Control (HTTP) [page 459] and Configure Access Control (RFC) [page 467].
We recommend that you turn on the audit log on operating system level to monitor the file operations.
The Cloud Connector audit log must remain switched on during the time it is used with productive systems.
The default audit level is SECURITY. Set it to ALL if required by your company policy. The administrators who
are responsible for a running Cloud Connector must ensure that the audit log files are properly archived, to
conform to the local regulations. You should switch on audit logging also in the connected back-end systems.
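On Linux, one possible way to monitor file operations at OS level is an audit watch via auditd; the path below is illustrative and must be adapted to your actual installation directory:
Sample Code
# Watch the Cloud Connector configuration directory for write and attribute changes
auditctl -w /opt/sap/scc/config -p wa -k scc_config
# Review the recorded events later
ausearch -k scc_config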
Encryption Ciphers
Tip
Enable all cipher suites that are compatible with the current UI certificate and are deemed secure as per
your company's and SAP's guidelines. Adapt the cipher suites whenever the UI certificate is exchanged or
the JVM is updated.
To enable or disable ciphers, choose Configuration from the main menu and go to tab User Interface, section
Cipher Suites.
The first column labeled Status Quo shows the current state of all available ciphers. The second column
Status New shows the state the ciphers will have after a restart, if that state differs from the current one (that
is, there is no entry in that column if the state remains the same after a restart).
Ciphers are either enabled or disabled, and they are either compatible or incompatible with the current UI
certificate (that is, they can potentially be used or not). Consult the tooltips of the (four types of) icons for
details. The third column shows SAP's security assessment of the ciphers as of the time of release. Enable or
disable individual ciphers using the button in the Action column. Enable or disable certain groups of ciphers
using the appropriate table button. Consult the tooltips for details.
Note
We recommend that you enable all cipher suites that are compatible with the UI certificate whenever you plan
to switch to another JVM. You can do so conveniently by using the first button to the right of the filter buttons.
As the set of supported ciphers may differ, the selected ciphers may not be supported by the new JVM. In
that case, the Cloud Connector will not start anymore, and you must fix the issue by manually adapting the
file conf/server.xml. After a successful switch, you can adjust the list of eligible ciphers again.
1.2.2 Configuration
Configure the Cloud Connector to make it operational for connections between your SAP BTP applications and
on-premise systems.
• Initial Configuration [page 388]: After installing the Cloud Connector and starting the Cloud Connector daemon, you can log on and perform the required configuration to make your Cloud Connector operational.
• Managing Subaccounts [page 404]: How to connect SAP BTP subaccounts to your Cloud Connector.
• Authenticating Users against On-Premise Systems [page 418]: Basic authentication and principal propagation (user propagation) are the authentication types currently supported by the Cloud Connector.
• Configure Access Control [page 456]: Configure access control or copy the complete access control settings from another subaccount on the same Cloud Connector.
• Configuration REST APIs [page 484]: Configure a newly installed Cloud Connector (initial configuration, subaccounts, access control) using the configuration REST API.
• Using Service Channels [page 611]: Service channels provide access from an external network to certain services on SAP BTP, which are not exposed to direct access from the Internet.
• Configure Trust [page 619]: Set up an allowlist for trusted cloud applications and a trust store for on-premise systems in the Cloud Connector.
• Connect DB Tools to SAP HANA via Service Channels [page 612]: How to connect database, BI, or replication tools running in the on-premise network to a HANA database on SAP BTP using the service channels of the Cloud Connector.
• Configure Domain Mappings for Cookies [page 621]: Map virtual and internal domains to ensure correct handling of cookies in client/server communication.
• Configure Solution Management Integration [page 622]: Activate Solution Management reporting in the Cloud Connector.
• Configure Advanced Connectivity [page 623]: Adapt connectivity settings that control the throughput and HTTP connectivity to on-premise systems.
• Configure the Java VM [page 625]: Adapt the JVM settings that control memory management.
• Configuration Backup [page 626]: Backup and restore your Cloud Connector configuration.
• Configure Login Screen Information [page 628]: Add additional information to the login screen and configure its appearance.
After installing and starting the Cloud Connector, log on to the administration UI and perform the required
configuration to make your Cloud Connector operational.
Tasks
Prerequisites
• You have downloaded and installed the Cloud Connector, see Installation [page 349].
• You have assigned one of these roles/role collections to the subaccount user that you use for initial Cloud
Connector setup, depending on the SAP BTP environment in which your subaccount is running:
Note
For the Cloud Foundry environment, you must know on which cloud management tools feature set (A
or B) your account is running. For more information on feature sets, see Cloud Management Tools —
Feature Set Overview.
• Cloud Foundry [feature set A]: The user must be a member of the global account that the subaccount belongs to. Alternatively, you can assign the user as Security Administrator. For more information, see Add Members to Your Global Account and Managing Security Administrators in Your Subaccount [Feature Set A].
• Cloud Foundry [feature set B]: Assign at least one of these default role collections (all of them including the role Cloud Connector Administrator): Subaccount Administrator, Cloud Connector Administrator, Connectivity and Destination Administrator. For more information, see Default Role Collections [Feature Set B] [page 15] and Role Collections and Roles in Global Accounts, Directories, and Subaccounts [Feature Set B].
After establishing the Cloud Connector connection, this user is no longer needed, since it serves only
for the initial connection setup. You may then revoke the corresponding role assignment and remove the user
from the Members list (Neo environment), or from the Users list (Cloud Foundry environment).
Note
If the Cloud Connector is installed in an environment that is operated by SAP, SAP provides a user that
you can add as member in your SAP BTP subaccount and assign the required role.
• We strongly recommend that you read and follow the steps described in Recommendations for Secure
Setup [page 382]. For operating the Cloud Connector securely, see also Security Guidelines [page 717].
To administer the Cloud Connector, you need a Web browser. To check the list of supported browsers, see
Prerequisites and Restrictions → section Browser Support.
Note
By default, the Cloud Connector includes a self-signed UI certificate. Browsers may show a security
warning because they don't trust the issuer of this certificate. In this case, you can skip the warning
message.
Initial Setup
1. When you first log in, you must change the password before you continue.
2. Afterwards, you can choose between master and shadow installation. Select Master if you are installing a
single Cloud Connector instance or the main instance of a high availability setup. For more information, see
High Availability Setup [page 654].
3. (Optional): When configuring a master, you can provide a (free-text) Description for this Cloud Connector
instance that helps you distinguish different Cloud Connectors. This information will also be shown in the
Cloud Connectors view in the SAP BTP cockpit.
4. Afterwards, you are forwarded to the main page. In the top right corner you can always see how long your
current session remains valid before you need to log in again.
To edit the password for the Administrator user, choose Configuration from the main menu, tab User
Interface, section User Administration:
Note
User name and password cannot be changed at the same time. If you want to change the user name, enter
only the current password in a first step; do not enter values for <New Password> or <Repeat New
Password> when changing the user name. To change the password in a second step, enter the old
password, the new password, and the repeated new password, but leave the user name unchanged.
When logging in for the first time, the following screen is displayed every time you choose an option from the
main menu that requires a configured subaccount:
1. (Optional) Enter an HTTPS proxy. When in doubt, consult your network administrator to check if a proxy is
required.
2. In the next step, you can choose between a manual configuration and a file-based configuration.
3. (Skip if you have selected file-based configuration) For manual configuration, the following dialog is shown:
Remember
The available regions and region domains depend on the SAP BTP environment you are using.
For more information, see Regions (Cloud Foundry and ABAP environment) or Regions and Hosts
Available for the Neo Environment.
Note
You can also configure a region yourself, if it is not part of the standard list. Either insert the region
host manually, or create a custom region, as described in Configure Custom Regions [page 418].
2. For <Subaccount>, enter the value you obtained when you registered your subaccount on SAP BTP.
Note
For a subaccount in the Cloud Foundry environment, you must enter the subaccount ID as
<Subaccount>, rather than its actual (technical) name. For information on getting the subaccount
ID, see Find Your Subaccount ID (Cloud Foundry Environment) [page 417].
For the Neo environment, enter the subaccount's technical name in the field <Subaccount>, not the
subaccount ID.
3. <Subaccount User> and <Password> require dedicated values, depending on the type of identity
provider (IDP) you are using:
Note
To understand which is the IDP configured in your subaccount (Neo environment), see Configuring
Platform Identity Provider.
For more information on IDPs in the Cloud Foundry environment, see Trust and Federation with
Identity Providers.
Note
For a subaccount in the Cloud Foundry environment, you must provide your Login E-
mail as <Subaccount User> instead of a user ID. The user must be a member of the
global account the subaccount belongs to.
• The user must be a member of the subaccount, and the subaccount must have the correct
(SAP ID Service) user base.
Note
Alternatively, you can add a new subaccount user in the SAP BTP cockpit, assign the required
authorization (see section Prerequisites [page 388]), and use the new user and password.
For a Neo subaccount, you can also add a new subaccount user with the role Cloud
Connector Admin from the Members tab in the SAP BTP cockpit and use the new user
and password.
Note
The Cloud Connector does not yet support SAP Universal ID. Please use your S-user or P-user
credentials for the <subaccount user> and <password> fields instead.
For the Neo environment, see also Add Members to Your Neo Subaccount.
For the Cloud Foundry environment, see also Add Org Members Using the Cockpit.
Tip
When using SAP Cloud Identity Services - Identity Authentication (IAS) as platform identity
provider with two-factor authentication (2FA / MFA) for your subaccount, you can simply append
the required token to the regular password. For example, if your password is "eX7?6rUm" and the
one-time passcode is "123456", you must enter "eX7?6rUm123456" into the <Password> field.
4. (Optional) You can define a <Display Name> that lets you easily recognize a specific subaccount in
the UI compared to the technical subaccount name.
5. (Optional) You can define a <Location ID> identifying the location of this Cloud Connector for a
specific subaccount. The location ID is used as routing information. It lets you connect multiple Cloud
Connectors to a single subaccount. If you don't specify any value for <Location ID>, the default is used.
Note
Location ID and description can be changed later on at any time. See Managing Subaccounts [page
404].
7. Choose Finish.
4. (Skip if you have selected manual configuration) For the file-based approach, the following dialog is shown:
Choose the file containing the desired authentication data and press Next. You can then review the data
extracted from the file, as well as optionally enter a location ID and a description (see step 3d and 3e for
details on the latter two properties).
Note
The internal network must allow access to the port. Specific configuration for opening the respective
port(s) depends on the firewall software used. The default ports are 80 for HTTP and 443 for HTTPS. For
RFC communication, you must open a gateway port (default: 33+<instance number>) and an arbitrary
message server port. For a connection to a HANA database (on SAP BTP) via JDBC, you must open an
arbitrary outbound port in your network. Mail (SMTP) communication is not supported.
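To quickly verify that the internal network actually allows access from the Cloud Connector host to a backend port, a simple reachability check could look like this; host and port are placeholders:
Sample Code
nc -vz <backend-host> 443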
• If you later want to change your proxy settings (for example, because the company firewall rules have
changed), choose Configuration from the main menu and go to the Cloud tab, section HTTPS Proxy.
Some proxy servers require credentials for authentication. In this case, you must provide the relevant
user/password information.
• If you want to change the description for your Cloud Connector (don't confuse with the description of
subaccounts), choose Configuration from the main menu, go to the Cloud tab, section Connector Info and
edit the description:
As soon as the initial setup is complete, the tunnel to the cloud endpoint is open, but no requests are allowed to
pass until you have performed the Access Control setup, see Configure Access Control [page 456].
To manually close (and reopen) the connection to SAP BTP, choose your subaccount from the main menu and
select the Disconnect button (or the Connect button to reconnect to SAP BTP).
• The green icon next to Region Host indicates that it is valid and can be reached.
• If an HTTPS proxy is configured, its availability is shown in the same way. A grey diamond
icon next to HTTPS Proxy indicates that connectivity is possible without proxy configuration.
In case of a timeout or a connectivity issue, these icons are yellow (warning) or red (error), and a tooltip shows
the cause of the problem. Initiated By refers to the user that has originally established the tunnel. During
normal operations, this user is no longer needed. Instead, a certificate is used to open the connection to a
subaccount.
• The status of the certificate is shown next to Subaccount Certificate. It is shown as valid (green icon) if the
expiration date is still far in the future, and turns yellow if expiration approaches according to your alert
settings. It turns red as soon as the certificate has expired. At the latest by then, you should Update the
Certificate for Your Subaccount [page 413].
Note
When connected, you can also monitor the Cloud Connector in the Connectivity section of the SAP BTP
cockpit. There, you can track attributes like version, description, and high availability setup. Every Cloud
Connector configured for your subaccount automatically appears in the Connectivity section of the cockpit.
Related Information
To set up a mutual authentication between the Cloud Connector and any backend system it connects to, you
can import an X.509 client certificate into the Cloud Connector. The Cloud Connector then uses the so-called
system certificate for all HTTPS requests to backends that request or require a client certificate. The certificate
authority (CA) that signed the Cloud Connector's system certificate must be trusted by all backend systems to
which the Cloud Connector is supposed to connect.
You must provide the system certificate as PKCS#12 file containing the client certificate, the corresponding
private key and the CA root certificate that signed the client certificate (plus potentially the certificates of any
intermediate CAs, if the certificate chain is longer than 2).
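For illustration, a PKCS#12 file with this content could be assembled using OpenSSL; all file names below are placeholders for your own key and certificate files:
Sample Code
openssl pkcs12 -export -in client_certificate.pem -inkey client_key.pem -certfile ca_chain.pem -out system_certificate.p12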
Procedure
From the left panel, choose Configuration. On the tab On Premise, choose System Certificate → Import a
certificate to upload a certificate and provide its password:
A second option is to start a certificate signing request procedure as described for the UI certificate in Exchange
UI Certificates in the Administration UI [page 632] and upload the resulting signed certificate.
If you have intermediate certificates in place, make sure you import the complete chain (for example, as a
PKCS#7 file) so that mTLS connections always work properly.
You can verify this in the Certificate Chain area of the System Certificate section:
A third option is to generate a self-signed certificate. It might be useful if no CA is needed, for example, in
a demo setup or if you want to use a dedicated CA. For this option, choose Create and import a self-signed
certificate:
If a system certificate has been imported successfully, its distinguished name, the name of the issuer, and the
validity dates are displayed:
You can delete a system certificate that is no longer required. To do this, use the respective button and confirm
deletion.
If you need the public key for establishing trust with a server, you can export the full chain via the Export button.
To renew a certificate that is close to expiration, install the new certificate as described above. This will replace
the expiring certificate.
Related Information
Configure a Secure Network Connection (SNC) to set up the Cloud Connector for RFC communication to an
ABAP backend system.
To set up a mutual authentication between Cloud Connector and an ABAP backend system (connected via
RFC), you can configure SNC for the Cloud Connector. It will then use the associated PSE for all RFC SNC
requests. This means that the SNC identity, represented by this PSE, must:
• Be trusted by all backend systems to which the Cloud Connector is supposed to connect
• Play the role of a trusted external system by adding the SNC name of the Cloud Connector to the
SNCSYSACL table. You can find more details in the SNC configuration documentation for the release of
your ABAP system.
Prerequisites
• You have configured your ABAP system(s) for SNC. For detailed information on configuring SNC for an
ABAP system, see also Configuring SNC on AS ABAP.
• You have configured the ABAP System to trust the Cloud Connector's system SNC identity. To do this, or
to establish trust for principal propagation, follow the steps described in Configure Identity Propagation for
RFC [page 437].
• Library Name: Provides the location of the SNC library you are using for the Cloud Connector.
Note
Bear in mind that you must use one and the same security product on both sides of the
communication.
• My Name: The SNC name that identifies the Cloud Connector. It represents a valid scheme for the
SNC implementation that is used.
• Quality of Protection: Determines the level of protection that you require for the connectivity to the
ABAP systems.
Note
When using CommonCryptoLibrary as SNC implementation, SAP note 1525059 helps you configure
the PSE to be associated with the user running the Cloud Connector process.
Note
When using the SAP Cryptographic Library as SNC implementation, you can use the interactive setup scripts
in the Cloud Connector's installation folder to ease the setup process.
Using the scripts mentioned below is not mandatory for using the SAP Cryptographic Library. If you are
familiar with the steps needed to run a process with the desired identity, you can also
configure the SNC setup for the SAP Cryptographic Library on your own.
1. Download and extract the SAP Cryptographic Library from the Download Center (search for
sapcryptolib).
2. Copy the respective scripts depending on your OS to the SAP Cryptographic Library folder.
3. Make sure the Cloud Connector process is running.
4. Make sure the environment variable SECUDIR is properly set.
1. For Linux, you can set it solely for the Cloud Connector process by extending the daemon as described
in Installation on Linux OS [page 378].
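For the manual script runs described below, setting the variable in the current shell could, for example, look like this; the directory is illustrative and must match your actual secure store location:
Sample Code
export SECUDIR=/usr/sap/scc/sec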
5. Copy the partner's certificate you have exported to your SECUDIR folder. You must specify this to import it
into your PSE.
6. Run the script snc_create_pse now. You should see the SECUDIR and the Cloud Connector user:
SECUDIR D:\sec found.
Cloud Connector is running with user NT AUTHORITY\SYSTEM
7. Specify the certificate name for the Cloud Connector:
Specify the own certificate name in the format as CN=host,OU=org,O=comp,C=lang: CN=SCC
Specify the PSE password: <enter secure PW here>
8. Specify the exported certificate of the partner from step 5:
Specify the import certificate file of the partner contained in D:\sec: exported_abap_cert.cer
Creating PSE now...
...
PSE creation finished.
9. Now, a PSE with the name scc.pse has been created in your SECUDIR folder. Additionally, a file
sccCertificateRequest.p10 has been created. This is the CSR you can use to get a certificate signed
by your CA. You can import the response certificate from your CA directly, or use the script
import_ca_response later. If you don't want your certificate to be signed by a CA and prefer to use it as a
self-signed certificate, choose no. Depending on your CA, you might need to provide, besides the signed
certificate, all other intermediate certificates and the root certificate for the import. Copy them to the
SECUDIR folder and specify them in the order shown below.
Do you want to import a CA response now? [y/n] y
Specify the CA response file contained in D:\sec: ca_response.crt
Specify further Root CAs (PEM, Base64 or DER binary) in D:\sec needed to complete the chain (separated
by blanks), otherwise press enter: intermediate.crt root.crt
Importing CA response now...
...
CA-Response successfully imported into PSE "D:\sec\scc.pse"
10. Now, all other required steps are done, such as creating the credentials file cred_v2, importing the partner
certificate and exporting the Cloud Connector's certificate to SECUDIR with the name scc.crt (which
must be imported at the partner's side).
Create SSO server credentials...
...
Creation finished.
Related Information
Configure the Cloud Connector to support LDAP in different scenarios (cloud applications using LDAP or Cloud
Connector authentication).
Prerequisites
You have installed the Cloud Connector and performed the basic configuration.
Steps
When using LDAP-based user management, you have to configure the Cloud Connector to support this feature.
Depending on the scenario, you need to perform the following steps:
Scenario 1: Cloud applications using LDAP for authentication. Configure the destination of the LDAP server in
the Cloud Connector: Configure Access Control (LDAP) [page 474].
Scenario 2: Internal Cloud Connector user management. Activate LDAP user management in the Cloud
Connector: Use LDAP for User Administration [page 637].
Add and connect your SAP BTP subaccounts to the Cloud Connector.
Note
This topic refers to subaccount management in the Cloud Connector. If you are looking for information
about managing subaccounts on SAP BTP (Cloud Foundry or Neo environment), see
Context
You can connect to several subaccounts within a single Cloud Connector installation. Those subaccounts
can use the Cloud Connector concurrently with different configurations. By selecting a subaccount from the
drop-down box, all tab entries show the configuration, audit, and state, specific to this subaccount. In case of
audit and traces, cross-subaccount info is merged with the subaccount-specific parts of the UI.
Note
We recommend that you group only subaccounts with the same qualities in a single installation:
• Productive subaccounts should reside on a Cloud Connector that is used for productive subaccounts
only.
• Test and development subaccounts can be merged, depending on the group of users that are supposed
to deal with those subaccounts. However, the preferred logical setup is to have separate development
and test installations.
Prerequisites
You have assigned one of these roles/role collections to the subaccount user that you use for initial Cloud
Connector setup, depending on the SAP BTP environment in which your subaccount is running:
Note
For the Cloud Foundry environment, you must know on which cloud management tools feature set (A or B)
your account is running. For more information on feature sets, see Cloud Management Tools — Feature Set
Overview.
• Cloud Foundry [feature set A]: The user must be a member of the global account that the subaccount belongs to. Alternatively, you can assign the user as Security Administrator. For more information, see Add Members to Your Global Account and Managing Security Administrators in Your Subaccount [Feature Set A].
• Cloud Foundry [feature set B]: Assign at least one of these default role collections (all of them including the role Cloud Connector Administrator): Subaccount Administrator, Cloud Connector Administrator, Connectivity and Destination Administrator. For more information, see Default Role Collections [Feature Set B] [page 15] and Role Collections and Roles in Global Accounts, Directories, and Subaccounts [Feature Set B].
After establishing the Cloud Connector connection, this user is no longer needed, since it serves only for
the initial connection setup. You may then revoke the corresponding role assignment and remove the user from
the Members list.
Note
If the Cloud Connector is installed in an environment that is operated by SAP, SAP provides a user that you
can add as member in your SAP BTP subaccount and assign the required role.
Subaccount Dashboard
In the subaccount dashboard (choose your Subaccount from the main menu), you can check the state of all
subaccount connections managed by this Cloud Connector at a glance.
The Sort buttons for the columns Subaccount and Display Name let you sort the entries by the column either
ascending or descending, and the Filter buttons in the columns let you filter the listed entries.
You can use the Filter buttons above the dashboard to filter the shown subaccounts based on the connection
status. You can select all subaccounts, all connected ones, all disconnected ones, all subaccounts currently in
connecting or reconnecting status, or all subaccounts for which establishing the connection has failed.
If you want to connect an additional subaccount to your on-premise landscape, press the Add Subaccount
button. For more information on the configuration procedure, see Set up Connection Parameters and HTTPS
Proxy [page 391] in the context of initial configuration.
Remember
Keep in mind that the specification of an HTTPS proxy is only offered when establishing the first
connection.
Next Steps
• To modify an existing subaccount, choose the Edit icon and change the <Display Name>, <Location
ID> and/or <Description>.
Related Information
You can copy the configuration of a subaccount's Cloud To On-Premise and On-Premise To Cloud sections to a
new subaccount by using the export and import functions in the Cloud Connector administration UI.
Note
Principal propagation configuration (section Cloud To On-Premise) is not exported or imported, since it
contains subaccount-specific data.
1. In the Cloud Connector administration UI, choose your subaccount from the navigation menu.
2. To export the existing configuration, choose the Export button in the upper right corner. The configuration
is downloaded as a zip file to your local file system.
1. From the navigation menu, choose the subaccount to which you want to copy an existing configuration.
2. To import an existing configuration, choose the Import button in the upper right corner.
Enable custom identity provider (IDP) authentication to configure a Cloud Foundry subaccount in the Cloud
Connector by using a one-time passcode.
Content
Context
For a subaccount in the Cloud Foundry environment that uses a custom IDP, you can choose this IDP
for authentication instead of the (default) SAP ID service when configuring the subaccount in the Cloud
Connector.
Using custom IDP authentication, you can perform the following operations in the Cloud Connector:
• Set up Connection Parameters and HTTPS Proxy [page 391]: Add an initial subaccount to a fresh Cloud Connector installation.
• Managing Subaccounts [page 404]: Add additional subaccounts to an existing Cloud Connector installation.
• Update the Certificate for a Subaccount [page 413]: Refresh a subaccount certificate's validity period.
To enable custom IDP authentication, for each of these operations you must enter the marker value $SAP-CP-
SSO-PASSCODE$ in the <Subaccount User> or <User Name> field, and a one-time generated passcode
(known as temporary authentication code) in the <Password> field:
• When adding the initial subaccount to a fresh Cloud Connector installation, enter the user name $SAP-
CP-SSO-PASSCODE$ and the passcode on the Cloud Connector's Define Subaccount screen (see also Set
up Connection Parameters and HTTPS Proxy [page 391]):
To retrieve the one-time generated passcode, you must use the correct login URL for single sign-on (SSO) to
access your custom IDP. The procedure to get this URL depends on the SAP BTP feature set you are using.
Note
To choose the right procedure, you must know on which cloud management tools feature set (A or B) your
SAP BTP account is running. For more information on feature sets, see Cloud Management Tools — Feature
Set Overview.
Next Step
Caution
Mind the respective user rights described in the prerequisites for Initial Configuration [page 388] and
Managing Subaccounts [page 404]. For feature set A, it is mandatory to be a subaccount Security
Administrator, not only a global account member.
Sample Code
$ cf api api.cf.eu10.hana.ondemand.com
Setting api endpoint to api.cf.eu10.hana.ondemand.com...
OK
api endpoint: https://api.cf.eu10.hana.ondemand.com
api version: 2.156.0
$ cf login --sso
API endpoint: https://api.cf.eu10.hana.ondemand.com
Temporary Authentication Code ( Get one at https://
login.cf.eu10.hana.ondemand.com/passcode ):
Next Step
Option 2: Get the URL Using the Connectivity Service Instance Credentials
Next Step
1. Open the resulting URL in your browser to get the one-time passcode via SSO:
• If there is an active user session, the passcode is generated automatically and returned right away.
• If there is no active user session, you are asked to log on to the IDP manually. If several IDPs are
configured, you can choose one from the available options.
2. Use the passcode to proceed with the subaccount configuration in the Cloud Connector UI.
Back to Content [page 409]
Certificates used by the Cloud Connector are issued with a limited validity period. To prevent a downtime while
refreshing the certificate, you can update it for your subaccount directly from the administration UI.
You must have the required subaccount authorizations on SAP BTP to update certificates for your subaccount.
See:
Procedure
Tip
You can use this procedure even if the certificate has already expired.
1. From the main menu, choose your subaccount, and navigate to subaccount overview.
Note
To check the certificate's validity, click on the <Subaccount Certificate> in section Subaccount
Overview.
2. Press the Certificate button. Depending on the cloud environment, you can refresh using authentication
data from a file or manually by providing user name and password, or only the latter. The respective dialog
or wizard is shown.
3. Enter <User Name> and <Password> (or use the file with authentication data if the option is offered) and
choose Refresh (or Finish). The certificate assigned to your subaccount is refreshed.
Note
In the Cloud Foundry environment, you must provide your Login E-mail instead of a user ID as
<User Name>.
Tip
When using SAP Cloud Identity Services - Identity Authentication (IAS) as platform identity provider
with two-factor authentication (2FA / MFA) for your subaccount, you can simply append the required
token to the regular password. For example, if your password is "eX7?6rUm" and the one-time
passcode is "123456", you must enter "eX7?6rUm123456" into the <Password> field.
Caution
Due to the discontinuation of the Enhanced Disaster Recovery Service, the related functionality in the Cloud
Connector has been dropped as of version 2.16.
For more information, see What's New for SAP Business Technology Platform.
Each subaccount (except trial accounts) can optionally have a disaster recovery subaccount.
The disaster recovery subaccount is intended to take over if the region host of its associated original
subaccount faces severe issues.
A disaster recovery account inherits the configuration from its original subaccount except for the region host.
The user can be, but does not have to be, the same.
Procedure
Note
The selected region host must be different from the region host of the original subaccount.
Note
The technical subaccount name, the display name, and the location ID must remain the same. They are set
automatically and cannot be changed.
Note
You cannot choose another original subaccount nor a trial subaccount to become a disaster recovery
subaccount.
If you want to change a disaster recovery subaccount, you must delete it first and then configure it again.
To switch from the original subaccount to the disaster recovery subaccount, choose Employ disaster recovery
subaccount.
The disaster recovery subaccount then becomes active, and the original subaccount is deactivated.
You can switch back to the original subaccount as soon as it is available again.
Note
As of Cloud Connector 2.11, the cloud side informs about a disaster by issuing an event. In this case, the
switch is performed automatically.
Related Information
Convert a disaster recovery subaccount into a standard subaccount if the former primary subaccount's region
cannot be recovered.
Caution
Due to the discontinuation of the Enhanced Disaster Recovery Service, the related functionality in the Cloud
Connector has been dropped as of version 2.16.
Disaster recovery subaccounts that were switched to disaster recovery mode can be elevated to standard
subaccounts if a disaster recovery region replaces an original region that is not expected to recover.
If a disaster recovery subaccount should be used as primary subaccount, you can convert it by choosing the
button Discard original subaccount and replace it with disaster recovery subaccount.
Get your subaccount ID to configure the Cloud Connector in the Cloud Foundry environment.
Note
For the Beta version, the cloud cockpit is not yet available.
In order to set up your subaccount in the Cloud Connector, you must know the subaccount ID. Follow these
steps to acquire it:
Tip
This procedure is useful in particular for regions introduced after the release of your current Cloud
Connector version. Those regions are not included in the list of predefined regions.
If you want to use a custom region for your subaccount, you can configure regions in the Cloud Connector,
which are not listed in the selection of standard regions.
1. From the Cloud Connector main menu, choose Configuration → Cloud and go to the Custom Regions
section.
2. To add a region to the list, choose the Add icon.
3. In the Add Region dialog, enter the <Region> and <Region Host> you want to use.
4. Choose Save.
5. To edit a region from the list, select the corresponding line and choose the Edit icon.
Currently, the Cloud Connector supports basic authentication, principal propagation, and technical user
propagation as user authentication types towards internal systems. The destination configuration used by the
cloud application determines which of these authentication types is applied:
• To use basic authentication, configure an on-premise system to accept basic authentication and to
provide one or multiple service users. No additional steps are necessary in the Cloud Connector for this
authentication type.
• To use principal propagation or technical user propagation, you must explicitly configure trust to those
cloud entities from which user tokens are accepted as valid.
For more information, see Configuring Principal Propagation [page 419] and Configuring Technical User
Propagation [page 455].
• When using HTTP as communication protocol, you can also perform a logon using the Cloud Connector
system certificate, if there is no principal provided by the cloud side. The access control entry must be
configured for this.
Note
For HTTP, also some other token-based authentication methods might work between the cloud
application and the backend, depending on the backend capabilities.
Related Information
Use principal propagation to simplify the access of SAP BTP users to on-premise systems.
• Set Up Trust [page 420]: Configure a trusted relationship in the Cloud Connector to support principal propagation. Principal propagation lets you forward the logged-on identity in the cloud to the internal system without requesting a password.
• Configure a CA Certificate [page 423]: Install and configure an X.509 certificate to enable support for principal propagation.
• Configuring Identity Propagation to an ABAP System [page 429]: Learn more about the different types of configuring and supporting principal propagation for a particular AS ABAP.
• Configure Subject Patterns for Principal Propagation [page 443]: Define a pattern identifying the user for the subject of the generated short-lived X.509 certificate, as well as its validity period.
• Configure a Secure Login Server [page 448]: Configuration steps for Java Secure Login Server (SLS) support.
• Configure Kerberos [page 452]: The Cloud Connector lets you propagate users authenticated in SAP BTP via Kerberos against back-end systems. It uses the Service For User and Constrained Delegation protocol extension of Kerberos.
• Configuring Identity Propagation to SAP NetWeaver AS for Java [page 454]: Find step-by-step instructions on how to configure principal propagation to an application server Java (AS Java).
Related Information
Establish trust to an identity provider to support principal propagation and technical user propagation.
Tasks
The information in this section applies to both principal propagation and technical user propagation.
You perform trust configuration to support principal propagation, that is, forwarding the logged-on identity in
the cloud to the internal system without the need of providing the password. The same is done for technical
user propagation, which logs on a technical user identified by an access token for an OAuth client without the
need of providing the password.
By default, your Cloud Connector does not trust any entity that issues tokens for principal propagation.
Therefore, the list of trusted identity providers (IdPs) is empty by default.
If you decide to use the principal propagation feature, you must establish trust to at least one IdP. The following
IdP types are supported:
Note
In the Neo environment, you can also trust HANA instances and Java applications to act as IdPs.
You can configure trust to one or more IdPs per subaccount. After you've configured trust in the cockpit for
your subaccount, for example, to your own company's IdP(s), you can synchronize this list with your Cloud
Connector.
From your subaccount menu, choose Cloud to On-Premise and go to the Principal Propagation tab. Choose the
Synchronize button to store the list of existing identity providers locally in your Cloud Connector.
Note
Whenever you update a SAML IdP configuration for a subaccount on the cloud side, you must
synchronize the trusted entities in the Cloud Connector. Otherwise, the validation of the forwarded
SAML assertion will fail with an exception containing a message similar to this:
Caused by: com.sap.engine.lib.xml.signature.SignatureException: Unable to validate signature ->
java.security.SignatureException: Signature decryption error: javax.crypto.BadPaddingException: Invalid
PKCS#1 padding: encrypted message and modulus lengths do not match!.
For more information, see also Include Tokens from Corporate Identity Providers or Identity Authentication in
Tokens of the SAP Authorization and Trust Management Service.
Set up principal propagation from SAP BTP to your internal system that is used in a hybrid scenario.
Note
As a prerequisite for principal propagation for RFC, the following cloud application runtime versions are
required:
1. Set up trust to an entity that is issuing an assertion for the logged-on user (see section above).
2. Set up the system identity for the Cloud Connector.
• For HTTPS, you must import a system certificate into your Cloud Connector.
• For RFC, you must import an SNC PSE into your Cloud Connector.
3. Configure the target system to trust the Cloud Connector.
There are two levels of trust:
1. First, you must allow the Cloud Connector to identify itself with its system certificate (for HTTPS), or
with the SNC PSE (for RFC).
2. Then, you must allow this identity to propagate the user accordingly:
• For HTTPS, the Cloud Connector forwards the true identity in a short-lived X.509 certificate in
an HTTP header named SSL_CLIENT_CERT. The system must use this certificate for logging on
the real user. The SSL handshake, however, is performed through the system certificate. For more
information on identity forwarding, see Configure Access Control (HTTP) [page 459].
• For RFC, the Cloud Connector forwards the true identity as part of the RFC protocol.
For more information, see Configuring Identity Propagation to an ABAP System [page 429].
Note
If you use an identity provider that issues unsigned assertions, you must mark all relevant applications as
trusted by the Cloud Connector in tab Principal Propagation, section Trust Configuration.
Configure an allowlist for trusted cloud applications, see Configure Trust [page 619].
Configure a trust store that acts as an allowlist for trusted on-premise systems. See Configure Trust [page
619].
Related Information
Install and configure an X.509 certificate to enable support for principal propagation in the Cloud Connector.
The information in this section applies to both principal propagation and technical user propagation.
Supported CA Mechanisms
You can enable support for principal propagation or technical user propagation with X.509 certificates by
performing either of the following procedures:
• Importing a dedicated CA certificate into the Cloud Connector, which the Cloud Connector then uses to issue
the short-lived certificates itself.
Note
Prior to version 2.7.0, this was the only option and the system certificate was acting both as client
certificate and CA certificate in the context of principal propagation.
• Using a Secure Login Server (SLS) and delegating the CA functionality to it.
The Cloud Connector uses the configured CA approach to issue short-lived certificates for logging on the same
identity in the back end that is logged on in the cloud. For establishing trust with the back end, the respective
configuration steps are independent of the approach that you choose for the CA.
To issue short-lived certificates that are used for principal propagation to a back-end system, you can import
an X.509 client certificate into the Cloud Connector. This CA certificate must be provided as PKCS#12 file
containing the (intermediate) certificate, the corresponding private key, and the CA root certificate that signed
the intermediate certificate (plus the certificates of any other intermediate CAs, if the certificate chain includes
more than those two certificates).
• Option 1: Choose the PKCS#12 file from the file system, using the file upload dialog. For the import process,
you must also provide the file password.
• Option 2: Start a Certificate Signing Request (CSR) procedure like for the UI certificate, see Exchange UI
Certificates in the Administration UI [page 632].
• Option 3: Generate a self-signed certificate, which might be useful in a demo setup or if you need a
dedicated CA. In particular for this option, it is useful to export the public key of the CA via the button
Download certificate in DER format.
Caution
The CA certificate should have the KeyUsage attribute keyCertSign. Many systems verify that the issuer
of a certificate has this attribute and deny a client certificate, if this attribute is not present. When using
the Certificate Signing Request procedure, the attribute will be requested for the CA certificate. Also, when
generating a self-signed certificate, this attribute will be added automatically.
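To verify whether a CA certificate carries this attribute, you could, for example, inspect it with OpenSSL (the file name is a placeholder); keyCertSign appears as Certificate Sign in the X509v3 Key Usage section of the output:
Sample Code
openssl x509 -in <ca_certificate.pem> -noout -text | grep -A1 "X509v3 Key Usage"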
Choose Create and import a self-signed certificate if you want to use option 3:
In particular for this option, it is useful to export the public key of the CA by choosing the respective button.
After successful import of the CA certificate, its distinguished name, the name of the issuer, and the validity
dates are shown:
If a CA certificate is no longer required, you can delete it. Use the respective Delete button and confirm the
deletion.
If you want to delegate the CA functionality to a Secure Login Server (SLS), choose the CA using the Secure
Login Server option and configure the SLS as follows, after having configured the SLS as described in Configure
a Secure Login Server [page 448].
In a first step, a wizard offers a quick configuration via a metadata URL pointing to the SLS you'd like to use.
Using the metadata URL lets you fetch the most relevant data from the SLS instance. You only have to choose
the profile configured on the SLS that should be used for the generation of short-lived certificates. Choose
Finish to save the configuration.
If you don't have the metadata URL, or want to change the current configuration without a metadata URL, you
can keep the field empty and go to the next step.
• Profile: SLS profile that allows issuing certificates as needed for principal propagation with the Cloud
Connector.
• Authentication Port: Port over which the Cloud Connector requests the short-lived certificates
from SLS.
Note
For this privileged port, client certificate authentication is required, for which the Cloud Connector's
system certificate is used.
Related Information
Learn more about the different types of configuring and supporting principal propagation and technical user
propagation for a particular AS ABAP.
Note
The information in this section applies to both principal propagation and technical user propagation.
• Configure Identity Propagation for HTTPS [page 429]: Step-by-step instructions to configure principal propagation to an ABAP server for HTTPS.
• Configure Identity Propagation via SAP Web Dispatcher [page 434]: Set up a trust chain to use principal propagation to an ABAP server for HTTPS via SAP Web Dispatcher.
• Configure Identity Propagation for RFC [page 437]: Step-by-step instructions to configure principal propagation to an ABAP server for RFC.
• Rule-Based Mapping of Certificates [page 441]: Map short-lived certificates to users in the ABAP server.
Find step-by-step instructions to configure principal propagation to an ABAP server for HTTPS.
Note
The information in this section applies to both principal propagation and technical user propagation. These
two types of user propagation are combined as identity propagation.
Example Data
Tasks
Configure an ABAP System to Trust the Cloud Connector's System Certificate [page 430]
Prerequisites
To perform the following steps, you must have the corresponding authorizations in the ABAP system for the
transactions mentioned below (administrator role according to your specific authorization management) as
well as an administrator user for the Cloud Connector.
Configure the ABAP system to trust the Cloud Connector's system certificate [page 430]
Configure the Internet Communication Manager (ICM) to trust the system certificate for principal propagation
[page 431]
Configure the ABAP system to trust the Cloud Connector's system certificate:
Configure the Internet Communication Manager (ICM) to trust the system certificate for identity
propagation:
Note
If your ABAP system uses kernel 7.42 or lower, see SAP note 2052899 or set the following two
parameters:
• icm/HTTPS/trust_client_with_issuer: this is the issuer of the system certificate (example
data: CN=MyCompany CA, O=Trust Community, C=DE).
• icm/HTTPS/trust_client_with_subject: this is the subject of the system certificate
(example data: CN=SCC, OU=BTP Scenarios, O=Trust Community, C=DE).
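For illustration, with the example data above the profile entries would look like this; the values must match your own system certificate:
Example
icm/HTTPS/trust_client_with_issuer = CN=MyCompany CA, O=Trust Community, C=DE
icm/HTTPS/trust_client_with_subject = CN=SCC, OU=BTP Scenarios, O=Trust Community, C=DE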
Caution
The ICM expects blanks after the separating comma for the issuer and subject elements. So, even if
the Cloud Connector administration UI shows a string without blanks, for example: CN=SCC,OU=BTP
Scenarios,O=Trust Community,C=DE, you must specify in ICM: CN=SCC, OU=BTP Scenarios, O=Trust
Community, C=DE.
Caution
If you have an SAP Web Dispatcher installed in front of the ABAP system, trust must be added in its
configuration files with the same parameters as for the ICM. Also, you must add the system certificate
of the Cloud Connector to the trust list of the Web dispatcher Server PSE. For more information, see
Configure Identity Propagation via SAP Web Dispatcher [page 434].
Caution
When using identity propagation with X.509 certificates, you cannot use the strict mode in certificate block
management (transaction code: CRCONFIG) for the CRL checks within profile SSL_SERVER.
For systems later than SAP NetWeaver 7.3 EHP1 (7.31), you can use rule-based certificate mapping, which is
the recommended way to create the required user mappings. For more information, see Rule-Based Mapping
of Certificates [page 441].
In older releases (for which this feature does not yet exist), you can do this manually in the system as described
below, or use an identity management solution that generates the mapping table for a more convenient approach.
To access the required ICF services for your scenario in the ABAP system, choose one of the following
procedures:
• If some of the ICF services require Basic Authentication, while others should be accessed via system
certificate logon, perform these steps:
1. In the Cloud Connector system mapping, select Allow Principal Propagation, choose the principal type
X.509 Certificate, and select System Certificate for Logon in the corresponding system mapping
as described above.
2. In the ABAP system, choose transaction code SICF and go to Maintain Services.
3. Select the service that requires Basic Authentication as logon method.
4. Double-click the service and go to tab Logon Data.
5. Switch to Alternative Logon Procedure and ensure that the Basic Authentication logon procedure
is listed before Logon Through SSL Certificate.
• To access ICF services via certificate logon, choose the principal type X.509 Certificate (general
usage) in the corresponding system mapping. This setting lets you use the system certificate for trust as
well as for user authentication.
For details, see Configure Access Control (HTTP) [page 459], step 8 (Procedure for Cloud Connector
versions below 2.15).
Additionally, make sure that all required ICF services allow Logon Through SSL Certificate as logon
method.
• To access ICF services via the logon method Basic Authentication (logon with user/password)
and principal propagation, choose the principal type X.509 Certificate (strict usage) in the
corresponding system mapping. This setting lets you use the system certificate for trust, but prevents its
usage for user authentication.
For details, see Configure Access Control (HTTP) [page 459], step 8 (Procedure for Cloud Connector
versions below 2.15).
Additionally, make sure that all required ICF services allow Basic Authentication and Logon Through
SSL Certificate as logon methods.
Note
If you are using SAP Web Dispatcher for communication, you must configure it to forward the SSL
certificate to the ABAP backend system, see Forward SSL Certificates for X.509 Authentication (SAP Web
Dispatcher documentation).
Related Information
Set up a trust chain to use identity propagation to an ABAP System for HTTPS via SAP Web Dispatcher.
Note
The information in this section applies to both principal propagation and technical user propagation. These
two types of user propagation are combined as identity propagation.
If you are using an intermediate SAP Web Dispatcher to connect to your ABAP backend system, you must set
up a trust chain between the involved components Cloud Connector, SAP Web Dispatcher, and ABAP backend
system.
Before configuring the ABAP system (see Configure Identity Propagation for HTTPS [page 429]), in a first
step you must configure SAP Web Dispatcher to accept and forward user principals propagated from a cloud
account to an ABAP backend.
Example Data
Tasks
Prerequisites
• Your SAP Web Dispatcher version is 7.53 or higher. See SAP note 908097 for information on
recommended SAP Web Dispatcher versions.
• We recommend that you use a standalone Web Dispatcher deployment. To learn about deployment
options, see SAP note 3115889 .
• Make sure your SAP Web Dispatcher supports SSL. See Configure SAP Web Dispatcher to Support SSL.
• Ensure that TLS client certificates can be used for authentication in the backend system. See How to
Configure SAP Web Dispatcher to Forward SSL Certificates for X.509 Authentication for step-by-step
instructions.
To allow Cloud Connector client certificates for authentication in the backend system, perform the following
two steps:
1. Configure SAP Web Dispatcher to trust the Cloud Connector's system certificate:
1. To import the system certificate to SAP Web Dispatcher, open the SAP Web Dispatcher administration
interface in your browser.
Note
2. In the menu, navigate to SSL and Trust Configuration and select PSE Management.
3. In the Manage PSE section, select SAPSSLS.pse from the drop-down list. By default, SAPSSLS.pse
contains the server certificate and the list of trusted clients that SAP Web Dispatcher trusts as a
server.
4. In the Trusted Certificates section, choose Import Certificate.
5. Enter the certificate as base64-encoded into the text box. The procedure to export your certificate in
such a format is described in Forward SSL Certificates for X.509 Authentication, step 1.
Note
Typically, this is a CA certificate. If you are using a self-signed system certificate, it's the system
certificate itself.
6. Choose Import.
7. The certificate details are now shown in section Trusted Certificates.
2. Configure SAP Web Dispatcher to trust the Cloud Connector's system certificate for identity
propagation:
• Create or edit the following parameter in SAP Web Dispatcher:
icm/trusted_reverse_proxy_<x> = SUBJECT="<subject>", ISSUER="<issuer>"
• Select a free index for <x>.
• <subject>: Subject of the system certificate (example data: CN=SCC, OU=BTP Scenarios,
O=Trust Community, C=DE)
• <issuer>: Issuer of the system certificate (example data: CN=MyCompany CA, O=Trust
Community, C=DE)
Example: icm/trusted_reverse_proxy_0 = SUBJECT="CN=SCC, OU=BTP Scenarios,
O=Trust Community, C=DE", ISSUER="CN=MyCompany CA, O=Trust Community, C=DE"
Caution
The ICM expects blanks after the separating comma for the issuer and subject elements. So,
even if the Cloud Connector administration UI shows a string without blanks, for example:
CN=SCC,OU=BTP Scenarios,O=Trust Community,C=DE, you must specify in ICM: CN=SCC,
OU=BTP Scenarios, O=Trust Community, C=DE.
• [Deprecated] Create or edit the following two parameters in SAP Web Dispatcher:
Note
Next Steps
• Step 1 of the basic identity propagation setup for HTTPS, see Configure an ABAP System to Trust the Cloud
Connector's System Certificate [page 430]. However, when using SAP Web Dispatcher, the ABAP backend
must trust the SAP Web Dispatcher instead of the Cloud Connector, see Forward SSL Certificates for X.509
Authentication, step 2 for details.
Then perform the remaining steps of the basic identity propagation setup for HTTPS as described here:
Find step-by-step instructions to configure principal propagation to an ABAP server for RFC.
Configuring principal propagation for RFC requires an SNC (Secure Network Communications) connection. To
enable SNC, you must configure the ABAP system and the Cloud Connector accordingly.
The following example provides step-by-step instructions for the SNC setup.
It is important that you use the same SNC implementation on both communication sides. Contact the
vendor of your SNC solution to check the compatibility rules.
Example Data
Note
The parameters provided in this example are based on an SNC implementation that uses the SAP
Cryptographic Library. Other vendors' libraries may require different values.
• An SNC identity has been generated and installed on the Cloud Connector host. Generating this identity
for the SAP Cryptographic Library is typically done using the tool SAPGENPSE. For more information, see
Configuring SNC for SAPCRYPTOLIB Using SAPGENPSE.
• The ABAP system is configured properly for SNC.
Note
For the latest system releases, you can use the SSO wizard to configure SNC (transaction code:
SNCWIZARD). System prerequisites are described in SAP note 2015966 .
• The Cloud Connector system identity's SNC name is p:CN=SCC, OU=SAP CP Scenarios, O=Trust
Community, C=DE.
• The ABAP system's SNC identity name is p:CN=SID, O=Trust Community, C=DE. This value can
typically be found in the ABAP system instance profile parameter snc/identity/as and hence is
provided per application server.
• When using the SAP Cryptographic Library, the ABAP system's SNC identity and the Cloud Connector's
system identity should be signed by the same CA for mutual authentication.
• The example short-lived certificate has the subject CN=P1234567, where P1234567 is the SAP BTP
application user.
Tasks
1. Open the SNC Access Control List for Systems (transaction SNC0).
2. As the Cloud Connector does not have a system ID, use an arbitrary value for <System ID> and enter it
together with its SNC name: p:CN=SCC, OU=SAP CP Scenarios, O=Trust Community, C=DE.
3. Save the entry and choose the Details button.
4. In the next screen, activate the checkboxes for Entry for RFC activated and Entry for certificate activated.
5. Save your settings.
You can do this manually in the system as described below, or use an identity management solution for a more
convenient approach. For example, for large numbers of users, rule-based certificate mapping is a good way
to save time and effort. See Rule-Based Certificate Mapping.
Set up the Cloud Connector to Use the SNC Implementation [page 440]
Prerequisites
• The required security product for the SNC flavor that is used by your ABAP back-end systems is installed
on the Cloud Connector host.
• The Cloud Connector's system SNC identity is associated with the operating system user under which the
Cloud Connector process is running.
If you use SAP Cryptographic Library as SNC implementation, follow the steps described in Initial
Configuration (RFC) [page 400]. Additionally, SAP note 2642538 provides a good description of how to
associate an SNC identity of the SAP Cryptographic Library with a user running an external program that
uses JCo. When using a different SNC offering, get in touch with the SNC library vendor for details.
1. In the Cloud Connector UI, choose Configuration from the main menu, select the On Premise tab, and go to
the SNC section.
2. Provide the fully qualified name of the SNC library (the security product's shared library implementing the
GSS API), the SNC name of the above system identity, and the desired quality of protection by choosing
the Edit icon.
For more information, see Initial Configuration (RFC) [page 400].
Note
The example in Initial Configuration (RFC) [page 400] shows the library location if you use the SAP
Secure Login Client as your SNC security product. In this case (as well as for some other security
products), SNC My Name is optional, because the security product automatically uses the identity
associated with the current operating system user under which the process is running, so you can
leave that field empty. (Otherwise, in this example it should be filled with p:CN=SCC, OU=SAP CP
Scenarios, O=Trust Community, C=DE.)
We recommend that you enter Maximum Protection for <Quality of Protection>, if your
security solution supports it, as it provides the best protection.
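A minimal sketch of possible values for these settings, based on the example data above and the SAP
Cryptographic Library (the library path is a hypothetical installation path; adjust it to your host):

SNC Library:            /usr/sap/scc/libsapcrypto.so   (hypothetical path)
SNC My Name:            p:CN=SCC, OU=SAP CP Scenarios, O=Trust Community, C=DE
Quality of Protection:  Maximum Protection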
1. In the Access Control section of the Cloud Connector, create a hostname mapping corresponding to the
cloud-side RFC destination. See Configure Access Control (RFC) [page 467].
2. Make sure you choose RFC SNC as <Protocol> and ABAP System as <Back-end Type>. In the <SNC
Partner Name> field, enter the ABAP system's SNC identity name, for example, p:CN=SID, O=Trust
Community, C=DE.
3. Save your mapping.
Learn how to efficiently map short-lived certificates to users in the ABAP server.
Note
The information in this section applies to both principal propagation and technical user propagation.
Note
If dynamic parameters are disabled, enter the value using transaction RZ10 and restart the whole
ABAP system.
Note
To access transaction CERTRULE, you need the corresponding authorizations (see: Assign
Authorization Objects for Rule-based Mapping [page 442]).
Note
When you save the changes and return to transaction CERTRULE, the sample certificate that you
imported in Step 2b is not saved. This is just a sample editor view that lets you preview the sample
certificates and mappings.
Related Information
Define patterns identifying the user for the subject of a generated short-lived X.509 certificate.
Note
The information in this section applies to both principal propagation and technical user propagation.
Using this configuration option, you can define different patterns identifying the user for the subject of the
generated short-lived X.509 certificate, based on a specified condition. You can also specify the validity period
and expiration tolerance.
To configure a subject pattern rule, choose Configuration > On Premise > Principal Propagation. In the
table shown, you can add or modify a rule consisting of a pattern and a condition.
This table represents an ordered list containing entries that have a specified condition, and the respective
subject pattern. You can change the order for an entry by choosing the respective arrow buttons. The workflow
in the Cloud Connector looks like this:
Use either of the following procedures to specify a condition based on the attributes of incoming tokens from
the cloud side:
Using the selection menu, you can assign values for the following parameters:
• ${user_type}
• ${name}
• ${mail}
• ${email}
• ${display_name}
• ${login_name}
Operators
Note
For the condition ${user_type}, you can only switch between Technical and Business. The latter refers
to the "classical" propagation of business user information, whereas Technical is the propagation of a
technical user.
Use either of the following procedures to define the subject's distinguished name (DN), for which the certificate
will be issued:
Using the selection menu, you can assign values for the following parameters:
• ${name}
• ${mail}
• ${display_name}
• ${login_name}
Note
If the token provided by the Identity Provider contains additional values that are stored in attributes with
different names, but you still want to use them for the subject pattern, you can edit the variable name so
that the corresponding attribute value is placed in the subject. For example, provide ${email} if a
SAML assertion uses email instead of mail, or ${user_uuid} if the attribute user_uuid
representing the global user ID is contained in the assertion.
When using a subaccount in the Cloud Foundry environment: The Cloud Connector also offers direct
access to custom variables injected in the JWT (JSON Web token) by SAP BTP Authorization & Trust
Management that were taken over from a SAML assertion.
The values for these variables are provided by the trusted identity provider in the token that is passed to the
Cloud Connector and specifies the user that has logged on to the cloud application.
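A hypothetical example of such a rule, combining the condition parameters and example data from this
section (the operator, the OU value, and the O value are illustrative assumptions only):

Condition:        ${user_type} equals Business
Subject pattern:  CN=${name}, OU=BTP Scenarios, O=Trust Community, C=DE

At runtime, for a business user named P1234567, the Cloud Connector would then issue a short-lived
certificate with the subject CN=P1234567, OU=BTP Scenarios, O=Trust Community, C=DE.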
Sample Certificate
By choosing Generate Sample Certificate, you can create a sample certificate that looks like one of the short-
lived certificates created at runtime. You can use this certificate, for example, to generate user mapping rules
in the target system via transaction CERTRULE in an ABAP system. If your subject pattern contains variable
fields, a wizard lets you provide meaningful values for each of them, and you can then save the sample
certificate in DER format.
Validity Settings
You can change the validity settings by choosing the Edit button.
Note
The information in this section applies to both principal propagation and technical user propagation.
Note
The Secure Login Server mainstream maintenance ends on December 31, 2027.
Content
Overview
The Cloud Connector can use on-the-fly generated X.509 user certificates to log in to on-premise systems
if the external user session is authenticated (for example by means of SAML). If you do not want to use
the built-in certification authority (CA) functionality of the Cloud Connector (for example because of security
considerations), you can connect SAP SSO 2.0 Secure Login Server (SLS) or higher.
Note
Make sure you use a version that is still supported, which is currently at least SAP SSO 3.0 Secure Login
Server.
SLS is a Java application running on AS JAVA 7.20 or higher, which provides interfaces for certificate
enrollment.
• HTTPS
• REST
• JSON
• PKCS#10/PKCS#7
Note
Any enrollment requires a successful user or client authentication, which can be a single authentication,
multiple authentications, or even a multi-factor authentication.
• LDAP/ADS
• RADIUS
• SAP SSO OTP
• ABAP RFC
• Kerberos/SPNego
• X.509 TLS Client Authentication
SLS lets you define arbitrary enrollment profiles, each with a unique profile UID in its URL, and with a
configurable authentication and certificate generation.
Requirements
For user certification, SLS must provide a profile that adheres to the following:
With SAP SSO 2.0 SP06, SLS provides the following required features:
INSTALLATION
Follow the standard installation procedures for SLS. This includes the initial setup of a PKI (public key
infrastructure).
Note
SLS allows you to set up one or more of your own PKIs with Root CA, User CA, and so on. You can also import
CAs as a PKCS#12 file or use a hardware security module (HSM) as "External User CA".
Note
You should only use HTTPS connections for any communication with SLS. AS JAVA / ICM supports TLS,
and the default configuration comes with a self-signed server certificate. You may use SLS to replace this
certificate with a PKI certificate.
CONFIGURATION
SSL Ports
1. Open the NetWeaver Administrator, choose Configuration > SSL, and define a new port with Client
Authentication Mode = REQUIRED.
Note
You may also define another port with Client Authentication Mode = Do not request if you
have not done so yet.
2. Import the root CA of the PKI that issued your Cloud Connector system certificate.
3. Save the configuration and restart the Internet Communication Manager (ICM).
Authentication Policy
Root CA Certificate
Cloud Connector
Follow the standard installation procedure of the Cloud Connector and configure SLS support as explained in
Configuration of a CA hosted by a Secure Login Server:
1. Enter the policy URL that points to the SLS user profile group.
2. Select the profile, for example, Cloud Connector User Certificates.
3. Import the Root CA certificate of SLS into the Cloud Connector's Trust Store.
Follow the standard configuration procedure for Cloud Connector support in the corresponding target system
and configure SLS support.
• AS ABAP: choose transaction STRUST and follow the steps in Maintaining the SSL Server PSE's Certificate
List.
• AS Java: open the NetWeaver Administrator and follow the steps described in Configuring the SSL Key Pair
and Trusted X.509 Certificates.
Context
The Cloud Connector allows you to propagate users authenticated in SAP BTP via Kerberos against backend
systems. It uses the Service For User and Constrained Delegation protocol extension of Kerberos.
Note
This feature is not supported for ABAP backend systems. In this case, you can use the certificate-based
principal propagation, see Configure a CA Certificate [page 423].
The Key Distribution Center (KDC) is used for exchanging messages in order to retrieve Kerberos tokens for a
certain user and backend system.
For more information, see Kerberos Protocol Extensions: Service for User and Constrained Delegation
Protocol .
1. An SAP BTP application calls a backend system via the Cloud Connector.
Procedure
Example
You have a backend system protected with SPNego authentication in your corporate network. You want to call
it from a cloud application while preserving the identity of a cloud-authenticated user.
When you now call a backend system, the Cloud Connector obtains an SPNego token from your KDC for the
cloud-authenticated user. This token is sent along with the request to the backend, so that the backend can
authenticate the user and the identity is preserved.
Related Information
Find step-by-step instructions on how to set up an application server for Java (AS Java) to enable identity
propagation for HTTPS.
Note
The information in this section applies to both principal propagation and technical user propagation. These
two types of user propagation are combined as identity propagation.
Prerequisites
To perform the following steps, you must have the corresponding administrator authorizations in AS Java (SAP
NetWeaver Administrator) as well as an administrator user for the Cloud Connector.
Procedure
1. Go to SAP NetWeaver Administrator > Certificates and Keys and import the Cloud Connector's system
certificate into the Trusted CAs keystore view. See Importing Certificate and Key From the File System.
2. Configure the Internet Communication Manager (ICM) to trust the system certificate for identity
propagation.
a. Add a new SSL access point. See Adding New SSL Access Points.
b. Generate a certificate signing request and send it to the CA of your choice. See Configuration of the AS
Java Keystore Views for SSL.
c. Import the certificate signing response, the root X.509 certificate of the trusted CA, and the Cloud
Connector's system certificate into the new SSL access point from step 2a. Save the configuration and
restart the ICM. See Configuring the SSL Key Pair and Trusted X.509 Certificates.
d. Make sure the ICM trusts the system certificate for identity propagation. For more information, see
Using Client Certificates via an Intermediary Server.
e. Restart the ICM and test the SSL connection. For more information, see Testing the SSL Connection.
Procedure
1. Add the ClientCertLoginModule to the policy configuration that the Cloud Connector connects to. See
Configuring the Login Module on the AS Java.
2. Define the rules to map users authenticated with their certificate to users that exist in the User
Management Engine. See Using Rules for User Mapping in Client Certificate Login Module.
• To map the user ID of the certificate’s subject name field to users, see Using Rules Based on Client
Certificate Subject Names.
• To map the user ID based on rules for the certificate V3 extension SubjectAlternativeName, see Using
Rules Based on Client Certificate V3 Extensions.
• To use client certificate filters, see Defining Rules for Filtering Client Certificates.
Related Information
Use technical user propagation to provide access of technical users to on-premise systems.
Using technical user propagation, you can forward the identity of technical users that are identified using an
OAuth client and act on behalf of this identity in an on-premise system.
Note
• Set Up Trust [page 420]: Configure a trusted relationship in the Cloud Connector to support technical user
propagation. Technical user propagation lets you forward a technical user in the cloud to the internal system
without requesting a password.
• Configure a CA Certificate [page 423]: Install and configure an X.509 certificate to enable support for
technical user propagation.
• Configuring Identity Propagation to an ABAP System [page 429]: Learn more about the different types of
configuring and supporting technical user propagation for a particular AS ABAP.
• Configure Subject Patterns for Principal Propagation [page 443]: Define a pattern identifying the user for the
subject of the generated short-lived X.509 certificate, as well as its validity period.
• Configure a Secure Login Server [page 448]: Configuration steps for Java Secure Login Server (SLS) support.
• Configure Kerberos [page 452]: The Cloud Connector lets you propagate users authenticated in SAP BTP via
Kerberos against backend systems. It uses the Service For User and Constrained Delegation protocol
extension of Kerberos.
• Configuring Identity Propagation to SAP NetWeaver AS for Java [page 454]: Find step-by-step instructions on
how to configure technical user propagation to an application server Java (AS Java).
Related Information
Specify the backend systems that can be accessed by your cloud applications.
To allow your cloud applications to access a certain backend system on the intranet, you must specify this
system in the Cloud Connector. The procedure is specific to the protocol that you are using for communication.
Find the detailed configuration steps for each communication protocol here:
Adding a new subaccount is only useful if you expose the systems that should be available to this
subaccount in the access control settings. You can copy the complete access control settings from another
subaccount on the same Cloud Connector, or from a file, by using the import/export mechanism provided by
the Cloud Connector.
You can find detailed information, for example the entry creation date for access control or resource entries, by
choosing the Details button in the Actions column.
1. From your subaccount menu, choose Cloud To On-Premise and select the tab Access Control.
2. To store the current settings in a ZIP file, choose the Download icon in the upper-right corner.
3. You can later import this file into a different Cloud Connector.
There are two locations from which you can import access control settings:
• Overwrite: Select this checkbox if you want to replace existing system mappings with imported ones. Do
not select this checkbox if you want to keep existing mappings and only import the ones that are not yet
available (default).
Note
A system mapping is uniquely identified by the combination of virtual host and port.
• Include Resources: When this checkbox is selected (default), the resources that belong to an imported
system are also imported. Otherwise no resources are imported, that is, imported system mappings do not
expose any resources.
Backend Types
When creating a new access control entry, you can select one of the following backend types:
Related Information
Specify the backend systems that can be accessed by your cloud applications using HTTP.
To allow your cloud applications to access a certain backend system on the intranet via HTTP, you must specify
this system in the Cloud Connector.
Note
Make sure that redirect locations are also configured as internal hosts.
If the target server responds with a redirect HTTP status code (30x), the cloud-side HTTP client usually
sends the redirect over the Cloud Connector as well. The Cloud Connector runtime then performs a reverse
lookup to rewrite the location header that indicates where to route the redirected request.
If the redirect location is ambiguous (that is, several mappings point to the same internal host and port),
the first one found is used. If none is found, the location header stays untouched.
Tasks
5. Internal Host and Internal Port specify the actual host and port under which the target system can be
reached within the intranet. It needs to be an existing network address that can be resolved on the intranet
and has network visibility for the Cloud Connector without any proxy. The Cloud Connector will try to forward
the request to the network address specified by the internal host and port, so this address needs to be real.
7. Allow Principal Propagation (available as of Cloud Connector 2.15): Defines whether any kind of principal
propagation should be allowed over this mapping. If not selected, go to step 9.
The recommended variant is X.509 Certificate (Strict Usage) as this lets you use
principal propagation and, for example, basic authentication over the same access control entry,
regardless of the logon order settings in the target system.
To prevent the use of principal propagation to the target system, choose None as <Principal Type>.
In this case, no principal is injected.
For more information on principal propagation, see Configuring Principal Propagation [page 419].
9. System Certificate for Logon (available as of Cloud Connector 2.15): Specifies if the Cloud Connector's
system certificate should be used for authentication at the backend, if
1. No principal is received, or
2. Principal propagation is not allowed over this mapping at all.
If activated, the system certificate of the TLS handshake used for trust is also used for authentication.
If not activated, an additional HTTP header (SSL_CLIENT_CERT) is sent. It indicates to the target system
that the system certificate used for trust must not be used for authentication.
If unselected, another authentication method is used, for example, basic authentication.
Note
We recommend that you keep this option deactivated, as this lets you use principal propagation and
basic authentication over the same access control entry, regardless of the logon order settings in the
target system.
10. Host In Request Header lets you define which host is used in the host header that is sent to the target
server. By choosing Use Internal Host, the actual host name is used. When choosing Use Virtual
Host, the virtual host is used. In the first case, the virtual host is still sent via the X-Forwarded-Host
header.
12. The summary shows information about the system to be stored. When saving the host mapping, you
can trigger a ping from the Cloud Connector to the internal host, using the Check availability of internal
host checkbox. This allows you to make sure the Cloud Connector can indeed access the internal system,
and allows you to catch basic things, such as spelling mistakes or firewall problems between the Cloud
Connector and the internal host.
If the ping to the internal host is successful (that is, the host is reachable via TLS), the state Reachable
is shown. If it fails, a warning pops up. You can view issue details by choosing the Details button, or check
them in the log files.
This check also tries to perform client authentication if possible, regardless of the host's availability. Find
additional information and hints by choosing the Details button. You can check, for example, if the system
certificate acting as a client certificate is configured correctly, and if the ABAP backend trusts it.
You can execute the availability check for all selected systems in the Access Control overview by pressing
Check Availability of Internal Host in column Actions.
In addition to allowing access to a particular host and port, you also must specify which URL paths (Resources)
are allowed to be invoked on that host. The Cloud Connector uses very strict allowlists for its access control.
Only those URLs for which you explicitly granted access are allowed. All other HTTP(S) requests are denied by
the Cloud Connector.
The Cloud Connector checks that the path part of the URL (up to but not including a possible question
mark (?) that may denote the start of optional CGI-style query parameters) is exactly as specified in the
configuration. If it is not, the request is denied. If you select option Path and all sub-paths, the Cloud Connector
allows all requests for which the URL path (not considering any query parameters) starts with the specified
string.
The Active checkbox lets you specify whether that resource is initially enabled or disabled. See the section below for
more information on enabled and disabled resources.
The WebSocket Upgrade checkbox lets you specify whether that resource allows a protocol upgrade.
In some cases, it is useful for testing purposes to temporarily disable certain resources without having to delete
them from the configuration. This lets you easily re-enable access to these resources at a later point in time
without having to enter everything again.
• To activate the resource again, select it and choose the Activate button.
• You can change the protocol upgrade setting in the same way, by choosing Allow WebSocket upgrade or
Disallow WebSocket upgrade.
Examples:
• /production/accounting and Path only (sub-paths are excluded) are selected. Only
requests of the form GET /production/accounting or GET /production/accounting?
name1=value1&name2=value2... are allowed. (GET can also be replaced by POST, PUT, DELETE, and so
on.)
• /production/accounting and Path and all sub-paths are selected. All requests of the form GET /
production/accounting-plus-some-more-stuff-here?name1=value1... are allowed.
• / and Path and all sub-paths are selected. All requests to this server are allowed.
Related Information
Specify the backend systems that can be accessed by your cloud applications using RFC.
Tasks
To allow your cloud applications to access a certain backend system on the intranet, insert a new entry in the
Cloud Connector Access Control management.
1. Choose Cloud To On-Premise from your Subaccount menu and go to tab Access Control.
2. Choose Add.
4. Choose Next.
5. Protocol: Choose RFC or RFC SNC for connecting to the backend system.
Note
The value RFC SNC is independent of your settings on the cloud side, since it only specifies the
communication between the Cloud Connector and the backend system. Using RFC SNC, you can ensure that
the entire connection from the cloud application to the actual backend system (provided by the SSL
tunnel) is secured, partly with SSL and partly with SNC. For more information, see Initial Configuration
(RFC) [page 400].
Note
6. Choose Next.
7. Choose whether you want to configure a load balancing logon or connect to a specific application server.
• When using direct logon, the Application Server specifies one application server of the ABAP system.
The instance number is a two-digit number that is also found in the SAP Logon configuration.
Alternatively, it's possible to directly specify the gateway port in the Instance Number field.
• Virtual Message Server - specifies the host name exactly as specified by the jco.client.mshost
property in the RFC destination configuration in the cloud. The Virtual System ID allows you to
distinguish between different entry points of your backend system that have different sets of access
control settings. The value needs to be the same as for the jco.client.r3name property in the RFC
destination configuration in the cloud.
• Virtual Application Server - specifies the host name exactly as specified by the jco.client.ashost
property in the RFC destination configuration in the cloud. The Virtual Instance Number allows you to
distinguish between different entry points of your backend system that have different sets of access
control settings. The value needs to be the same as for the jco.client.sysnr property in the RFC
destination configuration in the cloud (see the property sketch after this list).
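A minimal sketch of the matching cloud-side RFC destination properties for both variants, using the example
virtual names from this section and otherwise hypothetical values:

Load balancing logon (virtual message server):
  jco.client.mshost=virtual-ms-host.cloud     (must match the virtual message server host)
  jco.client.r3name=VSS                       (must match the Virtual System ID)

Direct logon (virtual application server):
  jco.client.ashost=sales-system.cloud        (must match the virtual application server host)
  jco.client.sysnr=42                         (must match the Virtual Instance Number)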
10. This step is only relevant if you have chosen RFC SNC. The <Principal Type> field defines what kind
of principal is used when configuring a destination on the cloud side using this system mapping with
authentication type Principal Propagation. No matter what you choose, make sure that the general
configuration for the <Principal Type> has been done to make it work correctly. For destinations using
different authentication types, this setting is ignored. In case you choose None as <Principal Type>, it
is not possible to apply principal propagation to this system.
Note
If you use an RFC connection, you cannot choose between different principal types. Only the X.509
certificate is supported. You need an SNC-enabled backend connection to use it.
For more information on principal propagation, see Configuring Principal Propagation [page 419].
12. Mapping Virtual to Internal System: You can enter an optional description at this stage. The respective
description will be shown as a rich tooltip when you hover over the entries of the virtual host column.
13. The summary shows information about the system to be stored. When saving the system mapping, you
can trigger a ping from the Cloud Connector to the internal host, using the Check Internal Host checkbox.
This allows you to make sure the Cloud Connector can indeed access the internal system, and allows you
to catch basic things, such as spelling mistakes or firewall problems between the Cloud Connector and the
internal host.
14. Optional: You can later edit a system mapping (choose Edit) to make the Cloud Connector route the
requests for sales-system.cloud:sapgw42 to a different backend system. This can be useful if the
system is currently down and there is a back-up system that can serve these requests in the meantime.
However, you cannot edit the virtual name of this system mapping. If you want to use a different fictional
host name in your cloud application, you must delete the mapping and create a new one. Here, you can
also change the Principal Type to None in case you don't want to allow principal propagation to a certain
system.
15. Optional. You can later edit a system mapping to add more protection to your system when using RFC via
the Cloud Connector, by restricting the mapping to specified clients and users: in column Actions, choose
the button Maintain Authority Lists (only RFC) to open an allowlist/blocklist dialog. In section Authority
Client Allowlist, enter all clients of the corresponding system in the field <Client ID> that you want to
allow to use the Cloud Connector connection. In section Authority User Blocklist, press the button Add a
user authority (+) to enter all users you want to exclude from this connection. Each user must be assigned
to a specified client. When you are done, press Save.
In addition to allowing access to a particular host and port, you also must specify which function modules
(Resources) are allowed to be invoked on that host. You can enter an optional description at this stage.
The Cloud Connector uses very strict allowlists for its access control. Besides internally used infrastructure
function modules, only function modules for which you explicitly granted access are allowed.
1. To define the permitted function modules for a particular backend system, choose the row corresponding
to that backend system and press Add in section Resources Accessible On... below. A dialog appears,
prompting you to enter the name of the function module whose invocation you want to allow.
2. The Cloud Connector checks that the function module name of an incoming request is exactly as specified
in the configuration. If it is not, the request is denied.
3. If you select the Prefix option, the Cloud Connector allows all incoming requests, for which the function
module name begins with the specified string.
4. The Active checkbox allows you to specify whether that resource should be initially enabled or disabled.
To allow your cloud applications to access an on-premise LDAP server, insert a new entry in the Cloud
Connector access control management.
4. Protocol: Select LDAP or LDAPS for the connection to the backend system. When you are done, choose
Next.
5. Internal Host and Internal Port: specify the host and port under which the target system can be reached
within the intranet. It needs to be an existing network address that can be resolved on the intranet and
has network visibility for the Cloud Connector. The Cloud Connector will try to forward the request to the
network address specified by the internal host and port, so this address needs to be real.
7. You can enter an optional description at this stage. The respective description will be shown as a tooltip
when you press the button Show Details in column Actions of the Mapping Virtual To Internal System
overview.
8. The summary shows information about the system to be stored. When saving the host mapping, you can
trigger a ping from the Cloud Connector to the internal host, using the Check Internal Host check box. This
allows you to make sure the Cloud Connector can indeed access the internal system. Also, you can catch
basic things, such as spelling mistakes or firewall problems between the Cloud Connector and the internal host.
If the ping to the internal host is successful, the state Reachable is shown. If it fails, a warning is displayed
in column Check Result. You can view issue details by choosing the Details button, or check them in the log
files.
You can execute such a check at any time later for all selected systems in the Mapping Virtual To Internal
System overview by pressing Check Availability of Internal Host in column Actions.
9. Optional: You can later edit the system mapping (by choosing Edit) to make the Cloud Connector route
the requests to a different LDAP server. This can be useful if the system is currently down and there is a
back-up LDAP server that can serve these requests in the meantime. However, you cannot edit the virtual
name of this system mapping. If you want to use a different fictional host name in your cloud application,
you have to delete the mapping and create a new one.
Add a specified system mapping to the Cloud Connector if you want to use the TCP protocol for
communication with a backend system.
To allow your cloud applications to access a certain backend system on the intranet via TCP, insert a new entry
in the Cloud Connector access control management.
4. Protocol: Select TCP or TCP SSL for the connection to the backend system. When choosing TCP, you can
perform an end-to-end TLS handshake from the cloud client to the backend. If the cloud-side client is using
plain communication, but you still need to encrypt the hop between Cloud Connector and the backend,
choose TCP SSL. When you are done, choose Next.
When selecting TCP as protocol, the following warning message is displayed: TCP connections
can pose a security risk by permitting unmonitored traffic. Ensure only
trustworthy applications have access. The reason is that using plain TCP, the Cloud
Connector cannot see or log any detailed information about the calls. Therefore, in contrast to HTTP
or RFC (both running on top of TCP), the Cloud Connector cannot check the validity of a request. To
minimize this risk, make sure you
• deploy only trusted applications on SAP BTP.
• configure an application allowlist in the Cloud Connector, see Set Up Trust [page 420].
• take the recommended security measures for your SAP BTP (sub)account. See section Security in
the SAP BTP documentation.
5. Internal Host and Port or Port Range: specify the host and port under which the target system can be
reached within the intranet. It needs to be an existing network address that can be resolved on the intranet
and has network visibility for the Cloud Connector. The Cloud Connector will try to forward the request to
the network address specified by the internal host and port. That is why this address needs to be real.
For TCP and TCP SSL, you can also specify a port range through its lower and upper limit, separated by a
hyphen.
7. You can enter an optional description at this stage. The respective description will be shown as a tooltip
when you press the button Show Details in column Actions of the Mapping Virtual To Internal System
overview.
8. The summary shows information about the system to be stored. When saving the host mapping, you can
trigger a ping from the Cloud Connector to the internal host, using the Check Internal Host checkbox. This
allows you to make sure the Cloud Connector can access the internal system. Also, you can catch basic
things, such as spelling mistakes or firewall problems between the Cloud Connector and the internal host.
If the ping to the internal host is successful (that is, the host is reachable via TLS), the state Reachable is
shown. If it fails, a warning is displayed in column Check Result. You can view issue details by choosing the
Details button, or check them in the log files.
This check also tries to perform client authentication, if possible for TCPS, regardless of the host's
availability. Find additional information and hints by choosing the Details button. You can check, for
example, if the system certificate acting as a client certificate is configured correctly, and if the backend
trusts it.
You can execute such a check at any time later for all selected systems in the Mapping Virtual To Internal
System overview by pressing Check Availability of Internal Host in column Actions.
Configure backend systems and resources in the Cloud Connector, to make them available for a cloud
application.
Tasks
Initially, after installing a new Cloud Connector, no network systems or resources are exposed to the cloud. You
must configure each system and resource used by applications of the connected cloud subaccount. To do this,
choose Cloud To On Premise from your subaccount menu and go to tab Access Control:
• For systems using HTTP communication, see: Configure Access Control (HTTP) [page 459].
• For information on configuring RFC resources, see: Configure Access Control (RFC) [page 467].
We recommend that you limit the access to backend services and resources. Instead of configuring a system
and granting access to all its resources, grant access only to the resources needed by the cloud application. For
example, define access to an HTTP service by specifying the service URL root path and allowing access to all its
subpaths.
When configuring an on-premise system, you can define a virtual host and port for the specified system. The
virtual host name and port represent the fully qualified domain name of the related system in the cloud. We
recommend that you use the virtual host name/port mapping to prevent leaking information about a system's
physical machine name and port to the cloud.
The Cloud Connector lets you define a set of resources as a scenario that you can export, and import into
another Cloud Connector.
If you, as application owner, have implemented and tested a scenario, and configured a Cloud Connector
accordingly, you can define the scenario as follows:
Note
For applications provided by SAP, default scenario definitions may be available. To verify this, check the
corresponding application documentation.
Import a Scenario
1. Choose the Import Scenario button to add all required resources to the desired access control entry.
2. In the dialog, navigate to the folder of the archive that contains the scenario definition.
3. Choose Import. The resources of the scenario are merged with the existing set of resources which are
already available in the access control entry.
All resources belonging to a scenario get an additional scenario icon in their status. When hovering over it, the
assigned scenario(s) of this resource are listed.
Remove a Scenario
To remove a scenario:
You can use a set of APIs to perform the basic setup of the Cloud Connector.
Context
The Cloud Connector provides several REST APIs that let you configure a newly installed Cloud Connector. The
configuration options correspond to the following steps:
Note
For general information on the Cloud Connector REST APIs, see also REST APIs [page 734].
Related Information
Read and edit the Cloud Connector's common properties via API.
URI /api/v1/configuration/connector
Method GET
Request
Errors
• Response Properties:
• ha: Cloud Connector instance assigned to a high-availability role. Its value is either the string master or
shadow.
• description: Description of the Cloud Connector.
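A hedged sample call (host, port, and credentials are placeholders; the default UI port 8443 is assumed):

curl -i -k -u <administrator user>:<password> -H 'Accept: application/json' \
     https://<scc-host>:8443/api/v1/configuration/connector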
Get Version
Note
URI /api/v1/connector/version
Method GET
Request
Response
{version}
Errors
• Response Properties:
version: A string; the Cloud Connector version.
URI /api/v1/configuration/connector
Method PUT
Request
{description}
Errors INVALID_REQUEST
Roles Administrator
• Request Properties:
description: A string; use an empty string to remove the description.
• Response Properties:
• ha: Cloud Connector instance assigned to a high-availability role. Its value is either the string master or
shadow.
• description: Description of the Cloud Connector.
• Errors:
INVALID_REQUEST (400): The value of description is not a JSON string.
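A hedged sample call that sets a new description (placeholders as above; the description value is arbitrary
example data):

curl -i -k -u <administrator user>:<password> -X PUT -H 'Content-Type: application/json' \
     -d '{"description":"Cloud Connector for the production subaccount"}' \
     https://<scc-host>:8443/api/v1/configuration/connector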
Note
Read and edit the high availability settings of a Cloud Connector instance via API.
When installing a Cloud Connector instance, you usually define its high availability role (master or shadow
instance) during initial configuration, see Change your Password and Choose Installation Type [page 390].
If the high availability role was not defined before, you can set the master or shadow role via this API.
If a shadow instance is connected to the master, this API also lets you switch the roles: the master instance
requests the shadow instance to take over the master role, and then takes the shadow role itself.
Note
Editing the high availability settings is only allowed on the master instance, and supports only shadow as
input.
URI /api/v1/configuration/connector/haRole
Request
Errors
Example
Use this API if you want to set the role of a fresh installation (no role assigned yet).
As of version 2.12.0, this API also lets you switch the roles if a shadow instance is connected to the master.
In this case, the API is only allowed on the master instance and supports only the value shadow as input. The
master instance requests the shadow instance to take over the master role and then assumes the shadow role
itself.
URI /api/v1/configuration/connector/haRole
Method POST
Response
Roles Administrator
Errors:
Related Information
Read and edit the high availability settings for a Cloud Connector master instance via API.
Note
Restriction
These APIs are only permitted on a Cloud Connector master instance. The shadow instance rejects the
requests with error code 400 – Invalid Request.
Get Configuration
URI /api/v1/configuration/connector/ha/
master/config
Method GET
Request
Response
{haEnabled, haPort, allowedShadowHost}
Errors
Response Properties:
• haEnabled: Boolean value that indicates whether or not a shadow system is allowed to connect.
• haPort: Port on which the shadow instance can connect (restart required after change).
• allowedShadowHost: Name of the shadow host (a string) that is allowed to connect; an empty string
signifies that any host is allowed to connect as shadow.
Note
haPort is an optional field in the HA shadow configuration, available as of version 2.17.0. Default value for
the haPort parameter is the Cloud Connector's UI port.
Example
Set Configuration
URI /api/v1/configuration/connector/ha/
master/config
Method PUT
Request
{haEnabled, haPort, allowedShadowHost}
Errors INVALID_REQUEST
Roles Administrator
Response Properties:
• haEnabled: Boolean value that indicates whether or not a shadow system is allowed to connect.
• haPort: Port on which the shadow instance can connect (restart required after change).
• allowedShadowHost: Name of the shadow host (a string) that is allowed to connect. An empty string
means that any host is allowed to connect as shadow.
• INVALID_REQUEST (400): if the name of the shadow host is not a valid host name
Note
haPort is an optional field in the HA shadow configuration, available as of version 2.17.0. Default value for
the haPort parameter is the Cloud Connector's UI port.
Example
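A hedged sample call that enables high availability, sets a dedicated HA port, and restricts the allowed shadow
host (all values are example data; placeholders as above):

curl -i -k -u <administrator user>:<password> -X PUT -H 'Content-Type: application/json' \
     -d '{"haEnabled":true,"haPort":8444,"allowedShadowHost":"shadow.mycompany.corp"}' \
     https://<scc-host>:8443/api/v1/configuration/connector/ha/master/config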
Get State
URI /api/v1/configuration/connector/ha/
master/state
Method GET
Request
Response
{state, shadowHost}
Errors
Response Properties:
Example
URI /api/v1/configuration/connector/ha/
master/state
Method POST
Request
Response
{op}
Roles Administrator
Request Properties:
Errors:
Example
Reset
A successful call to this API restores default values for all settings related to high availability on the master side.
Caution
Method DELETE
Request
Errors ILLEGAL_STATE
Roles Administrator
Errors:
Example
Read and edit the configuration settings for a Cloud Connector shadow instance via API (available as of Cloud
Connector version 2.12.0, or, where mentioned, as of version 2.13.0).
Note
The APIs below are only permitted on a Cloud Connector shadow instance. The master instance will reject
the requests with error code 403 – FORBIDDEN_REQUEST.
Get Configuration
URI /api/v1/configuration/connector/ha/
shadow/config
Method GET
Request
Errors
Response Properties:
Note
This API may take some time to fetch the shadow instance's own host names from the environment.
Note
haPort is a new field in the HA shadow configuration, available as of version 2.17.0. Default value for the
haPort parameter is the Cloud Connector's UI port.
Example
Set Configuration
Method PUT
Request
{masterPort, masterHost, ownHost,
haPort, checkIntervalInSeconds,
takeoverDelayInSeconds,
connectTimeoutInMillis,
requestTimeoutInMillis}
Response
{masterPort, masterHost, ownHost,
haPort, checkIntervalInSeconds,
takeoverDelayInSeconds,
connectTimeoutInMillis,
requestTimeoutInMillis}
Errors
Roles Administrator
Request Properties:
Response Properties:
Note
haPort is an optional field in the HA shadow configuration, available as of version 2.17.0. Default value for
the haPort parameter is the Cloud Connector's UI port.
Example
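A hedged sample call, assuming the same URI as for Get Configuration above (all values are example data;
whether properties you do not change can be omitted is an assumption):

curl -i -k -u <administrator user>:<password> -X PUT -H 'Content-Type: application/json' \
     -d '{"masterHost":"master.mycompany.corp","masterPort":8443,"checkIntervalInSeconds":30,"takeoverDelayInSeconds":60}' \
     https://<shadow-host>:8443/api/v1/configuration/connector/ha/shadow/config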
Get State
Note
URI /api/v1/configuration/connector/ha/
shadow/state
Method GET
Request
Response
{state, ownHosts, stateMessage,
masterVersions}
Errors
Response Properties:
• state: Possible string values are: INITIAL, DISCONNECTED, DISCONNECTING, HANDSHAKE, INITSYNC,
READY, or LOST.
• ownHosts: List of alternative host names for the shadow instance.
• stateMessage: Message providing details on the current state. This property may not always be present.
Typically, this property is available if an error occurred (for example, a failed attempt to connect to the
master instance).
• masterVersions: Overview of relevant component versions of the master system, including a flag
(property ok) that indicates whether or not there are incompatibility issues because of differing master
and shadow versions.
Note
This property is only available if the shadow instance is connected to the master instance, or if there
has been a successful connection to the master system at some point in the past.
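A hedged sample call to read the shadow state (placeholders as above):

curl -i -k -u <administrator user>:<password> -H 'Accept: application/json' \
     https://<shadow-host>:8443/api/v1/configuration/connector/ha/shadow/state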
Change State
URI /api/v1/configuration/connector/ha/
shadow/state
Method POST
Request
Response
{op, user, password}
Roles Administrator
Request Properties:
• op: String value representing the state change operation. Possible values are CONNECT or DISCONNECT.
• user: User for logon to the master instance
• password: Password for logon to the master instance
Errors:
• INVALID_REQUEST (400): Invalid or missing property values were supplied; this includes wrong user or
password
• ILLEGAL_STATE (409): The requested operation cannot be executed given the current state of master
and shadow instance. This typically means the master instance does not allow high availability.
• RUNTIME_FAILURE (500): communication error between shadow and master, e.g. handshake failure or
master does not respond.
Note
The logon credentials are used for the initial logon to the master instance only. If a shadow instance is
disconnected from its master instance, it will reconnect to the (same) master instance using a certificate.
Hence, user and password can be omitted when reconnecting.
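A hedged sample call that connects the shadow to its master instance (all host names and credentials are
placeholders):

curl -i -k -u <shadow administrator>:<password> -X POST -H 'Content-Type: application/json' \
     -d '{"op":"CONNECT","user":"<master administrator>","password":"<master password>"}' \
     https://<shadow-host>:8443/api/v1/configuration/connector/ha/shadow/state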
Reset
Note
A successful call to this API deletes master host and port, and restores default values for all other settings
related to a connection to the master.
Caution
URI /api/v1/configuration/connector/ha/
shadow/state
Method DELETE
Request
Response
Errors ILLEGAL_STATE
Roles Administrator
Errors:
Example
Read and edit the Cloud Connector's proxy settings via API.
URI /api/v1/configuration/connector/proxy
Method GET
Request
Response
{host, port, user}
Errors
Response Properties:
Sample Code
URI /api/v1/configuration/connector/proxy
Method PUT
Request
{host, port, user, password}
Response
Roles Administrator
Request Properties:
Errors:
• INVALID_REQUEST (400): invalid values were supplied, or mandatory values are missing.
• FORBIDDEN_REQUEST (403): the target of the call is a shadow instance.
Sample Code
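A hedged sample call that sets a proxy host and port (values are example data; the exact JSON types and the
optional user/password properties are assumptions based on the request shape shown above):

curl -i -k -u <administrator user>:<password> -X PUT -H 'Content-Type: application/json' \
     -d '{"host":"proxy.mycompany.corp","port":8080}' \
     https://<scc-host>:8443/api/v1/configuration/connector/proxy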
Sample Code
URI /api/v1/configuration/connector/proxy
Method DELETE
Request
Errors FORBIDDEN_REQUEST
Errors:
Sample Code
Read and edit the Cloud Connector's authentication and UI settings via API.
URI /api/v1/configuration/connector/
authentication
Method GET
Request
Response
{type, configuration}
Errors
Response Properties:
• type: The authentication type, which is one of the following strings: basic or ldap.
• configuration: The configuration of the active LDAP authentication. This property is only available if
type is ldap. Its value is an object with properties that provide details on LDAP configuration.
Example
curl -i -k -H 'Accept:application/json'
URI /api/v1/configuration/connector/
authentication/basic
Method PUT
Request
{password, user}
Errors INVALID_REQUEST
Roles Administrator
Request Properties:
Errors:
URI /api/v1/configuration/connector/
authentication/basic
Method PUT
Request
{oldPassword, newPassword}
Errors INVALID_REQUEST
Roles Administrator
Errors:
Example
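A hedged sample call that changes the password of the basic-authentication user (all values are placeholders):

curl -i -k -u <administrator user>:<current password> -X PUT -H 'Content-Type: application/json' \
     -d '{"oldPassword":"<current password>","newPassword":"<new password>"}' \
     https://<scc-host>:8443/api/v1/configuration/connector/authentication/basic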
Caution
The Cloud Connector will restart if the request was successful. There is no test that confirms login will
work afterwards. If you run into problems you can revert to basic authentication by executing the script
useFileUserStore located in the root directory of your Cloud Connector installation.
URI /api/v1/configuration/connector/
authentication/ldap
Method PUT
Request
{enable, configuration}
Roles Administrator
Request Properties:
• enable: Boolean flag that indicates whether or not to employ LDAP authentication.
• configuration: The LDAP configuration, a JSON object with the properties {config, hosts,
user, password, customAdminRole, customDisplayRole, customMonitoringRole,
customSupportRole}.
Errors:
Note
Example
Note
URI /api/v1/configuration/connector/ui/
uiCertificate
Method GET
Errors
Response Properties:
Note
Example
Note
URI /api/v1/configuration/connector/ui/
uiCertificate
Method POST
Request
{type, keySize, subjectDN,
subjectAltNames}
Errors
Roles Administrator
Request Properties:
Note
Example
Note
URI /api/v1/configuration/connector/ui/
uiCertificate
Method POST
Request
{type, keySize, subjectDN,
subjectAltNames}
Errors
Roles Administrator
Request Properties:
Note
Example
Note
URI /api/v1/configuration/connector/ui/
uiCertificate
Method PATCH
Roles Administrator
Request Parameters:
Errors:
• INVALID_REQUEST (400): The certificate chain provided does not match the most recent certificate
request, or it is not a certificate chain in the proper format (PEM-encoded).
Example
Example
For test purposes, you can sign the certificate signing request with keytool.
keytool -genkeypair -keyalg RSA -keysize 1024 -alias mykey -dname "cn=very
trusted, c=test" -validity 365 -keystore ca.ks -keypass testit -storepass testit
keytool -gencert -rfc -infile csr.pem -outfile signedcsr.pem -alias mykey
-keystore ca.ks -keypass testit -storepass testit
keytool -exportcert -rfc -file ca.pem -alias mykey -keystore ca.ks -keypass
testit -storepass testit
cat signedcsr.pem ca.pem > signedchain.pem
Note
URI /api/v1/configuration/connector/ui/uiCertificate
Request
{pkcs12, password, keyPassword}
Errors INVALID_REQUEST
Roles Administrator
Request Parameters:
Errors:
Note
keyPassword is optional. If missing, password is used to decrypt the pkcs#12 file and the private key.
Example
Example
For test purposes, you can create your own self-signed PKCS#12 certificate with keytool.
keytool -genkeypair -alias key -keyalg RSA -keysize 2048 -validity 365 -keypass
test20 -keystore test.p12 -storepass test20 -storetype PKCS12 -dname 'CN=test'
Note
URI /api/v1/configuration/
availableCipherSuites
Method GET
Request
Response
[name, ...]
Errors
Response Properties:
List of:
Note
name is case-sensitive.
Example
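A hedged sample call (placeholders as above):

curl -i -k -u <administrator user>:<password> -H 'Accept: application/json' \
     https://<scc-host>:8443/api/v1/configuration/availableCipherSuites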
Note
URI /api/v1/configuration/connector/ui/
cipherSuites
Method GET
Request
Response
[name, ...]
Errors
Response Properties:
List of:
Note
name is case-sensitive.
Example
Note
URI /api/v1/configuration/connector/ui/
cipherSuites
Request
[name, ...]
Errors
Roles Administrator
Request Properties:
List of:
Note
name is case-sensitive.
Example
Note
URI /api/v1/configuration/connector/ui/
cipherSuites
Method DELETE
Request
Errors INVALID_REQUEST
Roles Administrator
Response Properties:
Note
name is case-sensitive.
Example
Note
There are two similar sets of APIs for system certificate and CA certificate for principal propagation.
Note
Some of the APIs list a parameter subjectAltNames (subject alternative names or SAN) for the request or
response object. This parameter is an array of objects with the following properties:
Note
URI /api/v1/configuration/connector/
onPremise/ppCaCertificate
Method GET
Response
{subjectDN, issuer,
notBeforeTimeStamp,
notAfterTimeStamp, subjectAltNames}
Errors NOT_FOUND
Response Properties:
Errors:
Note
URI /api/v1/configuration/connector/
onPremise/ppCaCertificate
Method GET
Errors NOT_FOUND
Response:
• Success: the binary data of the certificate; you can verify the downloaded certificate by storing it in file
ppca.crt, for instance, and then running
• Failure: an error in the usual JSON format; the content type of the response is application/json in this
case.
Errors:
Example
URI /api/v1/configuration/connector/
onPremise/ppCaCertificate
Method POST
Request
{type, keySize, subjectDN,
subjectAltNames}
Errors
Roles Administrator
Request Properties:
Note
Example
Method POST
Request
{type, keySize, subjectDN,
subjectAltNames}
Errors
Roles Administrator
Request Properties:
Note
Example
URI /api/v1/configuration/connector/
onPremise/ppCaCertificate
Method PATCH
Errors INVALID_REQUEST
Roles Administrator
Request Properties:
Errors:
• INVALID_REQUEST (400): the certificate chain provided does not match the most recent certificate
request, or it is not a certificate chain in the correct format (PEM-encoded).
Example
keytool -genkeypair -keyalg RSA -keysize 1024 -alias mykey -dname "cn=very
trusted, c=test" -validity 365 -keystore ca.ks -keypass testit -storepass testit
keytool -gencert -rfc -infile csr.pem -outfile signedcsr.pem -alias mykey
-keystore ca.ks -keypass testit -storepass testit
keytool -exportcert -rfc -file ca.pem -alias mykey -keystore ca.ks -keypass
testit -storepass testit
cat signedcsr.pem ca.pem > signedchain.pem
URI /api/v1/configuration/connector/
onPremise/ppCaCertificate
Method PUT
Errors INVALID_REQUEST
Roles Administrator
Request Parameters:
Errors:
Note
keyPassword is optional. If it is missing, password is used to decrypt the pkcs#12 file and the private key.
Example
keytool -genkeypair -alias key -keyalg RSA -keysize 2048 -validity 365 -keypass
test20 -keystore test.p12 -storepass test20 -storetype PKCS12 -dname 'CN=test'
URI /api/v1/configuration/connector/
onPremise/ppCaCertificate
Method DELETE
Errors NOT_FOUND
Roles Administrator
Errors:
Example
Note
URI /api/v1/configuration/connector/
onPremise/systemCertificate
Method GET
Response
{subjectDN, issuer,
notBeforeTimeStamp,
notAfterTimeStamp, subjectAltNames}
Response Properties:
Errors:
Note
Example
URI /api/v1/configuration/connector/
onPremise/systemCertificate
Method GET
Errors NOT_FOUND
• Success: the binary data of the certificate; you can verify the downloaded certificate by storing it in file
sys.crt, for instance, and then running
• Failure: an error in the usual JSON format; the content type of the response is application/json in this
case.
Errors:
Example
URI /api/v1/configuration/connector/
onPremise/systemCertificate
Method POST
Request
{type, keySize, subjectDN}
Errors
Roles Administrator
Request Properties:
Note
Example
URI /api/v1/configuration/connector/
onPremise/systemCertificate
Method POST
Request
{type, keySize, subjectDN}
Errors
Roles Administrator
Request Properties:
Note
URI /api/v1/configuration/connector/
onPremise/systemCertificate
Method PATCH
Errors INVALID_REQUEST
Roles Administrator
Request Properties:
Errors:
• INVALID_REQUEST (400): the certificate chain provided does not match the most recent certificate
request, or it is not a certificate chain in the correct format (PEM-encoded).
Example
For test purposes, you can sign the certificate signing request with keytool:
keytool -genkeypair -keyalg RSA -keysize 1024 -alias mykey -dname "cn=very
trusted, c=test" -validity 365 -keystore ca.ks -keypass testit -storepass testit
keytool -gencert -rfc -infile csr.pem -outfile signedcsr.pem -alias mykey
-keystore ca.ks -keypass testit -storepass testit
URI /api/v1/configuration/connector/
onPremise/systemCertificate
Method PUT
Errors INVALID_REQUEST
Roles Administrator
Request Parameters:
Errors:
Note
keyPassword is optional. If it is missing, password is used to decrypt the pkcs#12 file and the private key.
Example
keytool -genkeypair -alias key -keyalg RSA -keysize 2048 -validity 365 -keypass
test20 -keystore test.p12 -storepass test20 -storetype PKCS12 -dname 'CN=test'
URI /api/v1/configuration/connector/
onPremise/systemCertificate
Method DELETE
Request
Errors NOT_FOUND
Roles Administrator
Errors:
Example
Note
Method GET
Header
Response
{trustAllBackends, trustedBackends}
Errors
Response Properties:
• trustAllBackends: flag (boolean) indicating whether all backends are trusted (true), or only the
backends represented by certificates are trusted (false).
• trustedBackendSystems: list (array) of certificates representing the trusted backends. Each certificate
is specified through an object with the following properties:
Certificate Properties:
Note
No HTTP requests can be executed if trustAllBackends is false and the certificate list as per
trustedBackendSystems is empty.
Example
Note
Method PATCH
Request {trustAllBackends}
Errors
Roles Administrator
Request Properties:
• trustAllBackends: flag (boolean) indicating whether all backends are trusted (true), or only the
backends represented by certificates are trusted (false)
Caution
We discourage trusting all backends and recommend that you trust only specific backends represented by
their respective certificates.
Example
Note
URI /api/v1/configuration/connector/
onPremise/truststore/certificates
Errors INVALID_REQUEST
Roles Administrator
Request Properties:
Errors:
Example
Note
URI /api/v1/configuration/connector/
onPremise/truststore/certificates/
<alias>
Method DELETE
Header
Roles Administrator
Errors:
Example
Note
URI /api/v1/configuration/connector/
onPremise/truststore/certificates
Method DELETE
Header
Errors
Roles Administrator
Example
URI /api/v1/configuration/connector/
solutionManagement
Method GET
Request
Response
{hostAgentPath, isEnabled, dsrEnabled}
Errors
Response Properties:
Example
This API turns on the integration with the Solution Manager. The prerequisite is an available Host Agent. You
can specify a path to the Host Agent executable, if you don't use the default path.
URI /api/v1/configuration/connector/
solutionManagement
Request
{hostAgentPath, dsrEnabled}
Response
Errors
Response Properties:
Example
URI /api/v1/configuration/connector/
solutionManagement
Method DELETE
Request
Response
Errors
Generates a zip file containing the registration file for the solution management LMDB (Landscape
Management Database).
Note
URI /api/v1/configuration/connector/
solutionManagement/registrationFile
Method GET
Request
Response
Errors
1.2.2.5.7 Backup
URI /api/v1/configuration/backup
Method POST
Roles Administrator
Request Properties:
Errors:
Note
Only sensitive data in the backup are encrypted with an arbitrary password of your choice. The password is
required for the restore operation. The returned ZIP archive itself is not password-protected.
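For orientation, the following is a minimal Kotlin sketch (using the Fuel client, as in the Examples section below) that triggers a backup and stores the returned ZIP archive. The request property name password, the URL, and the credentials are assumptions for illustration; in the sample project these values come from scenario.json.
import com.github.kittinunf.fuel.Fuel
import com.github.kittinunf.fuel.core.extensions.authentication
import com.github.kittinunf.fuel.gson.jsonBody
import com.github.kittinunf.result.Result
import java.io.File

// Property name "password" is an assumption for illustration.
data class BackupRequest(val password: String)

fun main() {
    val url = "https://localhost:8443" // master instance, see scenario.json
    Fuel.post("$url/api/v1/configuration/backup")
        .authentication().basic("Administrator", "test")
        .jsonBody(BackupRequest(password = "backupSecret"))
        .response { _, response, result ->
            when (result) {
                // The response body is the ZIP archive with the configuration backup.
                is Result.Success -> File("backup.zip").writeBytes(response.data)
                is Result.Failure -> println("backup failed: ${result.error}")
            }
        }
        .join()
}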
Sample Code
Caution
URI /api/v1/configuration/backup
Method PUT
Roles Administrator
Request Properties:
Errors:
• INVALID_REQUEST (400): if invalid or no backup file, or incorrect or missing password was provided.
• FORBIDDEN (403): if the instance role is shadow.
• INTERNAL_SERVER_ERROR (500): if an error happened during restore.
Note
Since this API uses a multipart request, it requires a multipart request header.
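A hedged Kotlin sketch of the corresponding restore call is shown below. It uploads the backup archive as a multipart request with Fuel; the part name backupFile and passing the archive password as a form parameter are assumptions for illustration, as are the URL and credentials.
import com.github.kittinunf.fuel.Fuel
import com.github.kittinunf.fuel.core.BlobDataPart
import com.github.kittinunf.fuel.core.Method
import com.github.kittinunf.fuel.core.extensions.authentication
import com.github.kittinunf.result.Result
import java.io.File

fun main() {
    val url = "https://localhost:8443"
    val backup = File("backup.zip")
    // Multipart PUT; part name "backupFile" and the "password" form field are assumptions.
    Fuel.upload("$url/api/v1/configuration/backup", Method.PUT, listOf("password" to "backupSecret"))
        .add(BlobDataPart(backup.inputStream(), name = "backupFile", filename = backup.name))
        .authentication().basic("Administrator", "test")
        .response { _, _, result ->
            when (result) {
                // On success, the Cloud Connector restores the configuration and restarts.
                is Result.Success -> println("restore triggered")
                is Result.Failure -> println("restore failed: ${result.error}")
            }
        }
        .join()
}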
Sample Code
1.2.2.5.8 Subaccount
Operations
Get Subaccounts
URI /api/v1/configuration/subaccounts
Method GET
Request
Response
[{regionHost, subaccount,
locationID}]
Errors
Response:
URI /api/v1/configuration/subaccounts
Method POST
Request
{regionHost, subaccount, cloudUser,
cloudPassword, locationID,
displayName, description}
Request
{authenticationData, locationID,
(as of version 2.17.0) displayName, description}
Request Properties:
Response Properties:
Errors:
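As an illustration, here is a minimal Kotlin sketch that adds a subaccount with the user/password variant of the request shown above. URL, credentials, and subaccount values are placeholders (taken from the scenario.json sample later in this section); optional properties such as locationID and displayName are omitted.
import com.github.kittinunf.fuel.Fuel
import com.github.kittinunf.fuel.core.extensions.authentication
import com.github.kittinunf.fuel.gson.jsonBody
import com.github.kittinunf.result.Result

// Request structure as listed above; optional properties are omitted here.
data class AddSubaccountRequest(
    val regionHost: String,
    val subaccount: String,
    val cloudUser: String,
    val cloudPassword: String,
    val description: String? = null
)

fun main() {
    val url = "https://localhost:8443" // master instance
    Fuel.post("$url/api/v1/configuration/subaccounts")
        .authentication().basic("Administrator", "test")
        .jsonBody(AddSubaccountRequest(
            regionHost = "cf.eu10.hana.ondemand.com",
            subaccount = "11aabbcc-7821-448b-9ecf-a7d986effa7c",
            cloudUser = "xxx",
            cloudPassword = "xxx",
            description = "added via REST API"))
        .response { _, _, result ->
            when (result) {
                is Result.Success -> println("subaccount added")
                is Result.Failure -> println("request failed: ${result.error}")
            }
        }
        .join()
}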
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>
Method DELETE
Request
Errors:
• NOT_FOUND (404): subaccount does not exist (in the specified region).
• ILLEGAL_STATE (409): there is at least one session that has access to the subaccount.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>
Method PUT
Request
{locationID, displayName, description}
Response
{regionHost, subaccount, locationID,
displayName, description, tunnel}
Errors NOT_FOUND
Request Properties:
• locationID: location identifier for the Cloud Connector instance (a string; optional); if this parameter is
not supplied the location ID will not change. Revert to the default location ID by supplying the empty string.
• displayName: subaccount display name (a string; optional); if this parameter is not supplied the display
name will not change. Clear the display name by using an empty string.
• description: subaccount description (a string; optional); if this parameter is not supplied the description
will not change. Clear the description by using an empty string.
Response Properties:
Errors:
• NOT_FOUND (404): subaccount does not exist (in the specified region).
Method PUT
Request
{connected}
Errors
Request Properties:
• connected: a Boolean value indicating whether the subaccount should be connected (true) or
disconnected (false).
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/validity
Method POST
Request
{user, password}
Request
{authenticationData}
(as of version 2.17.0)
Response
{regionHost, subaccount, locationID,
displayName, description, tunnel}
Request Properties:
• BAD_REQUEST (400): the region in authentication data does not match the region of the subaccount.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>
Method GET
Request
Response
{regionHost, subaccount,
locationID, displayName,
description, tunnel:{state,
connections, applicationConnections:
[], serviceChannels:[]}}
Errors
Response Properties:
Caution
Note
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/trust
Method POST
Request
Errors
Note
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/trust/idps
Method GET
Request
Response
[{id, name, description, certificate,
enabled}]
Errors
Response:
An array of objects, each of which represents a trusted IDP through the following properties:
Note
Method GET
Request
Response
[{id, name, applicationType, enabled}]
Errors
Response:
An array of objects, each of which represents a trusted application through the following properties:
Replace <type> with the type (either apps or idps) that belongs to the specified <id>.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/trust/<type>/
<id>
Method PUT
Request
{enabled}
Errors NOT_FOUND
Request Properties:
• enabled: true or false, indicating whether the specified trust source should be enabled or disabled.
• NOT_FOUND (404): specified <id> does not exist for the given <type>.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/trust/idps/
<id>
Method GET
Request
Response
{id, name, description, certificate,
enabled}
Errors NOT_FOUND
Response Properties:
Errors:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/trust/apps/
<id>
Method GET
Request
Response
{id, name, applicationType, enabled}
Errors NOT_FOUND
Response Properties:
Errors:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings
Method GET
Response
[{virtualHost, virtualPort,
localHost, localPort, protocol,
backendType, authenticationMode,
hostInHeader, description, ...}]
Errors
Response:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
systemMappings/
<virtualHost>:<virtualPort>
Method GET
Response
{virtualHost, virtualPort, localHost,
localPort, protocol, backendType,
authenticationMode, hostInHeader,
description, ...}
Errors
Response:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings
Method POST
Errors
Request Properties:
Note
The authentication modes NONE_RESTRICTED and X509_RESTRICTED prevent the Cloud Connector from
sending the system certificate in any case, whereas NONE and X509_GENERAL will send the system
certificate if the circumstances allow it.
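For illustration, here is a hedged Kotlin sketch that creates a system mapping using the property names of this API. The host names are reused from the cookie-domain example later in this guide; the concrete values for protocol, backendType, and authenticationMode, as well as the representation of the ports as strings, are assumptions to be checked against the request property documentation.
import com.github.kittinunf.fuel.Fuel
import com.github.kittinunf.fuel.core.extensions.authentication
import com.github.kittinunf.fuel.gson.jsonBody
import com.github.kittinunf.result.Result

// Property names follow the request structure of this API; the value choices are illustrative.
data class SystemMapping(
    val virtualHost: String,
    val virtualPort: String,
    val localHost: String,
    val localPort: String,
    val protocol: String,
    val backendType: String,
    val authenticationMode: String,
    val description: String? = null
)

fun main() {
    val url = "https://localhost:8443"
    val regionHost = "cf.eu10.hana.ondemand.com"
    val subaccount = "11aabbcc-7821-448b-9ecf-a7d986effa7c"
    Fuel.post("$url/api/v1/configuration/subaccounts/$regionHost/$subaccount/systemMappings")
        .authentication().basic("Administrator", "test")
        .jsonBody(SystemMapping(
            virtualHost = "sales-system.cloud", virtualPort = "443",
            localHost = "ecc60.mycompany.corp", localPort = "443",
            protocol = "HTTPS", backendType = "abapSys",
            authenticationMode = "NONE", description = "created via REST API"))
        .response { _, _, result ->
            when (result) {
                is Result.Success -> println("system mapping created")
                is Result.Failure -> println("request failed: ${result.error}")
            }
        }
        .join()
}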
Method DELETE
Request
Errors
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings
Method DELETE
Request
Errors
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
systemMappings/virtualHost:virtualPort
Method PUT
Request
{virtualHost, virtualPort, localHost,
localPort, protocol, backendType,
authenticationMode, hostInHeader,
description, ...}
Errors
Request Properties:
Note
The authentication modes NONE_RESTRICTED and X509_RESTRICTED prevent the Cloud Connector from
sending the system certificate in any case, whereas NONE and X509_GENERAL will send the system
certificate if the circumstances allow it.
System Mapping Resources Get all System Mapping Resources [page 553]
Allowed Clients Get Allowed Clients for RFC System Mapping [page 557]
Blocked Users Get Blocked Users for RFC System Mapping [page 561]
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
systemMappings/virtualHost:virtualPort/
resources
Method GET
Request
Errors
Response:
• id: The resource itself, which, depending on the owning system mapping, is either a URL path (or the leading section of it) or an RFC function name (prefix).
• enabled: Boolean flag indicating whether the resource is enabled.
• exactMatchOnly: Boolean flag determining whether access is granted only if the requested resource is an
exact match.
• websocketUpgradeAllowed: Boolean flag indicating whether websocket upgrade is allowed; this
property is of relevance only if the owning system mapping employs protocol HTTP or HTTPS.
• description: Description (a string); this property is not available unless explicitly set.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
systemMappings/virtualHost:virtualPort/
resources/<encodedResourceId>
Method GET
Request
Response
{id, enabled, exactMatchOnly,
websocketUpgradeAllowed, description}
Errors
Response Properties:
• id: The resource itself, which, depending on the owning system mapping, is either a URL path (or the leading section of it) or an RFC function name (prefix).
• enabled: Boolean flag indicating whether the resource is enabled.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
systemMappings/virtualHost:virtualPort/
resources
Method POST
Request
{id, enabled, exactMatchOnly,
websocketUpgradeAllowed, description}
Errors
Request Properties:
• id: The resource itself, which, depending on the owning system mapping, is either a URL path (or the leading section of it) or an RFC function name (prefix).
• enabled: Boolean flag indicating whether the resource is enabled (optional). The default value is false.
• exactMatchOnly: Boolean flag determining whether access is granted only if the requested resource is an
exact match (optional). The default value is false.
• websocketUpgradeAllowed: Boolean flag indicating whether websocket upgrade is allowed (optional).
The default value is false. This property is recognized only if the owning system mapping employs
protocol HTTP or HTTPS.
• description: Description (a string, optional)
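Building on the request properties above, here is a hedged Kotlin sketch that adds a resource to an existing system mapping. The virtual host/port and the URL path used as resource ID are illustrative assumptions.
import com.github.kittinunf.fuel.Fuel
import com.github.kittinunf.fuel.core.extensions.authentication
import com.github.kittinunf.fuel.gson.jsonBody
import com.github.kittinunf.result.Result

// Request properties as documented above; the defaults apply to the omitted flags.
data class MappingResource(
    val id: String,
    val enabled: Boolean = false,
    val exactMatchOnly: Boolean = false,
    val description: String? = null
)

fun main() {
    val url = "https://localhost:8443"
    val regionHost = "cf.eu10.hana.ondemand.com"
    val subaccount = "11aabbcc-7821-448b-9ecf-a7d986effa7c"
    // The virtual host and port identify an existing system mapping (illustrative values).
    Fuel.post("$url/api/v1/configuration/subaccounts/$regionHost/$subaccount/" +
            "systemMappings/sales-system.cloud:443/resources")
        .authentication().basic("Administrator", "test")
        .jsonBody(MappingResource(id = "/sap/opu/odata", enabled = true))
        .response { _, _, result ->
            when (result) {
                is Result.Success -> println("resource created")
                is Result.Failure -> println("request failed: ${result.error}")
            }
        }
        .join()
}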
Tip
Encoded Resource ID
URI paths may contain the resource ID in order to identify the resource to be edited or deleted. A resource ID, however, may contain characters, such as the forward slash, that collide with the path separator of the URI. For this reason, the resource ID must be supplied in encoded form (<encodedResourceId>).
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
systemMappings/virtualHost:virtualPort/
resources/<encodedResourceId>
Method PUT
Request
{enabled, exactMatchOnly,
websocketUpgradeAllowed,
description}
Errors
Request Properties:
Method DELETE
Request
Errors NOT_FOUND
Errors:
URI /api/v1/configuration/subaccounts/
<region>/<subaccount>/systemMappings/
<virtualHost>:<virtualPort>/resources
Method DELETE
Request
Errors
Returns the list of allowed ABAP clients for the system mapping definition.
URI /api/v1/configuration/subaccounts/
<region>/<subaccount>/systemMappings/
<virtualHost>:<virtualPort>/
allowedClients
Method GET
Request
Response
[client, ...]
Errors
Response Properties:
Note
URI /api/v1/configuration/subaccounts/
<region>/<subaccount>/systemMappings/
<virtualHost>:<virtualPort>/
allowedClients
Request
[client, ...]
Response
Errors
Request Properties:
The existing list of allowed clients is cleared and replaced with the clients provided in the request.
Note
Adds more ABAP clients to the allowed list for the system mapping definition.
URI /api/v1/configuration/subaccounts/
<region>/<subaccount>/systemMappings/
<virtualHost>:<virtualPort>/
allowedClients
Method PATCH
Request
[client, ...]
Response
Errors
Request Properties:
Clients provided in this request will be added to the existing list. Clients that are already listed will be ignored.
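A hedged Kotlin sketch of this PATCH call is shown below; the virtual host/port of the RFC system mapping and the client numbers are illustrative assumptions.
import com.github.kittinunf.fuel.Fuel
import com.github.kittinunf.fuel.core.extensions.authentication
import com.github.kittinunf.fuel.gson.jsonBody
import com.github.kittinunf.result.Result

fun main() {
    val url = "https://localhost:8443"
    val regionHost = "cf.eu10.hana.ondemand.com"
    val subaccount = "11aabbcc-7821-448b-9ecf-a7d986effa7c"
    // PATCH adds the clients in the JSON array to the existing allowed list;
    // clients that are already listed are ignored by the API.
    Fuel.patch("$url/api/v1/configuration/subaccounts/$regionHost/$subaccount/" +
            "systemMappings/sapecc.mycompany.corp:3300/allowedClients")
        .authentication().basic("Administrator", "test")
        .jsonBody(listOf("100", "200"))
        .response { _, _, result ->
            when (result) {
                is Result.Success -> println("clients added to allowed list")
                is Result.Failure -> println("request failed: ${result.error}")
            }
        }
        .join()
}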
Note
URI /api/v1/configuration/subaccounts/
<region>/<subaccount>/systemMappings/
<virtualHost>:<virtualPort>/
allowedClients/<client>
Method DELETE
Request
Response
Errors
Note
Method DELETE
Request
Response
Errors
Note
Returns the list of blocked ABAP users for the system mapping definition.
URI /api/v1/configuration/subaccounts/
<region>/<subaccount>/systemMappings/
<virtualHost>:<virtualPort>/
blockedClientUsers
Method GET
Request
Response
[{client, user}, ...]
Errors
Note
Sets or replaces the list of blocked ABAP users for the system mapping definition.
URI /api/v1/configuration/subaccounts/
<region>/<subaccount>/systemMappings/
<virtualHost>:<virtualPort>/
blockedClientUsers
Method POST
Request
[{client, user}, ...]
Response
Errors
Request Properties:
Adds the provided list of blocked ABAP users to the current list.
URI /api/v1/configuration/subaccounts/
<region>/<subaccount>/systemMappings/
<virtualHost>:<virtualPort>/
blockedClientUsers
Method PATCH
Request
[{client, user}, ...]
Response
Errors
Request Properties:
Note
Removes one user from the list of blocked ABAP users for the system mapping definition.
URI /api/v1/configuration/subaccounts/
<region>/<subaccount>/systemMappings/
<virtualHost>:<virtualPort>/
blockedClientUsers/<client>:<user>
Request
Response
Errors
Note
Removes the list of blocked ABAP users for the system mapping definition.
URI /api/v1/configuration/subaccounts/
<region>/<subaccount>/systemMappings/
<virtualHost>:<virtualPort>/
blockedClientUsers
Method DELETE
Request
Response
Errors
Manage the Cloud Connector's configuration for domain mappings via API.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/domainMappings
Method GET
Request
Response
[{virtualDomain, internalDomain}]
Errors
Response:
An array of objects, each representing a domain mapping through the following properties:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/domainMappings
Method POST
Request
{virtualDomain, internalDomain}
Response 201
Errors
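For illustration, a minimal Kotlin sketch that creates a domain mapping. The domain values are taken from the cookie-domain example later in this guide and are purely illustrative.
import com.github.kittinunf.fuel.Fuel
import com.github.kittinunf.fuel.core.extensions.authentication
import com.github.kittinunf.fuel.gson.jsonBody
import com.github.kittinunf.result.Result

data class DomainMapping(val virtualDomain: String, val internalDomain: String)

fun main() {
    val url = "https://localhost:8443"
    val regionHost = "cf.eu10.hana.ondemand.com"
    val subaccount = "11aabbcc-7821-448b-9ecf-a7d986effa7c"
    // Maps the virtual domain seen by cloud applications to the internal domain.
    Fuel.post("$url/api/v1/configuration/subaccounts/$regionHost/$subaccount/domainMappings")
        .authentication().basic("Administrator", "test")
        .jsonBody(DomainMapping(virtualDomain = "sales.cloud", internalDomain = "intranet.corp"))
        .response { _, _, result ->
            when (result) {
                is Result.Success -> println("domain mapping created")
                is Result.Failure -> println("request failed: ${result.error}")
            }
        }
        .join()
}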
Method PUT
Request
{virtualDomain, internalDomain}
Errors NOT_FOUND
Request:
Errors:
Note
The internal domain in the URI path (i.e., <internalDomain>) is the current internal domain of the domain
mapping that is to be edited. It may differ from the new internal domain set in the request.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
domainMappings/<internalDomain>
Method DELETE
Request
Errors NOT_FOUND
Errors:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/domainMappings
Method DELETE
Request
Errors
Service Channel for HANA Database Get all HANA Service Channels [page 568]
Service Channel for Virtual Machine Get all Service Channels for Virtual Machines [page 572]
Service Channel for Kubernetes Get All Kubernetes Cluster Service Channels [page 579]
Available as of version 2.15.0. Create Kubernetes Service Channel (Master Only) [page
581]
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/HANA
Method GET
Request
Response
[{id, hanaInstanceName,
instanceNumber, type, port, enabled,
connections, state}]
Errors
An array of objects, each of which represents a HANA service channel through the following properties:
• id: unique identifier for the service channel (a positive integer number, starting with 1). This identifier is
unique across all types of service channels.
• hanaInstanceName: name of the HANA instance (a string).
• instanceNumber: instance number.
• type: string 'HANA'.
• port: port of the HANA service channel (a number).
• enabled: boolean flag indicating whether the channel is enabled and therefore should be open.
• connections: maximal number of open connections.
• state: current connection state; this property is only available if the channel is enabled (as per property
enabled). The value of this property is an object with the properties:
• connected (a boolean flag indicating whether the channel is connected),
• openedConnections (the number of open, possibly idle connections), and
• connectedSinceTimeStamp (the time stamp, a UTC long number, for the first time the channel was
opened/connected).
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
availableChannels/HANA
Method GET
Request
Response
[{hanaInstanceName, version, type}]
Errors RUNTIME_FAILURE
Response:
• RUNTIME_FAILURE (500): the list of available HANA database instances could not be retrieved.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/HANA/
<id>
Method GET
Request
Response
{id, hanaInstanceName,
instanceNumber, type, port, enabled,
connections, state}
Errors NOT_FOUND
Response:
• id: unique identifier for the service channel (a positive integer number, starting with 1). This identifier is
unique across all types of service channels.
• hanaInstanceName: name of the HANA instance (a string).
• instanceNumber: instance number.
• type: string 'HANA'.
• port: port of the HANA service channel (a number).
• enabled: boolean flag indicating whether the channel is enabled and therefore should be open.
• connections: maximal number of open connections.
• state: current connection state; this property is only available if the channel is enabled (as per property
enabled). The value of this property is an object with the properties:
• connected (a boolean flag indicating whether the channel is connected),
• openedConnections (the number of open, possibly idle connections), and
• connectedSinceTimeStamp (the time stamp, a UTC long number, for the first time the channel was
opened/connected).
Errors:
• NOT_FOUND (404): the HANA service channel with the given ID (that is, <id>) or the specified subaccount
does not exist.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/HANA
Method POST
Request
{hanaInstanceName, instanceNumber,
connections}
Errors INVALID_REQUEST
Request:
Errors:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/HANA/
<id>
Method PUT
Request
{hanaInstanceName, instanceNumber,
connections}
Request:
Errors:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/
VirtualMachine
Method GET
Request
Response
[{id, name, endpointId, type, port,
enabled, connections, state}]
Errors
Response:
• id: unique identifier for the service channel (a positive integer number, starting with 1). This identifier is
unique across all types of service channels.
• name: name of the virtual machine on the cloud platform (a string).
• endpointId: unique ID (GUID) of the virtual machine (a string).
• type: string 'VirtualMachine'.
• port: port of the service channel for the virtual machine (a number).
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/
availableChannels/VirtualMachine
Method GET
Request
Response
[{endpointId, name}]
Errors RUNTIME_FAILURE
Response:
Errors:
• RUNTIME_FAILURE (500): the list of available virtual machines could not be retrieved.
Method GET
Request
Response
{id, name, endpointId, type, port,
enabled, connections, state}
Errors NOT_FOUND
Response:
• id: unique identifier for the service channel (a positive integer number, starting with 1). This identifier is
unique across all types of service channels.
• name: name of the virtual machine on the cloud platform (a string).
• endpointId: unique ID (GUID) of the virtual machine (a string).
• type: string 'VirtualMachine'.
• port: port of the service channel for the virtual machine (a number).
• enabled: boolean flag indicating whether the channel is enabled and therefore should be open.
• connections: maximal number of open connections.
• state: current connection state; this property is only available if the channel is enabled (as per property
enabled). The value of this property is an object with the properties:
• connected (a boolean flag indicating whether the channel is connected),
• openedConnections (the number of open, possibly idle connections), and
• connectedSinceTimeStamp (the time stamp, a UTC long number, for the first time the channel was
opened/connected).
Errors:
• NOT_FOUND (404): the service channel for a virtual machine with the given ID (that is, <id>) or the
specified subaccount does not exist.
Method POST
Request
{name, endpointId, port, connections}
Errors INVALID_REQUEST
Request:
Errors:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/
VirtualMachine/<id>
Method PUT
Request
{name, endpointId, connections}
Errors:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/
ABAPCloud
Method GET
Request
Response
[{id, abapCloudTenantHost,
instanceNumber, type, port, enabled,
connections, state}]
Errors
Response:
An array of objects, each of which represents an ABAP Cloud service channel through the following properties:
• id: unique identifier for the service channel (a positive integer number, starting with 1). This identifier is
unique across all types of service channels.
• abapCloudTenantHost: the host name (a string).
• instanceNumber: instance number.
• type: string 'ABAPCloud'.
• port: port of the ABAPCloud service channel (a number).
• enabled: boolean flag indicating whether the channel is enabled and therefore should be open.
• connections: maximal number of open connections.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/
ABAPCloud/<id>
Method GET
Request
Response
{id, abapCloudTenantHost,
instanceNumber, type, port, enabled,
connections, state}
Errors NOT_FOUND
Response:
• id: unique identifier for the service channel (a positive integer number, starting with 1). This identifier is
unique across all types of service channels.
• abapCloudTenantHost: the host name (a string).
• instanceNumber: instance number.
• type: string 'ABAPCloud'.
• port: port of the ABAPCloud service channel (a number).
• enabled: boolean flag indicating whether the channel is enabled and therefore should be open.
• connections: maximal number of open connections.
• state: current connection state; this property is only available if the channel is enabled (as per property
enabled). The value of this property is an object with the properties:
• connected (a boolean flag indicating whether the channel is connected),
• openedConnections (the number of open, possibly idle connections), and
• connectedSinceTimeStamp (the time stamp, a UTC long number, for the first time the channel was
opened/connected).
• NOT_FOUND (404): the ABAP Cloud service channel with the given ID (that is, <id>) or the specified
subaccount does not exist.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/
ABAPCloud
Method POST
Request
{abapCloudTenantHost, instanceNumber,
connections}
Errors INVALID_REQUEST
Request:
Errors:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/
ABAPCloud/<id>
Request
{abapCloudTenantHost, instanceNumber,
connections}
Request:
Errors:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/K8S
Method GET
Request
Response
[{id, type, port, k8sCluster,
k8sService, enabled, connections,
state, comment}]
Errors
An array of objects, each of which represents a Kubernetes cluster service channel through the following
properties:
• id: unique identifier for the service channel (a positive integer number, starting with 1); this identifier is
unique across all types of service channels.
• k8sCluster: host name to access the Kubernetes cluster (a string).
• k8sService: host name providing the service inside the Kubernetes cluster (a string).
• type: the string 'K8S'.
• port: port of the Kubernetes service channel (a number).
• enabled: Boolean flag indicating whether the channel is enabled and therefore should be open.
• connections: maximal number of open connections.
• state: current connection state; this property is only available if the channel is enabled (as per property
enabled). The value of this property is an object with the properties connected (a Boolean flag
indicating whether the channel is connected), openedConnections (the number of open, possibly idle
connections), and connectedSinceTimeStamp (the time stamp, a UTC long number, for the first time the
channel was opened/connected).
• comment: comment or short description; this property is not supplied if no comment was provided.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/K8S/
<id>
Method GET
Request
Response
[{id, type, port, k8sCluster,
k8sService, enabled, connections,
state, comment}]
Errors
Response:
• id: unique identifier for the service channel (a positive integer number, starting with 1); this identifier is
unique across all types of service channels.
• k8sCluster: host name to access the Kubernetes cluster (a string).
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/K8S
Method POST
Request
{k8sCluster, k8sService, port,
connections, comment}
Response 201
Errors INVALID_REQUEST
Request:
• k8sCluster: Kubernetes cluster host name, optionally with port separated by colon (a string).
• k8sService: service host inside the Kubernetes cluster (a string).
• port: local port.
• connections: maximal number of open connections.
• comment: optional comment or short description (a string).
Errors:
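For illustration, here is a hedged Kotlin sketch that creates a Kubernetes service channel with the request properties listed above. The cluster and service host names, the local port, and the credentials are assumptions for illustration.
import com.github.kittinunf.fuel.Fuel
import com.github.kittinunf.fuel.core.extensions.authentication
import com.github.kittinunf.fuel.gson.jsonBody
import com.github.kittinunf.result.Result

// Request properties as listed above; comment is optional.
data class K8sChannelRequest(
    val k8sCluster: String,
    val k8sService: String,
    val port: Int,
    val connections: Int,
    val comment: String? = null
)

fun main() {
    val url = "https://localhost:8443"
    val regionHost = "cf.eu10.hana.ondemand.com"
    val subaccount = "11aabbcc-7821-448b-9ecf-a7d986effa7c"
    Fuel.post("$url/api/v1/configuration/subaccounts/$regionHost/$subaccount/channels/K8S")
        .authentication().basic("Administrator", "test")
        .jsonBody(K8sChannelRequest(
            k8sCluster = "api.mycluster.example.com:443",
            k8sService = "myservice.mynamespace.svc.cluster.local",
            port = 5000, connections = 1,
            comment = "created via REST API"))
        .response { _, _, result ->
            when (result) {
                is Result.Success -> println("Kubernetes service channel created")
                is Result.Failure -> println("request failed: ${result.error}")
            }
        }
        .join()
}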
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/K8S/
<id>
Method PUT
Request
{k8sCluster, k8sService, port,
connections, comment}
Errors
Request:
• k8sCluster: Kubernetes cluster host name, optionally with port separated by colon (a string).
• k8sService: service host inside the Kubernetes cluster (a string).
• port: local port.
• connections: maximal number of open connections.
• comment: optional comment or short description (a string).
Errors:
Use one of the following channel types to replace <type> in the URI: HANA, VirtualMachine, or ABAPCloud.
Caution
The URI variant without <type> still works, but it is obsolete and may be removed in a future release. We
recommend that you move to using <type> as soon as possible.
Method PUT
Request
{enabled}
Response
Errors:
Caution
The URI variant without <type> still works, but it is obsolete and may be removed in a future release. We recommend that you move to using <type> as soon as possible.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/
<type>/<id>
Method DELETE
Request
Errors NOT_FOUND
Errors:
Note
Omit <type> to delete all service channels, regardless of the type. To delete all service channels of
a particular type, replace <type> with one of the following channel types: HANA, VirtualMachine, or
ABAPCloud.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels[/
<type>]
Method DELETE
Request
Errors
Caution
Obsolete. This API is deprecated and may be removed in a future release. Use the getters for the specific
service channel type (that is, HANA (database), Virtual Machine, or ABAP Cloud) which provide properties
tailored for the respective channel type.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels
Method GET
Request
Errors
Response:
An array of objects, each of which represents a service channel through the following properties:
• typeDesc: an object specifying the service channel type through the properties typeKey and typeName.
• details
• port
• enabled
• connected
• connectionsCount
• availableConnectionsCount
• connectedSinceTimeStamp
Caution
Obsolete. This API is deprecated and may be removed in a future release. Use the getters for the specific
service channel type (that is, HANA (database), Virtual Machine, or ABAP Cloud) which provide properties
tailored for the respective channel type.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/<id>
Method GET
Request
Response
[{typeDesc, details, port,
enabled, connected, connectionsCount,
availableConnectionsCount,
connectedSinceTimeStamp}]
Response:
• typeDesc: an object specifying the service channel type through the properties typeKey and typeName.
• details
• port
• enabled
• connected
• connectionsCount
• availableConnectionsCount
• connectedSinceTimeStamp
Caution
Obsolete. This API is deprecated and may be removed in a future release. Create a service channel using
the API for the respective channel type, that is, HANA (database), Virtual Machine, or ABAP Cloud.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels
Method POST
Request
{typeKey, details, serviceNumber, connectionCount}
Errors
Request Properties:
• typeKey: type of service channel. Valid values are HANA_DB, HCPVM, RFC.
• details:
• HANA instance name for HANA_DB
• VM name for HCPVM
Caution
Obsolete. This API is deprecated and may be removed in a future release. Replace a service channel using
the API for the respective channel type, that is, HANA (database), Virtual Machine, or ABAP Cloud.
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/<id>
Method PUT
Request
{typeKey, details, serviceNumber, connectionCount}
Errors
Manage RFC-specific access control entities for the Cloud Connector via API.
Remove an Item from the List of Allowed Clients (Master Only) [page 592]
Remove Items on the List of Blocked Client/User Pairs for a Given User (Master Only) [page 593]
Remove Items on the List of Blocked Client/User Pairs for a Given Client (Master Only) [page 593]
Remove an Item on the List of Blocked Client/User Pairs (Master Only) [page 594]
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMapping/
<virtualHost>:<virtualPort>/
allowedClients
Method GET
Request
Response
[{client}]
Errors
Response:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMapping/
<virtualHost>:<virtualPort>/
blockedClientUsers
Request
Response
[{client, user}]
Errors
Response:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMapping/
<virtualHost>:<virtualPort>/
allowedClients
Method POST
Request
[client]
Errors
Request:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMapping/
<virtualHost>:<virtualPort>/
allowedClients
Method PATCH
Request
[client]
Errors
Request:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMapping/
<virtualHost>:<virtualPort>/
blockedClientUsers
Method POST
Request
[{client, user}]
Errors
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMapping/
<virtualHost>:<virtualPort>/
blockedClientUsers
Method PATCH
Request
[{client, user}]
Errors
Request:
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMapping/
<virtualHost>:<virtualPort>/
allowedClients
Request
Errors
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMapping/
<virtualHost>:<virtualPort>/
blockedClientUsers
Method DELETE
Request
Errors
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMapping/
<virtualHost>:<virtualPort>/
allowedClients/<client>
Method DELETE
Errors NOT_FOUND
Remove Items on the List of Blocked Client/User Pairs for a Given User
(Master Only)
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMapping/
<virtualHost>:<virtualPort>/
blockedClientUsers/user/<user>
Method DELETE
Request
Errors NOT_FOUND
Remove Items on the List of Blocked Client/User Pairs for a Given Client
(Master Only)
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMapping/
<virtualHost>:<virtualPort>/
blockedClientUsers/client/<client>
Method DELETE
Errors NOT_FOUND
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMapping/
<virtualHost>:<virtualPort>/
blockedClientUsers/client/
<client>:<user>
Method DELETE
Request
Errors NOT_FOUND
1.2.2.5.14 Examples
Find examples on how to use the Cloud Connector's configuration REST APIs.
Concept
The sample code in this section (download zip file) demonstrates how to use the REST APIs provided by Cloud
Connector to perform various configuration tasks.
The examples are implemented in Kotlin, a simple Java VM-based language. However, even if you are using
a different language, they still show the basic use of the APIs and their parameters for specific configuration
purposes.
If you are not familiar with Kotlin, find a brief introduction and some typical statements below.
In almost all requests and responses, structures are encoded in JSON format. To describe the parameter
details, we use Kotlin data classes.
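For illustration, a minimal sketch of such a data class; the class name is an assumption, and the properties match the credentials structure discussed below:
// Gson serializes this data class to {"user":"...","password":"..."} when passed to jsonBody.
data class Credentials(val user: String, val password: String)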
Such a data class represents a structure that you can use as a value in a request or response. The JSON structure
{"user":<userValue>, "password":<passwordValue>}
can be provided either as a raw string:
"""{"user":"$userValue", "password":"$passwordValue"}"""
or as an object, as in the annotated request below:
(a) Fuel.put(url)
(b) .header("Connection", "close")
(c) .authentication().basic(user, password)
(d) .jsonBody(credentials)
(e) .responseObject<OnlyPropertiesNamesAreRelevant> { _, _, result ->
when (result) {
(f) is Result.Failure -> processRequestError(result.error)
is Result.Success -> println("returned: ${result.get()}")
}
}
.join()
• (a) - Fuel is an HTTP framework used in the examples. The verb after Fuel is important: it is the REST API method.
• (b) - Adds the request header Connection: close, which forces the connection to close after the request. In the examples, this header is defined on the FuelManager for all calls.
• (c) - Basic authentication is used for the call with user and password.
• (d) - HTTP requests with parameters usually require a JSON body. The jsonBody method adds the header Content-Type: application/json and serializes the provided object credentials to JSON. Requests without a body, such as DELETE or GET, omit the body methods.
• (e) - The responseObject<ClassType> method adds the header Accept: application/json and deserializes the response to the specified class type. Some APIs return no response body or a non-JSON response. In such cases, the response method is used instead of responseObject<ClassType>.
• (f) - This is how the code distinguishes between successful (2xx) and failed responses. For details on the possible response status, see Configuration REST APIs [page 484].
Some common details, used by different examples, were extracted to the scenario.json configuration file.
This lets you use meaningful names like config.master!!.user in the examples. The file is loaded by loadScenarioConfiguration().
disableTrustChecks()
Caution
disableTrustChecks() is only used for test purposes. Do not use it in a productive environment.
When using the examples, start with Initial Configuration [page 599].
After a mandatory password change and defining the high availability role of the Cloud Connector instance
(master or shadow), the example demonstrates how to provide a description for the instance and how to set up
the UI and system certificates.
Once the initial configuration is done, you can optionally proceed with these steps:
Related Information
1.2.2.5.14.1 scenario.json
Sample Code
{
"subaccount": {
"regionHost": "cf.eu10.hana.ondemand.com",
"subaccount": "11aabbcc-7821-448b-9ecf-a7d986effa7c",
"user": "xxx",
"password": "xxx"
},
"master": {
"url": "https://localhost:8443",
"user": "Administrator",
"password": "test"
},
"shadow": {
"url": "https://localhost:8444",
"user": "Administrator",
"password": "test"
}
}
Sample Code
package com.sap.scc.examples
import com.github.kittinunf.fuel.core.FuelError
import com.google.gson.Gson
import java.io.File
import java.security.SecureRandom
import java.security.cert.X509Certificate
import javax.net.ssl.*
data class CloudConnector(
val url: String,
var user: String,
var password: String
)
fun disableTrustChecks() {
try {
HttpsURLConnection.setDefaultHostnameVerifier { hostname: String,
session: SSLSession -> true }
val context: SSLContext = SSLContext.getInstance("TLS")
val trustAll: X509TrustManager = object : X509TrustManager {
override fun checkClientTrusted(chain: Array<X509Certificate>,
authType: String) {}
override fun checkServerTrusted(chain: Array<X509Certificate>,
authType: String) {}
override fun getAcceptedIssuers(): Array<X509Certificate> {
return arrayOf()
}
}
context.init(null, arrayOf(trustAll), SecureRandom())
HttpsURLConnection.setDefaultSSLSocketFactory(context.socketFactory)
} catch (e: Exception) {
e.printStackTrace()
}
}
class ScenarioConfiguration {
var subaccount: SubaccountParameters? = null
var master: CloudConnector? = null
var shadow: CloudConnector? = null
}
data class SubaccountParameters(
val regionHost: String,
Sample Code
package com.sap.scc.examples
//import com.sap.scc.examples.SccCertificate
import com.github.kittinunf.fuel.Fuel
import com.github.kittinunf.fuel.core.BlobDataPart
import com.github.kittinunf.fuel.core.FuelManager
import com.github.kittinunf.fuel.core.Method
import com.github.kittinunf.fuel.core.extensions.authentication
import com.github.kittinunf.fuel.gson.responseObject
import com.github.kittinunf.result.Result
import java.io.ByteArrayInputStream
import java.io.File
/*
This example shows how to use REST APIs to perform the initial configuration
of a master instance
after installing and starting the Cloud Connector.
As a prerequisite you need to install and start the Cloud Connector.
The example begins with changing the initial password, setting the instance
to the master role, editing the description,
and uploading the UI and system certificates.
For the certificates used by Cloud Connector in order to access the UI and
for the system certificate used to access backend systems,
we simply upload the already available PKCS#12 certificates uiCert.p12 and
systemCert.p12 encrypted with the password "test1234".
Cloud Connector also provides other options for certificate management,
please take a look at the documentation.
The configuration details for master and shadow instances can be found in
scenario.json.
*/
Sample Code
package com.sap.scc.examples
import com.github.kittinunf.fuel.Fuel
import com.github.kittinunf.fuel.core.FuelManager
import com.github.kittinunf.fuel.core.extensions.authentication
import com.github.kittinunf.fuel.gson.jsonBody
import com.github.kittinunf.fuel.gson.responseObject
import com.github.kittinunf.result.Result
/*
This example shows how to use REST APIs to configure and connect a
subaccount in the Cloud Connector.
The example begins with (1) connecting the subaccount; then we create a
system (2) for an HTTP service
and (3) for an RFC service.
The configuration details for master and shadow instances can be found in
scenario.json.
*/
fun main() {
    //Cloud Connector distribution generates only an untrusted self-signed certificate.
Sample Code
package com.sap.scc.examples
import com.github.kittinunf.fuel.Fuel
import com.github.kittinunf.fuel.core.FuelManager
import com.github.kittinunf.fuel.core.Headers
import com.github.kittinunf.fuel.core.extensions.authentication
import com.github.kittinunf.fuel.gson.jsonBody
import com.github.kittinunf.fuel.gson.responseObject
import com.github.kittinunf.result.Result
import java.net.URL
/*
This example shows how to use REST APIs to change high availability settings
of the Cloud Connector instances.
As a prerequisite you need to install and start a shadow and a master
instance.
Afterwards perform the initial configuration of the master instance (see
InitialConfiguration.kt).
The configuration details for master and shadow instances can be found in
scenario.json.
*/
fun main() {
    //Cloud Connector distribution generates only an untrusted self-signed certificate.
    //So for this demonstration use case we need to deactivate all trust checks.
    disableTrustChecks()
    //Use 'Connection: close' header, to make stateless communication more efficient
    FuelManager.instance.baseHeaders = mapOf("Connection" to "close")
    //Add output of cURL commands for revision
    //FuelManager.instance.addRequestInterceptor(LogRequestAsCurlInterceptor)
Sample Code
package com.sap.scc.examples
import com.github.kittinunf.fuel.Fuel
import com.github.kittinunf.fuel.core.BlobDataPart
import com.github.kittinunf.fuel.core.FuelManager
import com.github.kittinunf.fuel.core.Headers
import com.github.kittinunf.fuel.core.Method
import com.github.kittinunf.fuel.core.extensions.authentication
import com.github.kittinunf.result.Result
import java.io.ByteArrayInputStream
import java.io.File
/*
This example shows how to use REST APIs to perform the Backup and Restore of
the Cloud Connector configuration.
As a prerequisite you have a stable Cloud Connector configuration and you
want to save the backup for later use.
Sample Code
package com.sap.scc.examples
import com.github.kittinunf.fuel.Fuel
import com.github.kittinunf.fuel.core.FuelManager
import com.github.kittinunf.fuel.core.extensions.authentication
import com.github.kittinunf.fuel.gson.jsonBody
import com.github.kittinunf.fuel.gson.responseObject
import com.github.kittinunf.result.Result
import java.io.File
/*
This example shows how to use REST APIs to configure the integration with
SAP solution management.
In most cases it should only be necessary to pass a boolean flag in order to
enable or disable the integration.
The configuration details for master and shadow instances can be found in
scenario.json.
*/
fun main() {
    //Cloud Connector distribution generates only an untrusted self-signed certificate.
    //So for this demonstration use case we need to deactivate all trust checks.
    disableTrustChecks()
    //Use 'Connection: close' header, to make stateless communication more efficient
    FuelManager.instance.baseHeaders = mapOf("Connection" to "close")
    //Add output of cURL commands for revision
    //FuelManager.instance.addRequestInterceptor(LogRequestAsCurlInterceptor)
    //Load configuration from property file
    val config = loadScenarioConfiguration()
    //First we check the current configuration of solution management
    Fuel.get("${config.master!!.url}/api/v1/configuration/connector/solutionManagement")
        .authentication().basic(config.master!!.user, config.master!!.password)
        .responseObject<SolutionManagementConfiguration> { _, _, result ->
            when (result) {
                is Result.Success -> println("response: ${result.get()}")
                is Result.Failure -> processRequestError(result.error)
            }
        }
        .join()
    //Configure the hostAgent path on the shadow instance
    //Invoking this API on a shadow instance changes only the configuration.
    //Only when invoking it on a master instance it also activates solution management integration.
    //Note: This step is not required if host agent is installed at the default location
    Fuel.post("${config.shadow!!.url}/api/v1/configuration/connector/solutionManagement")
        .authentication().basic(config.shadow!!.user, config.shadow!!.password)
        .jsonBody(SolutionManagementConfiguration(hostAgentPath = "/path/to/hostAgent"))
        .response { _, _, result ->
            when (result) {
                is Result.Success -> println("new path to host agent was set")
                is Result.Failure -> processRequestError(result.error)
            }
        }
        .join()
    //Turns on the solution management integration.
    //Note: this operation is possible only on the master instance.
    //In case of an HA setup, the flag enabled is propagated to the shadow automatically.
    //Note: optionally you can set here the new path for host agent and enable DSR.
    Fuel.post("${config.master!!.url}/api/v1/configuration/connector/solutionManagement")
        .authentication().basic(config.master!!.user, config.master!!.password)
        .jsonBody(SolutionManagementConfiguration())
        .response { _, _, result ->
Configure Cloud Connector service channels to connect your on-premise network to specific services on SAP
BTP or to an ABAP Cloud system.
Cloud Connector service channels provide access from an external network to specific services on SAP BTP, or
to an ABAP Cloud system. The called services are not exposed to direct access from the Internet. The Cloud
Connector ensures that the connection is always available and communication is secured.
• RFC Connection to an ABAP Cloud system: The service channel for RFC supports calls from on-premise systems to an ABAP Cloud system using RFC.
• K8s Cluster (TCP connection to an SAP BTP Kubernetes cluster): Service channel to establish a connection to a service in a Kubernetes cluster on SAP BTP.
Next Steps
Related Information
Context
You can connect database, BI, or replication tools running in an on-premise network to an SAP HANA database on
SAP BTP using service channels of the Cloud Connector. You can also use the high availability support of the
Cloud Connector on a database connection. The picture below shows the landscape in such a scenario.
• For more information on using SAP HANA instances, see Using an SAP HANA XS Database System
• For the connection string via ODBC you need a corresponding database user and password (see step 4
below). See also: Creating Database Users.
• Find detailed information on failover support in the SAP HANA Administration Guide: Configuring Clients
for Failover.
Note
This link points to the latest release of SAP HANA Administration Guide. Refer to the SAP BTP Release
Notes to find out which SAP HANA SPS is supported by SAP BTP. Find the list of guides for earlier
releases in the Related Links section below.
Procedure
1. To establish a highly available connection to one or multiple SAP HANA instances in the cloud, we
recommend that you make use of the failover support of the Cloud Connector. Set up a master and a
shadow instance. See Install a Failover Instance for High Availability [page 655].
2. In the master instance, configure a service channel to the SAP HANA database of the SAP BTP subaccount
to which you want to connect. If, for example, the chosen HANA instance is 01, the port of the service
channel is 30115. See also Configure a Service Channel for an SAP HANA Database.
3. Connect on-premise DB tools via JDBC to the SAP HANA database by using the following connection
string:
Example:
jdbc:sap://<cloud-connector-master-host>:30115;<cloud-connector-shadow-host>:30115[/?<options>]
"DRIVER=HDBODBC32;UID=<user>;PWD=<password>;SERVERNODE=<cloud-connector-master-host>:30115,<cloud-connector-shadow-host>:30115;"
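For orientation, here is a hedged Kotlin sketch of a JDBC connection using the failover-capable URL above. It assumes the SAP HANA JDBC driver (ngdbc.jar) is on the classpath; host names, credentials, and the query are placeholders.
import java.sql.DriverManager

fun main() {
    // Failover-capable JDBC URL as shown above; replace the placeholders with real host names.
    val jdbcUrl = "jdbc:sap://cloud-connector-master-host:30115;cloud-connector-shadow-host:30115"
    DriverManager.getConnection(jdbcUrl, "<user>", "<password>").use { connection ->
        connection.createStatement().executeQuery("SELECT * FROM DUMMY").use { rs ->
            while (rs.next()) println(rs.getString(1))
        }
    }
}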
Related Information
For scenarios that need to call from on-premise systems to SAP BTP ABAP environment using RFC, you can
establish a connection to an ABAP Cloud tenant host. To do this, select On-Premise to Cloud Service
Channels in the Cloud Connector.
Prerequisites
• When using the default connectivity setup with the Cloud Foundry subaccount in which the system has
been provisioned, you can use a service channel without additional configuration, as long as the system is
a single-tenant system.
• When using connectivity via a Neo subaccount, you must create a communication arrangement for the
scenario SAP_COM_0200. For more information, see Create a Communication Arrangement for Cloud
Connector Integration (documentation for ABAP environment on SAP BTP).
Procedure
Note
7. In the same dialog window, define the <Local Instance Number> under which the ABAP
Cloud system is reachable for the client systems. You can enter any instance number for which
the port is not used yet on the Cloud Connector host. The port numbers follow the pattern
33<LocalInstanceNumber>; with activated SNC (Secure Network Connection), the pattern is
48<LocalInstanceNumber>. For example, local instance number 42 results in port 3342, or 4842 with SNC.
8. Use the checkbox Activate SNC to specify if the opened service channel port should listen to and terminate
incoming SNC RFC connections instead of plain RFC connections. The SNC configuration used is the
same as in section Initial Configuration (RFC) [page 400]. In case of issues, you can use the trace Cloud
Connector loggers, and also the SNC payload and CPIC traces.
For more information on traces, see Troubleshooting [page 700].
Note
This SNC option is only supported for ABAP-based clients, not for any RFC connectors as JCo, NCo
or NW RFC SDK. For these scenarios, please use the WebSocket RFC protocol without the Cloud
Connector.
9. In the same dialog window, leave Enabled selected to establish the channel immediately after choosing
Finish. Unselect it if you don't want to establish the channel immediately.
Note
When addressing an ABAP Cloud system in a destination configuration, you must enter the Cloud
Connector host as application server host. As instance number, specify the <Local Instance Number>
that you configured for the service channel. As user, you must provide the business user name, not the
technical user name associated with it.
Note
When using an RFC service channel in a high availability setup, you can put a TCP load balancer between
the client ABAP system and the Cloud Connector master and shadow instances. In this case, you must
configure the load balancer in the destination. The load balancer routes the request to the instance that is
currently in the master role, either by simply trying one instance after the other or by checking regularly
which of the instances currently has the master role, and routing to this instance only.
Create a service channel to establish a connection to a service in a Kubernetes cluster in SAP BTP that is not
directly exposed to external access.
Follow the steps below to establish a service channel for a Kubernetes cluster (K8s cluster).
Prerequisites
You have a service running in a Kubernetes cluster that is connected to your subaccount.
3. In the Add Service Channel dialog, select K8s Cluster from the drop-down list of supported channel
types.
4. Optionally, provide a Description that explains what the Kubernetes cluster service channel is used for.
5. Choose Next. The K8s Cluster dialog opens.
6. Specify the host of the Kubernetes cluster and the host providing the service inside the Kubernetes cluster.
7. Choose the <Local Port> and the number of <Connections>. You can enter any port that is not used
yet.
8. Leave the Enabled option selected to establish the channel immediately after clicking Save, or deselect it if
the channel should not be established yet.
Note
Next Steps
Once you have established a service channel to the Kubernetes cluster, you can connect your client application
by accessing <Cloud_connector_host>:<local_port>.
A service channel overview lets you see the details of all service channels that are used by a Cloud Connector
installation.
The service channel port overview lists all service channels that are configured in the Cloud Connector. It lets
you see at a glance which server ports are used by a Cloud Connector installation.
In addition, you can find the following information about each service channel:
From the Actions column, you can switch directly to the On-Premise To Cloud section of the corresponding
subaccount and edit the selected service channel.
To find the overview list, choose Connector from the navigation menu and go to section Service Channels
Overview:
The filter buttons above the overview table let you filter the shown service channels based on their status. You
can select all service channels, all enabled ones, all disabled ones, or all service channels for which enabling
has failed.
To configure the trust store, choose Configuration from the main menu, and go to tab On Premise, section Trust
Store.
By default, the Cloud Connector does not trust any on-premise system when connecting to it via TLS:
Note
You must provide the CA's X.509 certificates in DER or PEM format.
Caution
If you don't want to specify explicit CAs for trust, but rather trust all certificates used by your backends,
you can switch off the trust store. In this case, the allowlist is ignored. However, this is considered less
secure, since all server certificates are trusted and the issuing CA is not checked.
As a result, certain attacks on the hop between Cloud Connector and internal systems can be performed
more easily. Therefore, we strongly recommend that you don't do this in productive installations.
Back to Tasks
Context
Some HTTP servers return cookies that contain a domain attribute. For subsequent requests, HTTP clients
should send these cookies to machines that have host names in the specified domain.
However, in a Cloud Connector setup between a client and a Web server, this may lead to problems. For
example, assume that you have defined a virtual host sales-system.cloud and mapped it to the internal
host name ecc60.mycompany.corp. The client "thinks" it is sending an HTTP request to the host name
sales-system.cloud, while the Web server, unaware of the above host name mapping, sets a cookie for the
domain mycompany.corp. The client does not know this domain name and thus, for the next request to that
Web server, doesn't attach the cookie, which it should do. The procedure below prevents this problem.
Procedure
1. From your subaccount menu, choose Cloud To On-Premise, and go to the Cookie Domains tab.
2. Choose Add.
3. Enter cloud as the virtual domain, and your company name as the internal domain.
4. Choose Save.
The Cloud Connector checks the Web server's response for Set-Cookie headers. If it finds one with
an attribute domain=intranet.corp, it replaces it with domain=sales.cloud before returning the
HTTP response to the client. Then, the client recognizes the domain name, and for the next request
Note
Some Web servers use a syntax such as domain=.intranet.corp (RFC 2109), even though the
newer RFC 6265 recommends using the notation without a dot.
Note
The value of the domain attribute may be a simple host name, in which case no extra
domain mapping is necessary on the Cloud Connector. If the server sets a cookie with
domain=machine1.intranet.corp, the Cloud Connector automatically reverses the mapping
machine1.intranet.corp to www1.sales.cloud and replaces the cookie domain accordingly.
Related Information
If you want to monitor the Cloud Connector with the SAP Solution Manager, you can install a host agent on the
machine of the Cloud Connector and register the Cloud Connector on your system.
Prerequisites
• You have installed the SAP Diagnostics Agent and SAP Host Agent on the Cloud Connector host and
connected them to the SAP Solution Manager. The RPM on Linux ensures that the host agent configuration
is adjusted and that user groups are set up correctly.
For more details about the host agent and diagnostics agent, see SAP Host Agent and the SCN Wiki SAP
Solution Manager Setup/Managed System Checklist .
See also SAP notes 2607632 (SAP Solution Manager 7.2 - Managed System Configuration for SAP Cloud
Connector) and 1018839 (Registering in the System Landscape Directory using sldreg). For consulting,
contact your local SAP partner.
Note
Linux OS: if you installed the host agent after installing the Cloud Connector, you can execute
enableSolMan.sh in the installation directory to adjust the host agent configuration and user group
setup. This action requires root permission.
Procedure
1. From the Cloud Connector main menu, choose Configuration > Reporting. In the Solution Management
section of the Reporting tab, select Edit.
Note
To download the registration file lmdbModel.xml, choose the icon Download registration file from the
Reporting tab.
Related Information
Adapt connectivity settings that control the throughput and HTTP connectivity to on-premise systems.
If required, you can adjust the following parameters for the communication tunnel by changing their default
values:
Note
This parameter specifies the default value for the maximal number of tunnel connections per
application. The value must be higher than 0.
For detailed information on connection configuration requirements, see Configuration Setup [page 373].
1. From the Cloud Connector main menu, choose Configuration > Advanced. In the Connectivity section,
select Edit.
2. In the Edit Connectivity Settings dialog, change the parameter values as required.
3. Choose Save.
Additionally, you can specify the number of allowed tunnel connections for each application that you have
specified as a trusted application [page 423].
Note
If you don't change the value for a trusted application, it keeps the default setting specified above. If you
change the value, it may be higher or lower than the default and must be higher than 0.
HTTP Connectivity
Caution
Do not change these parameters unless you are absolutely sure that changes are indispensable.
• Max. Chunk Size HTTP Packages (kb): Max. size of chunks transmitted in HTTP streaming. The chunk size
affects the throughput of HTTP communication. Default: 64kb
• Max. HTTP Request Header Length (kb): Max. allowed size of HTTP request headers. Default: 64kb
• Max. Size HTTP Request (kb): Size for the request line of an HTTP request. The HTTP body is not included.
Default: 8kb
• Max. HTTP Response Header Length (kb): Max. allowed size of HTTP response headers. Default: 8kb
• Max. Size HTTP Response (kb): Size for the response line of the HTTP response. The HTTP body is not
included. Default: 8kb
If required, you can adjust the following parameters for the Java VM by changing their default values:
Note
We recommend that you set the initial heap size equal to the maximal heap size, to avoid memory
fragmentation.
1. From the Cloud Connector main menu, choose Configuration > Advanced. In the JVM section, select Edit.
2. In the Edit JVM Settings dialog, change the parameter values as required.
3. Choose Save.
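The Cloud Connector applies the configured values to its own Java process, so you normally do not set JVM
flags by hand. For orientation only, the recommendation above (initial heap size equal to the maximal heap
size) corresponds to a Java process started with equal -Xms and -Xmx values, for example -Xms2048m
-Xmx2048m (the value 2048m is just an example).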
Related Information
Note
An archive containing a snapshot of the current Cloud Connector configuration is created and
downloaded by your browser. For security reasons, some files are encrypted, using the password
provided for the backup procedure.
Tip
You can use this archive to restore the current state of the same installation or to move the current
configuration to a new Cloud Connector installation, if the original instance can no longer be used.
2. To restore your configuration, enter the required Archive Password and the Login Password of the
currently logged-in administrator user in the Restore from Archive dialog and choose Restore.
Caution
The restore action overwrites the current configuration of the Cloud Connector. Its current state
and settings will be lost permanently unless you have created another backup before doing the
restore operation. Upon successfully restoring the configuration backup, the Cloud Connector
restarts automatically. All active sessions are then terminated.
Note
• The props.ini file is treated in a special way. If the file from the backup archive differs from
the one used in the current installation, it is placed next to the original one as
props.ini.restored. If you want to use the props.ini.restored file, replace the existing
props.ini file with the restored one on OS level and restart the Cloud Connector afterwards.
• All custom path settings from the configuration backup archive that are no longer valid or operable
are automatically reset to their respective default values during the restore operation.
For example, this is the case if a configured directory or file path from the backup archive
does not exist or is not accessible anymore.
• Restored certificates from older backup archives might have expired and must be replaced
with valid ones to get all configured scenarios to work again.
• If restoring a configuration backup archive on a host that is different from the original one, the
restored UI certificate might not be issued for the new hostname, domain or IP address, and
therefore would require a replacement with a matching UI server certificate as well.
• If you are using an external SNC library, neither the library itself nor any configuration that this
library loads from its own external storage is part of the configuration backup archive. As a result,
the SNC configuration must be set up again after restoring a configuration backup archive on a
different host.
• The same applies to own CA certificates that are stored in the JVM's managed trust store, for
example, if email alerting or LDAP has been set up this way, with own certificates for the TLS
connections.
Caution
Do not run multiple Cloud Connector instances with the same configuration simultaneously. The
feature to restore a configuration backup archive on another Cloud Connector installation is not
supposed to be used for cloning purposes, but only for supporting the move of a Cloud Connector
instance from one host to another. After restoring a configuration backup archive on a different
Cloud Connector installation, you must stop the original Cloud Connector instance immediately.
Otherwise, running two or more instances with the same configuration will cause issues. Such a
cloned setup is not supported.
Add additional information to the login screen and configure its appearance.
Note
• When you customize the background color of the box, the opacity of the background color, the font
color, or the text alignment, the Login Information section automatically switches to preview mode.
That way, you can follow live the changes made to the appearance of the box and of the information
displayed inside it.
Note
You can hide the box and show only the text of the login information by choosing an opacity
value of 0 (opacity is the opposite of transparency; no opacity means complete transparency).
• You can position the box containing the login information at the top or bottom of the login page. To do
this, set the field <Position> to the corresponding pixel or percentage value.
4. Enter the information to be displayed in section Login Information. The information must be supplied as an
HTML fragment. There is a limited number of tags that can be used. Attributes available for these tags are
subject to restrictions.
ul No attributes allowed
ol No attributes allowed
li No attributes allowed
br No attributes allowed
h1 No attributes allowed
h2 No attributes allowed
h3 No attributes allowed
i No attributes allowed
b No attributes allowed
HTML syntax checking is strict. Attribute values must be enclosed by double quotes. Missing or
unmatched opening or closing tags are not permitted.
Note
Tag br does not require a closing tag as there cannot be any inner HTML.
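A minimal example of a valid login information fragment, using only the allowed tags listed above (the wording
is purely illustrative):
Sample Code
<h2>Authorized Personnel Only</h2>
<b>This Cloud Connector instance is managed by the IT department.</b><br>
<ul>
<li>Do not change the access control configuration without approval.</li>
<li>All configuration changes are recorded in the audit log.</li>
</ul>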
Learn more about operating the Cloud Connector, using its administration tools and optimizing its functions.
• Exchange UI Certificates in the Administration UI [page 632]: By default, the Cloud Connector includes a
self-signed UI certificate. It is used to encrypt the communication between the browser-based user interface
and the Cloud Connector itself. For security reasons, however, you should replace this certificate with your
own one to let the browser accept the certificate without security warnings.
• Logon to the Cloud Connector via Client Certificate [page 634]: Switch from default logon to client
certificate logon to access the Cloud Connector.
• Configure Named Cloud Connector Users [page 636]: If you operate an LDAP server in your system
landscape, you can configure the Cloud Connector to use the named users who are available on the LDAP
server instead of the default Cloud Connector users.
• High Availability Setup [page 654]: The Cloud Connector lets you install a redundant (shadow) instance,
which monitors the main (master) instance.
• Change the UI Port [page 661]: Use the changeport tool (Cloud Connector version 2.6.0+) to change the
port for the Cloud Connector administration UI.
• Connect and Disconnect a Cloud Subaccount [page 661]: As a Cloud Connector administrator, you can
connect the Cloud Connector to (and disconnect it from) the configured cloud subaccount.
• Secure the Activation of Traffic Traces [page 662]: Tracing of network traffic data may contain business
critical information or security sensitive data. You can implement a "four-eyes" (double check) principle to
protect your traces (Cloud Connector version 1.3.2+).
• Monitoring [page 664]: Use various views to monitor the activities and state of the Cloud Connector.
• Alerting [page 693]: Configure the Cloud Connector to send email alerts whenever critical situations occur
that may prevent it from operating.
• Audit Logging [page 696]: Use the auditor tool to view and manage audit log information (Cloud Connector
version 2.2+).
• Troubleshooting [page 700]: Information about monitoring the state of open tunnel connections in the
Cloud Connector. Display different types of logs and traces that can help you troubleshoot connection
problems.
• Process Guidelines for Hybrid Scenarios [page 705]: How to manage a hybrid scenario, in which applications
running on SAP BTP require access to on-premise systems using the Cloud Connector.
• Configuring Backup [page 708]: Find an overview of backup procedures for the Cloud Connector.
• Secure Store [page 708]: Reduce the size of the Cloud Connector Secure Store.
By default, the Cloud Connector includes a self-signed UI certificate. It is used to encrypt the communication
between the browser-based user interface and the Cloud Connector itself. For security reasons, however, you
should replace this certificate with your own one to let the browser accept the certificate without security
warnings.
Procedure
Master Instance
1. From the main menu, choose Configuration and go to the User Interface tab.
2. In the UI Certificate section, start a certificate signing request procedure by choosing the icon Generate a
Certificate Signing Request.
3. In the pop-up Generate CSR, specify a key size and a subject fitting to your host name.
For host matching, you should use the available names within the subjectAlternativeName (SAN)
extension, see RFC 2818 and https://www.chromestatus.com/feature/4981025180483584. A check
verifies whether the host matches one of the entries in the SAN extension.
In section Subject Alternative Names, you can add additional values by pressing the Add button. Choose
one or more of the following SAN types and provide the matching values:
• DNS: a specific host name (for example, www.sap.com) or a wildcard hostname (for example,
*.sap.com).
• IP: an IPv4 or IPv6 address.
• RFC822: a simple email address.
• URI: a URI for which the certificate should be valid.
Note
7. Select Browse to locate the file and then choose the Import button.
8. Review the certificate details that are displayed.
9. Restart the Cloud Connector to activate the new certificate.
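If you want to double-check the signed certificate before or after importing it (steps 7 to 9), you can inspect
its SAN entries on OS level. A possible check with the OpenSSL command line tool (the tool itself and the file
name ui_cert.pem are assumptions, not part of the Cloud Connector delivery):
Sample Code
openssl x509 -in ui_cert.pem -noout -text | grep -A1 "Subject Alternative Name"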
Shadow Instance
In a High Availability setup, perform the same operation on the shadow instance.
Caution
UI certificates are used for the secure communication between master and shadow instances. Replacing
the UI certificate breaks the trust relationship, and communication between master and shadow is no
longer possible.
Disconnect the shadow instance before you replace the UI certificate(s). Once the certificate update is
done, connect the shadow instance again. You must enter user and password again to establish the trust
relationship between master and shadow instances.
Switch from default logon to client certificate logon to access the Cloud Connector.
If a high availability setup is active, this feature only works if an additional port has been specified (for
more information, see Install a Failover Instance for High Availability [page 655]).
Otherwise, a Cloud Connector shadow instance cannot connect due to missing trust and fails with the error:
There is no trust with Cloud Connector on https://<host>:<port>. Reset shadow configuration and try to
connect shadow again.
Restriction
Configuring the logon with a certificate is not allowed for SAP-operated Cloud Connectors, for example in
the context of Enterprise Cloud Services or S/4HANA Private Cloud Edition.
Instead of authenticating with user and password as configured by default (see Initial Configuration [page
388]), you can switch to client certificate authentication and logon. To do so, choose Configuration (or Shadow
Configuration) from the main menu, and tab User Interface, section Authentication.
In the next prompt, specify an object identifier (OID) to extract the user name from the subject of any given
client certificate.
You must add at least one CA certificate to the Authentication Allowlist. These certificates are used by Cloud
Connector to validate an incoming client certificate. Typically, these are the root or issuer certificates of eligible
client certificates, or the client certificates themselves if they are self-signed.
1. (Prerequisite): Your browser (or your REST client) must provide a suitable client certificate. In particular,
the user extracted from such a client certificate as per the selected User Mapping (OID) must be a member
of the current user store (local or LDAP).
2. When accessing the Cloud Connector, an mTLS handshake is performed, that is, the Cloud Connector
checks your selected client certificate against the CAs from the authentication allowlist.
Caution
If your client certificate is not self-signed, you must provide (one of) the issuer certificates in the
authentication allowlist, not the client certificate from your browser.
3. If trust can be established, the Cloud Connector extracts the user from the subject as per the specified
user mapping OID. This user will be logged on if it can be found in the current user store. If LDAP is used,
the respective roles are checked and assigned.
If you select, for example, CN as OID, and have a client certificate with a subject containing CN=Administrator,
the user Administrator is used after mutual trust was established.
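To see in advance which value would be extracted from your client certificate, you can display its subject on
OS level, for example with OpenSSL (the file name client.crt is an assumption):
Sample Code
openssl x509 -in client.crt -noout -subject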
We recommend that you configure LDAP-based user management for the Cloud Connector to allow only
named administrator users to log on to the administration UI.
This guarantees traceability of the Cloud Connector configuration changes via the Cloud Connector audit log.
If you use the default and built-in Administrator user, you cannot identify the actual person or persons who
perform configuration changes. Also, you will not be able to use different types of user groups.
Configuration
If you have an LDAP server in your landscape, you can configure the Cloud Connector to authenticate Cloud
Connector users against the LDAP server.
Valid users or user groups must be assigned to one of the following roles:
Note
The role sccmonitoring provides access to the monitoring APIs, and is particularly used by the SAP
Solution Manager infrastructure, see Monitoring APIs [page 673].
Alternatively, you can define custom role names for each of these user groups, see: Use LDAP for User
Administration [page 637].
Once configured, the default Cloud Connector Administrator user becomes inactive and can no longer be
used to log on to the Cloud Connector.
Related Information
You can use LDAP (Lightweight Directory Access Protocol) to manage Cloud Connector users and
authentication.
After installation, the Cloud Connector uses file-based user management by default. Alternatively, the Cloud
Connector also supports LDAP-based user management. If you operate one or more LDAP servers in your
landscape, you can configure the Cloud Connector to use the LDAP user base.
If LDAP authentication is active, you can assign users or user groups to the following default roles:
Authorization is checked by the Cloud Connector based on the user role retrieved by the LDAP server.
1. From the main menu, choose Configuration and go to the User Interface tab.
2. From the User Administration section, choose Switch to LDAP.
3. Add further LDAP user stores by pressing the Add button (plus icon) of the LDAP User Stores table at the
top of the dialog.
Note
The number of LDAP user stores is limited to 3. The order of the stores is significant. During logon, the
given user will be authenticated by the first LDAP server in that list that succeeds in doing so.
We strongly discourage you from configuring several LDAP servers to authenticate the same user, in
particular with different roles, as this may cause undesirable effects, including security issues, if LDAP
servers are temporarily unavailable.
4. Click on a row of the LDAP User Stores table to edit the respective LDAP configuration.
5. Column Actions lets you delete or move the respective LDAP configuration.
6. (Optional) To save an intermediate LDAP configuration, choose Save Draft. This lets you store the changes
in the Cloud Connector without activation.
7. Usually, an LDAP server lists users in an LDAP node and user groups in another node. In this case, you can
use the following template for LDAP configuration. Copy the template into the configuration text area:
roleBase="ou=groups,dc=scc"
roleName="cn"
roleSearch="(uniqueMember={0})"
userBase="ou=users,dc=scc"
userSearch="(uid={0})"
8. Change the <ou> and <dc> fields in userBase and roleBase, according to the configuration on your
LDAP server, or use some other LDAP query.
9. Provide the LDAP server's host and port (port 389 is used by default) in the <Host> field. To use the secure
protocol variant LDAPS based on TLS, select Secure.
10. (Optional) Provide a failover LDAP server's host and port (port 389 is used by default) in the <Alternate
Host> field. To use the secure protocol variant LDAPS based on TLS, select <Secure Alternate Host>.
11. (Optional) Provide a service user and its password in the fields Connection User Name and Connection
Password.
12. (Optional) You can override the roles in the Custom Roles section. If no custom role is provided, the Cloud
Connector checks permissions for the corresponding default role name:
• <Administrator Role> (default: sccadmin)
• <Sub-Administrator Role> (default: sccsubadmin)
• <Support Role> (default: sccsupport)
• <Display Role> (default: sccdisplay)
• <Monitoring Role> (default: sccmonitoring)
13. You can execute an authentication test by choosing the Test LDAP Configuration button. In the pop-up
dialog, you must specify user name and password of a user who is allowed to logon after activating the
configuration. The check verifies if authentication would be successful or not for the respective LDAP
configuration.
Note
We strongly recommend that you perform an authentication test. If authentication fails, login may not
be possible anymore. The test dialog provides a test protocol that can be viewed and downloaded,
which can be helpful for troubleshooting.
Be advised that such a test queries the selected LDAP server only, not the entire list of servers (if there
is more than one). Make sure that at least one of the LDAP user stores succeeds in authenticating a
given user that you want to use for logon.
For more information about how to set up LDAP authentication, see https://tomcat.apache.org/tomcat-9.0-doc/realm-howto.html.
Note
If you are using LDAP together with a high availability setup, you cannot use the configuration option
userPattern. Instead, use a combination of userSearch, userSubtree and userBase.
Caution
An LDAP connection over TLS can cause TLS errors if the LDAP server uses a certificate that is not
signed by a trusted CA. If you cannot use a certificate signed by a trusted CA, you must set up the trust
relationship manually, that is, import the public part of the issuer certificate to the JDK's trust storage.
Usually, the cacerts file inside the java directory (jre/lib/security/cacerts) is used for trust
storage. To import the certificate, you can use keytool:
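Sample Code
keytool -importcert -alias ldap-ca -file ldap-ca.cer -keystore <jvm_dir>/jre/lib/security/cacerts -storepass changeit
The alias, certificate file name, and keystore path are examples only; changeit is the common default
password of the JDK trust store, so adjust it if your installation uses a different one.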
14. After finishing the configuration, choose Activate. Immediately after activating the LDAP configuration, a
restart of the Cloud Connector server is enforced. After restart, log on to the Cloud Connector with the
credentials as per your LDAP configuration.
15. LDAP user stores may be modified, deleted, added, or moved (that is, changed in their order) while
LDAP-based user management is active. Navigate to the LDAP User Stores table and perform the required
actions:
16. To switch back to file-based user management, choose the Switch icon in section User Administration
again.
Note
If you have set up an LDAP configuration incorrectly, you may not be able to logon to the Cloud Connector
again. In this case, revert to the file-based user store without the administration UI. For more information,
see the next section.
If your LDAP settings do not work as expected, you can use the useFileUserStore tool to revert to the file-based
user store:
1. Change to the installation directory of the Cloud Connector and enter the following command:
• Microsoft Windows: useFileUserStore
• Linux, Mac OS: ./useFileUserStore.sh
The tool will inform you about the successful modification of the user store.
2. Restart the Cloud Connector service to activate the file-based user store:
• Microsoft Windows OS: Open the Windows Services console and restart the cloud connector
service.
• Linux OS: Execute
• System V init distributions: service scc_daemon restart
• Systemd distributions: systemctl restart scc_daemon
• Mac OS X: Not applicable because no daemon exists (for Mac OS X, only a portable variant is
available).
Related Information
Using an LDAP server for user management allows seamless integration of the Cloud Connector into the
on-premise environment. It requires some configuration that must match the setup on your LDAP server, and
therefore can't be generated automatically.
The configuration parameters are common for various products and mostly well known.
The Apache Tomcat project, which is used as the underlying technology by the Cloud Connector, provides an
excellent tutorial: tomcat.apache.org/tomcat-8.5-doc/realm-howto.html. It explains the LDAP configuration
parameters and considers various LDAP directory setups, including their specific configuration.
However, some aspects may raise questions. For this reason, we show you how to configure LDAP and verify
LDAP configuration, providing useful background information in this topic.
A basic understanding of LDAP and Tomcat's how-to guide is a prerequisite. As a helper tool, we use the
ldapsearch utility; you can use any LDAP client for this procedure.
In a first step, you must establish a connection to the LDAP server. Like an HTTP connection, the connection to
LDAP can be secure (via TLS) or plain. It points to a host and port. The address looks like this:
Sample Code
ldapsearch -H ldap://<ldaphost>:<port>
The return value is -1 if the address is not reachable. Before you go ahead, you need to know the address of
your LDAP server. As soon as the ldapsearch utility returns a value other than -1, the address of the LDAP
server is correct. More precisely, it indicates only that there is a server listening on this port, which is supposed
to be the LDAP server.
Once the address is known, you can test the connection in the Cloud Connector. Enter the address and add a
dummy configuration, for example, x="x", to outwit the check. Then choose the test icon in the upper right
corner. For a valid address, the LDAP configuration test in the Cloud Connector reports the following:
TLS Issues
LDAP connection over TLS will run into TLS errors if the LDAP server uses an "untrusted" certificate. This could
be a self-signed certificate or a certificate signed by a generally untrusted authority.
If you cannot use a trusted certificate on your LDAP server, you must import the public part of the issuer
certificate to the JDK's trust storage. See the JDK documentation how to do that.
Usually, the trust storage location is cacerts inside the java directory (jre/lib/security/cacerts). You can
use the keytool utility for import.
Sample Code
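# Example only: adjust the alias, the certificate file, and the path to the Cloud Connector's JVM.
keytool -importcert -alias ldap-ca -file ldap-ca.cer -keystore <jvm_dir>/jre/lib/security/cacerts -storepass changeit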
Authentication
If the address of the LDAP server is correct and the Cloud Connector can establish a connection, you can
proceed with the next step: the authentication by the LDAP directory.
The LDAP server may require authentication or not (anonymous connection), before a query can be executed.
Authentication is done by user and password, specified by the connectionName and connectionPassword
properties.
Anonymous access is sufficient in most cases and provides the same level of security. However, you have to
deal with the existing setup on the LDAP server.
Note
The LDAP user is not the same user that is later used to log on to the Cloud Connector. It is
a specific user, which has permissions to query the LDAP directory. It can be stored in an LDAP
directory separated from other users. We recommend that you specify the fully-qualified user name, like
"uid=admin,ou=system".
To verify the values, let's first check the authentication with ldapsearch:
Sample Code
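ldapsearch -H ldap://<ldaphost>:<port> -D "uid=admin,ou=system" -w <password>
The options -D and -w correspond to the connectionName and connectionPassword properties; the bind DN
uid=admin,ou=system is only an example, use the connection user of your LDAP server.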
Note
The only thing to check here is whether authentication is OK. If it is not, LDAP returns INVALID_CREDENTIALS:
Bind failed:
Sample Code
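ldap_bind: Invalid credentials (49)
This is the typical output of the OpenLDAP ldapsearch client; the exact message depends on the LDAP client
you use.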
Choose the test icon in the upper right corner, and enter a user name and password.
If you get an Exception performing authentication message here, check the reply from the LDAP server and
align your parameters until you can connect.
User Selection
Once authentication is checked, set the root node for users. User nodes are nodes containing user details. They
are located somewhere in the LDAP tree. Sometimes they are all listed under one branch (parent node), but
they may also be distributed across several branches. In any case, start with one branch that contains at least
one user node.
Sample Code
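ldapsearch -H ldap://<ldaphost>:<port> -D "uid=admin,ou=system" -w <password> -b "ou=users,dc=scc"
The base ou=users,dc=scc matches the userBase used in this example; adjust it to your directory.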
The output contains the nodes located under the specified base. Each node looks like this:
Sample Code
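dn: uid=thor,ou=users,dc=scc
objectClass: inetOrgPerson
uid: thor
cn: Thor
This entry is illustrative only; the object classes and attributes depend on the schema of your directory.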
userSearch selects the attribute containing the user ID. Together with the userBase, the configuration looks
like this now:
Sample Code
userBase="ou=users,dc=scc"
userSearch="(uid={0})"
Sample Code
Now we take a closer look at the response. The user Thor was found and its password was successfully
validated by the LDAP server, but no specific roles were selected.
Besides uid, you are free to use every other attribute of the user node. For example, for
Active Directory, the preferred attribute is often sAMAccountName, the corresponding configuration is
userSearch="(sAMAccountName={0})".
Note
As mentioned above, you can use every attribute as user ID as long as it is unique. To verify this, check the
query result for the respective attribute with ldapsearch. If it contains more than one node, the test in the Cloud
Connector would report User name [<userid>] has multiple entries, No user found, Authentication for user
<userid> failed. In our test, the CN (common name) attribute is not unique and the following search returns
more than one entry.
Sample Code
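ldapsearch -H ldap://<ldaphost>:<port> -D "uid=admin,ou=system" -w <password> -b "ou=users,dc=scc" "(cn=Thor)"
In this illustrative directory, several entries share the common name Thor, so the search returns more than
one entry.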
If connectionUser is located under userBase and its ID can be selected by the same userSearch, you
can use just the user ID in the connectionUser field instead of a fully-qualified DN.
User Roles
At this point, the configuration lets you establish a connection to the LDAP server and authenticate a user.
In the next step, we configure authorization, that is, the roles assigned to a user.
A role is a group in LDAP terms. The Cloud Connector provides the following roles: sccadmin, sccsubadmin,
sccsupport, sccmonitoring and sccdisplay.
Most likely, your LDAP server does not define such groups. Here, the best practice is to create new groups for
role assignment. Using these special groups for managing Cloud Connector users lets administrators easily
grant permissions to the relevant users. For example, only users with administrator permissions would be
added to the sccadmin group. Like this, you avoid side effects, and you can increase the security and stability
levels of the Cloud Connector.
Reuse of already existing groups is also possible. Set these group names as custom roles in the Cloud
Connector's LDAP configuration. However, keep in mind that every user in the existing group will automatically
get permissions for the Cloud Connector. Even if at present all users in the available group should have
permissions for Cloud Connector, this could cause issues at some point in the future. To avoid this, do not
use custom roles merely to reuse existing groups on your LDAP server. The main purpose of custom role
names is to let you use group names that match your company's naming conventions.
Like users, groups are represented as nodes in an LDAP tree branch. The branch where the groups are
located must be configured as roleBase.
The roleName parameter defines which of the group node's attributes is taken as the role name. Usually,
you wouldn't use the fully-qualified distinguished name as the role name.
Sample Code
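roleBase="ou=groups,dc=scc"
roleName="cn"
roleSearch="(uniqueMember={0})"
These values match the template shown earlier; adjust the ou and dc parts to your LDAP server.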
LDAP provides two ways to define the relationship between a user and its groups:
Remember
The roles specified by an attribute of a user entry are used as is, that is, the configuration parameter
roleName is irrelevant in this case, and the role name is just set to the attribute value.
With the parameter roleSearch set to a non-existing attribute (to demonstrate an empty result), the test
report selects neither internal nor direct roles.
With the final LDAP configuration in place, the test eventually reports a non-empty list of found roles,
containing the role sccadmin.
Like user nodes, group nodes on the LDAP server may also be located under several branches inside the
"base" branch. In this case, add the boolean attribute roleSubtree="true".
If you want to use non-default groups, you can set a group of your choice in the Custom Roles section. You can
replace one or more roles.
Note
Keep in mind that the custom role definition replaces the standard role. So, once a custom role for
Administrator is set, the standard one (sccadmin) is not effective anymore.
There are some more configuration parameters available that are out of scope here; most LDAP configurations
are covered by the parameters discussed above. Nevertheless, the following parameters may be useful in
special cases:
• adCompat="true", if your LDAP server uses MS Active Directory and you encounter strange errors in the
test report.
• forceDnHexEscape="true", if your LDAP server uses MS Active Directory and there are non-standard
characters in the DN.
• connectionTimeout="x", if you want to change the default of 5s.
General Recommendations
• Don't use the userPattern parameter. It invalidates SSL/TLS-based authentication and high availability
setup would fail.
• If the user ID has non-standard characters, escape them with \nn.
• For back-slash, always use \\.
You can operate the Cloud Connector in a high availability mode, in which a master and a shadow instance are
installed.
Context
In a failover setup, if the main instance goes down for some reason, a redundant one can take over
its role. The main instance of the Cloud Connector is called the master and the redundant instance is called
the shadow. The shadow must be installed separately and connected to the master.
During high availability setup, the master pushes the entire configuration to the shadow. Later on, during
normal operation, the master also pushes configuration updates to the shadow to keep both instances
synchronized. The shadow pings the master regularly. If the master is not reachable for a while, the shadow
takes over the master role and establishes the tunnel to SAP BTP.
Caution
Master and shadow communicate outside the cloud (SAP BTP). Therefore, a communication breakdown
between master and shadow may trigger a takeover although the original master is still operational, and in
particular still connected to SAP BTP.
It is therefore imperative to choose a stable and reliable network environment for the master and shadow
instances.
Note
For detailed information about sizing of the master and the shadow instance, see also Sizing
Recommendations [page 369].
Related Information
Install a redundant Cloud Connector instance (shadow instance) that monitors the main instance.
Procedure
Optional Configuration
1. Choose Edit.
2. You can now change the high availability (HA) port to one that differs from the port on which the Cloud
Connector administration UI is accessible. This port is then used for the master/shadow communication only,
that is, to check for failovers or push configuration. This is especially needed in case of a Logon to the
Cloud Connector via Client Certificate [page 634], so that master/shadow communication is still possible
without having a client certificate for logon.
3. Additionally, by providing a concrete shadow host you can ensure that a shadow instance can be
connected from this host only.
Caution
Pressing the Reset button resets all high availability settings (besides the own port configuration) to their
initial state. As a result, high availability is disabled and the shadow host is cleared. Reset only works if no
shadow is connected.
Install the shadow instance in the same network segment as the master instance. Communication between
master and shadow via proxy is not supported. The same distribution package is used for master and shadow
instance.
Note
If you plan to use LDAP for the user authentication on both master and shadow, make sure you configure it
before you establish the connection from shadow to master.
1. On first start-up of a Cloud Connector instance, a UI wizard asks you whether the current instance should
be master or shadow. Choose Shadow and Save:
Note
If you want to attach the shadow instance to a different master, press the Reset button. All your high
availability settings (besides the own port configuration) will be removed, that is, reset to their initial
state. This works only if the shadow is not connected.
3. Upon a successful connection, the master instance pushes the entire configuration plus some information
about itself to the shadow instance. You can see this information in the UI of the shadow instance, but you
can't modify it.
Related Information
Manage the Cloud Connector master and shadow instances in a high availability setup.
There are several administration activities you can perform on the shadow instance. All configuration of tunnel
connections, host mappings, access rules, and so on, must be maintained on the master instance; however,
you can replicate them to the shadow instance for display purposes. You may want to modify the check
interval (time between checks of whether the master is still alive) and the takeover delay (time the shadow
waits to see whether the master would come back online, before taking over the master role itself).
Also, you can configure the timeout for the connection check, by pressing the gear icon in the section
Connection To Master of the shadow connector main page.
You can use the Reset button to drop all the configuration information on the shadow that is related to the
master, but only if the shadow is not connected to the master.
As of version 2.17, the shadow instance lets you monitor Hardware Metrics [page 666] just like the master
instance.
Once connected to the master, the shadow instance receives the configuration from the master instance. Yet,
there are some aspects you must configure on the shadow instance separately:
• User administration is configured separately on master and shadow instances. Generally, it is not required
to have the same configuration on both instances. In most cases, however, it is suitable to configure master
and shadow in the same way.
• The UI certificate is not shared. Each host can have its own certificate, so you must maintain the UI
certificates on master and shadow. You can use the same certificate though.
• SNC configuration: If secure RFC communication or principal propagation for RFC calls is used, you must
configure SNC on each instance separately.
Failover Process
The shadow instance regularly checks whether the master instance is still alive. If a check fails, the shadow
instance first attempts to reestablish the connection to the master instance for the time period specified by the
takeover delay parameter.
• If no connection becomes possible during the takeover delay time period, the shadow tries to take over the
master role. At this point, it is still possible for the master to be alive and the trouble to be caused by a
network issue between the shadow and master.
The shadow instance next attempts to establish a tunnel to the given SAP BTP subaccount. If the
connection attempt fails for all configured subaccounts (for whatever reason), the shadow instance
remains in "shadow status", periodically pinging the master and trying to connect to the cloud, as long as
the master is not reachable.
• Otherwise, if the tunnel to the cloud side can be opened, the shadow instance will take over the master
role. From this moment, the shadow instance displays the UI of a master instance and allows the usual
operations of a master instance, for example, starting/stopping tunnels, modifying the configuration, and so
on.
This is, in particular, also the case if the master is still alive, but the network connection between
shadow and master is corrupted. This leads to a master-master setup, which is automatically detected
on the former master once the network between the two instances recovers. The former master will then
automatically relinquish its master role and assume the shadow role to get back to the desired setup.
When the original master instance restarts, it first checks whether the registered shadow instance has taken
over the master role. If it has, the master registers itself as a shadow instance on the former shadow (now
master) instance. Thus, the two Cloud Connector installations, in fact, have switched their roles.
Note
Only one shadow instance is supported. Any further shadow instances that attempt to connect are
declined by the master instance.
The master considers a shadow as lost, if no check/ping is received from that shadow instance during a time
interval that is equal to three times the check period. Only after this much time has elapsed can another
shadow system register itself.
On the master, you can manually trigger failover by selecting the Switch Roles button. If the shadow is
available, the switch is made as expected. Even if the shadow instance cannot be reached, the role switch
of the master may still be enforced. Select Switch Roles only if you are absolutely certain it is the correct
action to take for your current circumstances.
Even with the active role switch, zero downtime is not guaranteed. Depending on various aspects and
timings, there may be short time slots in which establishing new connections fails. When switching the role,
all active requests on the master will be broken as the sockets will be closed.
Context
By default, the Cloud Connector uses port 8443 for its administration UI. If this port is blocked by another
process, or if you want to change it after the installation, you can use the changeport tool, provided with
Cloud Connector version 2.6.0 and higher.
Note
Procedure
1. Change to the installation directory of the Cloud Connector. To adjust the port, execute one of the
following commands:
• Microsoft Windows OS: changeport <desired_port>
• Linux OS, Mac OS X: ./changeport.sh <desired_port>
2. When you see a message stating that the port has been successfully modified, restart the Cloud Connector
to activate the new port.
The major principle for the connectivity established by the Cloud Connector is that the Cloud Connector
administrator should have full control over the connection to the cloud, that is, deciding if and when the
Cloud Connector is connected to the cloud.
Using the administration UI, the Cloud Connector administrator can connect and disconnect the Cloud
Connector to and from the configured cloud subaccount. Once disconnected, no communication is possible,
either between the cloud subaccount and the Cloud Connector, or to the internal systems. The connection
state can be verified and changed by the Cloud Connector administrator on the Subaccount Dashboard tab of
the UI.
Note
Once the Cloud Connector is freshly installed and connected to a cloud subaccount, none of the systems
in the customer network are yet accessible to the applications of the related cloud subaccount. Accessible
systems and resources must be configured explicitly in the Cloud Connector one by one, see Configure
Access Control [page 456].
A Cloud Connector instance can be connected to multiple subaccounts in the cloud. This is useful especially
if you need multiple subaccounts to structure your development or to stage your cloud landscape into
development, test, and production. In this case, you can use a single Cloud Connector instance for multiple
subaccounts. However, we recommend that you do not use subaccounts running in productive scenarios and
subaccounts used for development or test purposes within the same Cloud Connector. You can add or delete
a cloud account to or from a Cloud Connector using the Add and Delete buttons on the Subaccount Dashboard.
Related Information
For support purposes, you can trace HTTP and RFC network traffic that passes through the Cloud Connector.
Traffic data may include business-critical information or security-sensitive data, such as user names,
passwords, address data, credit card numbers, and so on. Thus, by activating the corresponding trace level,
a Cloud Connector administrator might see data that they are not meant to see. To prevent this,
implement the four-eyes principle for your operating system as described below.
Once the four-eyes principle is applied, activating a trace level that dumps traffic data will require two separate
users:
• An operating system user on the machine where the Cloud Connector is installed;
• An Administrator user of the Cloud Connector user interface.
By assigning these roles to two different people, you can ensure that both persons are needed to activate a
traffic dump.
1. Create a file named writeHexDump in <scc_install_dir>\scc_config. The owner of this file must be
a user other than the operating system user who runs the cloud connector process.
Note
Usually, the OS user who runs the cloud connector process is the one specified in the Log On tab in the
properties of the cloud connector service (in the Windows Services console). We recommend that you do
not use the Local System user, but a dedicated OS user for the cloud connector service.
• Only the file owner should have write permission for the file.
• The OS user who runs the cloud connector process needs read-only permissions for this file.
• Initially, the file should contain a line like allowed=false.
• In the security properties of the file scc_config.ini (same directory), make sure that only the OS
user who runs the cloud connector process has write/modify permissions for this file. The most
efficient way to do this is simply by removing all other users from the list.
2. Once you've created this file, the Cloud Connector refuses any attempt to activate the Tunnel Traffic Trace
or ABAP Cloud SNC Traffic Trace flag.
3. To set CPIC trace level 3 or activate any traffic trace, first the owner of writeHexDump must change the
file content from allowed=false to allowed=true. Thereafter, the Administrator user can activate any
traffic trace from the Cloud Connector administration screens.
1. Go to directory /opt/sap/scc/scc_config and create a file with name writeHexDump. The owner of this
file must be different from the scctunnel user (that is, the operating system user under which the Cloud
Connector processes are running) and not a member of the operating system user group sccgroup.
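A possible sequence of shell commands for this setup (the user name traceowner is just an example for a
user that is not a member of sccgroup; the path corresponds to a default Linux installation):
Sample Code
echo "allowed=false" > /opt/sap/scc/scc_config/writeHexDump      # initial content
chown traceowner /opt/sap/scc/scc_config/writeHexDump            # owner must differ from scctunnel
chmod 644 /opt/sap/scc/scc_config/writeHexDump                   # only the owner may write, scctunnel can read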
1.2.3.8 Monitoring
Learn how to monitor the Cloud Connector from the SAP BTP cockpit and from the Cloud Connector
administration UI.
The simplest way to verify whether a Cloud Connector is running is to try to access its administration UI. If you
can open the UI in a Web browser, the cloud connector process is running.
• On Microsoft Windows operating systems, the cloud connector process is registered as a Windows
service, which is configured to start automatically after a new Cloud Connector installation. If the Cloud
Connector server is rebooted, the cloud connector process should also auto-restart immediately. You
can check the state with the following command:
To verify if a Cloud Connector is connected to a certain cloud subaccount, log on to the Cloud Connector
administration UI and go to the Subaccount Dashboard, where the connection state of the connected
subaccounts is visible, as described in section Connect and Disconnect a Cloud Subaccount [page 661].
The cockpit includes a Connectivity section, where users can check the status of the Cloud Connector(s)
attached to the current subaccount, if any, as well as information about the Cloud Connector ID, version, used
Java runtime, high availability setup (master and shadow instance), and so on (choose Connectivity > Cloud
Connectors).
• Neo environment: Users with a role containing the permission readSCCTunnels, for example, the
predefined role Cloud Connector Admin.
• Cloud Foundry environment, feature set A: Users with a Cloud Foundry org role containing the permission
readSCCTunnels, for example, the role Org Manager.
Note
• Cloud Foundry environment, feature set B: Users with a role containing the permission readSCCTunnels,
for example, the predefined role Cloud Connector Administrator.
Note
For more information on feature sets in the Cloud Foundry environment, see Cloud Management Tools —
Feature Set Overview.
The Cloud Connector offers various views for monitoring its activities and state.
You can check the overall state of the Cloud Connector through its Hardware Metrics [page 666], whereas
subaccount-specific performance and usage data is available via Subaccount-Specific Monitoring [page 667].
To provide monitoring data to external tools, you can use the Monitoring APIs [page 673].
Check the current state of critical system resources in the Cloud Connector.
You can check the current state of critical system resources (disk space, Java heap, physical memory, virtual
memory) using pie charts.
To access the monitor, choose Hardware Metrics Monitor from the main menu.
In addition, the history of CPU and memory usage (physical memory, Java heap) is shown in history graphs
below the pie charts (recorded in intervals of 15 seconds), as well as the history of disk usage, recorded in
intervals of 60 seconds. History data is retained for at most 24 hours.
You can view the usage data for a selected time period in each history graph:
• Double-click inside the main graph area to set the start (or end) point, and drag to the left or to the right to
zoom in.
• The entire timeline is always visible in the smaller bottom area right below the main graph.
• A frame in the bottom area shows the position of the selected section in the overall timeline.
• Choose Undo zooming in... to reset the main graph area to the full range of available data.
Zooming, dragging, and undoing zooming is synchronized across all history graphs. All graphs always show
the situation during the same time period.
History data is written to a file to avoid data loss when stopping the Cloud Connector. Upon restart, the history
data is read from file. Downtime is represented by a gray rectangle indicating that no data is available during
the respective period of time.
Use different monitoring views in the Cloud Connector administration UI to check subaccount-specific
activities and data.
The Cloud Connector provides various views for monitoring the activities associated with an account, such
as HTTP requests and RFC calls (from cloud applications to backends as per access control settings) or
data statistics for service channels. Choose one of the sub-menus Monitor (Cloud to On-Premise) or Monitor
(On-Premise to Cloud).
Caution
The collected monitoring data is not part of the Cloud Connector's backup. After restoring a Cloud
Connector instance, all monitoring collections are empty.
Content
Performance Overview
All requests that travel through the Cloud Connector to a backend system, as specified through access control,
take a certain amount of time. You can check the duration of requests in a bar chart. The requests are not
shown individually, but are assigned to buckets, each of which represents a time range.
For example, the first bucket contains all requests that took 10ms or less, the second one the requests that
took longer than 10ms, but not longer than 20ms. The last bucket contains all requests that took longer than
5000ms.
In case of latency gaps, you may try to adjust the influencing parameters: number of connections, tunnel
worker threads, and protocol processor worker threads. For more information, see Configuration Setup [page
373].
The collection of duration statistics starts as soon as the Cloud Connector is operational. You can delete all of
these statistical records by selecting the button Delete All. After that, the collection of duration statistics starts
over.
Note
Delete All deletes not only the list of most recent requests, but it also clears the top time consumers.
The number of requests that are shown is limited to 50. You can either view all requests or only the ones
destined for a certain virtual host, which you can select. You can select a row to see more detail.
A horizontal stacked bar chart breaks down the duration of the request into several parts: external (backend),
open connection, internal (Cloud Connector), SSO handling, and latency effects (between SAP BTP and Cloud
Connector). The numbers in each part represent milliseconds.
Note
In the above example, the selected request took 34ms, to which the Cloud Connector contributed 1ms.
Opening a connection took 18ms. Backend processing consumed 7ms. Latency effects accounted for the
remaining 8ms, while there was no SSO handling necessary and hence it took no time at all.
The term "request" only refers to RFC and HTTP requests. TCP or LDAP traffic does not contribute to Most
Recent Requests or Top Time Consumers.
To further restrict the selection of the listed 50 most recent requests, you can edit the resource filter settings
for each virtual host:
In the Edit dialog, select the virtual host for which you want to specify the resource filter and choose one or
more of the listed accessible resources. This list includes all resources that have been exposed during access
control configuration (see also: Configure Access Control [page 456]). If the access policy for an accessible
resource is set to Path and all sub-paths, you can further narrow the selection by adding one or more
sub-paths to the resource as a suffix.
Note
If you specify sub-paths for a resource, the request URL must match exactly one of these entries to be
recorded. Without specified sub-paths (and the value Path and all sub-paths set for a resource), all
sub-paths of a specified resource are recorded.
This option is similar to Most Recent Requests; however, requests are not shown in order of appearance, but
rather sorted by their duration (in descending order). Furthermore, you can delete top time consumers, which
has no effect on most recent requests or the performance overview.
Usage Statistics
To view the statistical data regarding the traffic handled by each virtual host, you can select a virtual host
from the table. The detail view shows the traffic handled by each resource, as well as a 24 hour overview of
the throughput as a bar chart that aggregates the throughput (bytes received and bytes sent by a virtual host,
respectively) on an hourly basis.
Note
For communication via HTTP and RFC, also the number of calls or requests is recorded in the same way.
The data that is collected includes the number of bytes received from cloud applications and the number of
bytes sent back to cloud applications. The time of the most recent access is shown for the virtual hosts, and
in the detail view also for the resources. If no access has taken place yet, this is indicated accordingly.
The tables listing usage statistics of virtual hosts and their resources let you delete unused virtual hosts or
unused resources. Use action Delete to delete such a virtual host or resource.
Caution
In Cloud Connector versions before 2.14, usage statistics are collected during runtime only and are not
stored when stopping the Cloud Connector. That is, these statistics are lost when the Cloud Connector is
stopped or restarted. Use care when taking the decision to delete a resource or virtual host based on its
usage statistics.
As of version 2.14, usage statistics are periodically stored to disk. The collected statistics are still available
after a restart of the Cloud Connector.
A backup of the Cloud Connector settings does not include collected usage data. After restoring a Cloud
Connector instance, all monitoring collections are empty.
Using the Reset button, you can clean up all collected data.
Caution
For both virtual hosts and resources, you can use a classic Filter button to reduce the virtual hosts or resources
to those that have never been used (since the Cloud Connector started). For the virtual hosts, a second filter
type is available that selects only those virtual hosts that have been used, but include resources never used.
This feature facilitates locating obsolete resources of otherwise active virtual hosts.
Backend Connections
This option shows a tabular overview of all active and idle connections, aggregated for each virtual host. By
selecting a row (each of which represents a virtual host) you can view the details of all active connections
as well as a graphical summary of all idle connections. The graphical summary is an accumulative view of
connections based on the time the connections have been idle.
The maximum idle time appears on the rightmost side of the horizontal axis. For any point t on that axis
(representing a time value ranging between 0ms and the maximal idle time), the ordinate is the number of
connections that have been idle for no longer than t. You can click inside the graph area to view the respective
abscissa t and ordinate.
Connections
This section shows a tabular overview of all currently opened logical connections, aggregated for each local
port. You can identify if and how many connections are currently opened through this specific service channel.
Usage Statistics
Statistical data regarding the traffic handled by each port is shown in tabular form. From the table, you can
select a service channel. The respective detail views show a 24 hour overview as a bar chart that aggregates
the throughput (bytes received and bytes sent via a port, respectively) on an hourly basis.
The collected data comprises the number of bytes received from on-premise applications and the number of
bytes sent to cloud applications.
The table listing usage statistics of ports lets you delete unused service channels. Use action Delete to delete a
service channel.
Caution
Usage statistics are collected at runtime only. They are not stored when stopping the Cloud Connector.
These statistics are lost when the Cloud Connector is stopped or restarted. Be mindful of that fact, and use
caution when taking the decision to delete a service channel based on its usage statistics.
Using the Reset button you can clean up all collected data. This might be helpful when monitoring a specific use
case.
The table of service channels provides a filter button to reduce the service channels to those that have never
been used (since the Cloud Connector started).
Use the Cloud Connector monitoring APIs to include monitoring information in your own monitoring tool.
You might want to integrate some monitoring information in the monitoring tool you use.
For this purpose, the Cloud Connector includes a collection of APIs that allow you to read various types of
monitoring data.
Note
This API set is designed particularly for monitoring the Cloud Connector via the SAP Solution Manager, see
Configure Solution Management Integration [page 622].
Before you start using these APIs, please also read the general introduction to REST APIs [page 734] provided
by Cloud Connector.
Prerequisites
You must use Basic Authentication or form field authentication to read the monitoring data via API.
Note
The Health Check API does not require a specified user. Separate users are available through LDAP only.
Available APIs
Using the health check API, it is possible to recognize that the Cloud Connector is up and running. The purpose
of this health check is only to verify that the Cloud Connector is not down. It does not check any internal state
or tunnel connection states. Thus, it is a quick check that you can execute frequently:
URI /exposed?action=ping
Method GET
Request
Response
Errors
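Since this check requires no credentials, it can be scripted easily, for example with curl (host and port are placeholders; a successful HTTP response simply indicates that the Cloud Connector process is reachable):
curl -k -i "https://<scc-host>:8443/exposed?action=ping"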
Note
Using this API, you can read the list of all subaccounts connected to the Cloud Connector and view detail
information for each subaccount:
URI /api/monitoring/subaccounts
Method GET
Request
Response {subaccounts,version}
Errors
Response Properties:
Example:
Note
The list of connections lets you view all backend systems connected to the Cloud Connector and get detail
information for each connection:
URI /api/monitoring/connections/backends
Method GET
Request
Errors
Response Properties:
Sample Code
Example:
Note
URI /api/monitoring/connections/serviceChannels
Method GET
Errors
Response Properties:
Sample Code
Note
Using this API, you can read the data provided by the Cloud Connector performance monitor:
URI /api/monitoring/performance/backends
Method GET
Request
Errors
Response Properties:
Sample Code
Example:
Note
Using this API, you can read the data of top time consumers provided by the Cloud Connector performance
monitor:
Method GET
Request
Errors
Response Properties:
Sample Code
Example:
Note
This API provides a snapshot of the current memory status of the machine where the Cloud Connector is
running:
URI /api/monitoring/memory
Method GET
Request
Errors
Response Properties:
• physicalKB: usage of the physical memory, split into four categories (all sizes in KB):
• total: total size of the physical memory
• CloudConnector: size of the physical memory used by the Cloud Connector
• others: size of the physical memory used by all other processes
• free: size of the free physical memory
• virtualKB : usage of the virtual memory, split into four categories (all sizes in KB)
• total: total size of the virtual memory
• CloudConnector: size of the virtual memory used by the Cloud Connector
• others: size of the virtual memory used by all other processes
• free: size of the free virtual memory
• cloudConnectorHeapKB : usage of the Java heap, split into three categories (all sizes in KB):
• total: total size of the Java heap
• used: size of the Java heap used by the Cloud Connector
• free: size of the free Java heap
Sample Code
Example:
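For illustration, a response reflecting the properties listed above might be shaped as follows; the exact nesting and formatting may differ, and all values are hypothetical:
{
  "physicalKB": { "total": 16303412, "CloudConnector": 1048576, "others": 9216000, "free": 6038836 },
  "virtualKB": { "total": 33554432, "CloudConnector": 2097152, "others": 12582912, "free": 18874368 },
  "cloudConnectorHeapKB": { "total": 1048576, "used": 524288, "free": 524288 }
}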
URI /api/monitoring/disk
Method GET
Request
Errors
Response Properties:
Method GET
Request
Errors
Response Properties:
Note
Using this API, you can get an overview of the certificates currently employed by the Cloud Connector:
URI /api/monitoring/certificates
Method GET
Request
Errors
Response Properties:
• type: type of the certificate which can be one of the following strings:
• UI (for the UI certificate)
• System (for the system certificate)
• CA (for the certificate used in connection with Principal Propagation/Certification Authority)
• subaccount (for subaccount certificates)
• validTo: end date of the respective certificate's validity (as a long integer, that is, a UTC timestamp)
• subjectDN: subject DN of the respective certificate (included only for non-subaccount certificates)
• issuerDN: issuer DN of the respective certificate (included only for non-subaccount certificates)
• serialNumber: serial number of the respective certificate as hex-encoded string (included only for non-
subaccount certificates)
• subaccountName: name of the subaccount (only for subaccount certificates)
• subaccountRegion: region or landscape host of the subaccount (only for subaccount certificates)
Sample Code
Example:
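For illustration, entries for a non-subaccount certificate and for a subaccount certificate might look as follows; the surrounding structure, the timestamp resolution, and all values are hypothetical:
[
  { "type": "UI", "validTo": 1767225600000, "subjectDN": "CN=<scc-host>", "issuerDN": "CN=Example CA", "serialNumber": "0a1b2c3d" },
  { "type": "subaccount", "validTo": 1767225600000, "subaccountName": "<subaccount>", "subaccountRegion": "<landscapehost>" }
]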
Note
Using this API, you can obtain an overview of the certificates currently employed by the Cloud Connector:
URI /api/monitoring/certificates/{selection}
Method GET
Request
Errors
Request:
• selection parameter
• expired: an array holding the list of all expired certificates
• expiring: an array holding the list of all certificates that will expire in less than N days, where N is the
number of days specified in the alerting setup regarding certificates that are close to their expiration
date
• ok: an array holding the list of all certificates that continue to be valid for N days or more, where
N is the number of days specified in the alerting setup regarding certificates that are close to their
expiration date
Response Properties:
• type: type of the certificate which can be one of the following strings:
• UI (for the UI certificate)
• System (for the system certificate)
• CA (for the certificate used in connection with Principal Propagation/Certification Authority)
• subaccount (for subaccount certificates)
• validTo: end date of the respective certificate's validity (as a long integer, that is, a UTC timestamp)
• subjectDN: subject DN of the respective certificate (included only for non-subaccount certificates)
• issuerDN: issuer DN of the respective certificate (included only for non-subaccount certificates)
• serialNumber: serial number of the respective certificate as hex encoded string (included only for non-
subaccount certificates)
• subaccountName: name of the subaccount (only for subaccount certificates)
• subaccountRegion: region or landscape host of the subaccount (only for subaccount certificates)
Sample Code
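For illustration, the selection is supplied as part of the URI path, for example (host, port, and credentials are placeholders):
# list only the certificates that are close to their expiration date
curl -k -u <Administrator>:<password> https://<scc-host>:8443/api/monitoring/certificates/expiring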
This API provides usage statistics regarding the systems and resources available in the Cloud Connector:
URI /api/monitoring/usage
Method GET
Request
Errors
Response Properties:
Note
This property is only available if there has been at least one call or request.
• resources: usage statistics per resource, given as an array (i.e. the distribution of bytes received,
sent, as well as number of calls/requests, across the resources of the respective virtual host)
• resourceName: name of the resource (that is, a URL path or the name of a remote function)
• enabled: Boolean flag that indicates whether the resource is currently active (true) or
suspended (false)
• bytesReceived: total number of bytes that were received through a call or request and were
handled by this resource
• bytesSent: total number of bytes sent back as a response in the context of this resource
Note
This property is only available if at least one call or request was handled by this resource.
Sample Code
Example:
With the master role check API, you can determine whether a Cloud Connector instance currently has the
master role. The purpose of this check is only to recognize whether the Cloud Connector instance is currently
the master instance, without having to provide credentials. It is a quick check that you can execute frequently.
URI /exposed?action=hasMasterRole
Method GET
Request
Response
{true, false}
Errors
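Because no credentials are required, this check is easy to use in scripts, for example in a watchdog for a high availability setup (host and port are placeholders):
# prints true on the current master instance and false on the shadow instance
curl -k "https://<scc-host>:8443/exposed?action=hasMasterRole"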
1.2.3.9 Alerting
Configure the Cloud Connector to send e-mail messages when situations occur that may prevent it from
operating correctly.
To configure alert e-mails, choose Alerting from the top-left navigation menu.
You must specify the receivers of the alert e-mails (E-mail Configuration) as well as the Cloud Connector
resources and components that you want to monitor (Observation Configuration). The corresponding Alert
Messages are also shown in the Cloud Connector administration UI.
E-mail Configuration
1. Select E-mail Configuration to specify the list of e-mail addresses to which alerts should be sent (Send To).
Note
The addresses you enter here can use either of the following formats: [email protected] or John
Doe <[email protected]>.
Note
Connections to an SMTP server over TLS can cause TLS errors if the SMTP server uses an "untrusted"
certificate. If you cannot use a trusted certificate, you must import the public part of the issuer certificate
into the JDK's trust store. Usually, the trust store is the file cacerts in the Java directory
(jre/lib/security/cacerts). For import, you can use the keytool utility:
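A minimal sketch of such an import (alias, certificate file, JDK path, and the store password are placeholders; changeit is the usual default password of the cacerts trust store):
keytool -importcert -alias <smtp-issuer> -file <issuer-certificate>.cer \
  -keystore <jdk-directory>/jre/lib/security/cacerts -storepass changeit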
Observation Configuration
Once you've entered the e-mail addresses that should receive alerts, the next step is to identify the resources
and components of the Cloud Connector that you want to observe: e-mail messages are sent when any of the
chosen components or resources malfunction or are in a critical state.
The Cloud Connector does not dispatch the same alert repeatedly. As soon as an issue has been resolved,
an informational alert is generated, sent, and listed in Alert Messages (see section below).
Note
These alerts are only triggered in case of an error or exception, but not upon intentional disconnect
action.
• An excessively high CPU load over an extended period of time adversely affects performance and may
be an indicator of serious issues that jeopardize the operability of the Cloud Connector. The CPU load
is monitored and an alert is triggered whenever the CPU load exceeds and continues to exceed a given
threshold percentage (the default is 90%) for more than a given period of time (the default is 60
seconds).
• Although the Cloud Connector does not require or consume large amounts of Disk space, running out
of it is a situation you should avoid. We recommend that you configure an alert to be sent if
the available disk space falls below a critical value (the default is 10 megabytes).
• The Cloud Connector configuration contains various Certificates. Whenever one of those expires,
scenarios might no longer work as expected so it's important to get notified about the expiration (the
default is 30 days).
3. (Optional) Change the Health Check Interval (the default is 30 seconds).
4. Select Save to change the current configuration.
The Cloud Connector also shows alert messages on screen, under Alerting > Alert Messages.
You can remove alerts using Delete or Delete All. If you delete active (unresolved) alerts, they reappear in the
list after the next health check interval.
Audit log data can alert Cloud Connector administrators to unusual or suspicious network and system
behavior.
Additionally, the audit log data can provide auditors with information required to validate security policy
enforcement and proper segregation of duties. IT staff can use the audit log data for root-cause analysis
following a security incident.
The Cloud Connector includes an auditor tool for viewing and managing audit log information about access
between the cloud and the Cloud Connector, as well as for tracking of configuration changes done in the Cloud
Connector. The written audit log files are digitally signed by the Cloud Connector so that their integrity can be
checked, see Manage Audit Logs [page 696].
Note
We recommend that you permanently switch on Cloud Connector audit logging in productive scenarios.
• Under normal circumstances, set the logging level to Security (the default configuration value).
• If legal requirements or company policies dictate it, set the logging level to All. This lets you use the
log files to, for example, detect attacks of a malicious cloud application that tries to access on-premise
services without permission, or in a forensic analysis of a security incident.
We also recommend that you regularly copy the audit log files of the Cloud Connector to an external persistent
storage according to your local regulations. The audit log files can be found in the Cloud Connector root
directory /log/audit/<subaccount-name>/audit-log_<timestamp>.csv.
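How you archive these files depends on your environment. For illustration only, a scheduled job on Linux could mirror the audit log directory to an external location (both paths are placeholders):
# copy new and changed audit log files to an external, persistent location
rsync -a <scc_installation>/log/audit/ /mnt/external/scc-audit-archive/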
Configure audit log settings and verify the integrity of audit logs.
Choose Audit from your subaccount menu and go to Settings to specify the type of audit events the Cloud
Connector should log at runtime. You can currently select between the following Audit Levels (for both the
<subaccount> and the <cross-subaccount> scope):
Caution
To prevent a single person from being able to both change the audit log level, and delete audit logs, we
recommend that the operating system administrator and the SAP BTP administrator are different persons.
We also suggest that you turn on the audit log at the operating system level for file operations.
Tip
We recommend that you don't log all events unless you are required to do so by legal requirements or
company policies. Generally, logging security events only is sufficient.
To enable automatic cleanup of audit log files, choose a period (14 to 365 days) from the list in the field
<Automatic Cleanup>.
Audit entries for configuration changes are written for the following categories:
In the Audit Viewer section, you can first define filter criteria, then display or download the selected audit
entries.
• In the Audit Type field, you can select the types of auditing events you are interested in.
• In the Pattern field, you can specify a certain string that the detail text of each selected audit entry must
contain. The detail text may contain, for example, information about the user name, requested resource/
URL, or the virtual <host>:<port>. Basic wildcards (glob patterns) are supported, that is, asterisk for any
number of characters (including none) and question mark for a single character. Use this feature to do the
following:
• Filter the audit log for all requests that a particular HTTP user has made during a certain time frame.
• Identify all users who attempted to request a particular URL.
• Identify all requests to a particular back-end system.
• Determine whether a user has changed a certain Cloud Connector configuration. For example, a
search for string BackendMapping returns all add-, delete- or modify- operations on the Mapping
Virtual To Internal System page.
• The Time Range settings specify the time frame to which you want to limit the eligible audit entries.
These filter criteria are combined with a logical AND so that all audit entries that match these criteria are shown.
If you have modified the criteria, choose the Search button again to display the updated selection of audit
events that match the new criteria.
Use the Download button to download the selected audit entries as a compressed (GZIP) CSV file that can be
imported, for example, into Excel. The archive also contains a second text file, a manifest, that provides the
selection parameters as well as the date and time when the selection was extracted.
Example
In the following example, the Audit Viewer displays audit entries of type Any, at level Security, for the time frame
between December 18, 2020, 00:00:00 and December 19, 2020, 00:00:00:
To check the integrity of all or a part of the audit logs, go to <scc_installation>/auditor. This directory
contains an executable go script file (respectively, go.cmd on Microsoft Windows and go.sh on other
operating systems).
If you start the go file without specifying parameters from <scc_installation>/auditor, all available audit
logs for the current Cloud Connector installation are verified.
The auditor tool is a Java application, and therefore requires a Java runtime, specified in JAVA_HOME, to
execute:
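For example, on Linux (a minimal sketch; the JVM path is only an illustration, use the installation directory of your own JDK or SAP JVM):
cd <scc_installation>/auditor
JAVA_HOME=/opt/sap/sapjvm_8 ./go.sh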
Alternatively, to execute Java, you can include the Java bin directory in the PATH variable.
You can check audit logs also in the UI, using the rightmost button of the Audits section.
Note
The selected date range determines the audit logs that are going to be checked (entire days only, ignoring
the time of day).
As of Cloud Connector 2.14, you can move audit logs to a different location. The standard location remains
log/audit.
Make sure there is enough space left on the device for the desired location and the Cloud Connector OS
user has permission to write files to that location.
Caution
If you choose a network location while access to the file system is slow, overall processing performance of
the Cloud Connector may decrease significantly.
1.2.3.11 Troubleshooting
To troubleshoot connection problems, monitor the state of your open tunnel connections in the Cloud
Connector, and view different types of logs and traces.
Note
For information about a specific problem or an error you have encountered, see Connectivity Support
[page 876].
Monitoring
To view a list of all currently connected applications, choose your Subaccount from the left menu and go to
section Cloud Connections:
• Application name: The name of the application, as also shown in the cockpit, for your subaccount
• Connections: The number of currently existing connections to the application
• Connected Since: The earliest start time of a connection to this application
• Peer Labels: The name of the application processes, as also shown for this application in the cockpit, for
your subaccount
The Log and Trace Files page includes some files for troubleshooting that are intended primarily for SAP
Support. These files include information about both internal Cloud Connector operations and details about the
communication between the local and the remote (SAP BTP) tunnel endpoint.
If you encounter problems that seem to be caused by some trouble in the communication between your cloud
application and the on-premise system, choose Log and Trace Files from your Subaccount menu, go to section
Settings, and activate the respective traces by selecting the Edit button:
• Cloud Connector Trace Level adjusts the levels for Java loggers directly related to Cloud Connector
functionality.
• Other Components Trace Level adjusts the log level for all other Java loggers available at the runtime.
Change this level only when requested to do so by SAP support. When set to a level higher than
Information, it generates a large number of trace entries.
• CPIC Trace Level allows you to set the level between 0 and 3 and provides traces for the CPIC-based RFC
communication with ABAP systems.
• When the Tunnel Traffic Trace is activated for a subaccount, all HTTP and RFC traffic crossing the tunnel
for that subaccount (going through this Cloud Connector) is traced in files with
names tunnel_traffic_<account id>_on_<landscapehost>.trc. This is helpful if you need to
understand which documents have been exchanged between the involved systems.
• ABAP Cloud SNC Traffic Trace: When the ABAP Cloud SNC traffic trace is activated for an account, all
RFC SNC-based traffic crossing a service channel for that account (going through this Cloud Connector)
is traced in files with names snc_traffic_<account id>_on_<landscapehost>.trc. This is helpful if you need to
understand issues with SNC termination in the Cloud Connector.
Caution
Use any traffic and CPIC tracing at level 3 carefully, and only when requested to do so for support
reasons. These traces may write sensitive information (such as payload data of HTTP/RFC requests and
responses) to the trace files, and thus present a potential security risk. The Cloud Connector supports the
implementation of a "four-eyes principle" for activating the trace levels that dump the network traffic into a
trace file. This principle requires two users to activate a trace level that records traffic data.
For more information, see Secure the Activation of Traffic Traces [page 662].
As of Cloud Connector 2.14 you can move trace files to a different location.
Note
Make sure there is enough space left on the device for the desired location and the Cloud Connector OS
user has permission to write files to that location.
Caution
If you choose a network location while access to the file system is slow, overall processing performance of
the Cloud Connector may decrease significantly.
View all existing trace files and delete the ones that are no longer needed.
To prevent your browser from being overloaded when multiple large files are loaded simultaneously, the Cloud
Connector loads only one page into memory. Use the page buttons to move through the pages.
Use the Download/Download All icons to create a ZIP archive containing one trace file or all trace files.
Download it to your local file system for convenient analysis.
Note
If you want to download more than one file, but not all, select the respective rows of the table and choose
Download All.
When running the Cloud Connector with SAP JVM or as of version 2.14 also with other JVMs, you can trigger
the creation of a thread dump by choosing the Thread Dump button, which will be written to the JVM trace file
log/vm_$PID_trace.log for SAP JVM and log/vm_$PID_threads.log for other JVMs. You may be asked by SAP
support to create one, if considered helpful during incident analysis.
From the UI, you can't delete trace files that are currently in use. You can delete them from the Linux OS
command line; however, we recommend that you do not use this option to avoid inconsistencies in the
internal trace management of the Cloud Connector.
• Guided Answers: A new tab or window opens, showing the Cloud Connector section in Guided Answers.
It helps you identify many issues that are classified through hierarchical topics. Once you have found a
matching issue, a solution is provided either directly or through references to the SAP Help Portal,
Knowledge Base Articles (KBAs), and SAP notes.
• Support Log Assistant: Opens the support log assistant. There, you can upload Cloud Connector log files
and have them analyzed. After triggering the scan, the tool lists all issues for which a solution can be
identified.
Note
The support log assistant analyzes the complete log. Therefore, it may also find older issues that are
no longer relevant.
Once a problem has been identified, turn off the trace again by adjusting the trace and log settings
accordingly, so that the files are not flooded with unnecessary entries.
Use the Refresh button to update the displayed information, for example, when more trace files might have
been written since you last updated the display.
If you contact SAP support for help, please always attach the appropriate log files and provide the timestamp
or period, when the reported issue was observed. Depending on the situation, different logs may help to find
the root cause.
Some typical settings to get the required data are listed below:
• <Cloud Connector Trace> provides details related to connections to SAP BTP and to backend
systems as well as master-shadow communication in case of a high availability setup. However, it does
not contain any payload data. This kind of trace is written into scc_core.trc, which is the most relevant
log for the Cloud Connector.
• <Other Components Trace> provides details related to the tomcat runtime, in which the Cloud
Connector is running. The traces are written into scc_core.trc as well, but they are needed only in very
special support situations. If you don't need these traces, leave the level on Information or even lower.
• Tunnel traffic is written into the traffic trace file for HTTP or RFC requests if the tunnel traffic trace is
activated, or into the CPI-C trace file for RFC requests, if the CPI-C trace is set to level 3.
Related Information
Getting Support
A hybrid scenario is one in which applications running on SAP BTP require access to on-premise systems.
Define and document your scenario to get an overview of the required process steps.
Tasks
To gain an overview of the cloud and on-premise landscape that is relevant for your hybrid scenario, we
recommend that you diagrammatically document your cloud subaccounts, their connected Cloud Connectors
and any on-premise back-end systems. Include the subaccount names, the purpose of the subaccounts
(dev, test, prod), information about the Cloud Connector machines (host, domains), the URLs of the Cloud
Connectors in the landscape overview document, and any other details you might find useful to include.
Document the users who have administrator access to the cloud subaccounts, to the Cloud Connector
operating system, and to the Cloud Connector administration UI.
Such administrator role documentation could look like the following sample table:
Cloud Subaccount X
(CA) Dev1
CA Dev2 X
CA Test X X
CA Prod X
Create and document separate email distribution lists for both the cloud subaccount administrators and the
Cloud Connector administrators.
Define and document mandatory project and development guidelines for your SAP BTP projects. An example
of such a guideline could be similar to the following.
Define and document how to set a cloud application live and how to configure needed connectivity for such an
application.
For example, the following processes could be considered relevant and should be defined and documented in
more detail:
1. Transferring application to production: Steps for transferring an application to the productive status on the
SAP BTP.
2. Application connectivity: The steps for adding a connectivity destination to a deployed application for
connections to other resources in the test or productive landscape.
3. Cloud Connector Connectivity: Steps for adding an on-premise resource to the Cloud Connector in the test
or productive landscapes to make it available for the connected cloud subaccounts.
4. On-premise system connectivity: The steps for setting up a trusted relationship between an on-premise
system and the Cloud Connector, and to configure user authentication and authorization in the on-premise
system in the test or productive landscapes.
5. Application authorization: The steps for requesting and assigning an authorization that is available inside
the SAP BTP application to a user in the test or productive landscapes.
6. Administrator permissions: Steps for requesting and assigning the administrator permissions in a cloud
subaccount to a user in the test or productive landscape.
• Configuration Backup [page 626]: Back up and restore your Cloud Connector configuration via the
administration UI.
• Backup [page 533]: Manage the Cloud Connector's configuration backup via REST API.
• Backup And Restore Configuration [page 608]: Example: back up and restore the Cloud Connector
configuration via REST API.
The Secure Store is a container for confidential information that is a part of the Cloud Connector configuration.
The Secure Store resides in file SSFS_SCC.DAT, located in directory scc_config.
Every change affecting the Secure Store (such as changing the proxy password) is incremental, that is, it is
appended to the Secure Store. As a consequence, the Secure Store will grow over time.
To compact it again, shrink the Secure Store: choose Configuration > Shrink Secure Store (top right corner of
the screen) from the Cloud Connector main menu.
1.2.4 Security
Features
Security is a crucial concern for any cloud-based solution. It has a major impact on the business decision of
enterprises whether to make use of such solutions. SAP BTP is a platform-as-a-service offering designed to
run business-critical applications and processes for enterprises, with security considered on all levels of the
on-demand platform:
The Cloud Connector enables integration of cloud applications with services and systems running in customer
networks, and supports database connections from the customer network to SAP HANA databases running on
SAP BTP. As these are security-sensitive topics, this section gives an overview on how the Cloud Connector
helps maintain security standards for the mentioned scenarios.
Related Information
On application level, the main tasks to ensure secure Cloud Connector operations are to provide appropriate
frontend security (for example, validation of entries) and a secure application development.
Basically, you should follow the rules given in the product security standard, for example, protection against
cross-site scripting (XSS) and cross-site request forgery (XSRF).
The scope and design of security measures on application level strongly depend on the specific needs of your
application.
You can use SAP BTP Connectivity to securely integrate cloud applications with systems running in isolated
customer networks.
After installing the Cloud Connector as integration agent in your on-premise network, you can use it to
establish a persistent TLS tunnel to SAP BTP subaccounts.
To establish this tunnel, the Cloud Connector administrator must authenticate against the related SAP BTP
subaccount, of which they must be a member. Once established, the tunnel can be used by applications of the
connected subaccount to remotely call systems in your network.
Architecture
The figure below shows a system landscape in which the Cloud Connector is used for secure connectivity
between SAP BTP applications and on-premise systems.
• A single Cloud Connector instance can connect to multiple SAP BTP subaccounts, with each connection
requiring separate authentication and defining its own set of configuration.
• You can connect an arbitrary number of SAP and non-SAP systems to a single Cloud Connector instance.
• The on-premise system does not need to be touched when used with the Cloud Connector, unless
you configure trust between the Cloud Connector and your on-premise system. A trust configuration is
required, for example, for principal propagation (single sign-on), see Configuring Principal Propagation
[page 419].
• You can operate the Cloud Connector in a high availability mode. To achieve this, you must install a second
(redundant) Cloud Connector (shadow instance), which takes over from the master instance in case of a
downtime.
• The Cloud Connector also supports the communication direction from the on-premise network to the SAP
BTP subaccount, using a database tunnel that lets you connect common ODBC/JDBC database tools to
SAP HANA as well as other available databases in SAP BTP.
A company network is usually divided into multiple network zones according to the security level of
the contained systems. The DMZ network zone contains and exposes the external-facing services of an
organization to an untrusted network, typically the Internet. Besides this, there can be one or multiple other
network zones which contain the components and services provided in the company’s intranet.
You can set up the Cloud Connector either in the DMZ or in an inner network zone. Technical prerequisites for
the Cloud Connector to work properly are:
• The Cloud Connector must have access to the SAP BTP landscape host, either directly or via HTTPS proxy
(see also: Prerequisites [page 351]).
• The Cloud Connector must have direct access to the internal systems it shall provide access to. That is,
there must be transparent connectivity between the Cloud Connector and the internal system.
Related Information
For inbound connections into the on-premise network, the Cloud Connector acts as a reverse invoke proxy
between SAP BTP and the internal systems.
Exposing Resources
Once installed, none of the internal systems are accessible by default through the Cloud Connector: you must
explicitly configure in the Cloud Connector each system, and each service and resource on every system, that
is to be exposed to SAP BTP.
You can also specify a virtual host name and port for a configured on-premise system, which is then used in the
cloud. This prevents information about physical hosts from being exposed to the cloud.
TLS Tunnel
The TLS (Transport Layer Security) tunnel is established from the Cloud Connector to SAP BTP via a so-called
reverse invoke approach. This lets an administrator have full control of the tunnel, since it can’t be established
from the cloud or from somewhere else outside the company network. The Cloud Connector administrator is
the one who decides when the tunnel is established or closed.
The tunnel itself uses TLS with strong encryption of the communication and mutual authentication of both
communication sides, the client side (Cloud Connector) and the server side (SAP BTP).
The X.509 certificates used to authenticate the Cloud Connector and the SAP BTP subaccount are issued and
controlled by SAP BTP. They are kept in secure storages in the Cloud Connector and in the cloud.
Because the tunnel is encrypted and mutually authenticated, the confidentiality and authenticity of the
communication between the SAP BTP applications and the Cloud Connector are guaranteed.
As an additional level of control, the Cloud Connector optionally allows restricting the list of SAP BTP
applications which are able to use the tunnel. This is useful in situations where multiple applications are
deployed in a single SAP BTP subaccount while only particular applications require connectivity to on-premise
systems.
SAP BTP guarantees strict isolation on subaccount level provided by its infrastructure and platform layer. An
application of one subaccount is not able to access and use resources of another subaccount.
Supported Protocols
The Cloud Connector supports inbound connectivity for HTTP and RFC; other protocols are not supported.
Principal Propagation
The Cloud Connector also supports principal propagation of the cloud user identity to connected on-premise
systems (single sign-on). For this, the system certificate (in case of HTTPS) or the SNC PSE (in case of RFC)
must be configured, and trust with the respective on-premise system must be established. Trust
configuration, in particular for principal propagation, is the only reason to configure and touch an on-premise
system when using it with the Cloud Connector.
Related Information
The Cloud Connector supports the communication direction from the on-premise network to SAP BTP, using a
database tunnel.
The database tunnel is used to connect local database tools via JDBC or ODBC to the SAP HANA DB or other
databases on SAP BTP, for example, SAP Business Objects tools like Lumira, BOE, or Data Services.
• The database tunnel only allows JDBC and ODBC connections from the Cloud Connector into the cloud. A
reuse for other protocols is not possible.
• The tunnel uses the same security mechanisms as for the inbound connectivity:
• TLS-encryption and mutual authentication
• Audit logging
To use the database tunnel, two different SAP BTP users are required:
• A platform user (member of the SAP BTP subaccount) establishes the database tunnel to the HANA DB.
• A HANA DB user is needed for the ODBC/JDBC connection to the database itself. For the HANA DB user,
the role and privilege management of HANA can be used to control which actions he or she can perform on
the database.
Related Information
As audit logging is a critical element of an organization’s risk management strategy, the Cloud Connector
provides audit logging for the complete record of access between cloud and Cloud Connector as well as of
configuration changes done in the Cloud Connector.
Integrity Check
The written audit log files are digitally signed by the Cloud Connector so that they can be checked for integrity
(see also: Manage Audit Logs [page 696]).
Alerting
• The audit log data can provide auditors with information required to validate security policy enforcement
and proper segregation of duties.
• IT staff can use the audit log data for root-cause analysis following a security incident.
Related Information
Infrastructure and network facilities of SAP BTP ensure security on the network layer by limiting access to
authorized persons and to specific business purposes.
Isolated Network
The SAP BTP landscape runs in an isolated network, which is protected from the outside by firewalls, DMZ, and
communication proxies for all inbound and outbound communications to and from the network.
Sandboxed Environments
The SAP BTP infrastructure layer also ensures that platform services, like the SAP BTP Connectivity, and
applications are running isolated, in sandboxed environments. An interaction between them is only possible
over a secure remote communication channel.
Learn about data center security provided for SAP BTP Connectivity.
SAP BTP runs in SAP-hosted data centers which are compliant with regulatory requirements. The security
measures include, for example:
• strict physical access control mechanisms using biometrics, video surveillance, and sensors
• high availability and disaster recoverability with redundant power supply and own power generation
Topics
Each topic below provides a description and the recommended actions.
Network Zone (Back to Topics [page 717])
Depending on the needs of the project, the Cloud Connector can be either set up in the DMZ and operated
centrally by the IT department, or set up in the intranet and operated by the line-of-business.
Recommendation: To access highly secure on-premise systems, operate the Cloud Connector centrally by the
IT department and install it in the DMZ of the company network. Set up trust between the on-premise system
and the Cloud Connector, and only accept requests from trusted Cloud Connectors in the system.
OS-Level Protection (Back to Topics [page 717])
The Cloud Connector is a security-critical component that handles the inbound access from SAP BTP
applications to systems of an on-premise network.
Recommendation: Restrict access to the operating system on which the Cloud Connector is installed to the
minimal set of users who should administrate the Cloud Connector.
Administration UI (Back to Topics [page 717])
After installation, the Cloud Connector provides an initial user name and password and forces the user
(Administrator) to change the password upon initial logon. You can access the Cloud Connector administration
UI remotely via HTTPS. After installation, it uses a self-signed X.509 certificate as TLS server certificate, which
is not trusted by default by Web browsers.
Recommendation: Change the password of the Administrator user immediately after installation. Choose a
strong password for the user (see also Recommendations for Secure Setup [page 382]). Exchange the
self-signed X.509 certificate of the Cloud Connector administration UI by a certificate that is trusted by your
company and the company's approved Web browser settings (see Exchange UI Certificates in the
Administration UI [page 632]).
Audit Logging (Back to Topics [page 717])
For end-to-end traceability of configuration changes in the Cloud Connector, as well as communication
delivered by the Cloud Connector, switch on audit logging for productive scenarios.
Recommendation: Switch on audit logging in the Cloud Connector: set the audit level to "All" (see
Recommendations for Secure Setup [page 382] and Manage Audit Logs [page 696]).
High Availability (Back to Topics [page 717])
To guarantee high availability of the connectivity for cloud integration scenarios, run productive instances of
the Cloud Connector in high availability mode, that is, with a second (redundant) Cloud Connector in place.
Recommendation: Use the high availability feature of the Cloud Connector for productive scenarios (see Install
a Failover Instance for High Availability [page 655]).
Supported Protocols (Back to Topics [page 717])
HTTP, HTTPS, RFC and RFC over SNC are currently supported as protocols for the communication direction
from the cloud to on-premise. The route from the application VM in the cloud to the Cloud Connector is always
encrypted. You can configure the route from the Cloud Connector to the on-premise system to be encrypted or
unencrypted.
Recommendation: The route from the Cloud Connector to the on-premise system should be encrypted using
TLS (for HTTPS) or SNC (for RFC). Trust between the Cloud Connector and the connected on-premise systems
should be established (see Set Up Trust [page 420]).
Configuration of On-Premise Systems (Back to Topics [page 717])
When configuring the access to an internal system in the Cloud Connector, map physical host names to virtual
host names to prevent exposure of information on physical systems to the cloud. To allow access only for
trusted applications of your SAP BTP subaccount to on-premise systems, configure the list of trusted
applications in the Cloud Connector.
Recommendation: Use hostname mapping of exposed on-premise systems in the access control of the Cloud
Connector (see Configure Access Control (HTTP) [page 459] and Configure Access Control (RFC) [page 467]).
Narrow the list of cloud applications which are allowed to use the on-premise tunnel to the ones that need
on-premise connectivity (see Set Up Trust [page 420]).
Cloud Connector Instances (Back to Topics [page 717])
You can connect a single Cloud Connector instance to multiple SAP BTP subaccounts.
Recommendation: Use different Cloud Connector instances to separate productive and non-productive
scenarios.
1.2.5 Upgrade
Upgrade your Cloud Connector and avoid connectivity downtime during the update.
The steps for upgrading your Cloud Connector are specific to the operating system that you use. Previous
settings and configurations are automatically preserved.
Caution
Upgrade is supported only for installer versions, not for portable versions, see Installation [page 349].
Before upgrading, please check the Prerequisites [page 351] and make sure your environment fits the new
version. We recommend that you create a Configuration Backup [page 626] before starting an upgrade.
If you have a single-machine Cloud Connector installation, a short downtime is unavoidable during the upgrade
process. However, if you have set up a master and a shadow instance, you can perform the upgrade without
downtime by executing the following procedure:
Caution
After upgrading the former shadow instance from a version prior to 2.13 and having switched its role to
be the new master instance, reset high availability settings in both instances now, before continuing to
upgrade the second instance from a version prior to 2.13 as well. The master-shadow connection must
be re-established after both instances have been upgraded from versions prior to 2.13 to versions 2.13
or higher.
5. Shut down the new shadow instance and perform the upgrade procedure on it as well.
6. Restart the new shadow instance and wait until it has connected to the already upgraded current master
instance.
7. Perform the Switch Roles operation again if you want the previous master instance to act as the
master instance again.
For more information, see Install a Failover Instance for High Availability [page 655].
Microsoft Windows OS
1. Uninstall the Cloud Connector as described in Uninstallation [page 724] and make sure to retain the
existing configuration.
2. Reinstall the Cloud Connector within the same directory. For more information, see Installation on
Microsoft Windows OS [page 375].
3. Before accessing the administration UI, clear your browser cache to avoid any unpredictable behavior due
to the upgraded UI.
Linux OS
rpm -U com.sap.scc-ui-<version>.rpm
Note
All extensions to the daemon provided via the scc_daemon_extension.sh mechanism survive a
version update. An upgrade to version 2.12.3 already considers an existing file, even though previous
versions did not support that feature.
2. Before accessing the administration UI, clear your browser cache to avoid any unpredictable behavior due
to the upgraded UI.
Sometimes you must update the Java VM used by the Cloud Connector, for example, because of expired TLS
certificates contained in the JVM trust store, bug fixes, deprecated JVM versions, and so on.
• If you make a replacement in the same directory, shut down the Cloud Connector, upgrade the JVM, and
restart the Cloud Connector when you are done.
• If you change the installation directory of the JVM, follow the steps below for your operating system.
A Java Runtime Environment (JRE) is not sufficient. You must use a JDK or SAP JVM.
Microsoft Windows OS
Note
If the JavaHome value does not yet exist, create it here with a "String Value" (REG_SZ) and specify the full
path of the Java installation directory, for example: C:\Program Files\sapjvm.
5. Close the registry editor and restart the Cloud Connector.
Linux OS
• in sh/bash/dash/zsh:
export JAVA_HOME=/opt/sap/sapjvm_8
Note
• If you use your own CA certificates for the Email configuration (see Alerting [page 693]) or for LDAP
(see Use LDAP for User Administration [page 637]), you must reimport them to the JVM trust store as
described there.
• Make sure the selected cipher suites are accepted by the JVM you are about to install. If in doubt,
revert to the default selection prior to changing the JVM.
After executing the above steps, the Cloud Connector should be running again and should have picked up the
new Java version during startup. You can verify this by logging in to the Cloud Connector with your favorite
browser, opening the About dialogue and checking that the field <Java Details> shows the version number
and build date of the new Java VM. After you verified that the new JVM is indeed used by the Cloud Connector,
delete or uninstall the old JVM.
1.2.7 Uninstallation
• If you have installed an installer variant of the Cloud Connector, follow the steps for your operating system
to uninstall the Cloud Connector.
• To uninstall a developer version, proceed as described in section Portable Variants.
Microsoft Windows OS
1. In the Windows software administration tool, search for Cloud Connector (formerly named SAP HANA
cloud connector 2.x).
2. Select the entry and follow the appropriate steps to uninstall it.
3. When you are uninstalling in the context of an upgrade, make sure to retain the configuration files.
Linux OS
rpm -e com.sap.scc-ui
Mac OS X
Portable Variants
(Microsoft Windows OS, Linux OS, Mac OS X) If you have installed a portable version (zip or tgz archive) of the
Cloud Connector, simply remove the directory in which you have extracted the Cloud Connector archive.
Related Information
Technical Issues
Does the Cloud Connector send data from on-premise systems to SAP BTP or the other way
around?
The connection is opened from the on-premise system to the cloud, but is then used in the other direction.
An on-premise system, in contrast to a cloud system, is normally located behind a restrictive firewall and its
services aren't accessible through the Internet. This concept follows a widely used pattern often referred to as
reverse invoke proxy.
Is the connection between the SAP BTP and the Cloud Connector encrypted?
Yes, by default, TLS encryption is used for the tunnel between SAP BTP and the Cloud Connector.
Keep your Cloud Connector installation and JDK updated to avoid the use of weak and deprecated ciphers
for TLS communication. Which cipher and TLS versions are actually used, is defined by both the cloud region
setup and the JDK that is used for Cloud Connector. The TLS implementation used for the communication is
the one of the JDK.
Can I use a TLS-terminating firewall between Cloud Connector and SAP BTP?
This is not possible. Such a firewall is effectively an intentional man-in-the-middle, which prevents the Cloud
Connector from establishing mutual trust with the SAP BTP side.
Can I copy/clone a Cloud Connector installation and use it in parallel on a different machine?
This is not supported. You would face issues regularly, as the two instances would be considered as one by the
cloud side, which is not expected.
If you just want to move the installation to a new machine, create a backup via the Configuration Backup [page
626] feature, create a new installation, import the backup, and discard the previous installation.
What is the oldest version of SAP Business Suite that's compatible with the Cloud Connector?
The Cloud Connector can connect an SAP Business Suite system version 4.6C and newer.
Connector Version    Java 6    Java 7    Java 8    Java 11    Java 17
>= 2.7.2             Yes       Yes       Yes       No         No
>= 2.12.3            No        No        Yes       No         No
Restriction
Support for Java 7 has been discontinued. For more information, see Prerequisites [page 352].
Tip
We recommend that you always use the latest patch level of the respective Java version.
Version 2.8 and later of the Cloud Connector may have problems with ciphers in Google Chrome if you use
JVM 7. For more information, read this SCN Article.
Which configuration in the SAP BTP destinations do I need to handle the user management
access to the Cloud User Store of the Cloud Connector?
Is the Cloud Connector sufficient to connect the SAP BTP to an SAP ABAP back end or is SAP
BTP Integration needed?
It depends on the scenario: For pure point-to-point connectivity to call on-premise functionality like BAPIs,
RFCs, OData services, and so on, that are exposed via on-premise systems, the Cloud Connector might suffice.
However, if you require advanced functionality, for example, n-to-n connectivity as an integration hub, SAP BTP
Integration – Process Integration is a more suitable solution. SAP BTP Integration can use the Cloud Connector
as a communication channel.
The amount of bandwidth depends greatly on the application that is using the Cloud Connector tunnel. If the
tunnel isn't currently used, but still connected, only a few bytes per minute are used to keep the connection
alive.
What happens to a response if there's a connection failure while a request is being processed?
The response is lost. The Cloud Connector only provides tunneling, it does not store and forward data when
there are network issues.
For productive instances, we recommend installing the Cloud Connector on a single purpose machine. This is
relevant for Security [page 709]. For more details on which network zones to choose for the Cloud Connector
setup, see Network Zones [page 368].
What does a disaster recovery setup look like for the Cloud Connector?
There is no explicit implementation of a disaster recovery setup for the Cloud Connector. However, it is actually
not needed.
Instead, make sure you have machines available in some other data center than the one in which your
productive setup is running. Also, make sure you regularly generate a Configuration Backup [page 626].
We recommend that you use at least three servers, with the following purposes:
• Development
• Production master
• Production shadow
Note
Do not run the production master and the production shadow as VMs inside the same physical machine.
Doing so removes the redundancy, which is needed to guarantee high availability. A QA (Quality Assurance)
instance is a useful extension. For disaster recovery, you will need two additional instances: another master
instance, and another shadow instance as a reserve for the disaster case.
We currently support Windows and Linux versions with an installer for productive scenarios. Additionally, the
portable variant of the Cloud Connector is available not only for those operating systems, but also for macOS.
You can find the full product availability matrix in Prerequisites [page 351].
We currently support 64-bit operating systems running only on an x86-64 processor (also known as x64,
x86_64 or AMD64), and for Linux also on the PowerPC Little Endian variant (also known as ppc64le).
Yes, you should be able to connect almost any system that supports the HTTP protocol to SAP BTP, for
example, Apache HTTP Server, Apache Tomcat, Microsoft IIS, or Nginx.
No, this is not possible. For client certificate authentication, end-to-end TLS communication is required. This
is not the case, because the Cloud Connector needs to inspect incoming requests in order to perform access
control checks.
How can I do connection pooling for HTTP services that are exposed via the Cloud Connector?
The Cloud Connector itself does not perform connection pooling, but provides a 1-to-1 mapping for each logical
connection received through the tunnel.
For each such mapping, a new connection to the backend system is opened and kept open until closed either
by the backend or by the client on the cloud side.
The actual connection pooling is defined by the application client on the cloud side:
• If a connection is re-used in the client library, it is re-used on the Cloud Connector side as well.
• If it is closed immediately, the mapped connection on the Cloud Connector side is also closed immediately.
Can I open two windows or tabs in a single browser instance to administrate the Cloud
Connector?
No, this is not supported and may cause odd behavior on the different screens, in particular when trying
to navigate through multiple subaccounts. If you want to open the administration UI twice, use two separate
browser instances.
Modifications of HTTP response headers are done if needed. In particular, Set-Cookie domains are adjusted
according to the configured domain and host mappings. Also, in case of redirects, the location header will be
adjusted according to the host mappings. Modifications of HTTP request headers are also done if needed,
which is currently only the case for the Host header content. It will be replaced by the internal host, if the host
mapping configuration is set up accordingly. The Cloud Connector will not delete any header that is sent by the
cloud application. However, the Connectivity service will drop Connectivity service-specific headers, such as
SAP-Connectivity-Authentication or SAP-Connectivity-ConsumerAccount so that those headers
will neither reach the Cloud Connector nor the eventual backend.
Administration
Yes, find more details here: Manage Audit Logs [page 696].
No, currently there is only one role that allows complete administration of the Cloud Connector.
Yes, to enable this, you must configure an LDAP server. See: Use LDAP for User Administration [page 637].
How can I reset the Cloud Connector's administrator password when not using LDAP for
authentication?
This resets the password and user name to their default values.
You can manually edit the file; however, we strongly recommend that you use the users.xml file.
Starting with Cloud Connector version 2.11, you can use a dedicated backup feature, either from the
administration UI (see Configuration Backup [page 626]) or via REST API (see Backup [page 533]).
Yes, you can create an archive file of the installation directory to create a full backup. Before you restore from a
backup, note the following:
• If you restore the backup on a different host, the UI certificate will be invalidated.
• Before you restore the backup, you should perform a “normal” installation and then replace the files. This
registers the Cloud Connector at your operating systems package manager.
This user opens the tunnel and generates the certificates that are used for mutual trust later on.
The user is not part of the certificate that identifies the Cloud Connector.
In both the Cloud Connector UI and in the SAP BTP cockpit, this user ID appears as the one who performed the
initial configuration (even though the user may have left the company).
What happens to a Cloud Connector connection if the user who created the tunnel leaves the
company?
This does not affect the tunnel, even if you restart the Cloud Connector.
SAP supports the latest 2 feature releases in parallel. For the latest feature release, the last 2 patch levels are
supported. For the previous feature release, only its latest patch level is supported.
For more information on the Cloud Connector support strategy, see SAP Note 3302250 .
SAP BTP customers can purchase subaccounts and deploy applications into these subaccounts.
Additionally, there are users, who have a password and can log in to the cockpit and manage all subaccounts
they have permission for.
• A single subaccount can be managed by multiple users, for example, your company may have several
administrators.
• A single user can manage multiple subaccounts, for example, if you have multiple applications and want
them (for isolation reasons) to be split over multiple subaccounts.
For trial users, the account name is typically your user name, followed by the suffix “trial”:
Does the Cloud Connector work with the SAP BTP Cloud Foundry environment?
Yes, the Cloud Connector can establish a connection to regions based on the SAP BTP Cloud Foundry
environment.
The Cloud Connector offers a Service Channel to S/4HANA Cloud instances, given that they are associated
with the respective SAP BTP subaccount. For more information, see Using Service Channels [page 611].
The Cloud Connector also supports S/4HANA Cloud communication scenarios invoking HTTP services or
remote-enabled function modules (RFMs) in on-premise ABAP systems.
Does the Cloud Connector work with the SAP BTP ABAP environment?
The Cloud Connector supports communication from and to the SAP BTP ABAP environment using the Cloud
Foundry Connectivity service, which requires a Cloud Connector version 2.12.3 or higher.
You can connect multiple Cloud Connectors to a single subaccount. This lets you assign multiple separate
corporate network segments.
Those Cloud Connectors are distinguishable based on the location ID, which you must provide to the
destination configuration on the cloud side.
Can I use the Cloud Connector from cloud to on-premise for any protocol?
You can use the TCP channel of the Cloud Connector, if the client supports a SOCKS5 proxy to establish the
connection and the protocol is TCP-based. However, only the HTTP and RFC protocols currently provide an
additional level of access control by checking invoked resources.
This is possible only for a limited set of protocols. You can use the Cloud Connector as a JDBC or ODBC
proxy to access the HANA DB instance within your SAP BTP Neo subaccount (service channel). This is
sometimes referred to as “HANA protocol”. Also, there are service channels for SSH access to SAP BTP Neo
virtual machines, and for RFC access to ABAP cloud systems. All of these service channels provide access to
endpoints that are not visible in the Internet.
For HTTP, the endpoints that could be addressed are visible in the Internet. Therefore, you can simply use your
normal network infrastructure that is prepared for accessing HTTPS endpoints in the Internet anyway.
No, the audit log monitors access only from SAP BTP to on-premise systems.
Troubleshooting
How do I fix the “Could not open Service Manager” error message?
You are probably seeing this error message due to missing administrator privileges. Right-click the Cloud
Connector shortcut and select Run as administrator.
If you don't have administrator privileges on your machine, you can use the portable variant of the Cloud
Connector.
Note
The portable variants of the Cloud Connector are meant for nonproductive scenarios only.
JAVA_HOME must point to the installation directory of your SAP JVM or JDK, while PATH must contain the bin
folder inside that installation directory. This is particularly relevant for the portable versions; the installers
also detect JDKs in other locations.
Open a command line prompt with administrator privileges or with sufficient user privileges to read and write
files in the Cloud Connector directory. Then, make sure the environment variable JAVA_HOME is set to the
installation directory of the JDK used by the Cloud Connector.
Afterwards, switch to the Cloud Connector directory and call the appropriate batch or shell script tools via
<toolname>.bat or ./<toolname>.sh. If the respective tool script requires further input parameters, its
usage syntax will be written to the console.
When I try to open the Cloud Connector UI, Google Chrome opens a Save as dialog, Firefox displays cryptic
characters, and Internet Explorer shows a blank page. How do I fix this?
This happens when you try to access the Cloud Connector over HTTP instead of HTTPS. HTTP is the default
protocol for most browsers.
Adding “https://” to the beginning of your URL should fix the problem. For localhost, you can use https://
localhost:8443/.
The Cloud Connector provides REST APIs to support automation of configuration and monitoring tasks. REST
APIs are exposed on the same host and port that you use to access the Cloud Connector.
The payload (that is, the data transmitted in the body) of requests and responses is mostly coded in JSON
format. The following example shows the request payload {description:<value>} coded in JSON:
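A minimal sketch of that payload in JSON syntax:

{
    "description": "<value>"
}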
In case of errors, the HTTP status code is 4xx or 500. Error details are supplied in the response body in JSON
format:
The request JSON is listed in the API descriptions in an abbreviated form, showing only the property names
and omitting the values. Details on the values (data types and restrictions) are provided separately below the
respective API descriptions.
The API documentation also includes possible error types, and details on those errors below the API
description. These errors pertain to property values of the payload only. We do not claim this list of errors
to be exhaustive. In particular, errors caused by obviously erroneous input, such as missing mandatory or
malformed property values, may have been omitted. As a rule of thumb, missing or invalid values result in
INVALID_REQUEST. Errors caused elsewhere (for example, inappropriate header field values) are not listed.
Note
Request bodies in JSON format require the header field Content-Type application/json. The API
descriptions do not explicitly mention this fact. Header fields are shown only if they deviate from the
default Content-Type: application/json.
The Cloud Connector supports the authentication types basic authentication and form-based authentication.
Once authenticated, a client can keep the session and execute subsequent requests in the session. A session
avoids the overhead caused by authentication. As usual, the session ID can be obtained from the response
header field Set-Cookie (as JSESSIONID=<session Id>), and must be sent in the request header Cookie:
JSESSIONID=<session Id>.
The Cloud Connector employs CSRF tokens to counter CSRF (cross-site request forgery) attacks. Upon first
request, a CSRF token is generated and sent back in the response header in field X-CSRF-Token. The client
application must retain this token and send it in all subsequent requests as header field X-CSRF-Token together
with the session cookie as explained above.
Note
No CSRF token is generated if the request header field Connection has the value close (as opposed to keep-
alive). In other words, if you want to make stateful, session-based REST calls, use Connection: keep-alive.
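As a rough sketch of such a session-based call sequence with curl (the API path, request body, and credentials are placeholders, not endpoints documented here; the base URL is the local UI URL mentioned above):

# First request: authenticate, keep the connection alive, and capture the
# JSESSIONID cookie and X-CSRF-Token from the response headers.
curl -s -D headers.txt -o /dev/null \
  -u Administrator:<password> \
  -H "Connection: keep-alive" \
  https://localhost:8443/<api path>

# Subsequent request in the same session: send the session cookie and the
# CSRF token taken from headers.txt.
curl -s \
  -H "Cookie: JSESSIONID=<session Id>" \
  -H "X-CSRF-Token: <token>" \
  -H "Content-Type: application/json" \
  -d '{"description":"<value>"}' \
  -X PUT https://localhost:8443/<api path>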
An inactive session will incur a timeout at some point, and will consequently be removed. A request using an
expired session will receive a login page (Content-type: text/html). The status code of the response is 200 in
this case. Hence, the only way to detect an expired session is to pay attention to the content type and status
code. Content type text/html in conjunction with status code 200 indicates an expired session.
For security reasons, a session should be closed or invalidated once it is not needed anymore. You can achieve
this by including Connection: close in the header of the final call involving the session in question. As a result,
the Cloud Connector invalidates the session. Subsequent attempts to send a request in the context of that
session respond with a login page as explained above.
User Roles
The REST API supports different user roles. Depending on the role, an API grants or denies access. In the
default configuration, the Cloud Connector uses local user storage and supports the single user Administrator
(administrator role). Using LDAP user storage, you can use various users:
Return Codes
Successful requests return the code 200, or, if there is no content, 204. POST actions that create new entities
return 201, with the location link in the header (that is, the value of the header field location is the full URI of the
entity created).
• 400 – Invalid request. For example, if parameters are invalid or the API is not supported anymore, or an
unexpected state occurs, as well as in case of other non-critical errors.
• 401 – Authorization required.
• 403 – The current Cloud Connector instance does not allow the request. Typically, this is the case when
the initial password has not been changed yet.
Most APIs return specific error details depending on the situation. Such errors are addressed by the respective
API description.
Remember
The Cloud Connector forbids simultaneous changes on a subaccount from different clients. A subaccount
used by the Cloud Connector UI is blocked for changes made from other instances. Even just selecting
a subaccount in the Cloud Connector dashboard (displaying the list of all subaccounts) blocks this
subaccount. If you have selected more than one subaccount, the first one in the list is blocked.
Operations on blocked resources raise an error. In this case, REST APIs return the code 409.
Entities returned by the APIs contain links as suggested by the current draft JSON Hypertext Application
Language (see https://tools.ietf.org/html/draft-kelly-json-hal-08).
Sample Code
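A response entity with HAL-style links might look like this (an illustrative structure following the cited draft, not an exact Cloud Connector response):

{
    "description": "<value>",
    "_links": {
        "self": {
            "href": "<URI of this entity>"
        }
    }
}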
Use the connectivity proxy for Kubernetes to connect workloads on a Kubernetes cluster to on-premise
systems.
The connectivity proxy is a Kubernetes component that connects workloads running on a Kubernetes cluster
to on-premise systems, which are exposed via the Cloud Connector [page 343]. The connectivity proxy must
be paired to an SAP BTP region to grant access to the Cloud Connectors connected to that region. The SAP
BTP domain model (subaccounts) is used to target a particular Cloud Connector.
The connectivity proxy is delivered as a Docker image and a Helm chart. You need to run the image on your
Kubernetes cluster with appropriate configurations. The Helm chart simplifies the installation process. See
Lifecycle Management [page 770] for more details.
You can find information about new versions of the connectivity proxy via SAP BTP release notes. To request a
new feature, you can use Influence SAP .
1.3.1 Concepts
Find an overview of important concepts for working with the connectivity proxy for Kubernetes.
• How the Connectivity Proxy Works [page 740]: Learn about the connectivity proxy for Kubernetes: scenario and configuration steps.
• Operational Modes [page 743]: Details about the different operational modes of the connectivity proxy.
• Mutual TLS [page 747]: Use Transport Layer Security (TLS) with the connectivity proxy.
• External Health Checking [page 751]: Perform external health checks for the connectivity proxy.
• High Availability [page 755]: Run the connectivity proxy for Kubernetes in high availability mode.
• Audit Logging [page 757]: Using audit logging for the connectivity proxy.
• Integration with SAP Services [page 758]: Integrate the connectivity proxy with SAP services.
• Automatic Pickup on Resource Changes [page 761]: Configure automatic pickup on resource changes for the connectivity proxy.
• Service Channels: On-Premise-to-Cloud Connectivity [page 762]: Use the connectivity proxy to connect to an on-premise network via service channel.
• Installing the Connectivity Proxy in Clusters with Istio [page 764]: Install the connectivity proxy in a Kubernetes cluster that is configured for Istio.
• Installing the Transparent Proxy as Subchart of the Connectivity Proxy [page 766]: Install the transparent proxy as subchart of the connectivity proxy via Helm.
• Installing the Connectivity Proxy in Multi-Region Mode [page 766]: Use a single connectivity proxy installation for your global account.
Learn about the connectivity proxy for Kubernetes: Scenario and required configuration.
Glossary
• SAP Business Technology Platform (SAP BTP): SAP platform for cloud services (replaces SAP Cloud
Platform).
• SAP Connectivity service: Core platform service, offering a secure tunnelling solution between your
on-premise network and the cloud.
• Cloud Connector: On-premise client of the Connectivity service, deployed and lifecycle-managed in your
local network. The initiator of the secure tunnel to the platform, that is, to the Connectivity service.
• Connectivity proxy: Software component (logically part of the Connectivity service), deployed locally
to the consuming part (usually a cloud application or a service component). It can work in multiple
operational modes, depending on the exact requirement of the consuming party.
• SAP UAA (aka XSUAA): SAP Authorization service, issues client credentials and access tokens, associated
with the platform tenancy model.
Scenario
An end user works with a cloud application or solution. To complete the task, the application or solution needs
to connect to an on-premise system (hosted either by the consumer tenant or the cloud application provider
tenant). The system is not accessible directly via Internet, but securely exposed by the Cloud Connector. Only
selected parts of the system functionality may be exposed to the cloud application. For more information, see
Cloud Connector [page 343].
• The connectivity proxy is deployed and configured in the Kubernetes cluster (see Lifecycle Management
[page 770]).
• A cloud application is deployed on Kubernetes, next to the connectivity proxy, and it is configured to
connect to the proxy (see Using the Connectivity Proxy [page 794]). The cloud application is up and
running and accessible by end users.
• The Cloud Connector is installed and configured in your local network and connected to the cloud (that is,
to the Connectivity service), and stays in a ready-to-be-used mode (see Cloud Connector [page 343]).
• The on-premise systems that the cloud application needs to connect to are properly exposed and
configured in the Cloud Connector (see Configure Access Control [page 456]).
When all prerequisites are met, the cloud application can be properly used by end users:
• An end user works with a client tool, for example, a browser or a REST client.
• The client tool connects to a cloud application, in this case hosted in a Kubernetes cluster.
Involved Parties
• Cloud application: Business workload initiated by end users (or a background job) of the business
solution.
• SAP BTP Services: The connectivity proxy cannot operate on its own. It needs to connect to other services
for key operations, namely:
• authorization - XSUAA: Ensure any operation is properly secured.
• pairing/integration with SAP Connectivity service: Secure access control to Cloud Connectors.
• On-Premise systems or services: The target system, securely hosted in your local network, usually behind
a firewall, and exposed via the Cloud Connector.
Configuration
For all these points, proper configuration is required. For more information, see Lifecycle Management [page
770].
Learn about the different operational modes of the connectivity proxy for Kubernetes.
The connectivity proxy can run in four different operational modes, based on two main categories:
• Trust for the surrounding environment and the callers of the proxy
• Tenant usage of the proxy
The connectivity proxy operates on behalf of a single, statically configured tenant. Applications cannot be
trusted. The connectivity proxy is configured as follows:
The connectivity proxy operates on behalf of multiple tenants. Applications cannot be trusted. The connectivity
proxy is configured as follows:
The connectivity proxy operates on behalf of a single, statically configured tenant. Applications are trusted. The
connectivity proxy is configured as follows:
The connectivity proxy operates on behalf of multiple tenants. Applications are trusted. Applications use
service keys of the connectivity_proxy service instance of the connectivity service. The connectivity
proxy is configured as follows:
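The exact settings for each mode are described in the Configuration Guide [page 778]. As a rough, non-authoritative sketch using only properties that appear elsewhere in this guide, the two dimensions are typically reflected in the values.yaml file along these lines (the mapping shown is an assumption for illustration):

config:
  # Tenant usage: a single, statically configured tenant ...
  tenantMode: dedicated
  subaccountId: "<subaccount id>"
  # ... or multiple tenants:
  # tenantMode: shared

  # Trust: if applications cannot be trusted, proxy authorization is enabled
  servers:
    proxy:
      authorization:
        oauth:
          allowedClientId: "<client id allowed as JWT issuer>"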
Related Information
Use Transport Layer Security (TLS) for the connectivity proxy for Kubernetes.
TLS encrypts the connection between client and server, following the TLS specification. When using mutual
TLS, both the TLS client and the TLS server authenticate each other through X.509 certificates.
In an on-premise network, the TLS client is represented by the Cloud Connector. On the cloud side, the direct
TLS server may be:
• The Kubernetes Ingress: In this case, the Ingress terminates the TLS connection and establishes a new TLS
connection to the connectivity proxy.
• The connectivity proxy itself: In this case, the Ingress passes the TLS connection through, and it is
terminated directly by the connectivity proxy.
Connectivity proxy deployment provides two options to configure end-to-end mutual TLS, that is, the TLS
communication between Cloud Connector and connectivity proxy:
• With TLS termination in the Ingress [page 748]: TLS configuration has to be made on both the Ingress
controller (TLS client) and connectivity proxy (TLS server).
• Without TLS termination in the Ingress [page 750]: TLS configuration has to be made only on the
connectivity proxy side (TLS server).
1. Enable the TLS communication on the connectivity proxy (TLS server) side, adding the following
configuration in the values.yaml file:
config:
  servers:
    businessDataTunnel:
      enableTls: true
2. Add the required TLS configuration for the connectivity proxy (TLS server). There are two options to do
this:
• If you already have an appropriate secret for this purpose in the cluster, add the following configuration
in the values.yaml file:
secretConfig:
  servers:
    businessDataTunnel:
      secretName: <secret name>
Note
The Kubernetes secret must contain the following properties, used for authentication to the
Ingress resource:
• If you don't have such a Kubernetes secret, it can be automatically generated. Add the following
configuration in the values.yaml file:
secretConfig:
  servers:
    businessDataTunnel:
      secretName: <secret name>
Note
The certificate must be issued for the external host specified on the configuration path
config.servers.businessDataTunnel.externalHost or for the domain to which the
external host belongs. For example, if the external host is "ingress.mycluster.com", the certificate
CN or SAN must contain "ingress.mycluster.com" or "*.mycluster.com".
3. Add the required TLS configuration for the Ingress. There are three options to do this:
• If you already have an appropriate Kubernetes secret for this purpose in the cluster, add the following
configuration in the values.yaml file:
ingress:
  tls:
    proxy:
      secretName: <secret name>
Note
The Kubernetes secret must contain the following properties, used for authentication to the
Ingress resource:
• If you don't have such a secret, it can be automatically generated. Add the following configuration in
the values.yaml file:
ingress:
  tls:
    proxy:
      secretName: <secret name>
      secretData:
        key: <base 64-encoded private key in PEM format>
        certificate: <base 64-encoded certificate in PEM format>
        caCertificate: <base 64-encoded full Certificate Authority chain in PEM format>
• If you didn't add any TLS configuration for the Ingress, the TLS configuration of the connectivity
proxy is reused for the Ingress, that is, the connectivity proxy and the Ingress will use the same TLS
configuration to communicate to each other.
Note
In this case, the specified certificate must be issued by the specified certificate authority.
Note
The certificate must be issued for the external host specified on the configuration path
config.servers.businessDataTunnel.externalHost or for the domain to which the external host belongs.
Note
In this case, the minimum required version for the NGINX ingress controller is 0.31.0.
1. Enable the TLS communication on the connectivity proxy (TLS server) side, adding the following
configuration in the values.yaml file:
config:
  servers:
    businessDataTunnel:
      enableTls: true
2. Add the required TLS configuration for the connectivity proxy (TLS server). There are two options to do
this:
• If you already have an appropriate secret for this purpose in the cluster, add the following configuration
in the values.yaml file:
secretConfig:
  servers:
    businessDataTunnel:
      secretName: <secret name>
Note
The Kubernetes secret must contain the following properties, used for authentication to the
Ingress resource:
• ca.crt: Base 64-encoded full Connectivity certificate authority (CA) chain in PEM format.
• If you don't have such a secret, it can be automatically generated. Add the following configuration in
the values.yaml file:
secretConfig:
  servers:
    businessDataTunnel:
      ...
      secretData:
        ...
        caCertificate: <base 64-encoded full Connectivity Certificate Authority chain in PEM format>
Note
The certificate must be issued for the external host specified on the configuration path
config.servers.businessDataTunnel.externalHost or for the domain to which the
external host belongs. For example, if the external host is "ingress.mycluster.com", the certificate
CN or SAN must contain "ingress.mycluster.com" or "*.mycluster.com".
3. Skip the TLS configuration on the Ingress resource, that is, skip the configuration path ingress.tls in
the values.yaml file.
Note
By default, the NGINX ingress controller does not support communication without termination of the TLS
connection. To support such communication, the --enable-ssl-passthrough flag must be part of
the configuration with which the ingress controller is started. For more information, see Command line
arguments (Kubernetes documentation on github).
Perform external health checks for the connectivity proxy for Kubernetes.
The external health check is available at the following URLs:
• https://healthcheck.<config.servers.businessDataTunnel.externalHost>/healthcheck
• https://healthcheck.<config.servers.businessDataTunnel.externalHost>, which redirects the request to the first URL.
The external health checking configuration depends on the mutual TLS configuration. Currently, the
connectivity proxy deployment provides three mutual TLS configuration options:
Mutual TLS Only to the Ingress or End-to-End Mutual TLS with Termination of
the TLS Connection in the Ingress
In these cases, you have two options to configure the external health checking:
• Reuse the TLS secret that is used for communication with the Cloud Connector. In this case, you don't
need to do any extra configuration.
Note
The certificate from the TLS secret must be issued for both the external host specified on
the configuration path <config.servers.businessDataTunnel.externalHost> and the host
healthcheck.<config.servers.businessDataTunnel.externalHost>. For example, if the
external host is "ingress.mycluster.com", the certificate CN could be "ingress.mycluster.com" and
certificate SAN must contain "healthcheck.ingress.mycluster.com".
• Use a specific TLS secret only for the communication with the external health checking application. In this
case, there are two options to do this:
• If you already have an appropriate secret for this purpose in the cluster, add the following configuration
in the values.yaml file:
ingress:
  healthcheck:
    tls:
      secretName: <secret name>
Note
The Kubernetes secret must contain the following properties, used for authentication to the
Ingress resource:
• If you don't have such a Kubernetes secret, it can be automatically generated. Add the following
configuration in the values.yaml file:

ingress:
  healthcheck:
    tls:
      secretName: <secret name>
      secretData:
        key: <base 64 encoded private key in PEM format>
        certificate: <base 64 encoded certificate in PEM format>
Note
The certificate from the TLS secret must be issued for the host
healthcheck.<config.servers.businessDataTunnel.externalHost>. For example, if the
external host is "ingress.mycluster.com", then the certificate CN or SAN must contain
"healthcheck.ingress.mycluster.com".
Note
The certificate chain of the CA that issues the certificate from the TLS secret must be imported to the trust
store of the external health checking application.
1. Download the plugins archive and unzip it (for more information, see Lifecycle Management [page 770]).
2. Navigate to the plugins folder and install connectivity-certificate-plugin by executing the following
command:
Note
• region_domain: BTP region to which you are pairing the connectivity proxy.
• subaccount: BTP subaccount, on whose behalf the connectivity proxy is running.
• user: BTP subaccount user.
4. Specify a TLS secret that will be used for the communication with the external health checking application.
There are two options to do this:
• If you already have an appropriate secret for this purpose in the cluster, add the following configuration
in the values.yaml file:
ingress:
  healthcheck:
    tls:
      secretName: <secret name>
Note
The Kubernetes secret must contain the following properties, used for authentication to the
Ingress resource:
• If you don't have such a Kubernetes secret, it can be automatically generated. Add the following
configuration in the values.yaml file:
ingress:
  healthcheck:
    tls:
      secretName: <secret name>
      secretData:
        key: <base 64 encoded private key in PEM format>
        certificate: <base 64 encoded certificate in PEM format>
Note
The certificate from the TLS secret must be issued for the host
healthcheck.<config.servers.businessDataTunnel.externalHost>. For example, if the
external host is "ingress.mycluster.com", then the certificate CN or SAN must contain
"healthcheck.ingress.mycluster.com".
Note
The certificate chain of the CA that issues the certificate from the TLS secret must be imported to the
trust store of the external health checking application.
5. Specify the TLS secret that was generated in step 3, to be used for the communication between the Ingress
controller and the connectivity proxy. Add the following configuration in the values.yaml file:
ingress:
  healthcheck:
    tls:
      proxy:
        secretName: <name of the secret generated in step 3>
Overview
The connectivity proxy can work in an active-active high-availability setup. In this setup, there are at least two
connectivity proxy instances, both running actively and simultaneously.
The main purpose of an active-active deployment is to provide high availability and allow zero-downtime
maintenance as well as horizontal-scaling capabilities.
The Kubernetes service exposing the connectivity proxy pods distributes and load-balances the traffic from the
workloads across all running connectivity proxy pods.
Note
The load balancing strategy for distributing traffic to the connectivity proxy pods depends on the kube-
proxy mode. The strategies used by the different kube-proxy modes are described in the Kubernetes
documentation . This configuration is done on cluster level.
Tip
MultiAZ with Pod Anti-Affinity and PodDisruptionBudget are supported out of the box in the Helm
chart.
There are two technical options (modes) for the active-active deployment:
• Path
• Subdomain
The difference between the two modes is in the way the routing information of the current connectivity proxy
(which is being used for the current connection) is passed to the Cloud Connector.
Depending on the perspective, as well as on the concrete requirements and boundaries you have, both modes
may have advantages and drawbacks, so you should carefully choose the one which best suits your needs.
The number of connectivity proxy instances that run simultaneously depends on the replicaCount
configuration in the values.yml file.
deployment:
  replicaCount: <number_of_replicas>
To configure the connectivity proxy for the high-availability mode Path, add the following configuration in the
values.yml file:
config:
  highAvailabilityMode: "path"
To configure the connectivity proxy for the high-availability mode Subdomain, add the following configuration in
the values.yml file:
config:
  highAvailabilityMode: "subdomain"
Note
High-availability mode Subdomain uses either wildcard certificates or SAN (Subject Alternative Name)
certificates. Wildcard certificates cover all possible subdomains; however, they are less secure, because if the
certificate key installed on one subdomain is compromised, it is compromised on all subdomains that share
the certificate. A safer option is to use SAN certificates, although they may be expensive and require each
subdomain to be specified explicitly.
At runtime, the connectivity proxy produces audit messages for the relevant auditable events. These audit
messages help you gather audit data which can be useful for identifying possible attacks or unwanted
access attempts. The audit messages are written on the standard output and it is up to the operator of
the connectivity proxy to configure the surrounding infrastructure in a way that these audit messages are
collected, stored, protected from tampering, rotated when necessary, and so on.
Note
When written to the standard system output, the audit log messages are preceded by the prefix [AUDIT-EVENT]
to distinguish them from the other logs.
Audit Messages
In the table below, the left column shows a description of the security event, the right column shows the
corresponding audit message. These messages are written on behalf of the consumer tenant.
The <message> field of the audit message should be treated as case insensitive.
In the table above, there are multiple keys with UPPER CASE values. At runtime, these keys hold real scenario
values:
• YYY: User executing the operation, if known. If the request is not done on behalf of a user, the value will be
fallback_for_missing_user.
• NNN: Secure tunnel between the connectivity proxy and the Cloud Connector.
• TTT: Tenant on whose behalf the operation is executed.
• FFF: Remote client (unique identifier of a particular installation) of the Cloud Connector.
• UUU: Additional information about the event.
• Connectivity Service [page 759]: Use the connectivity proxy for Kubernetes with the Connectivity service to connect to on-premise systems.
• Security (Proxy Authorization) [page 759]: Perform proxy authorization for the connectivity proxy.
Use the connectivity proxy for Kubernetes with the Connectivity service to connect to on-premise systems.
On-premise systems are exposed via the Cloud Connector. The Cloud Connector is connected to (registered
in) the SAP Connectivity service and ready to serve. The connectivity proxy is the software component to
which, at runtime, the Cloud Connector connects and establishes the business data tunnel.
To use the connectivity proxy, it must be paired with the SAP Connectivity service, that is, configured
to connect to the central service to which the Cloud Connector was initially connected. This enables the
connectivity proxy to dynamically (on demand) serve its purpose, that is, to securely establish business data
tunnel connections to the Cloud Connector, hosted in an on-premise network next to the backend systems.
Prerequisites
You have an account on SAP BTP, Cloud Foundry environment, and your SAP BTP account user has the
authorization to create service instances.
Procedure
Tip
You can create the service key with x.509 credentials instead of a client secret, but only in the SAP-
managed certificate case. If you do so, you must take care to rotate the key before the certificate
expires.
3. Follow the instructions under Lifecycle Management [page 770] and set up the Kubernetes secret for
pairing with the central SAP Connectivity service, providing the content of the created service key.
4. Check the Configuration Guide [page 778] for config.integration.connectivityService*
parameters.
As described in Operational Modes [page 743], there are two major categories for consideration: the level of
trust in the environment, and the type of tenancy (single-tenant vs. multi-tenant). Proxy authorization helps you
prevent unwanted callers and is especially useful in scenarios where calls to the proxy can originate from
workloads outside of your control, or if you simply want to apply stricter rules.
The authorization of the connectivity proxy is based on OAuth and it is provided via integration with the SAP
UAA (aka XSUAA) service acting as an OAuth server. For a successful authorization request, a JSON Web token
(JWT) must be sent to the connectivity proxy, for which the following is valid:
• Issued by XSUAA
• Not expired
• Passes the signature verification (the connectivity proxy takes care to get the token keys from XSUAA for
offline JWT verification).
• Its client_id (the client ID of the OAuth client which is part of the respective XSUAA service instance
credentials) matches the allowed client ID in the connectivity proxy configuration.
config:
  servers:
    proxy:
      authorization:
        oauth:
          allowedClientId: "the client id which is allowed as JWT issuer for proxy authorization"
Tip
The service instance used to protect the connectivity proxy can be any XSUAA service instance.
Note
Based on the above info, the connectivity proxy can be protected with the very same service instance you
use to perform the user login flow for your application. In this case, you can reuse the login token. However,
to achieve separation of concerns, we recommend that you use a dedicated service instance to protect the
connectivity proxy.
An important aspect of the proxy authorization is that, when enabled, it is also used to pass the tenant context
for the request to the connectivity proxy. When you fetch a token for proxy authorization, make sure you do so
on behalf of the correct tenant, over which you want to call the connectivity proxy.
In cases of multi-tenancy, or single-tenancy where the proxy-protecting instance is created in a tenant that is
different from the one for which the proxy is dedicated, you must ensure subscriptions between this tenant and
the proxy-protecting instance to fetch a token on behalf of the correct tenant.
The connectivity proxy can automatically pick up changes in secrets and configmaps. This is achieved by
performing a rolling restart when a resource is modified.
Restart Watcher
The restart watcher is responsible for picking up changes in the above-mentioned resources, and for restarting
the connectivity proxy if given conditions are met.
It is informed of changes to any secret or config map that carries the label connectivityproxy.sap.com/restart.
To enable the restart watcher, you must configure the following in the values.yml file:

deployment:
  restartWatcher:
    enabled: true
Note
The restart watcher is unique for every Helm installation of the connectivity proxy and will be deployed as a
part of it.
Caution
The restart operation performed by the restart watcher may not be a ZDM (zero-downtime
migration) operation.
To avoid such a situation, you should perform a fresh Helm installation. This is inevitable because a change in
an immutable field of a Kubernetes resource is required.
The restart watcher checks for two possible values of the label connectivityproxy.sap.com/restart:
• If the value is an empty string, the restart watcher performs a rolling update on the connectivity proxy,
based on the respective Helm installation.
• If the value has the form <helm_installation_name>.<namespace>, the restart watcher performs a
rolling restart only if the values of <helm_installation_name> and <namespace> match the watcher's
installation and namespace.
• For all other values, the restart watcher does not perform a restart.
Examples
• You have a Helm installation in the default namespace and a secret with label connectivityproxy.sap.com/
restart: "" (with value empty string) in namespace test. In this case, the connectivity proxy installation in
namespace default will be restarted for every change of the secret.
• You have a Helm installation with name connectivity-proxy in namespace test, and a configmap with
name test-configmap and label connectivityproxy.sap.com/restart : connectivity-proxy.test. In this case,
the connectivity proxy will be restarted for every change of the configmap. However, if you have another
installation in the same namespace test but with name connectivity-proxy2, restarts will not be triggered on
it based on changes of the configmap test-configmap.
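For illustration, a secret labeled for the restart watcher might look like this (the secret name and data keys are placeholders; the label value mirrors the second example above):

apiVersion: v1
kind: Secret
metadata:
  name: <secret-name>
  namespace: test
  labels:
    connectivityproxy.sap.com/restart: "connectivity-proxy.test"
data:
  <key>: <base64-encoded value>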
Use the connectivity proxy to connect to an on-premise network via a Cloud Connector service channel.
If you want to consume a cloud application running in a Kubernetes cluster, and the application is not
accessible via Internet, it can be securely exposed by the connectivity proxy and consumed from the on-
premise network via a Cloud Connector service channel.
For more information on configuring service channels in the Cloud Connector, see Using Service Channels
[page 611].
Perform the following steps to expose an application through the connectivity proxy:
1. Install the connectivity proxy in the cluster where the cloud application is running.
2. Create a Kubernetes resource of type ServiceMapping for every cloud application that you want to
expose through the connectivity proxy.
Sample Code
apiVersion: connectivityproxy.sap.com/v1
kind: ServiceMapping
Note
• subaccountId: Required for multi-tenant modes and optional for single-tenant modes. This is the ID
of the tenant on whose behalf the cloud application is exposed. In single-tenant modes, the configured
dedicated tenant is used by default.
• serviceId: Required. This is the virtual host used to expose the cloud application.
• tenantId: Optional. If the connectivity proxy consumer's flow has a tenancy domain model different
from the SAP BTP tenancy domain model, you can specify the tenant identifier of the custom domain
model here.
• internalAddress: Required. This field is composed of the host and port for the exposed application.
The connectivity proxy will use the internal address for opening a connection to the application. This
usually is the DNS name of a Kubernetes service and a port.
• locationIds: Optional. List of location identifiers for which service channels can be opened. By
default, if the list of location identifiers is not provided, only the default location identifier is supported.
If additional identifiers should be supported and the default one is among them, add it as "" (empty string)
to the list.
For more information about location identifiers, see Set up Connection Parameters and HTTPS Proxy
(step 4).
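Putting the fields described above together, a ServiceMapping might look roughly like this (all values, including the type value, are placeholders for illustration):

apiVersion: connectivityproxy.sap.com/v1
kind: ServiceMapping
metadata:
  name: <service-mapping-name>
spec:
  type: <type>
  subaccountId: <subaccount id>
  serviceId: <virtual host used to expose the cloud application>
  internalAddress: <kubernetes-service-dns-name>:<port>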
Example (Service Mapping):
Sample Code
spec:
  locationIds:
    - "locationId1"
    - "locationId2"
    - ""
Note
The combination of type, subaccountId, and serviceId for a ServiceMapping resource must
be unique.
Remember
Picking up and loading a newly created ServiceMapping resource in the connectivity proxy may take
a few seconds, as it is done by Kubernetes-native mechanisms.
4. After the ServiceMapping is loaded in the connectivity proxy, the cloud application can be consumed via
the Cloud Connector as of version 2.15.0.
5. To stop exposing the cloud application through the connectivity proxy, delete the corresponding
ServiceMapping resource from the cluster.
Install the connectivity proxy in a Kubernetes cluster that is configured for Istio.
Restriction
Currently, in clusters with Istio configured, only the high availability mode "path" is supported. Also, you
cannot configure end-to-end mutual TLS (mTLS).
By default, the traffic between Cloud Connector and the ingress gateway is secured by mTLS. The traffic
between the ingress gateway pod and the connectivity proxy pod is also encrypted by Istio by default.
Required Configuration
1. The ingress className must be changed to "istio". You can specify this in the values.yaml file:
Sample Code
ingress:
  className: "istio"
The default value of ingress.className is "nginx". It is mandatory to change it if you have Istio
configured.
2. By default, Istio is usually installed in the namespace "istio-system". However, you can install it in a
different namespace. The connectivity proxy needs to know Istio's installation namespace because it
creates secrets used by the Istio ingress gateway when performing the TLS handshake with the Cloud
Connector.
The connectivity proxy will create those secrets in the default namespace "istio-system". However, if you
have installed Istio in a different namespace, you can specify this in the values.yaml file:
Sample Code
ingress:
  istio:
    namespace: <istio-installation-namespace>
Caution
The secret that will be used for TLS configuration of the ingress-gateway should be created in the
installation namespace of Istio. This secret is specified with the field ingress.tls.secretName in
the values.yaml. For more information, see Configuration Guide [page 778].
3. The connectivity proxy uses the selector which is configured in the Istio ingress gateway to apply a
gateway configuration on the gateway pods. When installing Istio with default configurations, it uses "istio:
ingressgateway" as default value for the selector.
The connectivity proxy also has this value configured as its default. If Istio was installed with a different
configuration and you need to change the selector, you can specify this in the values.yaml file:
Sample Code
ingress:
  istio:
    gateway:
      selector:
        "key1": "value1"
        "key2": "value2"
Caution
Istio AuthorizationPolicy
If you have configured an Istio authorization policy that could affect traffic to the connectivity proxy, you
may need to add the following rules (or the whole authorization policy):
Sample Code
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: <authorization-policy-name>
If you install the connectivity proxy with Helm, you can have the transparent proxy installed as a subchart. For
more information about Helm subcharts, see the Helm documentation .
1. The transparent proxy subchart is disabled by default. To enable it, set the enabled flag to true in the
transparent-proxy section in the values.yaml file.
Sample Code
transparent-proxy:
  enabled: true
2. You can modify the transparent proxy version from the requirements.yaml file in the connectivity proxy
release.
Related Information
SAP BTP regions represent locations (of data centers) like eu10, eu20, us10, us20, and so on. Using the
multi-region mode, a single connectivity proxy installation can operate in different regions which, however,
must be part of the same landscape (global account). Otherwise, the uniqueness of subaccount IDs is not
guaranteed.
Using the multi-region mode, you can install a single connectivity proxy for your global account. Applications
- deployed in different regions or in different subaccounts within the same region - can consume the
Connectivity service through this single connectivity proxy instance.
Without multi-region mode, you must install one connectivity proxy for every region where applications on cloud
side need to consume the Connectivity service. This setup requires considerably more maintenance effort and
consumes a lot more resources than just a single (multi-region) instance.
1. Without multi-region mode, the service keys for integration with other services are provided through the
values.yaml file in the following way:
secretConfig:
In multi-region mode, you can assign different regions to a single connectivity proxy instance.
Consequently, the service keys for each region are needed. In this case, you cannot configure the
keys through the values.yaml file. Instead, you must put them into secrets, and then specify them in a
ConfigMap.
Let's create a ConfigMap containing two region configurations.
First, we configure the two secrets holding the service keys for each region:
apiVersion: v1
kind: Secret
metadata:
  name: <secret-name-region1>
data:
  serviceKey: <base64 encoded service key for region1>

apiVersion: v1
kind: Secret
metadata:
  name: <secret-name-region2>
data:
  serviceKey: <base64 encoded service key for region2>
Caution
The serviceCredentialsKey property for each service type specified under config.integration
must be used in all secrets that will hold service keys for the respective service type. For example, if the
following serviceCredentialsKey is configured for the Connectivity service in the values.yaml...
config:
  integration:
    connectivityService:
      serviceCredentialsKey: serviceKey
...then it must be used in all secrets holding service keys for the Connectivity service.
In a second step, we create a ConfigMap that contains two region configuration IDs (region1 and region2),
which may have any value. Each region configuration contains the Connectivity service as dependency, and
the secret name holding the specified service key.
apiVersion: v1
kind: ConfigMap
metadata:
  name: regionConfigurations
data:
  region1: "{\"dependencies\":{\"connectivity\": \"<secret-name-region1>\"}}"
  region2: "{\"dependencies\":{\"connectivity\": \"<secret-name-region2>\"}}"
2. To enable the multi-region mode, add the following in the values.yaml file and specify the name of the
ConfigMap containing the region configurations:
Note
When multi-region mode is enabled, the connectivity proxy operates in multi-tenant mode. Therefore,
you must set the tenantMode property to shared.
Caution
If the connectivity proxy is installed in non-trusted mode [page 743] (or proxy authorization is enabled), the
allowed client IDs [page 759] for all region configurations must be specified in the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: regionConfigurations
data:
  region1: "{\"allowedClientId\":\"<allowed-client-id>\", \"dependencies\":{\"connectivity\": \"<secret-name>\"}}"
  region2: "{\"allowedClientId\":\"<allowed-client-id>\", \"dependencies\":{\"connectivity\": \"<secret-name>\"}}"
Caution
The connectivity proxy cannot be installed if both multi-region mode and service channels [page 762] are
enabled.
Remember
Keep in mind that the correct token for each region configuration must be sent. Sending an access token/
authorization token to the proxy is required with every request, because the connectivity proxy is working in
multi-tenant mode.
Sample Code
In this case, the request is for a region configuration with ID region1 in the ConfigMap.
Note
The access token must be issued for the subaccount that corresponds to the region configuration,
identified by the region ID that has been sent with the request.
Caution
If the proxy is installed in non-trusted mode, the authorization token must be provided via the Proxy-
Authorization header.
Example:
Sample Code
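A sketch with curl (the proxy host, HTTP proxy port, and target address are placeholders; the JWT must be fetched on behalf of the correct tenant, as explained above):

curl -x http://<connectivity-proxy-host>:<http-proxy-port> \
  --proxy-header "Proxy-Authorization: Bearer <JWT>" \
  http://<virtual-host>:<virtual-port>/<path>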
As an operator of the connectivity proxy, at some point you might need to make a change in the region
configurations ConfigMap or in any of the secrets containing service credentials. For example, add or remove a
region configuration, or change the service credentials in a secret. This can be done any time, and no further
actions are needed to make the connectivity proxy start working with the performed changes:
• After a short period of time, the changes will be picked up automatically by the connectivity proxy.
• A potentially required restart is also done automatically.
• The restart of the connectivity proxy's pods is rolling.
• If high availability mode is enabled, no downtime is expected.
Use the connectivity proxy image and the connectivity proxy Helm chart to manage the life cycle of the
connectivity proxy for Kubernetes.
• Connectivity Proxy Image [page 771]: the functional component that packages all the binaries and utilities.
• Connectivity Proxy Helm Chart [page 772]: used for configuring and managing the life cycle of the
connectivity proxy via the popular Helm package manager. For more information, see Operations via Helm
[page 773].
The connectivity proxy image is a standard Docker image containing all the required binaries for the
connectivity proxy. This includes the connectivity proxy binaries themselves, a JVM (SapMachine), and other
important utilities.
The connectivity proxy image is available via the DockerHub image repository, see
https://hub.docker.com/r/sapse/connectivity-proxy.
• Registry: docker.io
• Repository: sapse/connectivity-proxy
• Tag: 2.12.1
Example:
Sample Code
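Based on the registry, repository, and tag listed above, pulling the image looks like this:

docker pull docker.io/sapse/connectivity-proxy:2.12.1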
The connectivity proxy image is available via the RBSC docker image repository. This requires having an S-user
with associated licenses.
Note
• Registry: 73554900100900005672.dockersrv.cdn.repositories.cloud.sap
• Repository: com.sap.cloud.connectivity/connectivity-proxy
• Tag: 2.12.1
• Authorization: see Manage Technical Users in SAP Repositories Management (RBSC documentation).
The connectivity proxy delivery also includes a Helm chart that you can use for life cycle management. It is the
recommended way to perform life cycle management. The connectivity proxy Helm chart provides full configuration
capabilities via the standard Helm method of a values.yaml file (see Configuration Guide [page 778]).
The connectivity proxy Helm chart is available via the RBSC Helm repository. This requires having an S-user
with associated licenses.
Note
• Registry: 73554900100900005672.helmsrv.cdn.repositories.cloud.sap
• Repository: com.sap.cloud.connectivity/connectivity-proxy
• Tag: 2.12.1
• Authorization: see Manage Technical Users in SAP Repositories Management (RBSC documentation).
Example:
Related Information
Use the Helm chart to configure and manage the life cycle of the connectivity proxy for Kubernetes.
Note
Out of the box, the Helm chart only supports the NGINX Ingress Controller (default) and the Istio Ingress
Gateway (for more information, see Installing the Connectivity Proxy in Clusters with Istio [page 764]).
Deploy
Note
For this procedure, you must have a generated public/private TLS key pair as a prerequisite. To generate
the TLS key pair, you can use Gardener Certificate resources or openssl.
To deploy the connectivity proxy on a cluster that does not have the Helm chart yet, for example, in a new
namespace, follow these steps:
1. Get the connectivity proxy Helm chart, as described in Lifecycle Management [page 770].
2. Create a Kubernetes secret from the generated public/private TLS key pair (for the Ingress public
endpoint). Depending on the Connectivity proxy release version, the secret might require additional fields:
1. Connectivity proxy release < 2.4.1:
Download the list of trusted CAs for the BTP region to which you are pairing the Connectivity proxy.
Example:
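A sketch with curl (the Connectivity service host for your region and the output file name are placeholders):

curl -o <output-file> https://<connectivity-service-host-for-your-region>/api/v1/CAs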
Remember
The content of /api/v1/CAs may change over time. Make sure you update it regularly.
Note
If a secret with the same name pattern as mentioned above is already present when the Connectivity
proxy is installed or upgraded, its content is overridden.
Example:
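A sketch using kubectl (the secret name and namespace are placeholders; depending on your Connectivity proxy release, additional fields such as the trusted CA list may be required, as described above):

kubectl create secret tls <secret-name> --key=private.key --cert=public.crt -n <namespace>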
Where private.key is the private key and public.crt is the public key of a TLS certificate,
generated for the Ingress public endpoint of the connectivity proxy (the one which the Cloud
Connector connects to).
3. Prepare the values.yaml file for your scenario, as described in Configuration Guide [page 778].
4. Use the Helm CLI to deploy the connectivity proxy. Example:
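A sketch of the deployment command (the release name, chart reference, and namespace are placeholders):

helm install connectivity-proxy <path-or-reference-to-connectivity-proxy-chart> \
  -f values.yaml -n <namespace>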
When you have a connectivity proxy deployed on the cluster, you may want to maintain it by changing
configurations and/or changing its version. To do so, follow these steps:
1. Get the connectivity proxy Helm chart, as described in Lifecycle Management [page 770]. It can be the
same version as the one currently installed or a different version, when you want to upgrade or downgrade.
2. Prepare the values.yaml file for your scenario, as described in Configuration Guide [page 778].
You can simply modify the file you used previously by applying the changes you need.
3. Use the Helm CLI to upgrade the connectivity proxy. Example:
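A sketch of the upgrade command (placeholders as in the deployment example):

helm upgrade connectivity-proxy <path-or-reference-to-connectivity-proxy-chart> \
  -f values.yaml -n <namespace>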
Each Helm upgrade will result in a restart of the connectivity proxy pod(s). This is done to ensure that
configuration changes are picked up immediately.
Undeploy
If you need to remove the connectivity proxy from your cluster or from a namespace, you can do it almost
completely via normal Helm tools. However, some additional actions are required. Follow these steps:
2. Delete the Kubernetes secret representing the TLS certificate for the connectivity proxy public endpoint.
Example:
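A sketch of the cleanup command (the secret name is the one you created for the Ingress public endpoint):

kubectl delete secret <secret-name> -n <namespace>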
Note
For Connectivity proxy releases >= 2.4.1, the secret containing the trusted CAs is removed automatically
when helm uninstall is executed.
Caution
As of connectivity proxy release 2.8, trying to execute helm uninstall while there are still resources of
type ServiceMapping results in an error (job failed: BackoffLimitExceeded; a detailed error message can
be obtained from the logs of the newly created pod servicemapping-cleanup-verifier-job), and the proxy
is not uninstalled. Also, helm upgrade operations are no longer available.
To proceed, remove all resources of type ServiceMapping and retry the uninstallation.
Using separate YAML files to configure and manage the life cycle of the connectivity proxy for Kubernetes.
Caution
As operating with separate YAML files that you modify and maintain is considered error prone, we do not
recommend this method. Consider using Operations via Helm [page 773] instead. If you do need to use
separate YAML files, we recommend that you generate them via Helm template.
Deploy
To deploy the connectivity proxy on a cluster that does not have it yet, for example, in a new namespace, follow
these steps:
Where private.key is the private key and public.crt is the public key of a TLS certificate, generated
for the Ingress public endpoint of the connectivity proxy (the one which the Cloud Connector connects to).
Remember
The content of /api/v1/CAs may change over time. Make sure you update it regularly.
When you have a connectivity proxy deployed on the cluster, you may want to maintain it by changing
configurations and/or changing its version. To do so, follow these steps:
1. Get the YAML files (or only those you want to modify) you used to deploy the connectivity proxy. You can
also export them from the cluster via kubectl.
2. Make the desired changes. For example, you can modify the subaccount ID in the config map or the
connectivity proxy version in the deployment.
3. Use the kubectl CLI to apply your changes. Example:
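A sketch of the apply command (file and namespace names are placeholders):

kubectl apply -f <modified-resource>.yaml -n <namespace>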
Tip
You can also use kubectl edit to directly modify the resources on the cluster.
Note
When updating secrets or config maps, you must restart the pod(s) to activate the changes. You can do this
by running kubectl rollout restart statefulset/connectivity-proxy.
Undeploy
If you need to remove the connectivity proxy from your cluster or from a namespace, follow these steps:
1. Get the YAML files you used to deploy the connectivity proxy. You can also export them from the cluster via
kubectl.
2. Use the kubectl CLI to undeploy the connectivity proxy. Example:
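A sketch of the undeploy command (file and namespace names are placeholders):

kubectl delete -f <connectivity-proxy-resources>.yaml -n <namespace>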
Tip
You can also use kubectl delete <resource type> <resource name>.
3. Delete the Kubernetes secret representing the TLS certificate for the connectivity proxy public endpoint.
Example:
Find an overview of configuration parameters for the connectivity proxy for Kubernetes.
Refer to the table below for the configurations available in the values.yaml file of the connectivity proxy.
Note
Make sure you become familiar with the semantics, restrictions and interoperability aspects of each
property before using it.
Parameter Overview
• chart.nameSuffix: Custom string used to suffix your resources. Default: "" (empty string).
Caution
Only one connectivity proxy per Kubernetes cluster can have service channels enabled.
• deployment.utilityImage.pullSecret: Pull secret for the utility image, used for authenticating against the repository. Default: None.
Caution
The repository must point to an image which is a version of the official Alpine Docker image (see Alpine Docker Image). Usage of any other type of container image is not recommended and can result in a broken or dysfunctional connectivity proxy deployment.
Caution
To take effect, the Vertical Pod Autoscaling mechanism for the cluster must be enabled. See Vertical Pod Auto-Scaling for details (GitHub Gardener Kubernetes documentation).
Caution
Make sure the load balancer exposing the Ingress controller is also configured to allow longer times. For example, for NGINX with AWS, you need to add a special annotation for the LB service of the Ingress controller.
Example
Caution
The following example shows the full structure and does not represent a productive values.yaml file.
Many properties listed here are mutually exclusive and should not be used together in real situations.
chart:
  nameSuffix: "a string to append to resource names"

config:
  jvm:
    errorFilePath: "directory for JVM error logs"
    heapDumpPath: "directory for JVM heap dumps"
    memory:
      maxHeapSize: 256m
      minHeapSize: 16m
  highAvailabilityMode: "off"/"subdomain"/"path"
  subaccountId: "id of the subaccount, for which the proxy is running"
  subaccountSubdomain: "subdomain of the subaccount, for which the proxy is running"
  tenantMode: dedicated/shared
  serviceChannels:
    enabled: true/false

secretConfig:
  integration:
    auditlogService:
      secretData: "base64 encoded auditlog service key"
      secretName: "name of the secret resource, holding the auditlog secret"
    connectivityService:
      secretData: "base64 encoded connectivity_proxy service key"
      secretName: "name of the secret resource, holding the connectivity service secret"
  servers:
    businessDataTunnel:
      secretData:
        caCertificate: "base64 encoded CA certificate"
        certificate: "base64 encoded certificate"
        key: "base64 encoded private key"
      secretName: "name of the secret"

ingress:
  annotations:
    annotation1: value1
    annotation2: value2
  healthcheck:
    tls:
      proxy:
        secretData:
          caCertificate: "base64 encoded CA certificate"
          certificate: "base64 encoded certificate"
          key: "base64 encoded private key"
        secretName: "name of the secret"
      secretData:
        certificate: "base64 encoded certificate"
        key: "base64 encoded private key"
      secretName: "name of the secret"
  timeouts:
    proxy:
      connect: 10
      read: 120
      send: 120
  tls:
    proxy:
      secretData:
        caCertificate: "base64 encoded CA certificate"
        certificate: "base64 encoded certificate"
        key: "base64 encoded private key"
      secretName: "name of the secret"
    secretData:
      caCertificate: "base64 encoded CA certificate"
      certificate: "base64 encoded certificate"
      key: "base64 encoded private key"
    secretName: "name of the secret"
  nginx:
    tls:
      ciphers: "HIGH:!aNULL:!MD5"
  istio:
    tls:
      ciphers:
        - ECDHE-RSA-AES128-GCM-SHA256

transparent-proxy:
  enabled: true/false
This section refers to the traffic flow between a workload from a Kubernetes cluster and the connectivity proxy
running in the same cluster.
The traffic between a workload running in the cluster and the HTTP and LDAP/RFC proxies is not encrypted.
The traffic to the SOCKS5 proxy can be SSL-encrypted if the client application initiates an SSL connection. The
HTTP and LDAP/RFC proxies are disabled by default for a connectivity proxy installation. If you want to enable
them, add the following configuration in the values.yml file:
config:
  servers:
    proxy:
      rfcAndLdap:
        enabled: true
      http:
        enabled: true
Note
All business data that is transmitted between the connectivity proxy in the cluster and the Cloud Connector
is sent over an SSL-encrypted tunnel.
This section refers to the traffic flow between the Cloud Connector and the connectivity proxy running in a
Kubernetes cluster.
By default, the connections between the Cloud Connector and the Ingress load balancer in the Kubernetes
cluster are SSL-encrypted. If you want to enable mutual TLS for the connections from the Ingress to the
connectivity proxy, or have end-to-end mutual TLS between the Cloud Connector and the connectivity proxy,
follow the procedures described in Mutual TLS [page 747].
This section refers to any security-sensitive configuration data which is required for the connectivity proxy.
The connectivity proxy installation requires some configuration data to be supplied via a Kubernetes secret.
By default, Kubernetes secrets are stored as unencrypted base64-encoded strings and are transmitted in
base64-encoded format, so they are accessible to anyone with access to the cluster.
See the Kubernetes documentation for details on how to secure the usage of secrets in a cluster.
Find basic sizing guidance for the connectivity proxy for Kubernetes.
Sizing Options
The following table gives basic sizing guidance for different usage scenarios. The values listed in the CPU, Memory, and Thread Pools columns correspond to the properties you should define in the values.yaml file (for more information, see Configuration Guide [page 778]):
The connectivity proxy can operate in multiple Operational Modes [page 743].
A major criterion is the type of tenancy used: single-tenant or multi-tenant. If multiple tenants are served, we recommend that you choose a slightly bigger size to make sure each tenant is served without giving precedence to any other tenant at the same time. Based on the chosen tenancy mode, the following general recommendations apply:
Single-tenant: S, M
Multi-tenant: M, L
Note
The above-mentioned sizing recommendations are related to the connectivity proxy software component, which also acts as a tunnel server to which Cloud Connector instances connect. This means the Cloud Connector must be sized properly as well; see Sizing Recommendations [page 369] (Cloud Connector).
Remember
These sizing recommendations are only a starting point. There are many factors that affect the performance of the tunneling between the Cloud Connector and the connectivity proxy. They are closely related to the specifics of your scenario, the expected regular and intermittent load, and so on.
Once you have installed the connectivity proxy in your cluster, you can perform the following checks to verify it
is running successfully.
Before starting the checks, you must wait for a few seconds until all the components are started and can be
consumed.
1. Execute the corresponding command and verify that the status of the pod (or pods) is Running (see the sketch after this list).
2. Call the external health check endpoint of the connectivity proxy to verify that it returns a successful response. You can call the endpoint in a web browser or execute the corresponding command from the command line (see the sketch after this list).
Caution
If the host of the Ingress in your cluster is configured with a self-signed certificate, add the -k flag to
curl to disable the SSL certificate verification.
3. Make sure you have connected a Cloud Connector to your cloud subaccount. For more information, see
Managing Subaccounts [page 404].
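A sketch of checks 1 and 2, assuming the connectivity proxy is installed in namespace <namespace> and that <external-health-check-url> is the health check endpoint exposed via your Ingress; both values are placeholders to replace with the ones of your installation:
# Check 1: the connectivity proxy pod(s) must be in status Running
kubectl get pods -n <namespace>

# Check 2: the external health check endpoint must return a successful response
# (add -k if the Ingress host uses a self-signed certificate)
curl -i https://<external-health-check-url>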
1. If you have deployed the connectivity proxy in a single-tenant trusted mode and have enabled the HTTP proxy in the values.yml file, execute the corresponding command (see the sketch after this list).
2. If you have deployed the connectivity proxy in a single-tenant non-trusted or multi-tenant non-trusted mode and have enabled the HTTP proxy in the values.yml file, execute the corresponding command (see the sketch after this list).
3. If you have deployed the connectivity proxy in a single-tenant trusted mode and have enabled the HTTP proxy in the values.yml file, execute the corresponding command (see the sketch after this list).
For more information on the process of fetching a JWT, see Consuming the Connectivity Service [page 214].
For more information on the different operational modes, see Operational Modes [page 743].
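A sketch of these checks, assuming the HTTP proxy is reachable inside the cluster under the hypothetical host connectivity-proxy.<namespace> on its default port 20003, and that virtual:1234 is a virtual host exposed (or about to be exposed) in the Cloud Connector; adapt the host, port, and token to your setup:
# Trusted mode: no authorization header is required
curl -i --proxy http://connectivity-proxy.<namespace>:20003 http://virtual:1234

# Non-trusted mode: pass a JWT issued by the configured OAuth client
curl -i --proxy http://connectivity-proxy.<namespace>:20003 \
     -H "Proxy-Authorization: Bearer <JWT>" \
     http://virtual:1234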
If the connectivity proxy is working as expected, you get one of these responses:
• Access denied to system virtual:1234: If this was a valid request, make sure you expose the
system correctly in your Cloud Connector.
This response indicates that the business data from the cluster is successfully reaching your Cloud
Connector, but the system virtual:1234 is not exposed there.
• Response from your backend system if you have exposed it in the Cloud Connector.
Note
This response confirms that the business data from the cluster is successfully reaching your
backend system exposed in the Cloud Connector.
If you encounter problems with any of the above steps, please refer to Troubleshooting [page 798] and
Recommended Actions [page 801] for further investigation.
1.3.4 Monitoring
Check operability, scenarios and metrics of the connectivity proxy for Kubernetes.
The basic availability check is the minimal verification you can do to make sure the connectivity proxy is alive.
This check only shows if the process of the connectivity proxy is running and if it is able to handle requests.
This check is also what is configured as the liveness probe for the Kubernetes deployment resource.
You can perform this check on demand by invoking a simple HTTP GET request against the health check endpoint of the connectivity proxy. If the response is 200 OK, the check was successful. Any other response means the check failed. There are two ways to perform this: from within the cluster, against the Kubernetes service of the connectivity proxy, or from outside the cluster, through the Ingress endpoint.
Scenario Monitoring
On top of the availability monitoring of the component itself, it is useful to also monitor entire scenarios. This,
however, cannot be provided out of the box by the connectivity proxy as it is specific to the way you use the
component. Some options you can explore for this are:
Metrics
Currently, the connectivity proxy does not provide any dedicated support for metrics monitoring via full metrics pipelines.
However, you may want to use the resource metrics pipeline to collect basic metrics and observe them, or
configure alerts based on those metrics.
Use the connectivity proxy for Kubernetes with different communication protocols and principal propagation
(SSO).
Overview
The connectivity proxy offers multiple proxy endpoints which are communication protocol-specific: TCP (via
SOCKS5), HTTP, RFC (invoking ABAP functions) and LDAP. Depending on the operational mode, the usage may
involve technical authorization when connecting to the proxy, see Operational Modes [page 743].
Note
• The tunnel channel (between Cloud Connector and connectivity proxy) always uses TLS, that is, it is
encrypted.
• The actual connection from the application to the connectivity proxy may be encrypted or not,
depending on the exact scenario used.
Note
If a TLS connection is attempted by the application, the SOCKS5 proxy endpoint must be used.
• The actual connection from the Cloud Connector towards the on-premise system is established and
controlled in the Cloud Connector. As a result, TLS can be enabled or disabled, regardless of the
connection from the application to the connectivity proxy.
TCP connectivity is achieved via the SOCKS5 proxy protocol. As the authorization is based on OAuth, we
provide a custom authorization scheme for SOCKS5 that lets the application pass the required OAuth tokens
and establish technical authorization with the connectivity proxy. For more information, see Using the TCP
Protocol for Cloud Applications [page 234].
HTTP
Uses a standard HTTP proxy, just like using a corporate proxy to reach out to the Internet. For more
information, see Authentication to the On-Premise System [page 223].
Principal Propagation
Principal propagation, also known as user propagation, lets you perform single sign-on (SSO) authentication of
the cloud user towards an on-premise system.
The cloud user identity is passed as a token represented by a JSON Web token (JWT). It is forwarded via the
connectivity proxy to the Cloud Connector, which validates and further processes it to establish SSO with the
on-premise system.
For more information, see Authenticating Users against On-Premise Systems [page 418] and Set Up Trust
[page 420].
As of connectivity proxy release 2.1.1, support for principal propagation with IAS tokens is added.
Prerequisites:
Note
If IAS tokens are used, they can only be provided via the SAP-Connectivity-Authentication
header.
Note
If, in a non-trusted environment, the user context is provided via the Proxy-Authorization header, the SAP-Connectivity-Authentication header must not be sent.
Caution
For more information, see also XSUAA Token Client and Token Flow API .
Technical user propagation lets you perform single sign-on (SSO) authentication of a cloud technical user
towards an on-premise system. It is very similar to principal propagation, but instead of forwarding the identity
of business users, the identity of technical users is forwarded.
The technical user is represented by a token in the form of a JSON Web Token (JWT) that is usually obtained via the client_credentials OAuth flow. It is forwarded via the connectivity proxy to the Cloud Connector, which validates and processes it to perform SSO with the on-premise system.
Technical user propagation is supported as of connectivity proxy version 2.6.1. As a prerequisite, Cloud Connector version 2.15 or higher must be used.
The token representing the technical user must be sent to the connectivity proxy via the SAP-Connectivity-Technical-Authentication HTTP request header.
Caution
In technical user propagation scenarios, you must omit the Authorization header.
For more information, see Configuring Principal Propagation [page 419] and XSUAA Token Client and Token
Flow API .
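A sketch of a technical user propagation call through the HTTP proxy, assuming the hypothetical in-cluster host connectivity-proxy.<namespace> and the default HTTP proxy port 20003; the JWT and the exact header value format depend on your setup:
# The Authorization header must be omitted in this scenario
curl -i --proxy http://connectivity-proxy.<namespace>:20003 \
     -H "SAP-Connectivity-Technical-Authentication: <JWT obtained via the client_credentials flow>" \
     http://virtualhost:4321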
1.3.6 Troubleshooting
Find procedures to troubleshoot issues with the connectivity proxy for Kubernetes.
As the connectivity proxy workload is represented as a standard Kubernetes StatefulSet, fetching the logs is done in the standard Kubernetes way, for example via kubectl (see the sketch below).
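A minimal sketch, assuming the StatefulSet is named connectivity-proxy and is installed in namespace <namespace>; adapt both names to your installation:
# Logs of the first (or only) connectivity proxy pod
kubectl logs -n <namespace> connectivity-proxy-0

# Follow the logs of the whole StatefulSet
kubectl logs -n <namespace> statefulset/connectivity-proxy -f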
When the default logging level is not sufficient for debugging the issue you are facing, you can change the log
level to get more insight about the problem.
Changing a log level to something more verbose will have a negative impact on the performance of the
connectivity proxy. Thus, we recommend that you do not keep such a log level for a long period of time.
Changing a log level is done without any downtime and requires no restarts. All you need to do is invoke a
simple command on a pod of the connectivity proxy. Here are some examples:
For troubleshooting the Cloud Connector, see Troubleshooting [page 700] (Cloud Connector).
Issue: You use the HTTP proxy (port 20003) and get a 405 response.
Solution: Make sure the URL you are calling is http://<virtual host>:<virtual port> and not https://<virtual host>:<virtual port>.

Issue: You use the HTTP proxy (port 20003) and get a 407 response (in a non-trusted environment / proxy authorization turned on).
Solution: Make sure you set the Proxy-Authorization header with a value in the format "Bearer <valid JWT token>".

Issue: You use the HTTP proxy (port 20003) and get a 503 response stating that there is no Cloud Connector for your subaccount.
Solution:
• Make sure the Cloud Connector you are targeting is still connected.
• Make sure its location ID matches the one used in the request.
• Make sure the Cloud Connector is connected to the same subaccount on whose behalf the token for the request is issued.

Issue: Out of nowhere, the connectivity proxy stops working / monitors indicate a failure (considered an outage if it happens in production).
Solution: See Recommended Actions [page 801].

Issue: An SSL error is shown in the Cloud Connector when trying to send a request through the connectivity proxy.
Solution: Assuming the SSL error occurs during the call from the Cloud Connector to the public endpoint of the connectivity proxy, there are two options:
Related Information
Find procedures to resolve an outage of the connectivity proxy for Kubernetes functionality.
Caution
Before performing any of the steps below, make sure there is really an outage of the connectivity proxy and
not just a general problem on the entire cluster.
Before doing any restarts or modifications, it is important that you collect all relevant information.
1. Check the status of the connectivity proxy pods and collect the outcome, for example via kubectl (see the sketch after this list).
2. Perform basic availability monitoring from within the cluster, as described in Monitoring [page 793], and collect the outcome. This can be done from an existing container or by spinning up a container for the check (see the sketch after this list).
3. Perform basic availability monitoring from outside the cluster, as described in Monitoring [page 793], and collect the outcome. This can be done from your browser or via REST clients like curl or Postman.
4. Collect the logs of the connectivity proxy (see Troubleshooting [page 798]).
5. Proceed to Recovery Attempt.
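A sketch of steps 1 and 2, assuming namespace <namespace>, the public curlimages/curl image for the temporary container, and a placeholder <in-cluster-health-check-endpoint> for the health check endpoint of your installation:
# Step 1: status of the connectivity proxy pods
kubectl get pods -n <namespace>

# Step 2: availability check from within the cluster, using a temporary container
kubectl run availability-check --rm -it --restart=Never \
    --image=curlimages/curl -- curl -si http://<in-cluster-health-check-endpoint>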
Recovery Attempt
The exact action to take here depends on the check result in steps 2 and 3 of section Pre-Intervention Steps
[page 802]. Choose one of the four options below, according to the outcome of your checks.
Tip
Check if the cause of the outage might be insufficient resources. For more information, see Sizing
Recommendations [page 790]. If this is the possible cause, try scaling the connectivity proxy vertically
and/or horizontally (see Configuration Guide [page 778]).
• Option 1: Check Succeeds from within the Cluster and Fails from outside the Cluster [page 802]
• Option 2: Check Fails from within the Cluster and Succeeds from outside the Cluster [page 803]
• Option 3: Check Fails from within the Cluster and Fails from outside the Cluster [page 803]
• Option 4: Check Succeeds from within the Cluster and Succeeds from outside the Cluster [page 803]
Check Succeeds from within the Cluster and Fails from outside the Cluster
Such a situation is likely not an issue with the connectivity proxy itself. Stop following the steps here and shift your focus to the Ingress configuration and the Ingress controller.
Check Fails from within the Cluster and Succeeds from outside the Cluster
This indicates some sort of issue with the exposure of the connectivity proxy to internal pods. Some possible
reasons:
• Some unwanted network policy came into effect, preventing calls to the connectivity proxy from where you
are executing them.
Such a situation is likely not an issue with the connectivity proxy itself. Stop following the steps here and shift your focus to the cluster configuration and the network policies that affect access to the connectivity proxy.
Check Fails from within the Cluster and Fails from outside the Cluster
This indicates that the connectivity proxy itself is indeed having issues. Please perform the following steps:
1. Restart the connectivity proxy pods.
2. Collect logs from the connectivity proxy after the restart completes.
3. Check if the outage is still ongoing:
1. If no, the issue is resolved; proceed with Request Root Cause Analysis (RCA) [page 804].
2. If yes, the issue is not resolved; proceed with Request Help from SAP [page 804].
Check Succeeds from within the Cluster and Succeeds from outside the Cluster
This indicates that the connectivity proxy is currently considered operational; however, it might still have trouble when used in real scenarios (depending on how you detect the outage).
3. Collect logs from the connectivity proxy after the restart completes.
4. Check if outage is still ongoing:
• If no, issue is resolved, proceed with Request Root Cause Analysis (RCA) [page 804].
• If yes, issue is not resolved, proceed with Request Help from SAP [page 804].
If you cannot resolve the issue and require help from SAP, follow this procedure:
1. Open an incident on the support component (see Connectivity Support [page 876]) for the connectivity
proxy (with the appropriate priority and impact stated).
2. Provide all the collected information in the incident, including all the logs, timestamps of the events that
occurred, summary of the taken actions, version of the connectivity proxy you are using, and so on.
3. Engage your SAP contacts to help with this.
4. Continue working in parallel to identify as much information as possible or to find a temporary measure to mitigate the outage.
Once the issue is resolved, the next step is figuring out what exactly caused the issue and if there is something
that can be done to prevent it from happening in the future. Follow this procedure for requesting RCA for the
issue you experienced:
1. Open an incident on the support component for the connectivity proxy (see Connectivity Support [page
876]).
2. Provide all the collected information in the incident, including all the logs, timestamps of the events that
occurred, summary of the taken actions, version of the connectivity proxy you are using, and so on.
3. The incident will be handled according to the SAP incident SLAs.
Answers to the most common questions about the connectivity proxy for Kubernetes.
When using one of the two untrusted operational modes, what is the purpose of the
allowedClientId property? What token should I provide in the Proxy-Authorization header?
The Proxy-Authorization header serves as a way for workloads calling the proxy to authenticate against it. To do this, they need to provide this header with a JSON Web Token (JWT) as its value.
For the JWT to be accepted, it must be issued by the OAuth client specified via allowedClientId. By configuring allowedClientId, you determine which OAuth client protects the proxy endpoints.
• The signature is verified by calling XSUAA to get the public keys for the tenant on behalf of which the JWT is issued, and using those keys to check whether the token is valid.
• The token validity period is also verified, and expired tokens are rejected.
• Only tokens issued by the specified client ID (via the allowedClientId configuration) are accepted.
The connectivity proxy is a distributed software component that needs to connect to an instance of the central
Connectivity service to function.
This pairing is achieved via a connectivity:connectivity_proxy service key, which contains both
routing information and credentials for this pairing.
To preserve the integrity of an on-premise landscape and not expose anything from there to the Internet, the
flow for establishing the connection between connectivity proxy and Cloud Connector is initiated by the Cloud
Connector.
The public endpoint is used by the Cloud Connector to call the connectivity proxy and enable the data
exchange.
What is the relation of the connectivity proxy to the Connectivity service in SAP BTP?
The connectivity proxy is a distributed component that must be paired to an instance of the Connectivity
service in SAP BTP in order to function.
Cloud Connectors still connect to the Connectivity service on SAP BTP, and the connectivity proxy makes use of those Cloud Connectors via the established pairing.
You can use the Destination service to store and retrieve on-premise destination configurations which can then
be used to construct a request to on-premise systems through the connectivity proxy.
Are there any client libraries that I can use with the connectivity proxy?
Can I port a cloud SDK application from the Cloud Foundry environment to Kubernetes and use
the connectivity proxy?
If you set up the connectivity proxy in an untrusted operational mode, the way the SDK works is well suited for
it.
However, since the SDK is created with the Cloud Foundry environment in mind, you would need to simulate
the VCAP_SERVICES environment to get it working.
Is there an equivalent to the lite plan from the Cloud Foundry environment? Is the lite plan
relevant for the connectivity proxy?
The lite plan is only relevant for the Cloud Foundry environment.
There is no lite plan for the connectivity proxy. Instead, you use an XSUAA-based OAuth client of your choice to
protect the connectivity proxy, and use it to issue tokens.
The transparent proxy routes to SAP BTP destinations configured in the Destination service. On-premise
applications must be exposed via Cloud Connector [page 343] (installed in the same network right next to the
on-premise system) and Connectivity Proxy for Kubernetes [page 738] (installed in the Kubernetes cluster).
The transparent proxy is delivered as Docker images and a Helm chart. You need to run the image on your
Kubernetes cluster with appropriate configurations. The Helm chart simplifies the installation process.
Related Information
Use the transparent proxy for Kubernetes in different SAP BTP communication scenarios.
The transparent proxy simplifies the way your Kubernetes workloads consume SAP BTP destinations of type Internet and on-premise. It provides authentication, principal propagation, the SOCKS5 handshake, and easy access to the destination target systems by exposing them as Kubernetes services.
To have a given destination handled by the transparent proxy, you must create a custom Kubernetes resource of type destinations.destination.connectivity.api.sap that references that destination. Destinations for which no such custom resource exists are not handled by the transparent proxy.
Note
There should be no Kubernetes service in the namespace where the transparent proxy
is installed with a name equal to the name of the destination custom resource of type
destination.connectivity.api.sap, because the transparent proxy handles the creation of
Kubernetes services based on the information stored in these custom resources.
Related Information
Use the transparent proxy for Kubernetes to set up connections of type Internet.
The transparent proxy handles the HTTP(s) communication protocol for Internet destinations. As an
application developer, you must create an SAP BTP destination of type HTTP and proxy type Internet, for
example:
Sample Code
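An illustrative destination configuration; the URL and the authentication type below are placeholders chosen for this sketch, not values prescribed by the product:
{
  "Name": "example-dest-client-cert",
  "Type": "HTTP",
  "ProxyType": "Internet",
  "URL": "https://<target-system-host>",
  "Authentication": "ClientCertificateAuthentication"
}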
To target the destination with the name “example-dest-client-cert” for handling by the transparent proxy, you
should create the following Kubernetes resource in a namespace of your choice:
apiVersion: destination.connectivity.api.sap/v1
kind: Destination
metadata:
  name: example-dest
spec:
  destinationRef:
    name: "example-dest-client-cert"
  destinationServiceInstanceName: dest-service-instance-example # can be omitted if config.destinationService.defaultInstanceName is provided
The transparent proxy monitors the available destinations in SAP BTP and compares them
with the existing destinations.destination.connectivity.api.sap Kubernetes resources in
the namespace where it is installed. Once a new SAP BTP destination is created for which a
destinations.destination.connectivity.api.sap resource exists, the transparent proxy changes its
configuration.
After the transparent proxy has successfully executed all necessary operations on the given SAP BTP destination, the status of the destinations.destination.connectivity.api.sap Kubernetes resource is updated as shown below, and the DNS record "example-dest" routes to the URL configured in the destination:
apiVersion: destination.connectivity.api.sap/v1
kind: Destination
metadata:
  name: example-dest
spec:
  destinationRef:
    name: "example-dest-client-cert"
  destinationServiceInstanceName: dest-service-instance-example # can be omitted if config.destinationService.defaultInstanceName is provided
status:
  conditions:
  - lastUpdateTime: "2022-09-28T07:26:46Z"
    message: Transparent Proxy is configured and Kubernetes service with name "example-dest" is created.
    reason: ConfigurationSuccessful
    status: "True"
    type: Available
Once done, the application can start consuming the destination from within the Kubernetes cluster, for
example:
curl example-dest.<destination-cr-namespace>
Note
The namespace is optional if you have created the destination custom resource (CR) in the namespace of
the application that will request it. For more information, see Kubernetes Namespaces and DNS .
HTTP
Property: ProxyType
Description: Internet

Property: URL.socketReadTimeoutInSeconds
Description: Period of time the HTTP client will wait for receiving a response (data) from the server.
Note
If you do not use it, the default value is 30 seconds. If you decide to use it, the total timeout of the initial request may be a bit longer than your chosen value, depending on the latency of calls to the Destination service, which are needed to proxy the request.

Property: URL.headers.<header-key>
Description: Headers added to the request against the target system. For example:
Sample Code
{
  ...
  "URL.headers.<header-key-1>": "<header-value-1>",
  ...
  "URL.headers.<header-key-N>": "<header-value-N>"
}
Caution
If there are duplicate header keys passed from the request and contained in the destination, they are both added to the request against the target system.
Sample Code
{
  /// other destination properties
  "URL.headers.foo": "bar"
}

Property: URL.queries.<query-key>
Description: Query parameters added to the request against the target system. For example:
Sample Code
{
  ...
  "URL.queries.<query-key-1>": "<query-value-1>",
  ...
  "URL.queries.<query-key-N>": "<query-value-N>"
}
Caution
If there are duplicate query keys passed from the request and contained in the destination, they are both added to the request against the target system.
Sample Code
{
  /// other destination properties
  "URL.queries.foo": "bar"
}
Sample Code
curl targetsystem?foo=bar2
// will result in the following final URL: targetsystem?foo=bar2&foo=bar
The transparent proxy handles both HTTP and TCP communication protocols for on-premise destinations. To use the on-premise scenarios, as an application developer you need to install a connectivity proxy and integrate it with the transparent proxy via the following configuration in the values.yaml file:
Sample Code
integration:
  connectivityProxy:
    serviceName: connectivity-proxy.<namespace>
    serviceCredentials:
      secretName: <the secret that contains the connectivity proxy instance credentials>
      secretNamespace: <the secret namespace>
Then, you must create an SAP BTP destination of proxy type OnPremise, for example:
Sample Code
{
"Name": "example-dest-onprem",
"Type": "HTTP",
"ProxyType": "OnPremise",
"URL": "http://virtualhost:4321",
"Authentication": "PrincipalPropagation"
}
To target the destination with name “example-dest-onprem” for handling by the transparent proxy, you should
create the following Kubernetes resource in a namespace of your choice:
apiVersion: destination.connectivity.api.sap/v1
kind: Destination
metadata:
  name: example-dest
spec:
  destinationRef:
    name: "example-dest-onprem"
  destinationServiceInstanceName: dest-service-instance-example # can be omitted if config.destinationService.defaultInstanceName is provided
After the transparent proxy has successfully executed all necessary operations on the given SAP BTP destination, the status of the destinations.destination.connectivity.api.sap Kubernetes resource is updated as shown below, and the DNS record "example-dest" routes to the URL configured in the destination:
apiVersion: destination.connectivity.api.sap/v1
kind: Destination
metadata:
  name: example-dest
spec:
  destinationRef:
    name: "example-dest-onprem"
  destinationServiceInstanceName: dest-service-instance-example # can be omitted if config.destinationService.defaultInstanceName is provided
status:
  conditions:
  - lastUpdateTime: "2022-09-28T07:26:46Z"
    message: Transparent Proxy is configured and Kubernetes service with name "example-dest" is created.
    reason: ConfigurationSuccessful
    status: "True"
    type: Available
Once done, the application can start consuming the destination from within the Kubernetes cluster, for
example:
Sample Code
curl example-dest.<destination-cr-namespace>
Note
The namespace is optional if you have created the destination custom resource (CR) in the namespace of
the application that will request it. For more information, see Kubernetes Namespaces and DNS .
HTTP
Property: URL.socketReadTimeoutInSeconds
Description: Period of time the HTTP client will wait for receiving a response (data) from the server.
Note
If you do not use it, the default value is 30 seconds. If you decide to use it, the total timeout of the initial request may be a bit longer than your chosen value, depending on the latency of calls to the Destination service, which are needed to proxy the request.

Property: URL.headers.<header-key>
Description: Headers added to the request against the target system. For example:
Sample Code
{
  ...
  "URL.headers.<header-key-1>": "<header-value-1>",
  ...
  "URL.headers.<header-key-N>": "<header-value-N>"
}
Caution
If there are duplicate header keys passed from the request and contained in the destination, they are both added to the request against the target system.
Sample Code
{
  /// other destination properties
  "URL.headers.foo": "bar"
}

Property: URL.queries.<query-key>
Description: Query parameters added to the request against the target system. For example:
Sample Code
{
  ...
  "URL.queries.<query-key-1>": "<query-value-1>",
  ...
  "URL.queries.<query-key-N>": "<query-value-N>"
}
Caution
If there are duplicate query keys passed from the request and contained in the destination, they are both added to the request against the target system.
Sample Code
{
  /// other destination properties
  "URL.queries.foo": "bar"
}
Sample Code
curl targetsystem?foo=bar2
// will result in the following final URL: targetsystem?foo=bar2&foo=bar
TCP
Property: ProxyType
Description: OnPremise

SOCKS5
The connectivity proxy provides a SOCKS5 proxy that you can use to access on-premise systems via TCP-based protocols. SOCKS5 is the industry standard for proxying TCP-based traffic. For more information, see RFC 1928.
The transparent proxy performs the SOCKS5 handshake with the Connectivity Proxy [page 738] to enable the
out-of-the-box consumption of that (otherwise complex) functionality for the application developer.
For more information on how to directly set up the usage of the TCP protocol for cloud applications, see Using
the TCP Protocol for Cloud Applications [page 234].
You can use the “Find Destination” API to extend your destination with a destination fragment.
For more information, see Extending Destinations with Fragments [page 273].
Sample Code
{
  "FragmentName": "example-fragment",
  "URL": "https://myotherapp.com"
}
Sample Code
apiVersion: destination.connectivity.api.sap/v1
kind: Destination
metadata:
name: example-dest
spec:
Note
Static reference of a fragment is only applicable for a destination custom resource that references a
particular destination. The Dynamic Lookup of Destinations [page 873] feature is not compatible with
this approach.
• Dynamically passing a fragment as an HTTP header is only compatible with the Dynamic Lookup of
Destinations [page 873] approach. You can check the examples given there.
The transparent proxy enriches your request by adding the credentials configured in the destination
configuration. Depending on the configured authentication type, the transparent proxy propagates them via
an Authorization or SAP-Connectivity-Authentication header. For more information, see Configuring
Authentication [page 87].
• For all authentication types, except ClientCertificateAuthentication, the transparent proxy enriches the
request by adding the necessary Authorization header to the request (extracting the authTokens
section from the Destination service response).
For more information on the fields in authTokens, see "Find Destination" Response Structure [page 259].
Client Assertion
Client assertion OAuth flows are supported, and the transparent proxy propagates the required headers to the
Destination service. For more information, see: Using Client Assertion with OAuth Flows [page 152].
ClientCertificateAuthentication
Note
For this authentication type, the transparent proxy manages the client certificate addition (client certificates
and CA certificates) by getting them from the destination and adding them to the HTTP client that is used to
establish the communication between client and server.
• Supported cert store types are p12, pfx, pem, der, cer, crt.
• Only RSA PKCS#1 keys (starting with the header RSA PRIVATE KEY and having headers with encryption information) are currently supported for decryption from PEM format. The following encryption algorithms are supported:
• DES-CBC
• DES-EDE3-CBC
• AES-128-CBC
• AES-192-CBC
• AES-256-CBC
All key types should be supported if no encryption is present.
• KeyStore.Source=ClientProvided is not supported as the transparent proxy can manage only
certificates stored in the Destination service.
Principal Propagation
Principal propagation, also known as user propagation, lets you perform single sign-on (SSO) of the cloud user
towards an on-premise system.
The cloud user's identity is passed to the transparent proxy as a token represented by a JSON Web token
(JWT) via Authorization header of scheme Bearer. It is forwarded via the transparent proxy to the
connectivity proxy and then to the Cloud Connector, which validates and further processes it to establish
SSO with the on-premise system. The user is propagated via SAP-Connectivity-Authentication header.
The transparent proxy can operate in all namespaces in the cluster, or only in namespaces labeled in
the following way: transparent-proxy.connectivity.api.sap/namespace:<namespace where Transparent Proxy is
installed>.
For this feature, you can set the Helm property config.managedNamespacesMode to all or
labelSelector.
The default value is all. This means that the proxy operates with destination custom resources across all
namespaces in the cluster.
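For example, in labelSelector mode, an application namespace could be labeled as follows (a sketch; both namespace names are placeholders):
kubectl label namespace <application-namespace> \
    transparent-proxy.connectivity.api.sap/namespace=<transparent-proxy-namespace>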
1.4.2 Multitenancy
The transparent proxy works in two tenant modes: dedicated and shared. You can change the mode by modifying the Helm property config.tenantMode.
• In the dedicated tenant mode, the transparent proxy can expose destinations only from the subaccount passed in the service key of the Destination service. Requests to the transparent proxy remain unchanged.
• In the shared tenant mode, the transparent proxy can expose destinations from any tenant subscribed to the subaccount passed in the service key of the Destination service.
When requesting a destination through the transparent proxy in shared tenant mode, you must pass the X-Tenant-Subdomain: <tenant-subdomain> or X-Tenant-Id: <tenant-id> header, as shown in the sketch below.
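A sketch of such a request, reusing the example-dest custom resource from the previous sections and the tenant subdomain header; the namespace and subdomain are placeholders:
curl example-dest.<destination-cr-namespace> \
    -H "X-Tenant-Subdomain: <tenant-subdomain>"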
A tenant is successfully onboarded when it is visible in the destination custom resource status. The onboarding
of a tenant is dynamic and happens during the first request on behalf of that tenant.
TCP
Tenants are configured in the destination custom resource (CR) as an annotation with the key "transparent-proxy.connectivity.api.sap/tenant-subdomains" and, as value, all tenant subdomains in the form of a JSON array:
"transparent-proxy.connectivity.api.sap/tenant-subdomains": '["tenantSubdomain1", "tenantSubdomain2", ...]'
For each tenant, a new TCP proxy instance is created, which is assigned only to that specific tenant and destination. Also, a separate service is created with a prefix containing the tenant subdomain (for example, if we have the tenant subdomain "tenant1" and the destination CR "dest", the service name will be "tenant1-dest").
Define the access control scope of the destination custom resources for the transparent proxy for Kubernetes.
The scoping of access to destinations exposed via destination custom resources is based on the network
policies concept in Kubernetes.
Prerequisites
Network policies are implemented by a CNI Plugin. To apply network policies, you must use a networking
solution that supports the NetworkPolicy Kubernetes resource.
Note
Creating a NetworkPolicy resource without having a corresponding controller running in the cluster will have no effect. In this case, access scoping will not work.
• If .Values.config.security.accessControl.destinations.defaultScope is set to
namespaced, the destinations exposed via destination custom resources that were created in a
namespace, are accessible only by the applications running in this namespace of the destination custom
resource.
• If .Values.config.security.accessControl.destinations.defaultScope is set to
clusterWide, the destinations exposed via destination custom resources are accessible from any
namespace in the cluster.
• The default value is namespaced.
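A sketch of the corresponding values.yaml setting, derived from the property path given above:
config:
  security:
    accessControl:
      destinations:
        defaultScope: namespaced # or clusterWide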
Integrate the transparent proxy with other SAP BTP Connectivity services.
• Destination Service Integration [page 824]: Integrate the transparent proxy for Kubernetes with the SAP BTP Destination service.
• Connectivity Proxy Integration [page 825]: Integrate the transparent proxy with the connectivity proxy for Kubernetes.
Integrate the transparent proxy for Kubernetes with the SAP BTP Destination service.
The Destination service lets you declaratively model a technical connection as a destination configuration, and
via its REST APIs find the destination configuration that is required to access a remote service or system from
your Kubernetes workload. To integrate the transparent proxy with the Destination service, you must:
Integrate the transparent proxy with the connectivity proxy for Kubernetes.
The connectivity proxy is a Kubernetes component that connects workloads running on a Kubernetes cluster
to on-premise systems exposed via the Cloud Connector [page 343]. The transparent proxy itself doesn’t
provide direct connections to on-premise systems, that is, the connectivity proxy integration is mandatory
for the transparent proxy to route to on-premise destinations. To integrate the transparent proxy with the
connectivity proxy, you must:
1. Install a connectivity proxy instance or reuse an existing one. It must be installed in the same Kubernetes
cluster where the transparent proxy is installed.
For more information, see Lifecycle Management [page 770].
2. Follow the instructions under Lifecycle Management [page 825] and set up
the values.yaml for integrating with the connectivity proxy, providing the
config.integration.connectivityProxy.serviceName.
For more information, see Configuration Guide [page 829].
3. If the connectivity proxy runs in multi-region mode, you can link a transparent proxy configuration for a Destination service instance with a connectivity proxy local region configuration. This can be done at design time by adding an association. This way, you do not need to provide the HTTP header SAP-Connectivity-Region-Configuration-Id on each request; the transparent proxy automatically passes it to the connectivity proxy:
config:
  integration:
    destinationService:
      instances:
        - name: <local-instance-name>
          serviceCredentials:
            secretKey: <secret-key>
            secretName: <secret-name>
            secretNamespace: <secret-namespace>
          associateWith:
            connectivityProxy:
              locallyConfiguredRegionId: <locally-configured-conn-proxy-region-id>
Related Information
Use the Helm chart to configure and manage the lifecycle of the transparent proxy.
The transparent proxy delivery includes a Helm chart that you can use for lifecycle management. The Helm chart allows full configuration via the standard Helm method of a values.yaml file.
The latest Helm chart version is 1.5.0 for both RBSC and DockerHub.
Registry: 73554900100900006891.helmsrv.cdn.repositories.cloud.sap
Tag: 1.5.0
Deploy (DockerHub)
To deploy the transparent proxy on a Kubernetes cluster, execute the following steps:
1. Add the Helm repository and pull the Helm chart from DockerHub. You can also install it directly with one command (see the sketch after this procedure).
For additional information about using OCI registries with Helm, see Helm docs.
2. (Optional) Prepare the Docker image configuration for the repository authentication if you want to use a registry different from docker.io:
• Create a Docker registry secret (see the sketch after this procedure).
Note
For x.509-based service keys with self-signed certificates, prepare a Kubernetes secret holding the private key for the certificate.
If you are using another secret name or internal key in the secret, provide the required parameters (config.integration.destinationService.instances[n].serviceCredentials.privateKey.secretName and config.integration.destinationService.instances[n].serviceCredentials.privateKey.secretInternalKey) in the values.yaml.
4. Fill all other properties in values.yaml for your scenario, as described in Configuration Guide [page
829].
Caution
Do not extract the archive and modify the default values.yml file included there. Instead, use your
own values.yml and only include fields that should be overridden.
By default, mTLS encryption is enabled using cert-manager . You should check the
Configuration Guide [page 829] to modify the configuration settings according to your setup,
for example, referencing your cert-manager Issuer or ClusterIssuer.
Tip
Encryption for the transparent proxy components can also be disabled for test purposes or if you
have implemented your own mTLS sidecar solution, for example, Istio.
Note
In case of installation in a Kyma cluster, if you want to enable istio for the transparent proxy workloads,
you need to label the namespace where the transparent proxy is installed with:
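A sketch of steps 1 and 2, assuming the chart is consumed from an OCI registry; the registry path, release name, and namespace are placeholders, not official values:
# Step 1: install the transparent proxy Helm chart directly from an OCI registry
helm install transparent-proxy oci://<registry>/<repository> \
    --version 1.5.0 \
    --namespace <transparent-proxy-namespace> --create-namespace \
    -f values.yaml

# Step 2 (optional): create a Docker registry secret for a registry other than docker.io
kubectl create secret docker-registry <pull-secret-name> \
    --namespace <transparent-proxy-namespace> \
    --docker-server=<registry> \
    --docker-username=<user> \
    --docker-password=<password>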
Caution
Changing any of the values.yaml configurations using helm upgrade may result in the restart of some or
all transparent proxy components.
When you have a transparent proxy deployed on the cluster, you may want to maintain it by changing
configurations and/or changing its version. To do so, follow these steps:
1. Get the transparent proxy Helm chart. It can be the same version as the one currently installed, or a different version that you want to upgrade or downgrade to.
2. Prepare the values.yaml file for your scenario, as described in Configuration [page 830]. You can simply modify the one you used previously by applying the changes you desire.
3. Use the Helm CLI to upgrade the transparent proxy (see the sketch below).
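A sketch, reusing the placeholder release name and namespace from the installation sketch above:
helm upgrade transparent-proxy oci://<registry>/<repository> \
    --version <target-version> \
    --namespace <transparent-proxy-namespace> \
    -f values.yaml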
If you need to remove the transparent proxy from your cluster or from a namespace, you can delete all resources installed by Helm, except for the Custom Resource Definition (CRD) of type destinations.destination.connectivity.api.sap, using the Helm CLI (see the sketch below).
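A sketch, again with the placeholder release name and namespace:
helm uninstall transparent-proxy --namespace <transparent-proxy-namespace>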
To deploy, update, or undeploy the transparent proxy on a Kubernetes cluster via Landscaper, see the Guided Tour with Landscaper. You will need an LAAS instance, which you can request from the corresponding team, or you can launch a local one using the Landscaper CLI.
Use the parameters below to configure the transparent proxy for Kubernetes.
Caution
Changing any of the properties below using helm upgrade may result in the restart of some or all
transparent proxy components.
Note
[n] in the table below means that the described property is part of an array.
General notes on the parameters:
• Timeouts for requests with ProxyType=OnPremise through the connectivity proxy accept values in the range [1, 60] seconds. If the configuration is changed, it triggers a restart.
• If "labelSelector" is set, the transparent proxy operates only in namespaces labeled with transparent-proxy.connectivity.api.sap/namespace:<namespace where the transparent proxy is installed>.
• Tip: We recommend that you do not set this value higher than 5 if there are frequent updates on destinations.
• mTLS should be disabled only in test environments or if you implement your own mTLS solution, for example Istio. Make sure you install the cert-manager in advance; the related properties are only applicable for cert-manager.io (https://cert-manager.io/docs/).
• You cannot enable both horizontal and vertical autoscaling for the HTTP proxy: if you want to use horizontal autoscaling, deployment.autoscaling.http.vertical.enabled must not be set to true, and if you want to use vertical autoscaling, deployment.autoscaling.http.horizontal.enabled must not be set to true. The same applies to the TCP proxy (deployment.autoscaling.tcp.horizontal.enabled and deployment.autoscaling.tcp.vertical.enabled).
• There are 2 containers in each HTTP proxy pod.
• By default, there is no minimum and no maximum.
If you want to pull docker images from the SAP Repository-Based Shipment Channel (RBSC), use the following
configuration:
• deployment.image.registry =
73554900100900006891.dockersrv.cdn.repositories.cloud.sap
• deployment.image.repository = com.sap.core.connectivity.transparent-proxy
• deployment.image.pullSecret = See Lifecycle Management [page 825].
When installing the transparent proxy for Kubernetes, the first thing you need to decide is the sizing of the installation.
Overview
To get the most out of the transparent proxy, you must configure it in a suitable way, fitting the scenarios in which it plays a role.
The following table gives basic sizing guidance. The values listed in the CPU and Memory columns correspond to the properties that should be defined in the values.yaml. For more information, see Configuration Guide [page 829]:
HTTP proxy:
S: The expected load is small - request concurrency and size are low.
• deployment.resources.http.requests.cpu: 0.2
• deployment.resources.http.limits.cpu: 0.4
• deployment.resources.http.requests.memory: 256M
• deployment.resources.http.limits.memory: 512M
M: The expected load is medium - request concurrency and size are medium.
• deployment.resources.http.requests.cpu: 0.4
• deployment.resources.http.limits.cpu: 0.8
• deployment.resources.http.requests.memory: 512M
• deployment.resources.http.limits.memory: 1024M
L: The expected load is large - request concurrency and size are medium or high.
• deployment.resources.http.requests.cpu: 0.8
• deployment.resources.http.limits.cpu: 1.6
• deployment.resources.http.requests.memory: 1024M
• deployment.resources.http.limits.memory: 2048M

TCP proxy:
S: The expected load is small - request concurrency and size are low.
• deployment.resources.tcp.requests.cpu: 0.05
• deployment.resources.tcp.limits.cpu: 0.1
• deployment.resources.tcp.requests.memory: 32M
• deployment.resources.tcp.limits.memory: 64M
M: The expected load is medium - request concurrency and size are medium.
• deployment.resources.tcp.requests.cpu: 0.1
• deployment.resources.tcp.limits.cpu: 0.2
• deployment.resources.tcp.requests.memory: 64M
• deployment.resources.tcp.limits.memory: 128M
L: The expected load is large - request concurrency and size are medium or high.
• deployment.resources.tcp.requests.cpu: 0.2
• deployment.resources.tcp.limits.cpu: 0.4
• deployment.resources.tcp.requests.memory: 128M
• deployment.resources.tcp.limits.memory: 256M
The above-mentioned sizing recommendations for the TCP proxies are related to the connectivity proxy
software component, which also acts as a tunnel server to the on-premise systems it connects to. This
means that the connectivity proxy has to be properly sized as well, see Sizing Recommendations [page
790] (connectivity proxy for Kubernetes).
Note
These sizing recommendations are just a starting point. There are many factors that affect the performance offered by the transparent proxy, related to the specifics of your concrete scenarios, expected regular and intermittent load, and so on.
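For example, the M size for the HTTP proxy from the table above corresponds to the following values.yaml fragment; this is a sketch derived from the property paths listed in the table, so verify it against the Configuration Guide [page 829]:
deployment:
  resources:
    http:
      requests:
        cpu: 0.4
        memory: 512M
      limits:
        cpu: 0.8
        memory: 1024M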
1.4.6 Monitoring
Check the availability, status, and destination custom resources of the transparent proxy for Kubernetes.
Availability Monitoring
The availability check of the transparent proxy is the minimal verification that can be done to ensure that the
transparent proxy is alive and working. This is done by a health check application deployed to the cluster during
installation of the transparent proxy.
Status Check
• If there are HTTP custom resources defined → is the sap-transp-proxy-http alive and running?
• If there are TCP custom resources defined → are all the sap-transp-proxy-tcp alive and running?
• Is the sap-transp-proxy-manager alive and running?
The overall status provided by the /status endpoint is formed in the following way:
The /destinationCRs check provides information for all destination custom resources in the namespace
where the transparent proxy is installed.
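A sketch of querying these endpoints from within the cluster; the health check service host is a placeholder, as its actual name depends on your installation:
# Overall status of the transparent proxy components
curl http://<transparent-proxy-health-check-service>.<namespace>/status

# Information for all destination custom resources in the installation namespace
curl http://<transparent-proxy-health-check-service>.<namespace>/destinationCRs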
1.4.7 Troubleshooting
The transparent proxy consists of a transparent proxy manager, transparent HTTP proxy, and multiple
transparent TCP proxy instances.
5. To get the logs of the transparent proxy operator (installed only when Transparent Proxy is enabled as a
Kyma Module in the Kyma environment [page 867]) execute:
• transparent-proxy-manager.log
can be used for investigation purposes. For more information, see Recommended Actions [page 861].
When the default logging level is not sufficient for debugging the issue you are facing, you can change the log
level to get more insight about the problem.
Changing a log level is done without any downtime and requires no restarts. All you need to do is invoke a
simple command on the pod of a transparent proxy component where you need to gain more insight. Here are
some examples:
4. To change the log level of the transparent proxy health check, execute:
5. To change the log level of the transparent proxy operator (installed only when Transparent Proxy is enabled
as a Kyma Module in the Kyma environment [page 867]) execute:
When using kubectl, you can monitor CPU and memory of the transparent proxy's pods out of the box (see the sketch below). For more information on using kubectl, see Interacting with running Pods.
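A minimal sketch, assuming the transparent proxy is installed in namespace <namespace> and that the resource metrics pipeline (metrics-server) is available in the cluster:
kubectl top pods -n <namespace>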
Issue Solution
The call to the Kubernetes service referencing your desti- Make sure the URL configured in the destination is valid.
nation returns 404. Wait for the next transparent proxy manager completion.
The call to the Kubernetes service referencing your desti- Check your request headers or OAuth Configuration fields
nation returns response with status code 400. Check also in the referenced Destination. Example for 'authorization'
'x-error-message', 'x-error-origin' and 'x-internal-error-code' header of scheme - 'Authorization: Bearer <base64-en-
response headers for more concrete information. coded-token>'
The call to the Kubernetes service referencing your desti- Make sure you add a valid 'x-token-service-tenant' request
nation returns response with status code 400 and has re- header to each destination that has 'tokenServiceURLType'
sponse header 'x-error-message' with value ''x-token-serv- property of type Common.
ice-tenant‘ request header is missing. It is required when
'tokenServiceURLType' property in the destination is of type
Common.' .
The call to the Kubernetes service referencing your desti- Make sure you pass either ‘x-client-assertion-destination-
nation returns response with status code 400 and has re- name‘ or ‘x-client-assertion‘ and ‘x-client-assertion-type‘ re-
sponse header 'x-error-message' with value 'Cannot mix quest headers and not the combination of all of them.
header ‘x-client-assertion-destination-name‘ with headers
‘x-client-assertion‘ and ‘x-client-assertion-type‘.' .
The call to the Kubernetes service referencing your des- Make sure you pass a valid 'authorization' header value that
tination returns response with status code 400 and has is appropriate for the referenced destination authentication
response header 'x-error-message' with value 'Destination type.
service returned unsuccessful status code. Check your re-
lated request headers.' and 'Destination service' value in the
'x-error-origin' header.
The call to the Kubernetes service referencing your des- Make sure you pass either x-tenant-id or x-tenant-subdo-
tination returns response with status code 400 and has main header in shared mode.
response header 'x-error-message' with value 'In multi-ten-
ant operational mode, either ‘x-tenant-id‘ or ‘x-tenant-sub-
domain‘ header is mandatory to be provided.'.
The call to the Kubernetes service referencing your desti- Make sure you pass either x-tenant-id or x-tenant-subdo-
nation returns response with status code 400 and has re- main header but not both at the same time in shared mode.
sponse header 'x-error-message' with value 'Both ‘x-tenant-
id‘ and ‘x-tenant-subdomain‘ headers are passed. Only one
should be provided.'.
The call to the Kubernetes service referencing your desti- In case the spec.destinationRef.name property is "*" the 'x-
nation returns response with status code 400 and has re- destination-name' header is mandatory provide reference to
sponse header 'x-error-message' with value ''x-destination- a valid destination.
name' header is missing but required for an endpoint expos-
ing multiple destinations via the same Destination CR.'.
The call to the Kubernetes service referencing your desti- The provided header value is wrong since the Destination
nation returns response with status code 400 and has re- Service allow maximum length for a destination name to be
sponse header 'x-error-message' with value 'The value of the 200 symbols.
header 'x-destination-name' exceeds the maximum allowed
length of 200'.
The call to the Kubernetes service referencing your desti- The provided header value for 'x-destination-name' has inva-
nation returns response with status code 400 and has re- lid characters. Check the name restrictions in the Destina-
sponse header 'x-error-message' with value 'The value of tion service documentation.
the header 'x-destination-name' contains characters that are
not allowed. Check the destination name restrictions in the
Destination service documentation.'.
The call to the Kubernetes service referencing your desti- The provided header value is wrong since the Destination
nation returns response with status code 400 and has re- Service allow maximum length for a fragment name to be
sponse header 'x-error-message' with value 'The value of the 200 symbols.
header 'x-fragment-name' exceeds the maximum allowed
length of 200'.
The call to the Kubernetes service referencing your desti- The provided header value for 'x-destination-name' has inva-
nation returns response with status code 400 and has re- lid characters. Check the name restrictions in the Destina-
sponse header 'x-error-message' with value 'The value of tion service documentation.
the header 'x-fragment-name' contains characters that are
not allowed. Check the destination name restrictions in the
Destination service documentation.'.
The call to the Kubernetes service referencing your desti- The Destination Custom Resource is bound to a specific
nation returns response with status code 400 and has re- destination via spec.destinationRef.name property. Either
sponse header 'x-error-message' with value 'The requested change the spec.destinationRef.name property in the CR to
endpoint is bound to a concrete destination via the respec- "*" or do not pass the 'x-destination-name' header.
tive Destination CR. 'x-destination-name' header is not sup-
ported for this endpoint. Contact the Transparent Proxy Ad-
ministrator.'.
Issue: The call to the Kubernetes service referencing your destination returns response with status code 400 and has response header 'x-error-message' with value 'The requested endpoint is bound to a concrete destination via the respective Destination CR. 'x-fragment-name' header is not supported for this endpoint. Contact the Transparent Proxy Administrator.'.
Solution: The Destination Custom Resource is bound to a specific destination via the spec.destinationRef.name property. Either change the spec.destinationRef.name property in the CR to "*" or set spec.fragmentRef.name to the desired fragment.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 502 and has response header 'x-error-message' with value 'Destination '<name>' referred in Destination CR '<name>' is not found in tenant '<tenant-subdomain>'. Inspect the Destination CR or contact the Destination Administrator.' and 'Destination service' value in the 'x-error-origin' header.
Solution: Check if your referenced destination exists in the specified Destination Service instance or contact the Destination Administrator.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 502 and has response header 'x-error-message' with value 'Destination '<name>' with '<fragmentName>' referred in Destination CR '<name>' is not found in tenant '<tenant-subdomain>'. Inspect the Destination CR or contact the Destination Administrator.' and 'Destination service' value in the 'x-error-origin' header.
Solution: Check if your referenced destination/fragment exists in the specified Destination Service instance or contact the Destination Administrator.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 502 and has response header 'x-error-message' with value 'Received invalid configuration from Destination Service, requested via destination '<destinationName>'.' and 'Destination service' value in the 'x-error-origin' header.
Solution: There could be an issue with the configuration fields of the destination. Contact the Destination Administrator.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 502 and has response header 'x-error-message' with value 'Received invalid configuration from Destination Service, requested via destination '<destinationName>' and fragment '<fragmentName>'.' and 'Destination service' value in the 'x-error-origin' header.
Solution: There could be an issue with the configuration fields of the destination/fragment. Contact the Destination Administrator.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 502 and has response header 'x-error-message' with value 'Invalid OAuth configuration in destination '<name>' referred in Destination CR '<name>'. Contact the Destination Administrator.' and 'Destination service' value in the 'x-error-origin' header.
Solution: Check Auth Configuration fields or your request headers in the referenced destination.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 502 and has response header 'x-error-message' with value 'Invalid OAuth configuration in destination '<name>' with fragment '<fragmentName>' referred in Destination CR '<name>'. Contact the Destination Administrator.' and 'Destination service' value in the 'x-error-origin' header.
Solution: Check Auth Configuration fields or your request headers in the referenced destination/fragment.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 502 and has response header 'x-error-message' with value 'On-Premise technical connectivity is not configured. There is no value provided for connectivity proxy service name.' and 'Transparent Proxy' value in the 'x-error-origin' header.
Solution: Make sure you provide the correct Connectivity proxy serviceName in the Helm configuration.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 502 and has response header 'x-error-message' with value 'The Destination service instance name in the Destination CR '%s' is not configured as expected.' and 'Transparent Proxy' value in the 'x-error-origin' header.
Solution: Check if you pass a valid Destination Service instance in the Destination CR or contact the Kubernetes cluster administrator to check the Helm configuration.
Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedDestination-NotFound".
Solution: Create a destination with the name "testdest". Wait for the next Transparent Proxy Manager completion.

Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice" and references a fragment called "testfragment". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedFragment-NotFound".
Solution: Create a fragment with the name "testfragment". Wait for the next Transparent Proxy Manager completion.
Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedDestination-TypeNotSupported".
Solution: Update the destination with the name "testdest" Type field with supported values: "HTTP", "TCP". Wait for the next Transparent Proxy Manager completion.

Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedDestination-ProxyTypeNotSupported".
Solution: Update the destination with the name "testdest" ProxyType field with supported values: "OnPremise", "Internet". Wait for the next Transparent Proxy Manager completion.

Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice" and references a fragment called "testfragment". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedDestinationWithFragment-ProxyTypeNotSupported".
Solution: Update the destination with the name "testdest" or the fragment with the name "testfragment" ProxyType field with supported values: "OnPremise", "Internet". Wait for the next Transparent Proxy Manager completion.

Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedDestination-UrlNotSupported".
Solution: Update the destination with the name "testdest" URL field with supported values: it should not be empty; if the type is HTTP, the URL should start with "http://" or "https://" or be given without a protocol. Wait for the next Transparent Proxy Manager completion.

Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice" and references a fragment called "testfragment". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedDestinationWithFragment-UrlNotSupported".
Solution: Update the destination with the name "testdest" or the fragment with the name "testfragment" URL field with supported values: it should not be empty; if the type is HTTP, the URL should start with "http://" or "https://" or be given without a protocol. Wait for the next Transparent Proxy Manager completion.

Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice". The destination is of type TCP and ProxyType OnPremise, and there is no k8s service with that name created. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedDestination-TcpAddressNotSupported".
Solution: Update the destination with the name "testdest" Address field with supported values: it should not be empty; it should start with "tcp://" or be given without a protocol. Wait for the next Transparent Proxy Manager completion.

Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice" and references a fragment called "testfragment". The destination is of type TCP and ProxyType OnPremise, and there is no k8s service with that name created. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedDestinationWithFragment-TcpAddressNotSupported".
Solution: Update the destination with the name "testdest" or the fragment with the name "testfragment" Address field with supported values: it should not be empty; it should start with "tcp://" or be given without a protocol. Wait for the next Transparent Proxy Manager completion.
Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedDestination-LoopbackUrlNotSupported".
Solution: Update the destination with the name "testdest" URL field with supported values: it should not be empty; the URL should not be loopback (loopback URLs are those which contain exactly one of the following hosts: "localhost", "127.0.0.1", "::1", "0:0:0:0:0:0:0:1"). Wait for the next Transparent Proxy Manager completion.

Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice" and references a fragment called "testfragment". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedDestinationWithFragment-LoopbackUrlNotSupported".
Solution: Update the destination with the name "testdest" or the fragment with the name "testfragment" URL field with supported values: it should not be empty; the URL should not be loopback (loopback URLs are those which contain exactly one of the following hosts: "localhost", "127.0.0.1", "::1", "0:0:0:0:0:0:0:1"). Wait for the next Transparent Proxy Manager completion.

Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedDestination-KeyStoreInvalid".
Solution: Update the destination with the name "testdest" KeyStoreLocation field with supported values. Supported cert store types are p12, pfx, pem, der, cer, crt.

Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedDestination-TrustStoreInvalid".
Solution: Update the destination with the name "testdest" TrustStoreLocation field with supported values. Supported cert store types are p12, pfx, pem, der, cer, crt.
Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedDestination-HttpUrlWithTrustStoreOrKeyStoreNotSupported".
Solution: Update the destination with the name "testdest" so that it does not have a URL starting with "http://" together with a TrustStoreLocation or KeyStoreLocation at the same time.

Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice" and references a fragment called "testfragment". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ReferencedDestination-HttpUrlWithTrustStoreOrKeyStoreNotSupported".
Solution: Update the destination with the name "testdest" or the fragment with the name "testfragment" so that it does not have a URL starting with "http://" together with a TrustStoreLocation or KeyStoreLocation at the same time.

Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationFailed".
Solution: Wait for the next Transparent Proxy Manager completion. If the issue remains, see Recommended Actions [page 861].
Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice" and is a dynamic custom resource, since its spec.destinationRef.name refers to "*". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-FragmentSpecification-NotSupported" and the message "Technical connectivity is not configured. Static declaration of a fragment is not supported for dynamic destination custom resources."
Solution: A dynamic custom resource should not have statically declared fragments in spec.fragmentRef.name. The fragment name should be provided only through the 'x-fragment-name' header.

Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice". The call to the "testservice" Kubernetes service returns an unsuccessful status code or the Kubernetes service does not exist. The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ServiceCreationFailed".
Solution: Wait for the next Transparent Proxy Manager completion. If the issue remains, see Recommended Actions [page 861].

Issue: The call to the Kubernetes service referencing your destination returns an unsuccessful status code or the Kubernetes service does not exist.
Solution: Wait for the next Transparent Proxy Manager completion. If the issue remains, see Recommended Actions [page 861].

Issue: The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-DuplicateDestinationCustomResource". There is a destinations.destination.connectivity.api.sap custom resource with the same name in another namespace.
Solution: Change the name of your destinations.destination.connectivity.api.sap custom resource or remove the other one if you have control of it.

Issue: The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ProxyTenantModeMismatch" and the message "Technical connectivity is not configured. Connectivity Proxy tenant mode \"dedicated\" is not compatible with Transparent Proxy tenant mode \"shared\"".
Solution: The Connectivity proxy should be configured with the same tenant mode as the Transparent proxy.

Issue: The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-ProxyTenantModeMismatch" and the message "Technical connectivity is not configured. Connectivity Proxy tenant mode \"shared\" is not compatible with Transparent Proxy tenant mode \"dedicated\"".
Solution: The Connectivity proxy should be configured with the same tenant mode as the Transparent proxy.

Issue: The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-MultitenancyAnnotation-Invalid" and the message "Technical connectivity is not configured. \"transparent-proxy.connectivity.api.sap/tenant-subdomains\" annotation is invalid".
Solution: Use proper JSON array formatting when specifying the TCP tenants in your "transparent-proxy.connectivity.api.sap/tenant-subdomains" annotation.

Issue: The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "FailedToApplyConfiguration" and the message "Configuration failed for tenants: [\"Tenant1\", \"Tenant2\"...]", where Tenant1, Tenant2, etc. are tenant subdomains specified in the "transparent-proxy.connectivity.api.sap/tenant-subdomains" annotation.
Solution: Check the transparent proxy manager for error logs. The tenants Tenant1, Tenant2 could not be onboarded because the specified destination is not found in the destination service for the given tenants, is not valid in the destination service, or creating k8s components for the given destination/tenant failed.
Issue: The call to the Kubernetes service referencing your destination returns response with status code 500, response header 'x-error-message' with value "Tenant subdomain '<tenantSubdomain>' could not be determined as valid tenant for Destination service instance with name '<destinationServiceInstanceName>' as provided in the Transparent Proxy deployment configuration. Contact the Tenant administrator.", response header 'x-error-origin' with value "XSUAA", and response header 'x-internal-error-code' with value "404".
Solution: The tenant passed in x-tenant-subdomain is not found. Make sure you pass an existing tenant in x-tenant-subdomain.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 500, response header 'x-error-message' with value "Tenant id '<tenantSubdomain>' could not be determined as valid tenant for Destination service instance with name '<destinationServiceInstanceName>' as provided in the Transparent Proxy deployment configuration. Contact the Tenant administrator.", response header 'x-error-origin' with value "XSUAA", and response header 'x-internal-error-code' with value "404".
Solution: The tenant passed in x-tenant-id is not found. Make sure you pass an existing tenant in x-tenant-id.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 500, response header 'x-error-message' with value "Tenant subdomain '<tenantSubdomain>' is not subscribed to respective provider application using Destination service instance with name '<destinationServiceInstanceName>', as provided in the Transparent Proxy deployment configuration, under provider tenant subdomain '<providerTenantSubdomain>'. Contact the Destination Administrator.", response header 'x-error-origin' with value "XSUAA", and response header 'x-internal-error-code' with value "401".
Solution: The tenant passed in x-tenant-subdomain is not subscribed to the provider. Make sure you pass a tenant that is subscribed to the provider in x-tenant-subdomain.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 500, response header 'x-error-message' with value "Tenant id '<tenantId>' is not subscribed to respective provider application using Destination service instance with name '<destinationServiceInstanceName>', as provided in the Transparent Proxy deployment configuration, under provider tenant subdomain '<providerTenantSubdomain>'. Contact the Destination Administrator.", response header 'x-error-origin' with value "XSUAA", and response header 'x-internal-error-code' with value "401".
Solution: The tenant passed in x-tenant-id is not subscribed to the provider. Make sure you pass a tenant that is subscribed to the provider in x-tenant-id.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 502, response header 'x-error-message' with value "Destination '<destinationName>' referred in Destination CR '<crName>' is not found in tenant '<tenantName>'. Inspect the Destination CR or contact the Destination Administrator.", response header 'x-error-origin' with value "Destination Service", and response header 'x-internal-error-code' with value "404".
Solution: The referenced destination is not found in the destination service for the given tenant.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 502, response header 'x-error-message' with value "Destination '<destinationName>' with fragment '<fragmentName>' referred in Destination CR '<crName>' is not found in tenant '<tenantName>'. Inspect the Destination CR or contact the Destination Administrator.", response header 'x-error-origin' with value "Destination Service", and response header 'x-internal-error-code' with value "404".
Solution: The referenced destination or fragment is not found in the destination service for the given tenant.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 502, response header 'x-error-message' with value "Destination '<destinationName>' not found in tenant '<tenantName>'. Contact the Destination Administrator.", response header 'x-error-origin' with value "Destination Service", and response header 'x-internal-error-code' with value "404".
Solution: The referenced destination is not found in the destination service for the given tenant.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 502, response header 'x-error-message' with value "Destination '<destinationName>' with fragment '<fragmentName>' not found in tenant '<tenantName>'. Contact the Destination Administrator.", response header 'x-error-origin' with value "Destination Service", and response header 'x-internal-error-code' with value "404".
Solution: The referenced destination or fragment is not found in the destination service for the given tenant.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 400, response header 'x-error-message' with value "The '<headerKey>' header should be passed only once. Check your request parameters.", and response header 'x-error-origin' with value "Transparent Proxy".
Solution: The provided header should be passed only once.
Issue: Applying the helm configuration fails with "When .Values.config.security.communication.internal.encryptionEnabled is true, .Values.config.security.communication.internal.certManager.issuerRef.name cannot be empty".
Solution: Set a value for .Values.config.security.communication.internal.certManager.issuerRef.name when .Values.config.security.communication.internal.encryptionEnabled is true.

Issue: Applying the helm configuration fails with "Issuer of version cert.gardener.cloud/v1alpha1 with name 'test' in namespace 'test' not found".
Solution: Make sure that the Issuer of version cert.gardener.cloud/v1alpha1 with name 'test' in namespace 'test' exists.

Issue: Applying the helm configuration fails with "Issuer of version cert-manager.io/v1 with name 'test' in namespace 'test' not found".
Solution: Make sure that the Issuer of version cert-manager.io/v1 with name 'test' in namespace 'test' exists.

Issue: Applying the helm configuration fails with "ClusterIssuer of version cert-manager.io/v1 with name 'test' not found".
Solution: Make sure that the ClusterIssuer of version cert-manager.io/v1 with name 'test' exists.

Issue: The destinations.destination.connectivity.api.sap custom resource referencing your destination with the name "testdest" has the name "testservice". The last condition of type "Available" of the destinations.destination.connectivity.api.sap custom resource has the status "False" with the reason "ConfigurationError-K8sResourceWithNameAlreadyPresent".
Solution: There is a Kubernetes resource with the name of the Destination custom resource present in the same namespace. Rename one of the resources to remove the collision.

Issue: Applying the helm configuration fails with "Istio not installed on the cluster. Revise your Istio integration configuration."
Solution: The configuration for integration in the Istio service mesh is provided but Istio is not installed on the cluster. Either install Istio on the cluster or remove the configuration for integration in the Istio service mesh.

Issue: Applying the helm configuration fails with "An error occurred because certain necessary configuration settings are missing. Ensure that either 'config.security.communication.internal.encryptionEnabled' or 'config.integration.serviceMesh.istio.istio-integration' is provided.".
Solution: It is mandatory to integrate either with Istio or cert-manager. This is done by configuring at least one of the two properties: config.integration.serviceMesh.istio.istio-integration and config.security.communication.internal.encryptionEnabled. config.integration.serviceMesh.istio.istio-integration makes config.security.communication.internal.encryptionEnabled optional. If "encryptionEnabled" is set to true and config.integration.serviceMesh.istio.istio-integration is missing, then config.security.communication.internalCommunication.certManager.issuerRef.name and config.security.communication.internalCommunication.certManager.issuerRef.kind have to be provided.

Issue: Applying the helm configuration fails with 'Connectivity Proxy tenant mode "shared" is not compatible with Transparent Proxy tenant mode "dedicated". Both must be in the same mode.'.
Solution: Make sure that the transparent proxy and the referenced connectivity proxy are in the same tenant mode.

Issue: Applying the helm configuration fails with 'Connectivity Proxy tenant mode "dedicated" is not compatible with Transparent Proxy tenant mode "shared". Both must be in the same mode.'.
Solution: Make sure that the transparent proxy and the referenced connectivity proxy are in the same tenant mode.

Issue: Applying the helm configuration fails with "Connectivity Proxy multi region ConfigMap with name <connectivityProxyMultiRegionConfigMapName> in namespace <connectivityProxyMultiRegionConfigMapNamespace> does not exist!".
Solution: The connectivity proxy multi region config map does not exist.
Issue: The call to the Kubernetes service referencing your destination returns response with status code 502, response header 'x-error-message' with value "Region with ID <regionId> is not found in the Connectivity Proxy configuration.", and response header 'x-error-origin' with value "Transparent Proxy".
Solution: The region ID <regionId>, passed as the SAP-Connectivity-Region-Configuration-Id header or statically configured for the given destination service instance in the transparent proxy configuration, is not found in the connectivity proxy multi region configuration. Make sure the used region ID exists in the connectivity proxy multi region configuration.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 400, response header 'x-error-message' with value "Connectivity Proxy region ID is not specified. Either pass it as a 'SAP-Connectivity-Region-Configuration-Id' or set it in the Transparent Proxy configuration.", and response header 'x-error-origin' with value "Transparent Proxy".
Solution: Either pass the SAP-Connectivity-Region-Configuration-Id header or statically configure the region for the given destination service instance in the transparent proxy configuration.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 502, response header 'x-error-message' with value "Invalid Connectivity proxy instance configuration. Contact the local Kubernetes cluster administrator to inspect the Connectivity Proxy deployment configuration.", and response header 'x-error-origin' with value "Transparent Proxy".
Solution: The connectivity proxy multi region config map does not exist.

Issue: The call to the Kubernetes service referencing your destination returns response with status code 502 and response header 'x-error-message' with value "The referenced Connectivity Proxy instance configuration for region named <regionName> is not found or is invalid.".
Solution: The secret referenced by the connectivity proxy region configuration named <regionName> does not exist or is invalid.
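To inspect the condition of type "Available" referred to in the rows above, you can describe the custom resource. The commands below are a simple sketch; substitute your own resource name and namespace, and note that the JSONPath variant assumes the conditions are exposed under .status.conditions, as the condition wording above implies.

kubectl describe destinations.destination.connectivity.api.sap <destination-cr-name> -n <namespace>
# Alternatively, print only the conditions (assumes they are stored under .status.conditions)
kubectl get destinations.destination.connectivity.api.sap <destination-cr-name> -n <namespace> -o jsonpath='{.status.conditions}'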
To resolve issues with the transparent proxy for Kubernetes, follow the recommendations below.
Pre-Intervention Steps
Before doing any restarts or modifications, it is important that you collect all relevant information.
1. Check the status of the transparent proxy pods and collect the outcome (for example, via kubectl, as sketched below).
2. Get the status of the transparent proxy manager executions.
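The following commands are one possible way to collect this information. They are a sketch only: the installation namespace is a placeholder, the component label selectors are the ones used by the restart commands later in this section, and the manager is assumed to run as a Job or pod whose name contains "manager".

kubectl get pods -n <installation-namespace>
kubectl get pods -n <installation-namespace> -l transparent-proxy.connectivity.api.sap/component=http
kubectl get pods -n <installation-namespace> -l transparent-proxy.connectivity.api.sap/component=tcp
# Assumption: the manager executions appear as Jobs/pods whose names contain "manager"
kubectl get jobs,pods -n <installation-namespace> | grep -i manager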
Recovery Attempt
The exact action to take here depends on the check result in step 2 of section Pre-Intervention Steps [page
861]. Choose one of the two options below, according to the outcome of your checks.
Check Succeeds
This indicates that the transparent proxy is currently considered operational. However, it might still have
trouble when used for real scenarios (depending on how you detect the outage).
k delete po -l transparent-proxy.connectivity.api.sap/component=http -n
<installation-namespace>
k delete po -l transparent-proxy.connectivity.api.sap/component=tcp -n
<installation-namespace>
5. Collect the logs from the transparent proxy components after the restart completes (see the sketch below).
6. Check if the outage is still ongoing:
1. If no, the issue is resolved; proceed with Request Root Cause Analysis (RCA) [page 863].
2. If yes, the issue is not resolved; proceed with Request Help from SAP [page 863].
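A minimal way to collect the logs, assuming the same installation namespace and component labels as above:

kubectl logs -n <installation-namespace> -l transparent-proxy.connectivity.api.sap/component=http --all-containers --prefix > transparent-proxy-http.log
kubectl logs -n <installation-namespace> -l transparent-proxy.connectivity.api.sap/component=tcp --all-containers --prefix > transparent-proxy-tcp.log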
Check Fails
This indicates that the transparent proxy itself is indeed having issues. Perform the following steps:
k delete po -l transparent-proxy.connectivity.api.sap/component=http -n
<installation-namespace>
k delete po -l transparent-proxy.connectivity.api.sap/component=tcp -n
<installation-namespace>
4. Collect logs from the transparent proxy after the restart completes.
5. Check if the outage is still ongoing:
1. If no, the issue is resolved; proceed with Request Root Cause Analysis (RCA) [page 863].
2. If yes, the issue is not resolved; proceed with Request Help from SAP [page 863].
Request Help from SAP
If you cannot resolve the issue and require help from SAP, follow this procedure:
1. Open an incident on the support component BC-CP-CON-K8S-TP for the transparent proxy (with the
appropriate priority and impact stated).
2. Provide all the collected information in the incident, including all the logs, timestamps of the events that
occurred, a summary of the taken actions, the version of the transparent proxy you are using, and so on.
3. Engage your SAP contacts to help with this.
4. Continue working in parallel to identify as much information as possible or to find a temporary measure to
mitigate the outage.
Request Root Cause Analysis (RCA)
Once the issue is resolved, the next step is figuring out what exactly caused it and whether something can be done to prevent it from happening in the future.
Follow this procedure to request an RCA for the issue you experienced:
1. Open an incident for the transparent proxy on the support component BC-CP-CON-K8S-TP.
2. Provide all the collected information in the incident, including all the logs, timestamps of the events that
occurred, a summary of the taken actions, the version of the transparent proxy you are using, and so on.
1.4.9 Resilience
Even though the transparent proxy is not a cloud service, it offers options to improve the resilience and robustness of the overall system that uses its functionality.
The HTTP and TCP proxies of the transparent proxy are loosely coupled, so a failure in one of them does not affect the other. They are independent of each other: errors in, for example, the transparent HTTP proxy should not affect the transparent TCP proxy instances.
Overview
The transparent proxy can work in an active-active high-availability setup. In this setup, there are at least two
transparent proxy instances, both running actively and simultaneously.
The main purpose of an active-active deployment is to provide high availability and allow zero-downtime
maintenance as well as horizontal-scaling capabilities.
Note
The load balancing strategy for distributing traffic to the connectivity proxy pods depends on the kube-
proxy mode. The strategies used by the different kube-proxy modes are described in the Kubernetes
documentation. This configuration is done on cluster level.
Scaling
1. For horizontal scaling of the TCP and HTTP proxy, update the values.yaml, more specifically deployment.replicas.http for HTTP and deployment.replicas.tcp for TCP, as described in the Configuration [page 830] section (see the sketch after the note below).
2. For vertical scaling, update the respective deployment.resources.limits and
deployment.resources.requests properties for CPU and Memory described in the Configuration
[page 830] section.
3. To enable autoscaling of the transparent proxy, update the respective deployment.autoscaling.*
properties.
Note
Horizontal and vertical autoscaling cannot be activated simultaneously. You have to choose one or the
other.
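The following helm upgrade call is a minimal sketch of how such a scaling change could be applied. The release name, chart reference, and namespace are placeholders; the property keys are the ones listed above.

helm upgrade <release-name> <chart-reference> \
  --namespace <installation-namespace> \
  --reuse-values \
  --set deployment.replicas.http=2 \
  --set deployment.replicas.tcp=2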
Multi-AZ
If the Kubernetes cluster is configured with two or more nodes running in different availability zones (AZs), the transparent proxy is automatically deployed and runs across the availability zones of those nodes.
Once you have installed the transparent proxy in your cluster, you can perform the following steps to verify it is
running successfully.
Note
You may have to wait for a few seconds before all the components are started and can be consumed.
To check the status of the transparent proxy's components, execute the status check commands for your setup, replacing the namespace placeholder; use the corresponding commands when your transparent proxy is started within the Istio service mesh.
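As a sketch of such a check: the exact service name, port, and path of the status endpoint depend on your installation and are placeholders here. You can query the manager's status endpoint from a temporary pod inside the cluster (using a public curl image) and, independently, inspect the pods themselves.

# Placeholder status call; substitute the endpoint documented for your version
kubectl run status-check --rm -i --restart=Never -n <installation-namespace> \
  --image=curlimages/curl -- curl -s http://sap-transp-proxy-manager.<installation-namespace>/<status-endpoint>
# Independent sanity check of the pods themselves
kubectl get pods -n <installation-namespace>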
Example:
{
"Status": "ok", // possible values: "ok|warning|critical"
"Components": {
"sap-transp-proxy-http": {
"Status": "ok", // possible values: "ok|warning|critical"
"Message": "" // optional if status is not ok.
},
"sap-transp-proxy-tcp": {
"Status": "ok", // possible values: "ok|warning|critical"
"Message": "", // optional if status is not ok.
"AffectedDeployments": []// optional if status is not ok. Lists all
sap-transp-proxy-tcp deployments that have unready pods.
},
"sap-transp-proxy-manager": { "Status": "ok", // possible values: "ok|
warning|critical"
"Message": "" // optional if status is not ok.
}
}
}
Example:
{
"Status": "ok",
"CustomResources": [{
"Name": "httpexample",
"DestinationRef": {
"name" : "httpdestination"
},
"CreationTimestamp": "2022-08-26T11:05:30Z",
"Available": "True",
"Message": "Technical connectivity is configured. Kubernetes service
with name httpexample is created.",
"Reason": "ConfigurationSuccessful"
}, {
"Name": "tcpexample",
"DestinationRef": {
"name" : "tcpdestination"
},
"CreationTimestamp": "2022-08-26T11:05:34Z",
"Available": "True",
"Message": "Technical connectivity is configured. Kubernetes service
with name tcpexample is created.",
"Reason": "ConfigurationSuccessful"
If you encounter problems with any of the above steps, see Troubleshooting [page 848] and Recommended
Actions [page 861] for further investigation.
Now you can proceed with consuming the transparent proxy from your Kubernetes applications.
This documentation serves as a reference for the transparent proxy enablement in the Kyma environment and the respective transparent proxy operator, integrated with Kyma's Lifecycle Manager.
The transparent proxy operator continuously observes the state of the system and the desired state defined by
the transparent proxy custom resource. It then makes necessary adjustments to the system (like creating,
updating, or deleting resources) to achieve the desired state, and regularly monitors the health of the
transparent proxy, ensuring it runs optimally according to the configurations defined in the custom resource.
Module Enablement
With the enablement of the transparent proxy module, a default transparent proxy custom resource is created
in the sap-transp-proxy-system namespace.
Caution
The sap-transp-proxy-system namespace is intended for the monitoring of the transparent proxy resources; do not deploy workloads there.
apiVersion: operator.kyma-project.io/v1alpha1
kind: TransparentProxy
metadata:
name: transparent-proxy
namespace: sap-transp-proxy-system
spec:
config:
security:
communication:
internal:
encryptionEnabled: true
integration:
serviceMesh:
istio:
You can update it to meet your requirements. To do this, either use kubectl edit or create a YAML file containing
the transparent proxy custom resource and edit the spec section with values according to the Configuration
Guide [page 829].
Sample Code
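For example, a minimal way to open the default custom resource for editing via kubectl edit is sketched below. The plural resource name transparentproxies is an assumption derived from the kind TransparentProxy; adjust it if your cluster registers a different resource name.

kubectl edit transparentproxies.operator.kyma-project.io transparent-proxy -n sap-transp-proxy-system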
If you want to create a second instance of the transparent proxy, you can create a transparent proxy custom
resource in another namespace.
Dependencies
The transparent proxy supports integration with the Istio service mesh (for more information, see
Configuration Guide [page 829]).
If the transparent proxy is configured to integrate into the Istio mesh and Istio is present on the cluster, the integration with the certificate manager becomes optional; otherwise it is mandatory.
If encryptionEnabled is set to "false" but there is integration in the Istio service mesh, the transparent proxy custom resource will assume the state "Ready" with the message "installation is ready. Although encryptionEnabled is set to false, the traffic will be encrypted by Istio.". You can use both integration with the Istio service mesh and with the certificate manager.
If you don't integrate the transparent proxy with a service mesh, you must encrypt the traffic between
the micro components. To do that, set encryptionEnabled to true. This property configures encryption
between the transparent proxy micro components internally. Additionally, you should provide the necessary
certificate manager configuration to make sure encryption works as expected.
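A minimal sketch of such a configuration is shown below. It assumes a cert-manager.io ClusterIssuer named my-cluster-issuer (a placeholder) and nests the certManager.issuerRef keys under spec.config.security.communication.internal, as implied by the dotted property names used in this guide; verify the exact structure against the Configuration Guide [page 829].

kubectl apply -f - <<EOF
apiVersion: operator.kyma-project.io/v1alpha1
kind: TransparentProxy
metadata:
  name: transparent-proxy
  namespace: sap-transp-proxy-system
spec:
  config:
    security:
      communication:
        internal:
          encryptionEnabled: true
          certManager:
            issuerRef:
              # placeholder issuer; kind can also be Issuer for a namespaced issuer
              name: my-cluster-issuer
              kind: ClusterIssuer
EOF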
The transparent proxy will load all resources of api version services.cloud.sap.com/v1 and kind
ServiceInstance having spec.serviceOfferingName: destination, created in namespace sap-
transp-proxy-system, as destination service instances. In addition, you can configure more destination
service instances directly in the transparent proxy custom resource.
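To see which instances would be picked up this way, you can list the corresponding resources. This is a simple check; the resource name serviceinstances.services.cloud.sap.com follows from the API version and kind named above.

kubectl get serviceinstances.services.cloud.sap.com -n sap-transp-proxy-system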
If no resources of api version services.cloud.sap.com/v1 and kind ServiceInstance exist in namespace sap-
transp-proxy-system, and no destination service instances are directly specified in the transparent proxy
Troubleshooting
For custom resources having a Warning state, refer to this table to find a solution for specific issues.
Check the Configuration Guide [page 829] after identifying your misconfiguration in the transparent proxy
custom resource conditions.
Message: An error occurred because certain necessary configuration settings are missing. Ensure that either 'config.security.communication.internal.encryptionEnabled' or 'config.integration.serviceMesh.istio.istio-integration' is provided.
Solution: Integration either with Istio or cert-manager is mandatory.
• config.integration.serviceMesh.istio.istio-integration makes config.security.communication.internal.encryptionEnabled optional.
• If "encryptionEnabled" is set to true and config.integration.serviceMesh.istio.istio-integration is missing, you must provide config.security.communication.internalCommunication.certManager.issuerRef.name and config.security.communication.internalCommunication.certManager.issuerRef.kind.

Message: Custom resource with name "<name>" already exists in this namespace.
Solution: There can be only one transparent proxy custom resource in a single namespace. Delete the unnecessary custom resource or move it to another namespace.

Message: When config.security.communication.internal.encryptionEnabled is true, config.security.communication.internal.certManager.issuerRef.name cannot be empty.
Solution: If config.security.communication.internalCommunication.encryptionEnabled is set to true, then config.security.communication.internal.certManager.issuerRef.name cannot be empty.

Message: config.security.communication.internal.certManager.issuerRef.namespace and config.security.communication.internal.certManager.issuerRef.kind are both passed. Use config.security.communication.internal.certManager.issuerRef.kind when your issuer is of type cert-manager.io and config.security.communication.internal.certManager.issuerRef.namespace when your issuer is of type cert.gardener.cloud.
Solution: If config.security.communication.internalCommunication.encryptionEnabled is set to true, you must pass either config.security.communication.internal.certManager.issuerRef.kind or config.security.communication.internal.certManager.issuerRef.namespace, not both.
Message: Neither config.security.communication.internal.certManager.issuerRef.namespace nor config.security.communication.internal.certManager.issuerRef.kind is passed. Use config.security.communication.internal.certManager.issuerRef.kind when your issuer is of type cert-manager.io and config.security.communication.internal.certManager.issuerRef.namespace when your issuer is of type cert.gardener.cloud.
Solution: If config.security.communication.internalCommunication.encryptionEnabled is set to true, you must pass either config.security.communication.internal.certManager.issuerRef.kind or config.security.communication.internal.certManager.issuerRef.namespace; both cannot be empty.

Message: config.security.communication.internalCommunication.certManager.issuerRef properties: [kind and name] should be provided.
Solution: If config.security.communication.internalCommunication.encryptionEnabled is set to true, a valid reference to an existing Issuer or ClusterIssuer must be provided to secure the internal transparent proxy communication by mTLS.

Message: Resource with api version cert-manager.io/v1 and kind "<Issuer|ClusterIssuer>" with name "<name>" not found.
Solution: The referenced Issuer or ClusterIssuer is not found in the cluster. If it is of type Issuer, check the namespace and name. For ClusterIssuer, check the name.

Message: Resource with api version cert.gardener.cloud/v1alpha1 and kind "Issuer" in namespace "<namespace>" with name "<name>" not found.
Solution: The referenced Issuer with api version cert.gardener.cloud/v1alpha1 is not found. Check the specified namespace to see if the resource exists.

Message: Provide at least one Destination service instance in config.integration.destinationServiceIntegration.instances, or create and bind an instance in the sap-transp-proxy-system namespace using the SAP BTP Service Operator.
Solution: The transparent proxy should have at least one Destination service instance configured in config.integration.destinationServiceIntegration.instances, or resources of api version "services.cloud.sap.com/v1" and kind "ServiceInstance" should exist in the "sap-transp-proxy-system" namespace.

Message: secretName should be provided for Destination service instance: "<name>".
Solution: The given destination service instance should have a valid reference to a secret holding the service credentials for consuming the Destination service.

Message: secret with name "<name>" not found in namespace: "<namespace>".
Solution: The provided secret cannot be found in the referenced namespace. Examples:
• The referenced secret holding the service credentials for the Destination service is not found for the referenced destination service instance. Check the secretName and secretNamespace properties provided in the serviceCredentials section for the given instance in config.integration.destinationService.instances.
• The referenced secret holding the service credentials for the connectivity proxy is not found. Check the secretName and secretNamespace properties provided in the serviceCredentials section in config.integration.connectivityProxy.serviceCredentials.

Message: configMap "connectivity-proxy" not found in namespace "<namespace>".
Solution: The given config map is not present in the same namespace as the service defined in config.integration.connectivityProxy.serviceName.

Message: Key "connectivity-proxy-config.yml" not found in configMap's Data from configMap: "connectivity-proxy" in namespace "<namespace>".
Solution: The referenced config map does not contain the expected "connectivity-proxy-config.yml" Data key.

Message: Connectivity proxy tenant mode "<tenant_mode>" is not compatible with transparent proxy tenant mode "<tenant_mode>".
Solution: The tenant mode defined in the connectivity proxy's configmap is different from the one defined in config.tenantMode of the transparent proxy custom resource. The tenant modes of the connectivity proxy and transparent proxy must be equal (shared & shared or dedicated & dedicated).

Message: Istio not installed on the cluster. Revise your Istio integration configuration.
Solution: The configuration for integration in the Istio service mesh is provided, but Istio is not installed on the cluster. Either install Istio on the cluster or remove the configuration for integration in the Istio service mesh.
Create a single custom resource to look up one or more destinations dynamically with the transparent proxy for Kubernetes.
The transparent proxy lets you perform a dynamic lookup of one or multiple destinations from a destination
service instance and its tenants. To consume a destination in that way, you only have to create a single custom
resource.
apiVersion: destination.connectivity.api.sap/v1
kind: Destination
metadata:
name: dynamic-destination
spec:
destinationRef:
name: "*"
destinationServiceInstanceName: dest-service-instance # not mandatory
In the example above, the destination custom resource has spec.destinationRef.name = "*", which indicates that this destination accepts dynamic lookup. Only a single Kubernetes service with the name dynamic-destination is created, and it works as an entry point for all destinations from the destination service instance named dest-service-instance.
The transparent proxy identifies the destination for which a request should be configured and dispatched by
obtaining the destination name (destination identifier) from a custom header called X-Destination-Name.
Note
This header is specifically intended to identify a destination within a Destination service instance and its
tenants.
The destination can also be combined by optionally providing a custom header called X-Fragment-Name.
Example Destination:
{
"Name": "example-dest-client-cert",
"Type": "HTTP",
"ProxyType": "Internet",
"URL": "https://myapp.com",
"Authentication": "ClientCertificateAuthentication",
"KeyStorePassword": "Abcd1234",
"KeyStoreLocation": "cert.jks"
}
Example Fragment:
{
"FragmentName": "example-fragment",
"URL": "https://myotherapp.com"
}
If you want to call the example-dest-client-cert destination through the Kubernetes service named
dynamic-destination, you can execute the command below:
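For example, a sketch of such a call from a pod in the cluster; the namespace, port, and resource path depend on your setup and are placeholders here:

curl -H "X-Destination-Name: example-dest-client-cert" http://dynamic-destination.<namespace>/<resource-path>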
If you want to call example-dest-client-cert destination, enriching it with the properties from fragment
example-fragment through the Kubernetes service named dynamic-destination, you can execute the
command below:
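Similarly, as a sketch with the same placeholders, adding the fragment header:

curl -H "X-Destination-Name: example-dest-client-cert" -H "X-Fragment-Name: example-fragment" http://dynamic-destination.<namespace>/<resource-path>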
The text discusses the use of a reverse proxy as an alternative approach to connect on-premise services to SAP
BTP. While it allows for reuse of existing network infrastructure, it exposes services to potential attacks and
requires significant involvement from your IT department. The Cloud Connector is recommended as a more
secure and efficient solution, providing TLS tunneling and fine-grained access control.
An alternative approach compared to the TLS tunnel solution that is provided by the Cloud Connector is to
expose on-premise services and applications via a reverse proxy to the Internet. This method typically uses a
reverse proxy setup in a customer's "demilitarized zone" (DMZ) subnetwork. The reverse proxy setup does the
following:
The figure below shows the minimal overall network topology of this approach.
On-premise services that are accessible via a reverse proxy are callable from SAP BTP like other HTTP services
available on the Internet. When you use destinations to call those services, make sure the configuration of the
ProxyType parameter is set to Internet.
Depending on your scenario, you may benefit from the reverse proxy:
• Network infrastructure (such as a reverse proxy and ADC services): since it already exists in your
network landscape, you can reuse it to connect to SAP BTP. There's no need to set up and operate new
components on your (customer) side.
• A reverse proxy is independent of the cloud solution you are using.
• It acts as single entry point to your corporate network.
Disadvantages
• The reverse proxy approach leaves exposed services generally accessible via the Internet. This makes
them vulnerable to attacks from anywhere in the world. In particular, Denial-of-Service attacks are
possible and difficult to protect against. To prevent attacks of this type and others, you must implement
the highest security in the DMZ and reverse proxy. For the productive deployment of a hybrid cloud/on-
premise application, this approach usually requires intense involvement of the customer's IT department
and a longer period of implementation.
• If the reverse proxy allows filtering, or restricts accepted source IP addresses, you can set only one IP
address to be used for all SAP BTP outbound communications.
A reverse proxy does not exclusively restrict the access to cloud applications belonging to a customer,
although it does filter any callers that are not running on the cloud. Basically, any application running on
the cloud would pass this filter.
• The SAP-proprietary RFC protocol is supported only if WebSocket RFC can be used for communication
with the ABAP system. WebSocket RFC is available as of S/4HANA release 1909. A cloud application
cannot call older on-premise ABAP systems directly without using application proxies on top of ABAP in
between.
• No easy support of principal propagation authentication, which lets you forward the cloud user identity to
on-premise systems.
• You cannot implement projects close to your line of business (LoB).
Note
Using the Cloud Connector mitigates all of these issues. As it establishes the TLS tunnel to SAP BTP using
a reverse invoke approach, there is no need to configure the DMZ or external firewall of a customer network
for inbound traffic. Attacks from the Internet are not possible. With its simple setup and fine-grained access
control of exposed systems and resources, the Cloud Connector allows a high level of security and fast implementation.
Support information for SAP BTP Connectivity and the Cloud Connector.
Troubleshooting
Locate the problem or error you have encountered and follow the recommended steps:
If you cannot find a solution to your issue, collect and provide the following specific, issue-relevant information
to SAP Support:
You can submit this information by creating a customer ticket in the SAP CSS system using the following
components:
Component Purpose
Connectivity Service
Destinations
BC-CP-DEST-CF For general issues with the Destination service in the SAP
BTP Cloud Foundry environment, like:
• REST API
• Instance creation, etc.
BC-CP-DEST-CF-CLIBS For client library issues with the Destination service in the
SAP BTP Cloud Foundry environment.
• Management tools
• Client libraries, etc.
Cloud Connector
If you experience a more serious issue that cannot be resolved using only traces and logs, SAP Support may
request access to the Cloud Connector. Follow the instructions in these SAP notes:
Find information about SAP BTP Connectivity releases, versioning and upgrades.
Release Cycles
Updates of the Connectivity service are published as required, within the regular, bi-weekly SAP BTP release
cycle.
New releases of the Cloud Connector are published when new features or important bug fixes are delivered,
available on the Cloud Tools page.
Cloud Connector versions follow the <major>.<minor>.<micro> versioning schema. The Cloud Connector stays fully compatible within a major version. Within a minor version, the Cloud Connector stays with the same feature set. Higher minor versions usually support additional features compared to lower minor versions. Micro versions generally consist of patches to a <major>.<minor> version to deliver bug fixes.
For each supported major version of the Cloud Connector, only one <major>.<minor>.<micro> version will
be provided and supported on the Cloud Tools page. This means that users must upgrade their existing Cloud
Connectors to get a patch for a bug or to make use of new features.
New versions of the Cloud Connector are announced in the Release Notes of SAP BTP. We recommend
that Cloud Connector administrators regularly check the release notes for Cloud Connector updates. New
versions of the Cloud Connector can be applied by using the Cloud Connector upgrade capabilities. For more
information, see Upgrade [page 721].
We recommend that you first apply upgrades in a test landscape to validate that the running applications
are working.
No manual user actions are required in the Cloud Connector when SAP BTP is updated.
Hyperlinks
Some links are classified by an icon and/or a mouseover text. These links provide additional information.
About the icons:
• Links with the icon : You are entering a Web site that is not hosted by SAP. By using such links, you agree (unless expressly stated otherwise in your
agreements with SAP) to this:
• The content of the linked-to site is not SAP documentation. You may not infer any product claims against SAP based on this information.
• SAP does not agree or disagree with the content on the linked-to site, nor does SAP warrant the availability and correctness. SAP shall not be liable for any
damages caused by the use of such content unless damages have been caused by SAP's gross negligence or willful misconduct.
• Links with the icon : You are leaving the documentation for that particular SAP product or service and are entering an SAP-hosted Web site. By using
such links, you agree that (unless expressly stated otherwise in your agreements with SAP) you may not infer any product claims against SAP based on this
information.
Example Code
Any software coding and/or code snippets are examples. They are not for productive use. The example code is only intended to better explain and visualize the syntax
and phrasing rules. SAP does not warrant the correctness and completeness of the example code. SAP shall not be liable for errors or damages caused by the use of
example code unless damages have been caused by SAP's gross negligence or willful misconduct.
Bias-Free Language
SAP supports a culture of diversity and inclusion. Whenever possible, we use unbiased language in our documentation to refer to people of all cultures, ethnicities,
genders, and abilities.
SAP and other SAP products and services mentioned herein as well as
their respective logos are trademarks or registered trademarks of SAP
SE (or an SAP affiliate company) in Germany and other countries. All
other product and service names mentioned are the trademarks of their
respective companies.