HC450
PARTICIPANT HANDBOOK
INSTRUCTOR-LED TRAINING
Course Version: 01
Course Duration: 3 Day(s)
Material Number: 50155527
SAP Copyrights, Trademarks and
Disclaimers
No part of this publication may be reproduced or transmitted in any form or for any
purpose without the express permission of SAP SE or an SAP affiliate company.
SAP and other SAP products and services mentioned herein as well as their
respective logos are trademarks or registered trademarks of SAP SE (or an SAP
affiliate company) in Germany and other countries. Please see https://www.sap.com/corporate/en/legal/copyright.html for additional trademark information and notices.
Some software products marketed by SAP SE and its distributors contain proprietary
software components of other software vendors.
National product specifications may vary.
These materials may have been machine translated and may contain grammatical
errors or inaccuracies.
These materials are provided by SAP SE or an SAP affiliate company for
informational purposes only, without representation or warranty of any kind, and SAP
SE or its affiliated companies shall not be liable for errors or omissions with respect
to the materials. The only warranties for SAP SE or SAP affiliate company products
and services are those that are set forth in the express warranty statements
accompanying such products and services, if any. Nothing herein should be
construed as constituting an additional warranty.
In particular, SAP SE or its affiliated companies have no obligation to pursue any
course of business outlined in this document or any related presentation, or to
develop or release any functionality mentioned therein. This document, or any related
presentation, and SAP SE’s or its affiliated companies’ strategy and possible future
developments, products, and/or platform directions and functionality are all subject
to change and may be changed by SAP SE or its affiliated companies at any time for
any reason without notice. The information in this document is not a commitment,
promise, or legal obligation to deliver any material, code, or functionality. All forward-
looking statements are subject to various risks and uncertainties that could cause
actual results to differ materially from expectations. Readers are cautioned not to
place undue reliance on these forward-looking statements, which speak only as of
their dates, and they should not be relied upon in making purchasing decisions.
Typographic Conventions
This handbook uses icons to mark the following element types: Demonstration, Procedure, Warning or Caution, Hint, and Facilitated Discussion.
TARGET AUDIENCE
This course is intended for the following audiences:
● Developer
Lesson 1
Introducing the Use Case for Application Development for SAP HANA Cloud
Lesson 2
Creating Your Trial SAP.com Account
Exercise 1: Create SAP.com Account
Lesson 3
Logging On to Your SAP HANA Cloud Trial Account
Lesson 4
Deploying an SAP HANA Cloud Database Instance
Exercise 2: Create an SAP HANA Cloud Database Instance
Lesson 5
Introducing the Application Architecture in SAP HANA Cloud
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Understand the process overview of application development on SAP HANA Cloud
Process Overview of Application Development for SAP HANA Cloud
The application development and integration area of SAP Business Technology Platform (SAP BTP) plays a pivotal role in enabling customers and partners to build, integrate, and extend business processes efficiently.
As you will see from the overall architecture, the available tools and systems provide a high level of flexibility in cloud and multi-cloud deployment scenarios.
Let's get started with application development on SAP HANA Cloud by introducing SAP BTP, its components, and its link to the application architecture for SAP HANA Cloud.
In this course, we start with one of the many possible scenarios to illustrate the overall process of application development with SAP HANA Cloud as the database, SAP BTP as the platform, and the SAP Cloud Application Programming Model for development.
Scenario
Your organization has embarked on a cloud journey and has chosen SAP Business Technology Platform and its services to realize the requirements. The project is split into phases; as part of phase 1, there is a need to build applications using SAP HANA Cloud as the data persistence layer and SAP Fiori as the tool to build the UI.
SAP Business Technology Platform is a unified platform that offers the flexibility to connect to
a broad set of technologies, data, and processes. Business-centric services spanning
database and data management, analytics, application development and integration, and
intelligent technologies help to quickly turn data into customer business value.
The applications will access the SAP HANA Cloud database as the data storage layer. SAP HANA Cloud is a service offering within the SAP Business Technology Platform. The following slide gives a quick overview of the flow of the scenario and the components and services needed.
In this course, we will use SAP Business Application Studio and the SAP Cloud Application Programming Model as our development tool and framework.
SAP provides the SAP Cloud Application Programming Model (CAP) as a framework that takes over technical and foundational activities, so that you as a developer can focus on the business domain and the relevant activities.
SAP Business Application Studio is our IDE of choice for development on SAP BTP.
In this first use case, we want to familiarize you with SAP HANA Cloud, the new development tool (SAP Business Application Studio), the SAP Cloud Application Programming Model (CAP), and application security.
The following image shows the process that is followed for application development for SAP HANA Cloud.
Figure 1: Scenario 1 - Familiarize with the Tools and Systems for SAP HANA Cloud Application development
Watch this video to learn about the tools and systems for SAP HANA Cloud Application
development.
Video: Scenario 1 - Familiarize with the Tools and Systems for SAP HANA Cloud
Application development
For more information on Scenario 1 - Familiarize with the Tools and Systems for SAP HANA Cloud Application development, please view the video in the lesson Introducing the Use Case for Application Development for SAP HANA Cloud in your online course.
In this first scenario, we will create a Product Supplier Fiori List Report application using the Fiori generator available in SAP Business Application Studio to perform CRUD operations. The data will be persisted in SAP HANA Cloud and exposed to the Fiori application using OData V4. The development follows the CAP framework; we also introduce the usage of custom handlers in the SAP Cloud Application Programming Model.
Note:
There are several use cases for application development for SAP HANA Cloud. In this first use case, we want to familiarize you with SAP HANA Cloud, the new development tool (SAP Business Application Studio), the Cloud Application Programming Model (CAP), and application security.
Watch this video to learn about the development layers in SAP HANA Cloud.
The development environment for SAP HANA Cloud consists of three layers, where different personas perform various activities.
● Database Development
Database development is performed in the Database layer, in the SAP HANA database by
Database developers.
Database developers build a persistence model or design an analytic model and
understand inter-relationship of the data in SAP HANA Cloud.
● Application Development
Application Development is performed in the application server layer by Application
Programmers.
Application programmers develop the code for the business-logic component, for example, in JavaScript (Node.js), Java, or Python, or a custom language and runtime.
● Client User Interface Development
Client user interface development is performed in the UI layer by client UI developers.
The user-interface (UI) client developer designs and creates client applications which bind
business logic (from the application developer) to controls, events, and views in the client
application user interface.
Figure 4: Courses You Need to Develop a Full-Stack Application in SAP HANA Cloud
Course HC450 is the starting point to application development, covering the creation of the
core application, the database persistence and the integration of the various application
layers.
Other courses will then allow developers to complete their skill set and be able to develop the "full-stack" application.
In particular:
● Course HC300 takes a deep dive into the development of the analytical views you may need to integrate within your applications.
● The SAP Cloud Application Programming Model (CAP) learning journey provides extensive information on the Cloud Application Programming Model, and the "SAPUI5 and SAP Fiori" learning journey covers the development of Fiori/UI5 user interfaces. We recommend you consider both learning journeys for an end-to-end knowledge experience.
SAP software is based on several open standards, in particular with respect to development
languages and communication protocols.
The knowledge of these is a prerequisite for this course and it is a must-have to develop
applications in SAP HANA.
In particular you must have a good knowledge of:
● HTTP (http://wikipedia.org/wiki/Hypertext_Transfer_Protocol)
● HTML (http://wikipedia.org/wiki/HTML)
● JavaScript (http://wikipedia.org/wiki/JavaScript)
● Node.js (http://wikipedia.org/wiki/Node.js)
● Express.js (http://wikipedia.org/wiki/Express.js)
● SQL (http://wikipedia.org/wiki/SQL)
LESSON SUMMARY
You should now be able to:
● Understand the process overview of application development on SAP HANA Cloud
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Create your personal SAP.com account.
Business Case
Your company wants to make the move to the cloud and perform the digital transformation that this decision requires. In your role as a database administrator, you need clear insight into the SAP HANA Cloud capabilities and features to be able to integrate these into your company's digital transformation vision.
To get better insight into the capabilities of SAP HANA Cloud, you want to create an SAP.com account so you can explore the SAP HANA Cloud functionality in your SAP Business Technology Platform trial instance.
SAP HANA Cloud is available as a free trial, but you need an SAP.com account before you can log on to SAP BTP to explore the SAP HANA Cloud features. Creating an SAP.com account only takes a few minutes. You can start the sign-up process by following these steps:
3. On the Registration page, fill out all the required fields, provide a strong password, and specify your contact preferences. Confirm that you have read the SAP.com Terms and Conditions and click the Register button to complete the registration.
As a last step in the registration procedure, you need to go to the e-mail account you
specified. There you will find a verification e-mail from SAP. Open that e-mail and press the
Click here to activate your account button to finish the verification process and
activate your SAP.com account.
As soon as your SAP.com account is successfully activated, you are forwarded to your SAP.com dashboard. You can use the SAP.com dashboard to change your account settings (for example, set your profile to private (default) or public, add a profile picture, or write a blog post). Changing your account settings is optional; the account you have created is fully operational.
Log On to SAP.com
Before you can use the SAP Business Technology Platform (BTP) to explore the SAP HANA Cloud features, you need to log on to SAP BTP using your personal SAP.com account.
Note:
If you don't create an SAP.com account in advance, the account is created automatically as soon as you create a free SAP HANA Cloud trial.
Summary
In this lesson, you have learned how to sign up for, activate, and log on to a free SAP.com account. You will use this SAP.com account to create your free SAP Business Technology Platform (BTP) instance. In the SAP BTP instance, you can then explore the SAP HANA Cloud, SAP HANA database features. You have also learned the procedure to use an SAP BTP trial instance via your enterprise SAP BTP account.
LESSON SUMMARY
You should now be able to:
● Create your personal SAP.com account.
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Log on to your SAP HANA Cloud trial account
In the SAP Learning Card Create your SAP Business Technology Platform Trial Account, you created a free SAP Business Technology Platform trial account. Now you will learn how to log on to this trial account.
Hint:
Select the Remember me checkbox to allow the SAP ID Service to set a cookie so that you are logged on automatically for the next three months.
3. On first logon, the SAP Business Technology Platform trial shows you important system messages in a small pop-up window. Read and acknowledge these system messages.
The trial account has a standard lifespan of 30 days, but can be extended up to 365 days. After 365 days, your account is automatically deleted.
Caution:
You will not be able to log on and access your data after your trial account has been deleted, so make sure you don't store important data in your trial account.
After your trial account has been deleted, you can create a fresh, empty trial account.
You have now logged on to your SAP Business Technology Platform trial account and can start exploring all of its features, including the SAP HANA Cloud trial.
Have fun exploring the SAP Business Technology Platform Trial.
Log On to the Enterprise SAP Business Technology Platform and Use the Trial
This section explains how you can log on to the SAP Business Technology Platform trial when your company already has an SAP Business Technology Platform enterprise license. A prerequisite is that you also have an account to access the enterprise SAP Business Technology Platform.
In the SAP Learning Card Create your SAP Business Technology Platform Trial Account, you created a free SAP Business Technology Platform trial account. You might have done this using a personal e-mail account. If so, use the method explained previously in the section Log On to the SAP Business Technology Platform Trial.
If you used your business e-mail account, then use the method explained below.
Now you will learn how to log on to your SAP Business Technology Platform trial account with your business e-mail account.
Hint:
Select the Remember me checkbox to allow the SAP ID Service to set a cookie so that you are logged on automatically for the next three months.
3. When two-factor authentication is activated for your account, you receive an additional time-limited passcode that you need to enter before you can log on to your enterprise SAP Business Technology Platform account.
You have now logged on to your enterprise SAP Business Technology Platform landing page. Follow these steps to access the free SAP Business Technology Platform trial account.
1. On your enterprise SAP Business Technology Platform landing page, locate the Trial Home button.
The rest of the logon procedure is identical to the method explained previously in the section Log On to the SAP Business Technology Platform Trial.
Have fun exploring the SAP Business Technology Platform Trial.
LESSON SUMMARY
You should now be able to:
● Log on to your SAP HANA Cloud trial account
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Deploy a SAP HANA Cloud Database
Business Case
As a database administrator, you want to deploy an SAP HANA Cloud database instance in the BTP trial to explore the SAP HANA Cloud capabilities. Your BI modeling colleagues will use this database to learn how they can build their analytic reports.
Use the following path to navigate from the SAP BTP Cockpit to the SAP HANA Cloud area:
● On the SAP BTP Cockpit Global Account page, select the trial subaccount in the
Subaccounts area. Next on the Subaccount: trial - Overview page, select the dev space in
the Spaces area. The breadcrumbs area (1) shows your location in the SAP BTP.
● On the Space: dev - Applications page, in the menu panel select the SAP HANA Cloud (2)
button.
On the SAP HANA Cloud overview page you get an overview of the status of all your SAP
HANA Cloud, SAP HANA Database instances. In the trial SAP BTP account you can only
create one SAP HANA Cloud, SAP HANA Database instance. In an enterprise SAP BTP
account the number of database instances depends on your entitlements in the license
agreement with SAP.
● To open the SAP HANA Cloud Central where you can create a new SAP HANA Cloud, SAP
HANA Database instance select the Manage SAP HANA Cloud (3) button.
The SAP HANA Cloud Central is the place to provision and manage your SAP HANA Cloud
instances. In a new SAP BTP account the SAP HANA Cloud Central is empty, but with the
Create (1) button you can start the Create Instance Wizard and deploy your first SAP HANA
Cloud instance.
In the first step of the Create Instance Wizard, you need to decide which type of instance you
want to create. The following options are available:
● SAP HANA Cloud, SAP HANA Database provides an in-memory, multi-model database system to store and analyze relational as well as document data, which allows for real-time data analytics and transactional processing in one combined system. The in-memory database can handle OLAP and OLTP workloads, set up hybrid extensions to on-premise SAP HANA systems, and can be associated with a data lake instance.
● SAP HANA Cloud, data lake efficiently and securely stores, manages, and analyzes large amounts of structured, semi-structured, and unstructured data. The data lake instance manages access to files in the data lake through the Files component and can be used for high-performance analysis of petabyte volumes of relational data with the SAP HANA Cloud, data lake Relational Engine.
Caution:
In this learning material the steps to create a SAP HANA Cloud, SAP HANA
Database will be explained. The follow-up steps for the other choices will be
different.
In this step, you need to provide the location, instance name, description, and administrator password.
● In the (1) Location area, you need to fill in the location, that is, the Cloud Foundry organization name and the Cloud Foundry space name. If your company has multiple organizations and spaces created for different projects, select the correct organization and space from the drop-down menus.
● In the Basics area, you need to specify the database instance name and description of the SAP HANA database instance you want to create. An instance in SAP HANA Cloud is like a tenant of an on-premise SAP HANA database and isn't restricted to 3 characters.
Note:
As of QRC 03/2022 the database administrator can also choose which version
to install. The selection is two releases in the past plus the current release. This
means with release QRC 03/2022, you could also select QRC 02/2022 or QRC
01/2022 for installation.
You also need to specify the password for the database administrator user account DBADMIN that will be created and assigned to your SAP BTP account. The password needs to be at least 8 characters long, with one uppercase letter, two lowercase letters, and one number.
There is no SYSTEM user available; you need to use the DBADMIN user for the initial setup tasks. DBADMIN is the "super" user for your SAP HANA Cloud, SAP HANA Database instance. This user shouldn't be used on a daily basis. Create new user accounts and roles for all users accessing the database.
Specify the required RAM and disk storage (1) for this SAP HANA Cloud database instance.
The number of vCPUs depends on the amount of RAM you specify.
In the SAP BTP trial account you can't change the size of the memory and disk storage
allocated to the database instance. This is limited to 30 GB RAM and 120 GB disk space.
A SAP HANA Cloud, SAP HANA Database instance in an enterprise SAP BTP account can
allocate up to 5790 GB RAM, 16000 GB disk space and 440 vCPUs.
Note:
The required disk space and virtual CPUs (vCPUs) can't be changed manually, but
are automatically assigned to the database instance. The amount of disk space
and vCPUs depends on the assigned RAM.
In this step, you can specify which (1) Availability Zone you want to use. The availability zone
can be set automatically, or you can choose your zone manually.
The supported memory size varies by availability zone. If the availability zone in which you
want the instance placed is not selectable from the list, lower the memory size of the instance.
Keeping the setting on automatic makes sure that the best zone for your database size and performance is selected.
You can also set up (2) replica databases of your SAP HANA database instance. A replica database is used to improve the availability of your SAP HANA database instance. A replica database is automatically kept up to date during normal operations, and if a takeover is required, it is performed automatically.
You can set up a maximum of two replicas for SAP HANA Cloud: one synchronous replica in the same availability zone, and one asynchronous replica in a different availability zone.
Caution:
Adding additional replicas to your instance incurs additional costs based on the
size (CPU, memory, disk) of the replica database. You can use the SAP HANA
Cloud Capacity Unit Estimator to estimate the number of capacity units per
month required for your particular use case. Link to the SAP HANA Cloud
Capacity Unit Estimator
In the SAP HANA Database Advanced Settings step, you can set up some additional features.
In the (1) Allowed Connections area, you can choose which IP addresses are allowed to connect to your SAP HANA Cloud database instance.
Animation
For more information on this topic please view the animation in the lesson
Deploying a SAP HANA Cloud Database Instance in your online course.
● Allow specific IP addresses and IP ranges. Use this option when access should be possible only from your corporate network.
The Cloud Connector: You also need to decide whether you want your SAP HANA Cloud, SAP
HANA Database to connect to your on-premise remote data sources through the cloud
connector.
The Cloud Connector serves as a link between SAP Business Technology Platform (BTP) applications and your on-premise systems. It allows you to use your existing on-premise assets without exposing the entire internal landscape to the outside world. It combines an easy setup with a clear configuration of the systems that are exposed to the SAP BTP.
The Cloud Connector runs as an on-premise agent in a secured network and acts as a reverse invoke proxy between the on-premise network and SAP BTP. It provides fine-grained control over the on-premise systems and resources that can be accessed by cloud applications. It also gives you control over which cloud applications can use the Cloud Connector.
Note:
In the SAP HANA Cloud Central, you can use the Edit option in the Actions menu
to activate the cloud connector after the SAP HANA Cloud database instance was
deployed.
The Cloud Connector provides the following features for business-critical enterprise
scenarios:
● Automatic recovery of broken connections.
● Audit logging of inbound traffic and configuration changes.
● High-availability setup.
In the enterprise SAP BTP account, you can also add an SAP HANA script server and an SAP HANA Document Store as additional features to the SAP HANA database instance. Adding these features to an SAP HANA database instance may require more vCPUs (compute) and increase your licensing cost.
In the 6th and final step of the Create Instance Wizard you can decide to add an integrated
data lake instance to your SAP HANA database instance. This integrated data lake allows you
to ingest, store, and analyze high volumes of data, economically and securely. You can access
and manage the integrated data lake from your SAP HANA database.
Integrated SAP HANA Data Lake: The Integrated SAP HANA Data Lake gives you the option to
store the older and rarely used, but still important, cold data in a data lake controlled by the
SAP HANA database instance. This database instance already stores your hot data in
memory and the warm data in the SAP HANA Native Storage Extension (NSE). With the
addition of a data lake to the SAP HANA Cloud database instance you have access to all your
data from one single framework.
The SAP HANA Cloud database instance is being created. Refresh the screen every few
minutes to see an updated status.
The following is set up during the creation process of SAP HANA Cloud, SAP HANA database:
● A SAP HANA database instance is created with data at rest encryption enabled.
● The SAP HANA database instance is connected to SAP HANA cockpit.
● The SAP HANA database instance is connected to SAP HANA database explorer.
● The database user DBADMIN is created for administration purposes.
● The SAP HANA database is connected to backup infrastructure via the Backint interface.
● A backup cycle of 15 backup generations is set up, and the initial backup is created.
As soon as all these tasks are performed the SAP HANA Cloud, SAP HANA trial database is
available for you to explore.
To create an SAP HANA Cloud database using the Cloud Foundry CLI execute the following
steps:
Note:
If you use the SAP HANA Cloud trial account with one organization and space
you can omit the -o, -s and -a options.
Example:
cf create-service hana-cloud hana HC200-Trial-Database -c '{"data": {"memory": 32, "systempassword": "Welcome1"}}'
The above command creates a database service named HC200-Trial-Database with 32 GB of memory; the DBADMIN user gets the password Welcome1.
Note:
In an SAP HANA Cloud trial account you can only create a 30GB database
instance.
Have a look at the Create an SAP HANA Database Instance Using the CLI page to have a
complete list of parameters that can be used to create a database instance.
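The complete flow, from logon to status check, might look as follows. This is a minimal sketch: the API endpoint, organization, and space are placeholders, and the password is illustrative. As noted above, in a trial account with a single organization and space, the -o, -s, and -a options can be omitted.

cf login -a https://api.cf.<region>.hana.ondemand.com -o <org> -s <space>
cf create-service hana-cloud hana HC200-Trial-Database \
  -c '{"data": {"memory": 30, "systempassword": "<StrongPassword1>"}}'
cf service HC200-Trial-Database    # check the provisioning status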
Summary
In this lesson, you have learned how to deploy a SAP HANA Cloud database instance using the
SAP BTP and Cloud Foundry CLI.
Simulation
For more information on this topic please view the simulation in the lesson
Deploying a SAP HANA Cloud Database Instance in your online course.
LESSON SUMMARY
You should now be able to:
● Deploy a SAP HANA Cloud Database
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe the basic concepts of application architecture in SAP HANA
Applications are deployed to the target platform by using the push operation of the platform
API. For this reason, in Cloud Foundry parlance, applications are "pushed" to the platform.
Pushing an application works as follows:
2. Programs called buildpacks are executed to create archives containing the self-contained, ready-to-run executable applications.
Some of the most typical activities executed by buildpacks include downloading any required
libraries and other dependencies, and configuring the application. Different buildpacks exist
for the different target runtime environments, such as HTML5, Node.js, Java, Python, and
SAP HANA database.
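As an illustration, a single application can be pushed with the Cloud Foundry CLI as follows. This is a minimal sketch; the application name, path, and buildpack are placeholder values, not taken from this course.

# -p: directory containing the application code
# -b: buildpack that assembles the ready-to-run application
# -m: memory granted to the application instance
cf push my-node-app -p ./srv -b nodejs_buildpack -m 256M
cf apps    # verify that the application runs in the current space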
Applications run within a fixed two-level hierarchical structure made of organizations and spaces.
Watch this video to learn about organizations and spaces.
The Platform provides Services. Services provide features that the applications can consume.
For example, the following functions are accessible as services:
● The SAP HANA database
● The XSUAA identity provider
● The SAPUI5 core library
LESSON SUMMARY
You should now be able to:
● Describe the basic concepts of application architecture in SAP HANA
Learning Assessment
1. Which of the following is the correct sequence when using SAP Business Technology Platform and its components for application development?
Choose the correct answer.
X A Create SAP BTP Trial Account, Create SAP HANA Cloud Trial Account, Subscribe to Business Application Studio Trial, Create CAP Project using SAP Business Application Studio
X B Create SAP HANA Cloud Trial Account, Create SAP BTP Trial Account, Subscribe
to Business Application Studio Trial, Create CAP Project using SAP Business
Application Studio
X C Create SAP BTP Trial Account, Install SAP HANA Cloud, Subscribe to Business
Application Studio Trial, Create CAP Project using SAP Business Application Studio
X D Install SAP HANA Cloud, Create SAP BTP Trial Account, Subscribe to Business
Application Studio Trial, Create CAP Project using SAP Business Application Studio
2. Which of the personas are involved in application development in SAP HANA Cloud?
Choose the correct answers.
X A Application programmer
X B Database developer
X C UI programmer
X D Software programmer
3. When you create a personal account on SAP.com, your profile visibility is set to Public.
Choose the correct answer.
X A True
X B False
5. In the SAP BTP trial account, you can freely define the memory, compute and storage size
as long as you stay below 30 GB memory, 2 vCPUs compute units and 120 GB of storage.
Choose the correct answer.
X A True
X B False
X A Language independent
X B Admin separation
X C Open Source
X D Transport Management
Lesson 1
Using the SAP Cloud Application Programming Model 45
Lesson 2
Getting Started with SAP Business Application Studio 49
Lesson 3
Creating a New Project in SAP Business Application Studio 51
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Use the SAP Cloud Application Programming model
Figure 27: SAP Cloud Application Programming Model - Flexibility and Openness
The SAP Cloud Application Programming Model (CAP) is an open and opinionated framework of languages, libraries, and tools for building enterprise-grade services and applications.
The SAP Cloud Application Programming Model is opinionated in that it guides developers through proven best practices and a great wealth of out-of-the-box solutions to recurring tasks, and open in that you can choose your technologies, select your architectural pattern, and use all of CAP or only parts of it.
Projects benefit from a primary focus on domain, significantly accelerated development, and
safeguarded investments in a world of rapidly changing cloud technologies.
The SAP Cloud Application Programming Model (CAP) is the go-to programming model for
business applications on SAP HANA Cloud and SAP Business Technology Platform.
As you can see in this graphic, CAP supports both open-source and SAP tools and technologies.
SAP Business Application Studio is our IDE of choice; other tools tailored to the programming model, such as CDS editors, code assists, outline views, and the new project explorer, can also be used.
The purpose of the SAP Cloud Application Programming Model is to let you focus on the domain problems; as can be seen in the image, developers need to focus on the areas highlighted above.
This is achieved through close collaboration of domain experts and developers to:
● Capture domain knowledge declaratively in CDS models
● Fuel generic runtimes that serve recurring tasks automatically
● Minimize boilerplate code down to the real custom logic
To reuse existing services, the CDS Service SDKs take care of constructing and handling HTTP requests, provide APIs in Java and Node.js, and support mash-ups with local data.
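To make this concrete, the following is a minimal sketch of a CAP service definition, created from the terminal. The file name, namespace, and entity are illustrative assumptions, not artifacts of this course; the underlying domain model is sketched in Unit 5.

cat > srv/cat-service.cds <<'EOF'
using { my.shop as shop } from '../db/schema';

// The generic CAP runtime serves this definition as an OData service
// without any hand-written request-handling boilerplate.
service CatalogService {
  entity Products as projection on shop.Products;
}
EOF
cds watch    # serve the model locally with an in-memory database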
LESSON SUMMARY
You should now be able to:
● Use the SAP Cloud Application Programming model
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe SAP Business Application Studio and how it is used for development in SAP
HANA Cloud
SAP Business Application Studio is based on Visual Studio Code, which is an open-source software project started by Microsoft. It is widely adopted by the development community and is a popular choice for developers because it allows them to plug in their favorite extensions that provide additional tooling for their development projects. SAP and third parties provide many plug-ins for different types of SAP development projects to increase developer productivity.
SAP Business Application Studio provides text and graphical editors to create development
artifacts, plus a command line interface (CLI).
A key feature of SAP Business Application Studio is the native Git support for source file
version management. Git controls are embedded into the Business Application Studio.
Note:
Web IDE for SAP HANA can also be used for modeling in SAP HANA Cloud, but it is not recommended because it lacks many of the productivity aids and features that support the modeler. Web IDE will not be developed further; SAP Business Application Studio is the tool that receives all new features and is the recommended tool for data modeling in SAP HANA Cloud.
LESSON SUMMARY
You should now be able to:
● Describe SAP Business Application Studio and how it is used for development in SAP
HANA Cloud
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Create a modeling project in SAP Business Application Studio
Figure 31: File Structure in the Repository of SAP Business Application Studio
Watch this video to learn about file structure in the repository of SAP Business Application
Studio.
When a project is opened in the Explorer view, you can navigate its content (sub-folders and
files), open files with the editor, delete or move files, and so on.
The ribbon at the top of the Explorer view shows the name of the folder (capitalized). It is opened as a workspace, meaning that specific SAP Business Application Studio settings can be associated with the folder.
LESSON SUMMARY
You should now be able to:
● Create a modeling project in SAP Business Application Studio
Learning Assessment
1. You are programming using the SAP Cloud Application Programming Model. You need to define an OData service. What is the extension of the file you create?
Choose the correct answer.
X A .xsodata
X B .cds
X C .hdbcds
X D .odata
2. Which is the IDE of choice when using CAP on SAP Business Technology Platform?
Choose the correct answer.
X B SAP WebIDE
X C Visual Studio
X D Jupyter Notebook
3. Which two libraries are available for the Service SDKs?
Choose the correct answers.
X A Node.js
X B Java
X C PAL
X D ML
X A Database Explorer
X C Web IDE
5. Which option do you use if you want to create a brand new project?
Choose the correct answers.
X B Import
Lesson 1
Introducing the Multi-Target Application
Lesson 2
Describing the MTA Development Descriptor File mta.yaml
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe the basic concepts about the MTA development project
Note:
A module does not necessarily need to have code for execution in a runtime
container. Instead, it could contain other artifacts required to make an application
run.
Development tools provided by SAP allow you to manage multiple applications (as "modules") in a single development project, and deploy them via a single archive file.
In SAP HANA Cloud, we use SAP Business Application Studio as our preferred tool for application development using the Cloud Application Programming Model paradigm.
In SAP Business Application Studio, we have the option to create either a basic MTA project or an SAP Cloud Application Programming Model (CAP) project. Throughout this course, we will develop the business application and the related artifacts using a project of type CAP.
In the SAP Business Application Studio, you create a CAP project which is a Multi-Target
Application (MTA) project.
Within the MTA project, you have three folders:
● db: for the database-level schema model
● srv: for the service definition layer
● app: for UI artifacts
When deployed, every module of the project will become a separate application in Cloud
Foundry and each module will have its own buildpack, development language, and runtime
environment.
For example:
● HTML5 modules are served as static files and executed in the Web Browser
● Node.js and Java modules are executed in the application runtime, for example, Cloud Foundry.
● SAP HANA database modules generate database objects in the SAP HANA database.
Watch this video to learn about the MTA design-time and runtime relationship.
Each dev-space type fits an Intelligent Enterprise development use case. Each of these dev-
space types comes prepackaged with the tools and runtimes relevant for its scenario. The
most commonly used ones for Application Development scenarios are summarised in the
following image.
In addition to the preconfigured extensions provided for the dev space, it is possible to add further extensions. Because we plan to consume the SAP HANA artifacts within the CAP application, we create the dev space with the additional extensions for SAP HANA (SAP HANA Calculation View Editor and SAP HANA Tools), as seen in the following figure.
Once the dev space is ready, we have two options to create the project:
Using the terminal command cds init <project name>
Alternatively, we can use the wizard to create projects from a project template, either from the Welcome page or via the View: Find Command option in the menu bar.
The wizard asks for the following user inputs:
● Name:
The name of the project is mandatory, must be unique, and must be valid across platforms.
● Runtime:
Two runtimes are available from the dropdown: Node.js and Java. Once a runtime is selected and confirmed, it cannot be changed. If a change is required, a new project needs to be created.
A set of other features can be selected during project creation, and these features enrich the capabilities of the project. They can also be added later by using a terminal command, for example, cds add mta to create the mta.yaml file.
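As a minimal sketch, a project with these capabilities can be created entirely from the terminal; the project name below is illustrative.

cds init my-cap-project    # scaffold the project with db/, srv/, and app/
cd my-cap-project
cds add hana               # configure SAP HANA deployment
cds add mta                # generate the mta.yaml deployment descriptor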
List of features currently available,
● CI/CD Pipeline Integration
● Configuration of SAP HANA deployment
● MTA based SAP Business Technology Platform deployment
● Multitenancy
● Cloud Foundry Native deployment
For a quick start, basic sample files are provided; these can be added by selecting the option Build sample files.
We will create the CAP project using the template, because the purpose of a CAP project is to keep all relevant parts of the application within one project so that dependencies are handled efficiently. This is the recommended approach for application development activities.
LESSON SUMMARY
You should now be able to:
● Describe the basic concepts about the MTA development project
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe the information contained in the MTA development descriptor mta.yaml file
2. Declare resources the application depends upon at runtime and/or deployment time (benefit: tools can allocate and bind such resources)
3. Define configuration variables (and their relation), whose values distinguish different deployments of the application (benefit: tools can bind sub-components, can automate deployment based on default settings, or request missing mandatory values interactively)
The MTA model is the formal contract between developers (using development tools) and the
MTA deployer.
The deployer is a tool that consumes a description of the MTA model and translates it into
target platform specific “native” commands for provisioning runtime containers, creating and
binding resources (for instance, “service instances” on Cloud Foundry or SAP XS Advanced),
and installing, running and updating the application modules.
As seen in the image the application developer uses development tools to create the modules
of an MTA and the corresponding MTA descriptor (mta.yaml).
The application can then be distributed in the form of an MTA archive including the MTA
deployment descriptor (mtad.yaml). This specification describes an archive format as a
convenient distribution file. As indicated by the direct arrow from development tools to
deployer,
MTA deployers may also accept the pure contents of this format, namely the directory
structure of files with a deployment descriptor (mtad.yaml).
An administrator optionally augments the MTA model in the deployment descriptor with an extension descriptor (mtaext.yaml), and uses the MTA deployer to orchestrate the actual deployment.
Note:
If your project was initiated as a non-MTA project but later needs the mta.yaml file, it can be added using the cds add mta command.
Global Elements
● _schema-version
Specifies the version of the MTA descriptor in the following schema: <major>.<minor>.<patch>. Indicating only the major version is sufficient.
● ID
Mandatory string to identify the application
● Description
Optional description text
● Version
Mandatory version of the application: <major>.<minor>.<patch>
● Provider
Optional string to specify the organization providing the application
● Copyright
Optional copyright information
Modules
Within the MTA development descriptor, the modules element declares the source modules
of the MTA project.
● Name
Mandatory name of the module. Unique in the descriptor file
● Type
Mandatory content type of the module, for example, HDB, Node.js, Java, HTML5
● Path
Mandatory file system path starting from the application's root directory
● Description
Optional description text
● Requires
Optional section containing required sources of other modules
● Provides
Optional section containing configuration data used by other modules
● Properties
Optional named variable containing application-specific configuration data
● Parameters
Optional named variable to be used by the deployer, for example, the amount of memory
for the module.
Resources
● Properties
Optional named variable containing application-specific configuration data
● Parameters
Optional named variable to be used by the deployer, for example, the amount of memory
for the module
Parameters
Parameters are reserved variables which influence the behavior during the deployment
process and/or during runtime. A parameter can either be read-only, which is the case for
most system parameters, read/write, or write-only.
We can specify the memory and disk-quota parameters for the user_ui module, which advises the deployer to grant memory and disk space to the application as defined in this section.
In addition, we refer to the service-name and default-url parameters which are filled by the
system, using the placeholder notation ${<parameter_name>}. During the deployment, the
parameter value is determined and the placeholder is replaced with the actual value.
Here is a list of commonly used parameters:
In the figure above you can find some of the available parameters. You can find the full list in
the MTA Deployment Descriptor Syntax section of the SAP HANA Developer Guide:
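Putting these elements together, the following is a minimal, illustrative mta.yaml, written here from the terminal. The IDs, names, and sizes are assumptions for illustration only; note the ${default-url} placeholder, which the deployer fills in at deployment time.

cat > mta.yaml <<'EOF'
_schema-version: "3.2"
ID: my-cap-app
version: 1.0.0
description: Example multi-target application

modules:
  - name: user_ui
    type: html5
    path: app
    parameters:
      memory: 256M          # advises the deployer how much memory to grant
      disk-quota: 512M
    provides:
      - name: ui_url
        properties:
          url: ${default-url}   # placeholder filled by the deployer
  - name: my-srv
    type: nodejs
    path: gen/srv
    requires:
      - name: my-hdi-container

resources:
  - name: my-hdi-container
    type: com.sap.xs.hdi-container   # the HDI container service
    parameters:
      service: hana
      service-plan: hdi-shared
EOF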
MTA Editor
As an alternative to the code editor for the MTA file, the SAP Business Application Studio also
provides the possibility to use the MTA editor to make changes. You can find the form-based
editor on the context menu of the mta.yaml file.
LESSON SUMMARY
You should now be able to:
● Describe the information contained in the MTA development descriptor mta.yaml file
Learning Assessment
2. Which of the following parameters are mandatory in the Modules section of mta.yaml?
Choose the correct answers.
X A name
X B requires
X C type
X D path
Lesson 1
Working with Git Within SAP Business Application Studio
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Use the native Git integration of SAP Business Application Studio
Overview of Git
Why Should You Use a Version Control System?
Think about a Cloud Foundry application. This application, with its different modules (SAP
HANA database modules, UI, and control flow modules), consists of a number of files,
organized in different folders.
Now, suppose the last released version of the application is V2.1.0, and you are currently
preparing new features for the upcoming minor release V2.2.0, which is scheduled to release
in a few weeks. You might have imported the MTA project in your SAP Business Application
Studio workspace and started developing the new features that are planned for V2.2.0. This
development means that there are new files, as well as modifications to existing files, and
maybe the deletion of a few existing files as well.
At some point, you receive an important e-mail from support saying that a bug in V2.1.0 needs
an urgent fix. This cannot wait until the next minor version, and requires a patch (V2.1.1).
Watch this video to learn about a typical scenario in Version Control.
So, how do you handle this request? At the moment, your current development is not finished
or tested, but you need to patch your version 2.1.0. Of course, you want to start from the last
release (you do not want to include any part of the future functionality into the patch), but
your patch might affect some files that you already modified as part of the development of
new features.
This is where a version control system comes into play. It allows you to keep a complete
change history by using milestones during the development of your code, at a very fine-
grained level if necessary. You can also branch your code, which means, you can develop and
test different features in different parallel development threads (branches). If a feature
branch is good to go, you can merge this branch with the main branch. If it is not, you can
continue your development, or even get rid of this branch if you realize that a development
option for a feature was not relevant and you need to think about it again.
To learn more about version control systems, view the following page: https://
www.atlassian.com/git/tutorials/what-is-version-control.
Git in a Nutshell
Git is a Distributed Version Control System (D-VCS) used to manage source code in software
development, or more generally to manage the lifecycle of any set of files.
It allows one or several developers to work locally with their own copy of the Git repository,
which contrasts with a traditional client/server architecture.
Note:
Git can also be used outside of a collaboration context, to help you control the development of a project on which you work alone, thanks to a number of capabilities.
The architecture of Git is distributed. It is somewhere between purely local and purely central. A local architecture would make it difficult for several developers to collaborate. A centralized one allows collaboration, but in some cases, a developer needs to lock a piece of code on the central repository while working on it.
Instead of this, Git is designed so that every developer can keep the entire history of a project (or only a part of it, depending on their needs) locally on their computer.
Git is a free software distributed under GNU GPL (General Public License).
In a classical Git architecture, developers work in a local repository on their computer, and
connect on a regular basis to a central repository to share their contribution and get the
contribution from other developers on the project.
Git can also easily support several remote repositories for a single project. If needed, it is
possible for a sub-team of developers to have their own sub-team repository to collaborate,
and synchronize their changes with the central repository when needed.
Watch this video to learn about the Distributed Git Architecture.
Note:
In Git, it is even possible for a developer to define the local repository of another developer as a remote repository and to synchronize their development. This requires network access and the relevant authorizations.
The shared Git repositories can be hosted on your own company's IT infrastructure, or on Git hosting services such as GitHub (one of the most popular), Helix (Perforce), Bitbucket (Atlassian), and many more. A lot of companies offering Git hosting services also provide additional services such as code review or issue tracking.
These are not all real areas, in the sense that a given file is not necessarily materialized in each
of them. Let’s explain this with a diagram.
The way to manage files in a local Git repository is very straightforward, relying on a small
number of actions.
When you create a file, it is initially stored in your working directory. You can also modify (or even delete, if needed) a file that you have previously brought into your working directory with a checkout command.
At this stage, your changes, even if they are saved in your working directory, are not yet part
of the Git history. To update the Git history, you must perform what is called a Commit. This is
what will create a new point in Git history (a so-called Commit), referencing a number of
changes since the previous commit.
Watch this video to learn about the basic Git Workflow.
But what changes exactly do you want to include in your next commit? This is where the
Staging Area comes into play. By staging a file, you mark this new or modified file so that it is
included in the next Commit. You can of course stage several files (this is very common), or
even stage all the current changes of your working directory.
Let’s put it another way: The staging area is the “virtual” place where you put all the
modifications that you want to include in your next commit.
Note:
So, when your working directory contains changes that you do not want to include
in your next commit, you just need to make sure you do not stage the
corresponding files before committing.
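A minimal sketch of this workflow from the terminal (the staged file name and commit message are illustrative):

git status                   # see which files are new or modified
git add db/schema.cds        # stage one file for the next commit
git add .                    # ...or stage all current changes
git commit -m "Add supplier entity to the domain model"
git log --oneline            # the new commit now appears in the history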
Over time, all the commits you execute in your project are added to the history, and each
commit (except the initial one) references its parent commit.
Note:
Actually, you will see later on that in the case of a Merge operation, a commit can
have two parent commits.
With each commit, Git keeps a record of the commit date, the identity of the developer who executed it, and useful information about which files were affected (added, modified, or deleted).
For many different purposes, you can create a new branch and commit your changes to one of the existing branches.
Let’s describe the diagram above. After commit C3, a new branch Feature32 has been created
to support the development of a new feature. Two commits, C5 and C6, have been made to
this branch. In the meantime, additional changes have been committed in the master branch
(commit C4).
Prerequisites
1. Account on https://github.com:
Log on to https://github.com and follow the account creation process step by step.
2. Repository on https://github.com:
Once the account is created, navigate to repositories in the menu and create a repository.
Once these prerequisites are met, you can start the process of linking the local repository to the GitHub repository in SAP Business Application Studio in your dev space.
Note:
This process is carried out in the root folder: choose Terminal → New Terminal, then cd projects.
git config --list is used to retrieve the user settings saved during configuration.
You are asked for your Git user name and password; you can enter them here and proceed. To avoid this step for every push, you have the following options.
● Using a token:
SAP Business Application Studio supports personal access tokens instead of passwords.
Personal access tokens (PATs) are an alternative to using passwords for authentication to GitHub when using the GitHub API or the command line. Personal access tokens function like ordinary OAuth access tokens and can be used instead of a password for Git over HTTPS.
Personal access tokens can only be used for HTTPS Git operations. If your repository uses an SSH remote URL, you need to switch the remote from SSH to HTTPS.
You should create a personal access token to use in place of a password, either with the GitHub UI or with the command line.
To create a token, follow the instructions described in the GitHub documentation: Creating a personal access token.
When selecting scopes (permissions), to use your token to access repositories from the command line, select repo.
Username: your_username
Password: your_token
Note:
Other options exist but are not covered here, for example, the usage of a .netrc file.
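A minimal sketch of linking the local repository to GitHub and pushing with a token; <user> and <repo> are placeholders for your own account and repository.

git remote add origin https://github.com/<user>/<repo>.git
git push -u origin main    # when prompted for a password, enter the
                           # personal access token instead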
Staging or discarding can also be done for the entire set of modifications.
The next step is to commit your changes, which will add a next commit to the history of the
branch.
Note:
The concept of branches in Git, already introduced, will be discussed in more detail later on. For now, let's just consider that you are working on a single local branch, for example, master.
To materialize this workflow, each file is assigned a Git status. This status is represented by
an icon in the SAP Business Application Studio for SAP HANA workspace and/or the File
Status area of the Git pane. The list of possible file statuses is as follows:
Figure 60: Git Status of Files in the SAP Business Application Studio
Note:
The Conflict (C) status will be discussed later on.
Committing Changes
When you commit changes, you add these changes to the Git history and provide a commit
description. From this moment on, the Git History pane will include this commit, and provide
the identity of who committed, the date, the commit description, and details on which files
were affected by the commit. Note that committing changes does not modify your working
directory. The committed files are still available for further modification, and the new or
modified files you haven’t committed yet are still here for an upcoming commit. You can also
discard the changes to these uncommitted files.
The commit description provides important information to allow the readability of your
changes, in particular when you collaborate with other developers. You can find a number of
blogs and tutorials on how to write a good commit message. The following are
recommendations for writing a commit message:
● Start with a relatively short subject (50 characters max), using imperative mood
● Separate the subject and body with a blank line
● Provide info about what the change does, and why (for example, what issue it solves)
● If relevant, provide the URL of a related issue or specification in another system (for
example, JIRA)
It is possible to amend a previous commit. This allows you to replace the very last commit with a new one, after you have staged additional files. The original commit description is displayed so that you can modify it before committing again.
Caution:
You must not amend commits that have been shared with other developers,
because this would modify a (shared) history on which others might have already
based their work.
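A minimal sketch of committing and then amending a local, not-yet-shared commit; the file name and message are illustrative.

git commit -m "Fix pagination in the product list"
git add srv/cat-service.js    # stage a file forgotten in the last commit
git commit --amend            # replaces the last commit; the original
                              # message is offered for editing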
More information on Git capabilities for SAP Business Application Studio can be found at
https://help.sap.com/viewer/9d1db9835307451daa8c930fbd9ab264/Cloud/en-US/
265962e20eee43f499516de9011ac2e3.html
LESSON SUMMARY
You should now be able to:
● Use the native Git integration of SAP Business Application Studio
Learning Assessment
1. What is Git?
Choose the correct answer.
2. Which of the following options is used to move the staged files to the local repository?
Choose the correct answer.
X A Commit
X B Push
X C Push to
X D Move to
Lesson 1
Introducing the SAP HANA Database Module
Lesson 2
Introducing Domain Modeling in SAP Cloud Application Programming Model
Lesson 3
Introducing Core Data Services
Lesson 4
Using the Core Data Services Entity
Lesson 5
Using the CDS Association
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe the main features of the SAP HANA Database module
1. The user interface is written using HTML5 and rendered/executed in a web browser.
2. The HTML5 program communicates with the router via the HTTP protocol. The router is a
program running on the server. It does not execute business logic, it is just a dispatcher of
HTTP messages.
3. The router dispatches the HTTP message to the proper application, running on the server.
This application is built out of the corresponding Node.js (Java, Python, and so on)
module, coded by the developer within the MTA project in the Business Application
Studio.
4. The Node.js application communicates with the database via an SAP HANA service that is connected to an SAP HANA Deployment Infrastructure (HDI) container stored in the database.
5. Database objects (tables, views, procedures, and so on) are stored in the HDI container.
6. The procedures in the database use the SQLScript (alternatively, the R) language, which is specialized in highly parallel and data-intensive processing.
Watch this video to learn about the runtime structure of the full-stack application.
At design time, in the SAP Business Application Studio, an MTA project is created.
The database content of the MTA is defined using the db folder, also called the SAP HANA
database module.
This module contains the design time definitions of all the objects to be created in the
database, for example:
● Domain models (persistence): .cds
● Tables and views: .hdbtable and .hdbview
● Calculation views: .hdbcalculationview
● Procedures: .hdbprocedure
Note:
The SAP HANA components have to be selected during the creation of the CAP project;
alternatively, they can be added later. These components are required to be able to
create the SAP HANA native artifacts. In addition, further configuration is required
to enable the development of native SAP HANA artifacts.
A package.json file is required for the configuration of the MTA project, for example, to set the
version and the options of the deploy program. In the case of CAP projects, this file is shared
by all three modules.
Additionally, a data folder is required if the data has to be provided by files (.csv). The
system identifies the files based on their placement in the data folder under the db folder and
generates the .hdbtabledata files for deployment.
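As a minimal sketch of this convention (namespace, entity, and element names hypothetical), a CSV file placed in the data folder and named after the namespace and entity is picked up automatically:

db/
  data/
    hc450.db-Products.csv
  schema.cds

The first line of hc450.db-Products.csv lists the element names, followed by one record per line:

ID;name;price
1;Notebook;999.00
2;Monitor;249.00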
A service of type HANA (technical name com.sap.xs.hdi-container) needs to exist for the HDI
container to be accessible from Cloud Foundry. When you create a new SAP HANA database
module, a service is also added by default to the mta.yaml file, together with the configuration
required for the service to be accessible by the module.
When the module is built, a service instance (of type HANA) is created in Cloud Foundry. This
service instance is connected to the HDI container that is created in the SAP HANA database.
Using the Hidden Configuration Files in the SAP HANA Database Module
The database artifact .hdiconfig specifies the plug-ins and the version to use when deploying a
particular type of database artifact to an HDI container. The artifact type is defined in the file
suffix, for example, .hdbcds or .hdbsynonym. The plug-in definition ensures that the
appropriate runtime object is created from the specified design time artifact.
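A minimal .hdiconfig sketch, assuming three common artifact types; explicit plug-in versions can be omitted in SAP HANA Cloud, where the latest available versions are used:

{
  "file_suffixes": {
    "hdbtable": { "plugin_name": "com.sap.hana.di.table" },
    "hdbview": { "plugin_name": "com.sap.hana.di.view" },
    "hdbprocedure": { "plugin_name": "com.sap.hana.di.procedure" }
  }
}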
In SAP HANA HDI, name-space rules are defined in one or more file resources
named .hdinamespace. The file resources must be located in the design time folder to which
the naming rules apply, or the root folder of a hierarchy to which the naming rules apply. The
content of the .hdinamespace file is specified according to the JSON format.
You can change the path defined in the name and, for the sub-folder, you can choose between
append (add sub-folder name to object name space) and ignore (do not add sub-folder name
to the object name space).
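For example, a minimal .hdinamespace file (namespace name hypothetical) that appends sub-folder names to the object namespace could look as follows:

{
  "name": "hc450.db",
  "subfolder": "append"
}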
Note:
This concept will be used when working with the SAP HANA native artifacts. In the case
of .cds files, we define the namespace within the file.
LESSON SUMMARY
You should now be able to:
● Describe the main features of the SAP HANA Database module
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe domain modeling in SAP Cloud Application Programming Model
Domain models describe the static, data-related aspects of a problem domain in terms of
entity-relationship models.
As seen in the image, domain models serve as the basis for persistence models, which are
deployed to databases, as well as for service definitions. Sometimes, domain models might
also be exposed for reuse in other projects.
Persistence Models
A set of entities mapped from a domain model and deployed to the database
Services
Exposed interfaces based on a domain model
Consumers
Other services or UIs that call services via an API
Watch this video to learn about Domain Modelling.
The goal is to keep your domain models clean, concise, and comprehensible by factoring out
technical aspects and separating concerns, for example:
● Fiori markup
● Authorization
● Persistence
Figure 70: Domain Modelling - Best Practices, Namespaces, and Enterprise Features
The programming model provides the option to define your own reusable objects, called
aspects, and also provides some common aspects that can be reused.
Aspects:
Aspects are reusable models that can be defined once and then reused in other entities.
Aspects allow factoring out cross-cutting or technical concerns into separate models or
files, which generally facilitates keeping core domain models clean and comprehensible.
Sometimes, domain models may as well be exposed for reuse in other projects. The SAP
Cloud Application Programming Model is shipped with a prebuilt model named common,
which provides common aspects and types ready for reuse. It is recommended to take
advantage of the provided common types and aspects.
Reusability provides many benefits: it makes the models concise and comprehensible by
applying classic conceptual modeling methods, fosters interoperability between all
applications, and optimizes implementations and runtime performance. It also makes use of
proven best practices captured from real applications.
Common Aspects:
The cuid, managed, and temporal aspects are conveniently built in and ready to be used. The
common aspects are declared by a "using" statement.
Aspect cuid is a shortcut to add a universally unique primary key to your definitions.
Aspect managed is used to add the four audit dimensions: created by, created at, last
changed by, and changed at.
Aspect temporal enables temporal data, which allows maintaining information relating to
past, present, and future application time. This aspect basically adds two elements, validFrom
and validTo, to the entity. It also adds a tag annotation that is recognized by the built-in
support for temporal data. This built-in support covers handling date-effective records and
time slices, including time travel.
Common Types:
In addition to the common aspects, the programming model offers common types. These are
countries, currencies, and languages. All three of them are implemented in the same manner;
the declaration is similar to the common aspects, by "using".
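A minimal sketch of both mechanisms, assuming a hypothetical Products entity; cuid, managed, and Currency come from the prebuilt common model:

using { cuid, managed, Currency } from '@sap/cds/common';

entity Products : cuid, managed {  // adds key ID : UUID plus the four audit elements
  name     : String(100);
  price    : Decimal(9,2);
  currency : Currency;             // common type: association to the sap.common.Currencies code list
}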
LESSON SUMMARY
You should now be able to:
● Describe domain modeling in SAP Cloud Application Programming Model
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Explain the basic concepts of Core Data Services
Below are a few concepts from domain modeling that we will be using in the subsequent
chapters; for a complete reference, see https://cap.cloud.sap/docs/. A combined sketch
follows the list.
● Namespaces:
are used to help getting to unique names; this helps to keep the code clean by avoiding
fully qualified names.
● Entities :
represent data that can be read and written by consumers, uniquely identified by their
primary keys.
For example, Products and Suppliers are defined as entities.
● Types :
describe the types of elements within entities.
Various built-in types are provided, for example, Integer, Decimal, and UUID, as seen in the
screenshot. UUIDs are auto-generated IDs.
● Associations :
capture relationships between entities.
For example, Entity Products is Associated with Entity Supplier
● Compositions :
are used to model document structures through "contained-in" relationships.
For example, in the definition of Suppliers, the Products composition refers to the
Products entity.
In this case, one supplier can provide multiple products, but one product has exactly one
supplier.
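A combined sketch of these concepts (names and elements hypothetical):

namespace hc450.db;

entity Suppliers {
  key ID   : UUID;                 // built-in type; values are auto-generated
  name     : String(100);
  products : Composition of many Products on products.supplier = $self;  // "contained-in"
}

entity Products {
  key ID   : UUID;
  name     : String(100);
  price    : Decimal(9,2);
  supplier : Association to Suppliers;  // relationship between the entities
}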
Core Data Services (CDS) is a technique used to create the persistency layer (the database
layer) of a software application.
During software development the data model is defined via a Data Definition Language within
a file with extension .cds.
When the application is built and then deployed, a set of database tables is generated in the
database reproducing the data structure described in the file.
If the .cds file is modified and the program is re-built, the created database objects are
automatically changed to realize the new model.
The figure above is an example of a .cds file, created in the SAP Business Application Studio.
The figure above shows the created database tables that you can display with the Database
Explorer.
The figure above shows the table definition as created during deployment to the database;
this can be checked with the Database Explorer.
LESSON SUMMARY
You should now be able to:
● Explain the basic concepts of Core Data Services
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Use Entity in Core Data Services
The CDS Entity triggers (upon build) the generation of a database table with the
corresponding name and structure.
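As a minimal sketch (entity and element names hypothetical), the following definition triggers, upon build, the creation of a matching table; the comment indicates roughly what cds compile --to sql would generate from it:

entity Books {
  key ID : Integer;
  title  : String(111);
}

// roughly: CREATE TABLE ..._Books ( ID INTEGER NOT NULL, title NVARCHAR(111), PRIMARY KEY(ID) );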
LESSON SUMMARY
You should now be able to:
● Use Entity in Core Data Services
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Use association in Core Data Services
CDS Association
In the generated database table, the key fields are reported as a reference to the remote
table.
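For example (names hypothetical), a managed association such as the following results in a foreign-key column, supplier_ID by default, in the generated Products table, holding the key of the referenced Suppliers record:

entity Products {
  key ID   : UUID;
  supplier : Association to Suppliers;  // generates the column supplier_ID in the table
}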
LESSON SUMMARY
You should now be able to:
● Use association in Core Data Services
Learning Assessment
X A srv
X B src
X C app
X D web
X A db
X B src
X C srv
X D dbc
X A localization
X B localized
X C i18n
X D contexts
X A localized
X B t54n
X C i18n
X D localized data
5. When using CAP in the SAP Business Application Studio, you want to use Core Data
Services to define the persistence layer. Which extension do you use for the design time
file?
Choose the correct answer.
X A .hdbddl
X B .hdbtable
X C .cds
X D .hdbcds
X A Database Table
X B Database View
X C Calculation View
Lesson 1
Introducing Deployment Options of Persistence Models 119
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe deployment options of persistence models
2. npm install:
npm install reads the dependencies from your package.json and downloads all the specified
modules from the internet into your project.
3. Build:
The content needed for the models, that is, tables, views, and data, is placed into a separate
/gen folder in your project. This keeps the sources separate from the deployable/executable
portions of your project. You can see this addition to the project structure in the main
project navigation area. The command cds build is used.
4. Compile:
This step is optional, but it helps to check the definitions that are used during the creation of
the persistence layer in SQL and SAP HANA, and the objects created during the actual
deployment. The commands cds compile schema.cds --to hana or cds compile schema.cds
--to sql are used at the db level.
Note:
The SAP HANA deployment shows the usage of .hdbcds artifacts, but .hdbcds is not
supported in SAP HANA Cloud, so we need to perform a minor modification to enable
the SAP HANA deployment. This will be covered in the next lesson on SAP HANA
Cloud deployment.
5. Deploy:
In the case of local deployment using cds deploy, the SQLite in-memory database is used by
default.
If local deployment with persistence is required, cds deploy --to sqlite is used.
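A minimal sketch of the command sequence, run from the project root (file names hypothetical):

npm install                          # download the dependencies listed in package.json
cds build                            # place the deployable content into /gen
cds compile db/schema.cds --to sql   # optional: inspect the generated DDL
cds deploy --to sqlite:my.db         # local deployment with a persistent SQLite file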
Watch this video to learn about the process flow for Local Deployment
Watch this video to learn about the process flow for SAP HANA Cloud Deployment.
LESSON SUMMARY
You should now be able to:
● Describe deployment options of persistence models
Learning Assessment
1. Which database does SAP CAP automatically bootstrap by default when using the local
deployment option?
Choose the correct answer.
X A SQLite in-memory
X B SQLite
X D SAP HANA
Lesson 1
Introducing OData Services 127
Lesson 2
Exposing an OData Entity Set with OData 131
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe basic concepts of OData
Open Data Protocol (OData) is an OASIS standard that defines the best practice for building
and consuming RESTful APIs. OData helps you to focus on your business logic, while building
RESTful APIs, without having to worry about the approaches to define request and response
headers, status codes, HTTP methods, URL conventions, media types, payload formats,
query options, and so on. It is an open standard, defined by the OASIS consortium.
OData also guides you in tracking changes, defining functions or actions for reusable
procedures, sending asynchronous or batch requests, and so on. Additionally, OData provides
a facility for extension to fulfill any custom needs of your RESTful APIs.
OData RESTful APIs are easy to consume. The OData metadata, a machine-readable
description of the data model of the APIs, enables the creation of powerful generic client
proxies and tools. Some of these can help you interact with OData, even without knowing
anything about the protocol.
OData Basics
One of the main features of OData is that it uses the existing HTTP verbs GET, PUT, POST,
and DELETE against addressable resources identified in the URI. Conceptually, OData is a way
of performing database-style create, read, update, and delete operations on resources by
using HTTP verbs:
● GET: Get the resource (a collection of entities, a single entity, a structural property, etc.).
● POST: Create a new resource.
● PUT: Update an existing resource by replacing it with a complete instance.
● PATCH: Update an existing resource by replacing part of its properties with a partial
instance.
● DELETE: Remove the resource.
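For example, against a hypothetical service exposing a Products entity set, these operations map to plain HTTP requests as follows:

GET    /catalog/Products                               (read the entity collection)
GET    /catalog/Products(10)                           (read a single entity by key)
GET    /catalog/Products?$filter=price gt 100&$top=5   (read with query options)
POST   /catalog/Products                               (create; JSON payload in the body)
PATCH  /catalog/Products(10)                           (update part of the properties)
DELETE /catalog/Products(10)                           (remove the entity)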
● OData protocol
Enables a client to query an OData service. The OData protocol is a set of interactions,
which includes the usual REST-based create, read, update, and delete operations, along
with an OData-defined query language. The OData service sends data in either of the
following ways:
- XML-based format defined by Atom/AtomPub
- JSON
● OData client libraries
Enable access to data via the OData protocol. Since most OData clients are applications,
pre-built libraries for making OData requests and getting results reduce and simplify the
work for the developers who create those applications.
A broad selection of OData client libraries are already widely available, for example:
Android, Java, JavaScript, PHP, Ruby, and the best known mobile platforms.
● OData services
Expose an endpoint that allows access to data in the SAP HANA database.
An OData service is a logical data model; it describes entities (resources) using
associations and operations. The most important point is that the OData service forms a
kind of contract between the UI and the backend system side, helping to bring together
developers on both sides.
There are two types of document associated with each OData service:
- the service document
- the service metadata document
The service document lists entity sets, functions, and singletons that can be retrieved.
Clients can use the service document to navigate the model in a hypermedia-driven
fashion. The service document is available at http://<host>:<port>/<service>/.
The metadata document describes the types, sets, functions and actions understood by
the OData service. Clients can use the metadata document to understand how to query
and interact with entities in the service. The service metadata document is available at
http://<host>:<port>/<service>/$metadata. The URL will return XML metadata of
the service (Entity data model). The response of a service metadata document only
supports XML.
LESSON SUMMARY
You should now be able to:
● Describe basic concepts of OData
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Create a simple OData service, using OData to expose a single entity set
At design time, within the srv folder, you create a .cds file where you indicate the exposed
database object (for example, a table, view, or calculation view) and the name of the
entity set in the service.
On build, the application provides an OData service, exposing the object content under the
given entity set name.
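A minimal sketch of such a service definition (names hypothetical):

using { hc450.db as db } from '../db/schema';

service CatalogService {
  entity Products as projection on db.Products;
}

After the build, the service exposes the entity set under a path derived from the service name (for example, /catalog/Products), together with its service document and $metadata document.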
LESSON SUMMARY
You should now be able to:
● Create a simple OData service, using OData to expose a single entity set
Learning Assessment
1. What is OData?
Choose the correct answer.
X A A software tool
X C A protocol
X D A document format
2. Which file extension do you use for the definition file for an OData service?
Choose the correct answer.
X A .data
X B .odata
X C .xsodata
X D .cds
Lesson 1
Introducing SQLScript 137
Lesson 2
Creating an SQLScript Procedure 143
Lesson 3
Calling a Stored Procedure in BAS 147
Lesson 4
Debugging SQLScript 149
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Explain the basic concepts of SQLScript
What is SQLScript?
SQLScript Extends Standard SQL
SQLScript is a set of extensions added on top of standard SQL. It is used to exploit the
specific features of SAP HANA.
By using these extensions, SQLScript allows much more pushdown of the data-intensive
processing to the SAP HANA database, which would otherwise have to be done at the
application level.
Applications benefit most from the potential of SAP HANA when they perform as many data-
intensive computations in the database as possible. This avoids loading large amounts of data
into an application server separate from SAP HANA, and leverages fast column operations,
query optimization, and parallel execution. This can be achieved to a certain extent if the
applications use advanced SQL statements. However, sometimes you may want to push more
logic into the database than is possible when using individual SQL statements, or make the
logic more readable and maintainable. Therefore, SQLScript has been introduced to assist
with this task.
SQLScript Advantages
Compared to standard SQL, SQLScript provides the following advantages:
● Using SQLScript, complex logic can be broken down into smaller chunks of code. This
encourages a modular programming style which means better code reuse. Standard SQL
only allows the definition of SQL views to structure complex queries, and SQL views have
no parameters.
● SQLScript supports local variables for intermediate results with implicitly-defined types.
With standard SQL, it would be required to define globally visible views even for
intermediate steps.
● SQLScript has flow control logic such as if-then-else clauses that are not available in
standard SQL.
● Stored procedures can return multiple results, while a standard SQL query returns only
one result set.
Pushing the processing to SAP HANA is beneficial because there are many opportunities for
SAP HANA to optimize the execution with in-memory, parallel processing.
Standard SQL does not provide sufficient syntax to push many calculations to the database
and as a result, the application layer has to take on this duty. This means huge amounts of
data must be copied between the database server and the application server.
Declarative logic allows the developer to declare the data selection via SELECT statements, as
follows:
● The developer defines the what.
● The engine defines the how, and executes accordingly.
Imperative logic allows the developer to control the flow of the logic within SQLScript using
the following:
● Scalar variable manipulation
● DDL/DML logic
● WHILE loops
● Branching logic based on some conditions, for example IF/ELSE
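A minimal sketch of both styles, as a fragment of a procedure body (table, column, and variable names hypothetical):

DECLARE lv_count INTEGER;

-- Declarative: describe the "what"; the engine decides the "how"
expensive = SELECT "ID", "NAME" FROM "PRODUCTS" WHERE "PRICE" > 1000;

-- Imperative: explicit flow control based on the intermediate result
SELECT COUNT(*) INTO lv_count FROM :expensive;
IF :lv_count = 0 THEN
  INSERT INTO "PRODUCT_LOG" VALUES ('no expensive products found');
END IF;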
Code Pushdown
We need to change the way we think about application development. Previously, for example
in ABAP, we did a SELECT * INTO TABLE and brought thousands of rows back to the
application server, then looped over it and did some processing. The new model suggests a
different approach, where we take the data-intensive logic and process that logic in the
database closer to the data, and then only send back to the application layer what is
necessary for presentation to the end user for further processing. This is referred to as code
pushdown. The SAP Business Suite on SAP HANA does this in a couple of ways, for example,
in ABAP, we can now leverage SAP HANA views by exposing them to the ABAP dictionary, via
external views. We can also expose SQLScript procedures and call them directly from ABAP,
using the CALL DATABASE PROCEDURE statement.
LESSON SUMMARY
You should now be able to:
● Explain the basic concepts of SQLScript
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Create an SQLScript procedure
An SQLScript procedure definition consists of the following parts:
● Procedure name
● Header properties
● Script body
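A minimal sketch of a design-time procedure definition (.hdbprocedure), with hypothetical names; the header properties appear between the signature and the AS keyword:

PROCEDURE "get_product_count" (
  IN  im_category NVARCHAR(40),
  OUT ex_count    INTEGER
)
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
READS SQL DATA AS
BEGIN
  -- count the products of the requested category
  SELECT COUNT(*) INTO ex_count
    FROM "PRODUCTS"
   WHERE "CATEGORY" = :im_category;
END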
LESSON SUMMARY
You should now be able to:
● Create an SQLScript procedure
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Call a stored procedure in SAP HANA database explorer
You can call a procedure from the Database Explorer by following the steps below.
1. Open the SAP HANA Database Explorer directly from the Business Application Studio. This
will take you to the container directly.
3. Select the Search field and type in your stored procedure name, or parts of it. Select
your procedure.
4. Right-click its name and select Generate Call Statement, so that you can
interactively fill in your input parameters and call it.
5. Execute the procedure. If there are input parameters defined for your stored procedure,
you can pass them in the call by position or by name. It is recommended to call the procedure
with input parameters by name, so that the order of the input parameters can be changed
without affecting the result set. Additionally, debugging is easier in the long run,
because you see directly in the code which value is passed to which input parameter.
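A minimal sketch of a generated call statement with the input parameter passed by name (procedure and parameter names hypothetical; the ? placeholder stands for the output parameter):

CALL "get_product_count"(
  im_category => 'Notebooks',
  ex_count    => ?
);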
LESSON SUMMARY
You should now be able to:
● Call a stored procedure in SAP HANA database explorer
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Debug SQLScript using the DB Explorer
Debugging SQLScript
1. To open the debug panel, choose the Debugger icon on the right side of the screen.
2. Use the Attach Debugger icon (plug icon next to the Active Session dropdown box) to link
the debugger to your session.
3. Specify the debug target which is the database connection you want to use and confirm by
selecting OK.
You will see a success message appear in the upper right corner of the DB Explorer.
4. Open the procedure for debugging from the context menu: Context Menu → Open for
Debugging.
6. From the context menu of the procedure, use Generate CALL Statement.
8. Use the controls in the Debugger panel to Resume, Step Over, Step In, or Step Out.
LESSON SUMMARY
You should now be able to:
● Debug SQLScript using the DB Explorer
Learning Assessment
1. Compared to standard SQL, which of the following are the advantages of SQLScript?
Choose the correct answers.
X B It enables complex logic to be broken up into smaller chunks of code that enables
modular programming, reuse, and a better understanding by functional abstraction.
X D One can make use of table variables to structure complex SQL statements.
2. In SAP HANA, you create a stored procedure. What can you set in the header properties?
Choose the correct answers.
X A Global variables
X B Read/write access
X C Storage type
X D Programming language
X E Security
3. How can you call a procedure created using the Business Application Studio?
Choose the correct answer.
X A In Database Explorer
X A Attach Debugger
Lesson 1
Introducing the Node.js Module 155
Lesson 2
Creating and Deploying a Basic OData service using the SAP Cloud Application Programming Model 167
UNIT OBJECTIVES
● Describe introductory concepts required to use Node.js in the SAP Business Application
Studio
● Create, run, export, and deploy a Node.js module saying Hello World
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe introductory concepts required to use Node.js in the SAP Business Application
Studio
What is Node.js?
The following list describes Node.js:
The Node run-time environment includes everything you need to execute a program written in
JavaScript.
Synchronous processing stops the execution until a response is retrieved from a called API.
After the response is retrieved, the program continues.
Asynchronous processing calls an API but doesn’t wait for the response. The response is
processed later by the calling program using a callback method.
Watch this video to learn about Synchronous versus Asynchronous processing.
The event loop executes tasks from the event queue and starts the callbacks.
If the queue is empty, the event loop process stops and gives back system resources.
Node modules run in an asynchronous mode by default. If you need synchronous processing,
you can use the corresponding synchronous functions of the Node library.
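A minimal Node.js sketch contrasting the two modes, assuming a local file config.json exists:

const fs = require('fs');

// Synchronous: execution stops until the file has been read
const dataSync = fs.readFileSync('config.json', 'utf8');
console.log('sync done, length:', dataSync.length);

// Asynchronous: the call returns immediately; the callback is
// executed later by the event loop, once the result is available
fs.readFile('config.json', 'utf8', (err, data) => {
  if (err) throw err;
  console.log('async done, length:', data.length);
});
console.log('this line runs before the asynchronous callback');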
Figure 107: SAP Cloud Application Programming Model Project with Node.js runtime option
When using the SAP Business Application Studio to create an SAP Cloud Application
Programming Model project, the initial screen provides the runtime options Java and
Node.js.
Selecting Node.js enables the use of Node.js for Application Development. Adding Node.js
installs the required libraries during the creation of the project for further use.
The build, deployment, and runtime dependencies of a JavaScript application are described in
the package.json file. The package.json file is mandatory for JavaScript applications and it is
located in the general section of the project.
In addition to the application name and version, dependencies on other Node.js modules, the
Node.js version, run scripts, and the main program are configured.
The scripts section contains the different run commands, which are executed for different
tasks.
The application package descriptors are as follows:
● name
The name of the JavaScript application whose package prerequisites and dependencies
are defined in this package description.
● description
A short description of the JavaScript application, whose package prerequisites and
dependencies are defined in this package description.
● private
Use the private property to indicate if access to the package specified in name is
restricted. Private packages are not published by npm.
● version
The version of the JavaScript application, whose package prerequisites and dependencies
are defined in this package description.
● repository
The absolute path to the repository used by the JavaScript application, whose package
prerequisites and dependencies are defined in the package description.
● dependencies
A list of dependencies that apply to the JavaScript application, whose package
prerequisites and dependencies are defined in the package description.
● engines
The runtime engines used by the application specified in the name property.
● scripts
The script containing the command used to start the JavaScript application, along with
any additional (required) options and parameters.
Semantic Versioning
When defining dependencies and runtime engines in the package description, you can specify
a range of versions. For example: >1.0.3, <=1.2.5, ^1.0.5 (compatible with version 1.0.5), or
1.2.x (any 1.2 version), or 1.1.0 - 1.3.12 (any version including or between the specified
versions).
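A minimal package.json sketch illustrating these descriptors and version ranges (name and versions hypothetical):

{
  "name": "hc450-app",
  "description": "Sample CAP application",
  "private": true,
  "version": "1.0.0",
  "dependencies": {
    "@sap/cds": "^6.0.0",
    "express": "^4.17.1"
  },
  "engines": {
    "node": ">=16"
  },
  "scripts": {
    "start": "cds run",
    "watch": "cds watch"
  }
}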
Describing Which Actions are Executed in Cloud Foundry When You Run an
Application
Applications are deployed to the target platform by using the push operation of the platform
API. For this reason, in Cloud Foundry parlance, applications are "pushed" to the platform.
Pushing an application works as follows:
2. Buildpacks are executed to create the archives that make up the self-contained, ready-to-
run executable applications (downloading any required libraries and other dependencies,
and configuring the application). Different buildpacks exist for the different target runtime
environments, such as Java or JavaScript/Node.js.
3. Applications are started as separate processes. At runtime, the applications need the
connection information for the service instances to which they are bound. The
applications obtain this information from process-specific environment variables, which
are resolved by the platform in a process known as service wiring.
Bindings can only be created between applications and service instances in the same
space.
The Node.js module can be run from the terminal window with the command cds run.
The build process completes automatically when starting the module. It is not necessary to
execute it separately.
Alternatively, the module can be run from the terminal window with the command cds watch.
The cds watch command additionally monitors changes to the files, completes the build
process automatically, and restarts the module.
Using the Node.js Run Configurations in the SAP Business Application Studio
Run configurations: In the SAP Business Application Studio, run configurations with the start
scripts are available for you to execute. Different run configurations can be used in the SAP
Business Application Studio to start the Node.js application.
Note:
By default, the run configuration is created for the "development" profile. If you
configured an additional profile for your application, you can create a run
configuration that activates and uses this profile.
The dependencies of the application are calculated according to the profile
selected.
Bind Dependencies
Figure 115: Bind Process for Local and SAP HANA Cloud DB
In the Run Configurations view, you can see the available dependencies as defined in the
package.json file. You can bind or unbind these dependencies to a specific Cloud Foundry
service instance or to your local database.
Note:
The following Cloud Foundry service types are supported for binding:
● hana ( managed-hana is not supported)
● auditlog
● application-logs
Binding to the SAP HANA Cloud database: If you are binding to the SAP HANA Cloud
database, a deploy task is created and you may be prompted to deploy. You can also deploy
manually by running the deploy task.
Binding to a Cloud Foundry service: If you are binding to a Cloud Foundry service and are
not already logged in, you are prompted to log in to Cloud Foundry.
A list of all available services that match your dependency type is displayed in the command
palette.
Binding and mocking an external OData service: If you are binding to an external OData
service and are not already logged in, you are prompted to log in to Cloud Foundry. A list of
all available destinations from your subaccount is displayed in the command palette.
To mock all OData services that are not bound to a destination, turn on the property to mock
external OData services. All dependencies of type OData that are not bound to a destination
appear in the Run Configuration tree marked as mocked.
LESSON SUMMARY
You should now be able to:
● Describe introductory concepts required to use Node.js in the SAP Business Application
Studio
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Create, run, export, and deploy a Node.js module saying Hello World
Creating the Hello World OData service using the SAP Cloud Application
Programming Model
The creation of a simple Hello World OData service using the SAP Cloud Application
Programming Model is performed in four simple stages.
Watch this video to learn how to create the Hello World OData service using the SAP Cloud
Application Programming Model.
● Define a service:
We define a service using CDS.
● Implement it:
We implement the code, for example, in the Node.js express.js handlers style or the
Node.js ES6 classes style.
● Run it:
You can run it using the command line commands cds run, cds watch, or cds serve
world.cds.
● Consume it:
Consumption is done using the local browser:
http://localhost:4004/say/hello(to='world')
Code Reference

service srv { function hello (to : String) returns String; }

module.exports = class srv {
  hello(req) { return `Hello ${req.data.to}!` }
}
LESSON SUMMARY
You should now be able to:
● Create, run, export, and deploy a Node.js module saying Hello World
Learning Assessment
1. What is Node.js?
Choose the correct answer.
2. Which is the correct sequence for creating the Hello World OData service using the SAP
CAP Model?
Choose the correct answer.
Lesson 1
Introducing Event Handlers for Custom Logic 173
Lesson 2
Explaining Error Handling 183
Lesson 3
Debugging the Node.js Code 187
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Explain event handlers for custom logic
● Run SQL in the database with Node.js
Custom code is the logic that you can add to the application to express things like input
validations, additional calculations, calls to other services or to the database, and more.
We have four types of APIs, as seen in the picture:
● Construct, Reflection API:
This deals with constructing and looking things up in services, or connecting to other
required services. It is not commonly used; usually, you will not be confronted much with
these APIs.
● Querying API:
This is a query API through which you can send synchronous queries to services, including
databases.
● Messaging API :
This is the asynchronous counterpart of the query API, with which services can send
messages to one another.
● Event Handling :
These are used to register custom event handlers.
If you need a specific service to react to a specific event, you register an event handler using
srv.<phase>(<event>), where:
● <phase> is one of on, before, or after (see section Event Phases), and
● <event> is the name of the event, for example, a CRUD operation such as CREATE, READ,
UPDATE, or DELETE, or a custom event name.
Once a service has an event handler for a specific event, it becomes a consumer for that
event. Using srv.emit(<event>), a service can send arbitrary events. These events then
get consumed by other services that have event handlers registered for the respective event.
There are basically three moments at which you can hook event handlers into a service.
You can add them before the framework code is executed using .before, after it
using .after, or instead of it using .on, which means replacing the framework
implementation.
Watch this video to learn about the custom event handler phases.
In the first example, code is added before Orders are created through a POST or PUT request.
For example, one might verify that there is enough stock, or execute any other check, before
the actual request is processed.
In the second example, the code is executed after Books are read.
And in the third example, code is registered to run instead of the generic framework handler.
In the case of an external service, there wouldn't be anything the framework could do, so you
must provide an on handler; but you can also use the on hook to override generic CRUD
handlers. However, it is recommended to use the generic implementation and only diverge
from it if you need different logic.
Also note that you can add more than one event handler for the same event, for example, two
before handlers for the same order creation event.
Likewise, a single handler can handle multiple events. For example, just omit the Books string
in the second example to have a handler that is called after all entities are read, no matter
whether they are Books, Authors, or Orders.
Request objects are passed to the event handlers; they provide all sorts of information about
the context, like the request data or the method (GET, POST, and so on).
Request objects are also used to provide error messages back to the client, or to register
another set of handlers for the request lifecycle, for example, when the request has completed,
succeeded, or failed.
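A minimal sketch of the three hooks, assuming a service with Orders and Books entities (names and checks hypothetical):

const cds = require('@sap/cds');

module.exports = cds.service.impl(async function () {
  const { Orders, Books } = this.entities;

  // before: validate the input before the generic CREATE handler runs
  this.before('CREATE', Orders, (req) => {
    if (req.data.quantity < 1) req.error(400, 'Quantity must be at least 1');
  });

  // after: post-process the result of the generic READ handler
  this.after('READ', Books, (books) => {
    for (const book of books) book.onSale = book.price > 100;
  });

  // on: replace the generic handler; next() delegates back to it if needed
  this.on('READ', Orders, async (req, next) => {
    const orders = await next();  // run the generic implementation
    return orders;
  });
});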
capire - Core Services APIs (cloud.sap)
There are many ways to register a JavaScript implementation to the framework; the most
commonly used is the first one:
● Option 1:
In this method, the JavaScript file is placed next to the CDS (.cds) file used to define the
service. The JavaScript file needs to have the same name as the .cds file; this way, the
framework hooks up the implementation to the service file.
● Option 2:
Here we set the link through the impl annotation in your CDS model file (.cds), where the
respective service implementation can be found. This is useful if you have diverging file
names, or if you want to make it very explicit that the two files belong together.
● Options 3 & 4:
These are fairly advanced, and are used with the CDS serve API to bootstrap your services
on your own.
● Option 5:
This option is used when dealing with external services.
We will be using Option 1 for Registering the handlers as shown in the process flow
Figure 122: Process Flow for the Event Handler- Definition and Testing
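A minimal sketch of Options 1 and 2 (file and service names hypothetical):

// Option 1: srv/cat-service.js is picked up automatically,
// because its base name matches srv/cat-service.cds.

// Option 2: point to a diverging file name explicitly in the .cds file:
// service CatalogService @(impl: './handlers/catalog.js') { ... }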
Requests to a node service occur via the HTTP protocol. An application such as an SAPUI5
front-end can initiate these requests.
The node module receives the incoming HTTP request via the application router and executes
a defined method, depending on the request method and registered application path.
The program can access the information sent in the HTTP request. For example, the
parameters, and the request content.
During the server program execution, the HTTP response is prepared. It is then sent back to
the requester once the program execution terminates.
Back on the client side, the requester can further process the HTTP response.
To access SAP HANA database content, such as tables, procedures, and views, from a
Node.js module, the @sap/hdbext module is used.
During the application deployment, the information in the Application Deployment Descriptor
(mtad.yaml) is used to bind the database service instance to the Node.js module.
The binding information is available in the environment variables of the application. The
module @sap/xsenv is used to retrieve the service instance from the environment; it is then
passed on to the module @sap/hdbext.
The module @sap/hdbext sends the query to the database and provides a callback function
to retrieve the results.
The service bindings of an application can be retrieved using the command xs env
<application-name>.
The attributes of a bound service can be used in a service query.
@sap/hdbext is a small Node.js package, which extends the functionality of the hdb package.
hdb is a JavaScript client for Node.js, which implements the SAP HANA database SQL
command network protocol.
hdbext.middleware connects to SAP HANA automatically on each access to the specified
path (/ in this case).
Afterwards, the connection is available in req.db. This is the client object of the SAP HANA
database driver. The connection is closed automatically at the end of the request.
The result set is passed to the callback function and available in the rows array. In the
example shown in the figure above, the first record is sent back to the client via the response
object.
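A minimal sketch of this pattern, assuming a bound hana service instance and an existing PRODUCTS table:

const express = require('express');
const xsenv = require('@sap/xsenv');
const hdbext = require('@sap/hdbext');

const app = express();

// look up the bound SAP HANA service instance in the environment
const hanaOptions = xsenv.getServices({ hana: { tag: 'hana' } }).hana;

// open a database connection on each request to '/'; it is available as req.db
app.use('/', hdbext.middleware(hanaOptions));

app.get('/', (req, res) => {
  req.db.exec('SELECT * FROM "PRODUCTS"', (err, rows) => {
    if (err) return res.status(500).send(err.message);
    res.json(rows[0]);  // send the first record back to the client
  });
});

app.listen(process.env.PORT || 3000);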
https://www.npmjs.com/package/@sap/hdbext
We will see the process to implement a CAP function for a procedure.
2. Just adding the function doesn't do anything. We need to use the service handler exit in
cat-service.js again to implement the call to the Stored Procedure. This logic will
implement the exit handler for this function which in turn uses the standard @sap/hdbext
module to call the Stored Procedure from HANA.
3. We used two additional modules in our code. We need to add the two SAP HANA modules
(sap-hdbext-promisfied and @sap/hdbext) that we used in the code to our root package.json.
4. As we updated the package.json, we run npm install to install the dependencies.
8. The CAP preview UI doesn't list functions or actions, however. Just click on the /catalog
link for the entire service.
9. Manually add /get_supplier_info() to the end of the URL. If it works correctly it will show
the output.
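A minimal sketch of such an exit handler (service, function, and procedure names hypothetical), following the pattern described in the steps above:

// srv/cat-service.js
const cds = require('@sap/cds');
const hdbext = require('@sap/hdbext');
const dbClass = require('sap-hdbext-promisfied');

module.exports = cds.service.impl(async function () {
  // handler for the CDS function get_supplier_info()
  this.on('get_supplier_info', async () => {
    const db = new dbClass(await dbClass.createConnectionFromEnv());
    const sp = await db.loadProcedurePromisified(hdbext, null, 'get_supplier_info');
    const output = await sp();  // call the stored procedure
    return output.results;      // return its first result set
  });
});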
LESSON SUMMARY
You should now be able to:
● Explain event handlers for custom logic
● Run SQL in the database with Node.js
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Explain error handling
● Operational errors
These occur during runtime (for example, when sending a request to a faulty remote system).
They must be expected and handled.
● Programming errors
These are bugs in the program code. They must be corrected.
Guidelines
The key takeaways for programming errors are:
- Fail loudly: Do not hide errors and continue silently. Ensure that unexpected errors are
logged correctly. Don't catch errors you can't handle.
- Don't develop in a defensive fashion: Focus on your business logic and only handle
errors when you know they will occur. Use try/catch blocks only when necessary.
Never try to catch and handle unexpected errors, rejections of promises, etc. If it is
unexpected, you cannot handle it correctly. If you could, it would be expected (and should
already be handled). Even if your apps should be stateless, you can never be 100% sure
that a shared resource was not affected by the unexpected error. Therefore, you should
never allow an app to continue running after such an event, especially for multi-tenant
apps where there is a risk of information disclosure.
Following this will make your code shorter, clearer and simpler.
/**
* The service implementation with all service handlers
*/
module.exports = cds.service.impl(async function () {
/**
* Custom error handler
*
* throw a new error with: throw new Error('something bad happened');
*
**/
this.on("error", (err, req) => {
switch (err.message) {
case "UNIQUE_CONSTRAINT_VIOLATION":
err.message = "The entry already exists.";
break;
default:
err.message =
"An error occurred. Please retry. Technical error message: " +
err.message;
break;
}
});
});
This handler now steps in whenever this exception gets triggered and overrides it with an
alternative error message:
LESSON SUMMARY
You should now be able to:
● Explain error handling
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Debug Node.js code using the SAP Business Application Studio debugger
You can debug programs directly from the SAP Business Application Studio. The debugger
can be attached to a running Node.js module. After this is done, the debugger will be able to
stop program execution at the breakpoints.
To attach the debugger:
1. Open the debug panel on the left side of the SAP Business Application Studio by clicking
the Debug icon or using the menu option Run → Start Debugging.
2. Choose the dropdown box at the top, and choose the node module to which you want to
connect the debugger.
After you have successfully attached the debugger to the Node.js module, you can set
breakpoints by selecting the line numbers to the left of the code.
If you execute your application using the applications link in the Run console, the debugger
stops when it reaches the breakpoints that you set before.
Use the buttons on top of the debug panel to resume, step over, step in, step out, or
deactivate all breakpoints. You can also detach the debugger from your Node.js module.
LESSON SUMMARY
You should now be able to:
● Debug Node.js code using the SAP Business Application Studio debugger
Learning Assessment
X A Error Handling
X B Event Handling
X C Transport Handling
3. In SAP HANA, the debugger needs to be attached to a Node.js module before it stops at
any breakpoint.
Determine whether this statement is true or false.
X True
X False
Lesson 1
Introducing UI5 193
Lesson 2
Creating the UI Using the SAP Fiori Master-Detail Template 195
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Explain what UI5 is
Introducing UI5
LESSON SUMMARY
You should now be able to:
● Explain what UI5 is
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Describe the built-in capabilities for SAP Fiori using SAP Cloud Application Programming
Model project
Describing the Built-in Capabilities for SAP Fiori using the SAP Cloud Application
Programming Model project
Building the UI is the last step in application development, and one of high importance,
as this is where the user interaction takes place. Going back to the fundamentals of the SAP
Cloud Application Programming Model, to avoid the complexity of the technical activities, CAP
provides out-of-the-box support for SAP Fiori elements frontends.
This gives a quick start, and can be further enhanced to the users' needs. This enhancement is
achieved by certain features provided by CAP, which we will discuss in this unit.
The UI-related artifacts are developed and maintained under the app folder.
The UI generated by CAP as a quick start generally uses the metadata from the dataset. These
fields are not always the ones that are required on the UI, as we need user-friendly, common
terminology to help the adoption of the solution. This is supplemented by the translation of
the static texts, to make the application usable with no language constraints.
SAP Fiori elements apps are generic frontends, which construct and render the pages and
controls based on annotated metadata documents. The annotations provide the semantics
used to render the content.
As shown in the image, we use annotations for the CatalogService entity Products with @UI,
and then define the SelectionFields and the LineItem entries, the ones shown in the initial
output.
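A minimal sketch of such annotations in a separate .cds file (element names hypothetical):

annotate CatalogService.Products with @(UI : {
  SelectionFields : [ name, price ],
  LineItem        : [
    { Value : name,  Label : 'Product' },
    { Value : price, Label : 'Price' }
  ]
});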
We have seen, in the unit on domain modeling, the usage of i18n files for the localization of
static texts; this is one of the key elements of the enhancement. We use this as a reference,
as seen in the image above.
We use @odata.draft.enabled in the service file to enable draft for an entity exposed by a
service.
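For example (service and entity names hypothetical):

annotate CatalogService.Products with @odata.draft.enabled;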
Annotation placement: Although annotations can be added to the models with no
constraints, it is recommended to place them in the app folder; in our example, we have
defined a new .cds file under the srv folder. Having a separate file for these requirements
helps to separate concerns and eases maintenance.
Note:
For ease of readability, the above image only shows one section of the application
code. The actual output is as shown in the next image.
The SAP Fiori List Report Object Page template generates a SAPUI5 app displaying the data
for Products.
A list report is used to view and work with a large set of items. This floorplan offers powerful
features for finding and acting on relevant items. It is often used as an entry point for
navigating to the item details, which are usually shown on an object page.
The application is generated based on a pre-existing OData service providing the data.
The wizard needs the CAP project name and the OData service name; it reads the metadata
and allows you to personalize the application based on the collected information.
You don't need to provide any technical details.
The final result is a Fiori-compliant application, connected to the OData service, capable of
browsing the data.
We will consume the newly generated App in the sandbox that was created earlier.
Watch this video to learn how to generate a Fiori List Report Application.
LESSON SUMMARY
You should now be able to:
● Describe the built-in capabilities for SAP Fiori using SAP Cloud Application Programming
Model project
Learning Assessment
1. What is OpenUI5?
Choose the correct answer.
X C An open-source project
2. You are creating the index.html file of a UI5 application. What does the bootstrap
contain?
Choose the correct answer.
X C A guideline document that explains the best practice to create a Fiori compliant
List Report Application
X D A wizard that generates a UI5 List Report based on an existing OData service
Lesson 1
Introducing Application Security 203
Lesson 2
Explaining Platform Security 205
Lesson 3
Explaining Application Security 213
UNIT OBJECTIVES
● Figure out how the domain model works and name the main services
● Differentiate between authentication and authorization and explain how both are used in
SAP BTP
● Explain application security
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Figure out how the domain model works and name the main services
Identity Management
● SAP BTP Identity Provisioning service
● SAP Identity Management
We focus on the services Identity Authentication and Identity Provisioning, which help us to
work safely.
● Simplify and secure cloud-based access to business processes, applications, and data with
state-of-the-art authentication mechanisms, single sign-on, on-premise integration, and
convenient self-service options
● These are:
- Two-factor authentication
- Customer and partner onboarding
- Secure integration
LESSON SUMMARY
You should now be able to:
● Figure out how the domain model works and name the main services
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Differentiate between authentication and authorization and explain how both are used in
SAP BTP
Platform Security
In this lesson we will cover the following topics:
SAP BTP uses identity providers (IdPs) out of the box for user authentication. The identity
provider has the role of a user store:
(1) External authentication providers (SAML).
(2) The authenticated identity is used inside SAP BTP.
SAP BTP Identity Authentication can use on-premise IdPs (AD, LDAP, SAP) for user
authentication:
● Reuse of the existing IdP, easy to implement.
● Maintenance of a central user repository (no user synchronization needed).
You can change the SAML 2.0 IDP Provider on subaccount level.
Here is the SAML 2.0 Response from IDP to SAP BTP after successful login.
● Two-factor authentication
● Delegated logon
● Platform users: members on global account and subaccount, and members on space
level. Identity provider: platform IdP, on global account level.
● Business users: users that use business apps. Identity provider: IdP on subaccount
level.
No user identities are held on SAP BTP. However, domain-dependent system and service
roles and groups are used.
These roles and groups are either created directly on SAP BTP, or existing
ones are imported and mapped to the platform roles or groups. This is done with the SAP
Cloud Identity Provisioning Service.
You can identify the following user types. A developer can also be a business user.
However, the service roles still have to be assigned to the corresponding user. You can see
how this can be done in the case of your own service in the next Lesson with a Business User.
In order to use existing roles and groups, for example in the SAP BTP, these can be mapped
manually via the SAP Identity Provisioning Service.
LESSON SUMMARY
You should now be able to:
● Differentiate between authentication and authorization and explain how both are used in
SAP BTP
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Explain application security
Approuter
The approuter is a Node.js library that is available on the public npm registry. It represents a
single entry point to your application.
Its tasks are:
● The Approuter dispatches requests to our back end microservices, thus acting as a
reverse proxy. The back end microservices should not be directly accessible by the client.
● The Approuter can serve static content such as web pages, SAPUI5, or another client-side
code.
● The Approuter manages the authentication flows for our entire application.
For authentication (who the user is) and authorization (what the user is allowed to do), the
approuter takes all incoming, unauthenticated requests and initiates an OAuth2 flow
(authorization code grant) with the Extended Services for User Account and Authentication
(XSUAA) service of SAP BTP in the Cloud Foundry environment.
Main Properties on root level
● authenticationMethod
This property indicates which authentication will be applied for this xs-app.json. It can be
none (meaning that no routes are protected) or route (the authentication type is chosen
according to the definition in each particular route). The default value is route.
● logout
By using this property, you can define two important things about your business application's
central logout handling.
logoutEndpoint contains an internal path. When accessing this path, your application
triggers the central logout procedure. Triggering central logout destroys the user session
data in the approuter, calls XSUAA in order to remove the user session on its side, and also
calls the logout paths of the destinations that are defined for this specific application (refer
to the property destinations).
logoutPage can be an internal path or an absolute external URL. The value of this field
describes the so-called "landing page" address to which the user is redirected in the browser
after central logout.
● destinations
This property indicates destinations endpoints that need to be called during session/
central logout in order to destroy sessions on their side.
● services
The same as destinations. Services can implement their own specific logout logic, and the
approuter triggers these endpoints during the central logout / session timeout scenario.
Routing
One of the important capabilities of the approuter is to be a reverse proxy for your application.
In order to achieve that, you need to model the property routes correctly. That property is an
array of objects; each object represents one particular route.
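A minimal xs-app.json sketch combining these properties (destination name and paths hypothetical):

{
  "authenticationMethod": "route",
  "logout": {
    "logoutEndpoint": "/do/logout",
    "logoutPage": "/logged-out.html"
  },
  "routes": [
    {
      "source": "^/catalog/(.*)$",
      "target": "$1",
      "destination": "srv-api",
      "authenticationType": "xsuaa"
    },
    {
      "source": "^/(.*)$",
      "localDir": "webapp",
      "authenticationType": "none"
    }
  ]
}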
Install via Service Marketplace
The approuter is a Node.js component, distributed via the publicly available SAP npm
registry.
The Application Router's Design-Time Descriptor: xs-app.json
The SAP Authorization and Trust Management service lets you manage user authorizations
and trust to identity providers. Identity providers are the user base for applications. You can
use an identity authentication tenant, an SAP on-premise system, or a custom corporate
identity provider. User authorizations are managed using technical roles at the application
level, which can be aggregated into business-level groups and role collections for large-scale
cloud scenarios.
Note:
You can set up and run your own application router or you can use the application
router that is managed by SAP (for more information see Managed Application
Router).
SAP recommends running your own standalone application router only in advanced cases,
for example, when application router extensibility is required.
The managed application router enables you to access and run HTML5
applications in a cloud environment without the need to maintain your own
runtime infrastructure.
The managed application router is the HTML5 applications runtime capability that
is provided by the following products:
● SAP Work Zone
● SAP Launchpad service
● SAP Cloud Portal
To use the managed application router, you must be subscribed to one of these
services.
OAuth: Cloud Foundry applications use OAuth 2.0. When business users access an
application, the application router acts as OAuth client and redirects their request to the
OAuth authorization server for authentication. Runtime containers act as resource servers,
using the container security API of the relevant container (for example, Java, Node.js) to
validate the token issued by the OAuth authorization server.
JSON Web Token (JWT): A JSON Web Token (JWT) is an open standard (RFC 7519) that
defines a compact and self-contained way for securely transmitting information
between parties as a JSON object. This information can be verified and trusted because it is
digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/
private key pair using RSA or ECDSA.
Although JWTs can be encrypted to also provide secrecy between parties, we will focus on
signed tokens. Signed tokens can verify the integrity of the claims contained within them,
while encrypted tokens hide those claims from other parties. When tokens are signed using
public/private key pairs, the signature also certifies that only the party holding the private
key is the one that signed them.
When should you use JSON Web Tokens?
Here are some scenarios where JSON Web Tokens are useful:
Authorization
This is the most common scenario for using JWT. Once the user is logged in, each subsequent
request will include the JWT, allowing the user to access routes, services, and resources that
are permitted with that token. Single Sign-On is a feature that widely uses JWT nowadays,
because of its small overhead and its ability to be easily used across different domains.
Information Exchange
JSON Web Tokens are a good way of securely transmitting information between parties.
Because JWTs can be signed (for example, using public/private key pairs), you can be sure
the senders are who they say they are. Additionally, as the signature is calculated using
the header and the payload, you can also verify that the content hasn't been tampered with.
Note:
This graphic only applies to SAP Business Technology Platform cloud
management tools Feature Set B. Check out User and Member Management in
the SAP Help Portal pages to better understand the differences between Feature
Set A and Feature Set B in regard to user management.
In Feature Set B, your SAP BTP global account has its own XSUAA tenant. This XSUAA tenant
by default has a trust relationship to the SAP ID Service. The SAP ID Service manages a large
base of users that have created a user account with SAP. You can add users that exist in the
SAP ID Service as members to your global account and subaccounts. In order for these users
to be able to perform administrative tasks, they need to be assigned the corresponding role-
collections. There is a set of default platform role-collections, like Global Account
Administrator.
Scopes
Scopes are arbitrary values that express authorizations / access rights in an application or
service. Scopes need to be prefixed with an xsappname to make them uniquely identifiable.
Roles
Roles are entities that hold several scopes. Scopes can be put in multiple roles, so you are not
limited to have scopes sitting in just one role.
Role-Collections
Role-collections contain one or more roles. A role can be used in multiple role-collections, but
it is totally fine to have, for example, a role-collection called Admin that only has an admin
role. Role-collections are stored as an assignment in the XSUAA and are the entity that is
assigned to a certain business user.
How does it work in practice?
In the diagram you can see that there are different personas. One is the developer working
within a project and space. The other persona is an admin taking care of the CF account as a
security admin.
When you as a developer build a new business application, you define scopes and pre-bundle them in role-templates.
You perform these definitions in the so-called application security descriptor (xs-security.json) file.
You use the xs-security.json file to create an instance of the XSUAA service, which is bound to
the corresponding business application(s).
The role-template definitions translate into roles. You as an administrator assemble these
roles into role-collections and assign them to the business users of your application.
Watch this video to learn more about scopes, roles, and role-collections.
What is an xs-security.json? To simplify things, let’s just call the xs-security.json the “declaration of your app’s security”.
The following xs-security.json is an excerpt of the office supplies application being built in our
example.
{
  "xsappname": "HC_OFF_SUPPLIES",
  "tenant-mode": "dedicated",
  "scopes": [
    {
      "name": "$XSAPPNAME.Vendor",
      "description": "Supplier"
    },
    {
      "name": "$XSAPPNAME.ProcurementManager",
      "description": "Manager"
    }
  ],
  "attributes": [],
  "role-templates": [
    {
      "name": "Vendor",
      "description": "Supplier",
      "scope-references": [
        "$XSAPPNAME.Vendor"
      ],
      "attribute-references": []
    },
    {
      "name": "ProcurementManager",
      "description": "Manager",
      "scope-references": [
        "$XSAPPNAME.ProcurementManager"
      ],
      "attribute-references": []
    }
  ]
}
You have to tell the XSUAA service what your application is called (xsappname) and further define your scopes and role-templates. The scopes are used within the application to check concrete permissions whenever a user tries to perform a certain action.
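For illustration, a minimal Node.js sketch of such a permission check is shown below. It assumes the @sap/xssec JWT strategy (v3-style API) has already validated the token and populated req.authInfo; the Express route and error payload are hypothetical.

// Check the $XSAPPNAME.Vendor scope before serving a request.
const express = require("express");
const app = express();

app.get("/supplies", (req, res) => {
  // checkLocalScope() prefixes the scope with the xsappname, so this
  // effectively checks for $XSAPPNAME.Vendor from xs-security.json.
  if (!req.authInfo || !req.authInfo.checkLocalScope("Vendor")) {
    return res.status(403).json({ error: "Missing scope: Vendor" });
  }
  res.json({ message: "Vendor access granted" });
});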
Note:
CDS-based authorization deliberately refrains from using technical concepts such
as scopes as in OAuth in favor of user roles, which are closer to the conceptual
domain of business applications. This also results in much smaller JWT tokens.
Pseudo Roles
Frequently, it’s required to define access rules that aren’t based on an application-specific
user role, but rather on the authentication level of the request.
For instance, a service could be accessible not only for identified, but also for anonymous (that is, unauthenticated) users. Such roles are called pseudo roles, as they aren't assigned by user administration, but are added automatically at runtime; an example follows the list below.
The following predefined pseudo roles are currently supported by SAP Cloud Application
Programming Model:
● authenticated-user refers to (named or unnamed) users who have presented a valid
authentication claim such as a logon token.
● system-user denotes an unnamed user used for technical communication.
● any refers to all users including anonymous ones (that means, public access without
authentication).
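For example, a hypothetical CDS service could be limited to logged-on users by referencing the authenticated-user pseudo role with the @requires annotation (explained further below):

service BrowseSupplies @(requires: 'authenticated-user') {/*...*/}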
Restrictions
By default, CDS services have no access control. Hence, depending on the configured
authentication, CDS services are initially open for anonymous users.
To protect resources according to your business needs, you can define restrictions that make the runtime enforce proper access control. Alternatively, you can add custom authorization logic by means of the authorization enforcement API.
Restrictions can be defined on different CDS resources:
● Services
● Entities
● (un)bound actions and functions
You can influence the scope of a restriction by choosing an adequate hierarchy level in the
CDS model.
For instance, a restriction on service level applies to all entities in the service.
Additional restrictions on entities or actions can further limit authorized requests. See the section on combined restrictions for more details. Besides the scope, restrictions can limit access to resources with regard to different dimensions:
● The event of the request, that is, the type of the operation (what?)
● The roles of the user (who?)
● Filter-condition on instances to operate on (which?)
Note:
Note that both annotations introduce access control on the entity level. In contrast, for the sake of input validation, you can use @readonly also on the property level.
A privilege is met if and only if all its properties are fulfilled for the current request. In the following example, a Suppliers instance can be read only by a Vendor who matches the Buyer element of that instance:
entity Suppliers @(restrict: [
{ grant: 'READ', to: 'Vendor', where: 'Buyer = $user' }
]) {/*...*/}
If a privilege contains several events, only one of them needs to match the request event to comply with the privilege. The same holds if there are multiple roles defined in the to property:
service Catalogservice @(restrict: [
{ grant:['READ', 'WRITE'], to: ['Vendor', 'ProcurementManager'] }
]) {/*...*/}
In this example, all users that have the role Vendor or ProcurementManager can read or write on Catalogservice.
You can build restrictions based on multiple privileges:
entity Suppliers @(restrict: [
{ grant: ['READ','WRITE'], to: 'Vendor' },
{ grant: 'READ', where: 'buyer = $user' }
]) {/*...*/}
A request passes such a restriction if at least one of the privileges is met. In this example, Vendor users can read and write the entity Suppliers. But any user can also read all Suppliers whose buyer property matches the request user.
Similarly, the filter conditions of matched privileges are combined with logical OR:
entity Suppliers @(restrict: [
{ grant: 'READ', to: 'Vendor', where: 'country = $user.country' },
{ grant: ['READ','WRITE'], where: 'CreatedBy = $user' },
]) {/*...*/}
Here, a Vendor user can read all Suppliers with a matching country or which they created themselves.
Annotations such as @requires or @readonly are just convenience shortcuts for @restrict, for
example:
● @requires: 'Vendor' is equivalent to @restrict: [{grant:'*', to: 'Vendor'}]
● @readonly is the same as @restrict: [{ grant:'READ' }]
Supported Combinations with CDS Resources
Restrictions can be defined on different types of CDS resources, but there are some limitations with regard to the supported privileges.
Note:
Node.js supports static expressions that don't have any reference to the model, such as where: $user.level = 2.
Note:
As a result of the derived authorization rules for draft entities, you don’t need to
take care of draft events when designing the CDS authorization model.
Note:
There’s no logout functionality yet. To clear the basic authentication login data
from the browser cache, you can either clear the browser cache or simply close all
browser windows.
LESSON SUMMARY
You should now be able to:
● Explain application security
Learning Assessment
A. Identity Provider
B. Integration Provider
C. Extension Provider

A. aspect
B. role
C. scope
D. privilege
Lesson 1
Using the Job Scheduling Service
UNIT OBJECTIVES
LESSON OBJECTIVES
After completing this lesson, you will be able to:
● Understand the Job Scheduling service
Some words and explanations of the terms used in the SAP Job Scheduling service should clear things up and make it easier to get up to speed when you start working with the documentation.
What are the prerequisites for using the Job Scheduling Service?
Having talked about the terminology, we can now approach the practical integration of the SAP Job Scheduling Service.
Let us point out the prerequisites in detail. You can use the following as a kind of checklist.
● BTP Account
BTP access, the global account, and the subaccount have to be set up appropriately.
● Role
For this kind of service, the Global Account Administrator role is necessary.
● Quota
The contract with SAP has to allow the usage of this service.
● Space
The Job Scheduling service instance and all applications live at the space level.
● Space Role
For binding the service instance to your applications, you need the Space Developer or, alternatively, the Space Manager role.
● Application
The SAP Job Scheduling service does not take care of deploying applications. Your application has to be set up before it is possible to use this service.
● Endpoint
As with the application itself, its endpoints have to be configured and successfully tested before using the Job Scheduling service.
● Location
Applications and the Job Scheduling service instance must be in the same space.
In the Cloud Foundry environment, the binding between services and applications can take place in two different ways.
As shown in the image above, the steps should be followed in the given order for the SAP Job Scheduling service to work.
In this example, we are showing the scenario of using a new application.
Note:
If you already have an xsuaa service instance, remember to update it first with
the updated xs-security.json file.
Note:
During the Binding process remember to bind xsuaa first.
Note:
If you don't bind your application to the SAP Job Scheduling service instance,
you can't call the SAP Job Scheduling service from your own application.
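A minimal sketch of this sequence with the Cloud Foundry CLI is shown below; the instance and application names are hypothetical, and the service offering and plan names may differ in your landscape.

# Create the xsuaa instance from the security descriptor, or update
# an existing one so it picks up the new xs-security.json.
cf create-service xsuaa application my-xsuaa -c xs-security.json
cf update-service my-xsuaa -c xs-security.json

# Create the Job Scheduling service instance.
cf create-service jobscheduler standard my-jobscheduler

# Deploy the app without starting it, bind xsuaa FIRST, then the
# Job Scheduling service instance, and only then start the app.
cf push my-app --no-start
cf bind-service my-app my-xsuaa
cf bind-service my-app my-jobscheduler
cf start my-app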
Watch this video to learn more about the job execution modes.
As seen in the image, there are two types of job execution mode that can be configured for the Job Scheduling service. The SAP Job Scheduling service executes jobs that support action endpoints in a synchronous mode or in an asynchronous (or batch) mode.
Synchronous Mode
Synchronous requests are used if the SAP Job Scheduling service calls the action endpoint of the application and the application logic is executed in a short span of time.
An example of this would be an exchange rate application that frequently fetches the latest currency exchange rates using scheduled jobs.
Asynchronous Mode
Asynchronous requests are used for job runs with a large span of time. We can use this type of run for uploading large amounts of data or for migrating a database.
Synchronous Mode Process Overview
● When the scheduler invokes the endpoint, the application must return the response with
an appropriate HTTP status code, indicating success or failure.
● To indicate success, the application must use a suitable standard status code between
200 and 399, except 202-ACCEPTED.
● To indicate an execution failure, the application must use one of the server error codes as
outlined in the HTTP protocol specification.
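A minimal sketch of such a synchronous action endpoint is shown below (Express; the route name and worker function are hypothetical). Returning 200 marks the run as successful, while a 5xx code marks it as failed.

// Synchronous job action endpoint: the run succeeds or fails with
// the HTTP status code of this single response.
const express = require("express");
const app = express();

app.post("/jobs/refresh-rates", async (req, res) => {
  try {
    await refreshExchangeRates(); // hypothetical short-running work
    res.status(200).send("OK");   // any 2xx/3xx except 202 = success
  } catch (err) {
    res.status(500).send(err.message); // 5xx = failed run
  }
});

async function refreshExchangeRates() {
  /* hypothetical work that finishes quickly */
}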
Asynchronous Mode Process Overview
● When the scheduler invokes the endpoint, it passes the request header values for the Job ID, Job Schedule ID, Job Run ID, and the SAP Job Scheduling service Host URI, respectively:
- x-sap-job-id: Job ID
- x-sap-job-schedule-id: Job Schedule ID
- x-sap-job-run-id: Job Run ID
- x-sap-scheduler-host: SAP Job Scheduling service Host URI
The application must extract the header values and store them using a suitable
mechanism, such as in-memory storage that uses caches or libraries, or persistent
storage that uses a database.
● The application must return an acknowledgment response with the HTTP status code 202-
ACCEPTED.
The response indicates to the scheduler that the application has accepted and is
processing the request. If the application returns a server error code, the scheduler
interprets it as a failure of the job run.
● After the application completes the job processing, it must invoke the Update of Job Run
Log API to indicate success or failure and (optionally) create log text for the job run.
● If the application doesn't invoke the Update of Job Run Log API, the scheduler isn't notified
of the status of the job run and, after a configurable time interval, reverts the job to the
status UNKNOWN.
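The following minimal Node.js sketch ties these steps together. The route name, worker function, and the use of the global fetch API (Node.js 18+) are assumptions, and the OAuth token required by the Update Job Run Log call in production is omitted for brevity.

// Asynchronous job action endpoint: acknowledge with 202, do the
// work, then report the result via the Update Job Run Log API.
const express = require("express");
const app = express();

app.post("/jobs/migrate-data", (req, res) => {
  // 1. Extract and keep the scheduler headers for the callback.
  const runInfo = {
    jobId: req.headers["x-sap-job-id"],
    scheduleId: req.headers["x-sap-job-schedule-id"],
    runId: req.headers["x-sap-job-run-id"],
    host: req.headers["x-sap-scheduler-host"],
  };

  // 2. Acknowledge immediately so the run is marked as accepted.
  res.status(202).send("Accepted");

  // 3. Run the long job, then update the job run log.
  runLongJob()
    .then(() => updateJobRunLog(runInfo, true, "Job completed"))
    .catch((err) => updateJobRunLog(runInfo, false, err.message));
});

async function updateJobRunLog({ host, jobId, scheduleId, runId }, success, message) {
  // Update Job Run Log API call (authentication omitted for brevity).
  await fetch(
    `${host}/scheduler/jobs/${jobId}/schedules/${scheduleId}/runs/${runId}`,
    {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ success, message }),
    }
  );
}

async function runLongJob() {
  /* hypothetical long-running work, for example a data migration */
}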
Note:
Cloud Foundry tasks always run asynchronously.
Watch this video to learn about the Job and Schedule relationship.
Create a Job
To create a job, follow the steps shown on the left side of the above image.
● Name: Name of the job. The name must not contain special characters or consist only of numbers.
● Action: The fully qualified URL endpoint to be called when the job runs.
● HTTP Method: The HTTP method to be used to call the job action endpoint URL. Allowed values are “GET”, “POST”, “PUT”, and “DELETE”.
● Start Time: Start time for the job. The scheduler checks if a start time for a schedule is available apart from the start time available for the job. The schedule start time is used for determining the start of the schedule run. If the schedule start time is not available, the start time of the job is used.
● End Time: End time for the job. The scheduler checks if an end time for a schedule is available apart from the end time available for the job. The schedule end time is used for determining the end of the schedule run. If the schedule end time is not available, the end time of the job is used.
After we have created a job, we can navigate into that job and begin creating schedules for it. Follow the steps below; a sample schedule payload is sketched after this list.
● Pattern: Choose one of the available options (cron, time, repeatInterval, and repeatAt).
Note:
This is the only mandatory field during creation of a job schedule; however, it is best practice to provide an input for all of the fields.
● Data (JSON): The data value is passed to the job action endpoint when it is invoked by the Job Scheduler. For the HTTP method “PUT” or “POST”, the data parameters are sent in the request body while invoking the endpoint. For the HTTP method “GET” or “DELETE”, the data parameters are sent as query strings.
● Active: Activation status of the job schedule. The allowed values are true or false.
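For illustration, a hypothetical recurring schedule using the repeatInterval pattern could look like this (all field values are assumptions):

{
  "repeatInterval": "5 minutes",
  "data": { "source": "ECB" },
  "active": true
}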
There are two schedule types that can be used for scheduling jobs.
One-Time Schedule: This consists of running a job once; the schedule becomes inactive thereafter. Within the one-time schedule type, there are two kinds of input that the schedule configuration will accept, as illustrated after this list.
● The first valid input is human-readable text, which is shown at the top right of the above image.
● The second valid input is a date string format that is ISO-8601 or IETF compliant, shown at the bottom right of the above image.
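As an illustration, both of the following hypothetical one-time schedule payloads describe a single run; the first uses human-readable text, the second an ISO-8601 date string:

{ "time": "in 10 minutes", "data": {}, "active": true }

{ "time": "2030-01-15T10:00:00Z", "data": {}, "active": true }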
The steps below cover creating an authentication method so that our endpoint is protected with OAuth 2.0.
In the previous section, we saw the steps to create the Job Scheduler instance and the xsuaa instance. Now we will see how we can enable the security components. We will follow the same sequence.
● During the creation of the Job Scheduler service instance, include the XSUAA support when configuring it with the service plan standard (a CLI sketch of this follows the JSON below).
This gives our Job Scheduler instance the privilege to call REST endpoints with a JWT token.
● During the creation of the xsuaa instance, include the JSON shown below; it requires any caller of the endpoint to have the right authorization to call it.
{
  "xsappname": "<app name>",
  "scopes": [{
    "name": "$XSAPPNAME.Jobs",
    "description": "SAP Job Scheduling service Scope",
    "grant-as-authority-to-apps": [
      "$XSSERVICENAME(<jobscheduler instance name>)"
    ]
  }]
}
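For the first bullet above, enabling XSUAA support when creating the Job Scheduling service instance can be sketched with the Cloud Foundry CLI as follows (the instance name is hypothetical):

# enable-xsuaa-support lets the Job Scheduling service fetch a JWT
# and call OAuth-protected action endpoints.
cf create-service jobscheduler standard my-jobscheduler \
  -c '{"enable-xsuaa-support": true}'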
When we grant these authorizations to a user, we normally grant the scope by putting it into a role and assigning the role to the user.
Since our Job Scheduler is not a user but a service, our application grants this scope to the Job Scheduler instance at creation instead.
● Next, we can deploy our app and then bind the XSUAA instance first to our app.
● At the time of binding, the XSUAA instance passes the credentials to our app and also gets to know that we are granting some permissions to a Job Scheduler instance.
● Finally, when we bind the Job Scheduler instance to our application, the Job Scheduler will accept the granted authority and will be ready to invoke our protected REST endpoint as a trusted client.
Note:
As mentioned previously, the sequence is very important. It allows the credential exchanges to execute successfully, granting our Job Scheduler instance the Jobs scope so that it can execute the app's endpoint.
LESSON SUMMARY
You should now be able to:
● Understand the Job Scheduling service
Learning Assessment
1. What are the prerequisites for using the Job Scheduling Service?
Choose the correct answers.
A. BTP Account
B. Role
C. Quota
D. HDI Container
2. When the scheduler invokes the endpoint, which HTTP status code does the application return to indicate success?
Choose the correct answers.
A. 200
B. 201
C. 500
D. 400