S4HANA Architecture Guideline v1805
Overarching Architecture Conceptual Document for Cloud (CL) / On Premise (OP)
Version 1805 (INTERNAL)

Authors and contributors:
Stefan Elfner, Andreas Kemmler, Katherina Bick, Baré Said, Peter Eberlein, Christoph Luettge, Axel Herbst, Tobias Stein, Felix Wente, Gregor Tielsch, Christian Conradi, Markus Goebel, Christoph Birkenhauer, Andreas Huppert, Thomas Nitschke, Heiko Steffen, Philipp Steves, Andreas Poth, Volker Wiechers, Christoph Mecke, Jens Freund, Ralf Krenzke, Peter Dejon, Manfred Hirsch, Winfried Schleier, Nicolai Jordt, Ralf Kau, Bernhard Drabant, Torsten Kamenz, Carsten Ziegler, Holger Bohle, Peter Dell, Martin Schmid, Renzo Colle, Carsten Pluder, Danny von Mitschke-C., Martin Mayer, Stephan Toebben, Jochen Boeder, Peter Enrico Vittoria
Architecture Guideline S/4HANA
Contents
0 Introduction and launch of our new product S/4HANA
0.1 Evolution of Strategy
0.1.1 Public Cloud (PC) S/4HANA PC
0.1.2 Managed Cloud (MC) S/4HANA MC
0.1.3 One Cloud (CL) S/4HANA CL
0.1.4 On Premise (OP) S/4HANA OP
0.2 Consequences to this guideline
1 About This Guideline
1.1 Statement of Directive for S/4HANA
1.1.1 Semantic Compatibility
1.1.2 Feature Completeness
1.2 Scope of the Architecture Guideline for S/4HANA
1.3 Rules and their meanings
1.4 Deviation process
1.4.1 Deviations
1.4.2 Roles and Responsibilities
1.5 Contact Persons
2 Principle of ONE – The Key Driver for Simplification
2.1 Elimination of Redundancy
2.2 Zero Redundancy in Frameworks
2.3 Zero Redundancy in Application Data
3 S/4HANA Deployments and its Scope
3.1 Deployment options of S/4HANA
3.2 Core, Staging and Deprecated Scope
3.2.1 Optional Scope
3.2.2 Onboarding and provisioning of formerly deleted Software Components
3.2.3 Rules for availability of Scope
3.3 Managing Release Scope and Functional Scope by Switching Code
3.3.1 Customizing
3.3.2 Handling Business Functions
3.3.3 Feature Toggles
3.4 Shipments of S/4HANA
3.5 Software Component Stack of S/4HANA
3.5.1 Categories of the S/4HANA Stack (valid for Figure 3-2, 3-3 and 3-4)
3.5.2 Software Component Merge and relabeling to S4CORE and SAPSCORE
3.5.3 Reasons for Software Component Merge
3.5.3.1 Elimination of Interfaces, Extensions or BAdIs
3.5.3.2 Simplification of Quarantine Process
3.5.3.3 No different deployment options / combinations
3.5.4 Comparison of INFINITY and sINFINITY Codeline
3.6 Most recent Stack Definitions and dependencies for S/4HANA
3.7 Activities to be done right after the software component merge
3.7.1 Pending Database field extension
3.7.2 Elimination of Client-Dependency or Client-Independency
3.7.3 Include Split of ABAP Includes to optimize automated tests
3.7.4 Change of Master Language for all components to English
3.7.5 Handling of Business Functions
3.7.6 Convert Pool Tables / Cluster Tables
3.7.7 Eliminate Match Code Tables
3.7.8 Adjust Date-Dependent Tables
3.8 Deprecation Process
3.9 Target Architecture for S/4HANA Applications
4 User Experience and UI Development
4.1 Basic Principles and Goals
4.2 UI Technologies for S/4HANA
4.3 Automatic Processing of UIs (Batch Input)
4.4 Test Automation
4.5 Semantic Compatibility and Fiori Apps
4.6 Transactional FIORI Apps
4.7 Cross Release Interoperability for S/4HANA
5 Application Development
5.1 Usage of HANA Content
5.2 Activation of Optional Scope
5.3 Ensuring Multi Tenancy Capabilities
5.3.1 The ABAP Cloud Platform and S/4HANA Cloud Multitenancy Concept
5.4 Ensuring S/4HANA Cloud Environment
5.5 Changes in Data Models
5.6 Changes in User Interfaces
5.7 Deleting Repository Objects
5.8 Tables classification, delivery class and buffering
5.9 Ensuring Testability of S/4HANA
5.10 Configuration for Applications
5.11 Security for Applications provided in the Cloud
5.12 Central System Settings
5.12.1 Time Zone
5.12.2 SET UPDATE TASK LOCAL
5.12.3 Differentiation between Deployment Options
5.12.4 Deleting exceeded log entries from SBAL
5.13 Optimistic Concurrency Control
5.14 ABAP Domain Conversion Routines
5.15 Database Table Layout
5.16 Generating of Development Objects / Source Code Generators
5.17 Object Types and Unique IDs
Revision Log
Ver. | Date | Who | Remarks | Chap.
1805 | 2017-11-20 | Stefan Elfner | Actualizations in Chapter 3.1 and 3.2 | 3.1, 3.2
1805 | 2017-11-15 | Stefan Elfner | Changed Chapter 'Time Zone' and new Rules [C-CSC-2] and [C-CSC-3] | 5.12.1
1805 | 2017-10-05 | Andreas Kemmler | New Chapter and new rules [C-MT-4], [C-MT-5] and [C-MT-6] | 5.3.1
1805 | 2017-09-26 | Renzo Colle | New Chapter for forbidden and unwanted statements and new Rules [OC-APP-17] - [OC-APP-21] | 5.18
1805 | 2017-09-26 | Renzo Colle | New Chapter for object types and unique ids and new Rule [OC-APP-16] | 5.17
1805 | 2017-07-25 | Christoph Birkenhauer | New chapter "Statutory Reporting Framework – Advanced Compliance Reporting" | 15.8
1805 | 2017-07-19 | Pev | [OC-UX-1]: Fiori dev guideline link changed to Fiori Dev Guide Portal | 4.2
1805 | 2017-03-28 | Axel Herbst | Changed Chapter and Rules Data Archiving versus Data Aging | 12.1
1805 | 2017-02-15 | Renzo Colle | New Chapter for database table layout and new Rule [OC-APP-15] | 5.15
2.50 | 2016-11-22 | J.-Christoph Nolte | New Chapter "15.1.1 Business Events", update wording in Workflow Inbox | 15.1.2, 15.1.3
2.50 | 2016-11-18 | Jochen Boeder | Introducing new chapter "3.3 Managing Release Scope and Functional Scope by Switching", which includes former chapter 5.12.4 "Managing Business Functions" and a new section on "Feature Toggles". | 3.3
2.50 | 2016-11-04 | Renzo Colle | New Rule [OC-UX-13] for draft handling of reuse components | 4.6
2.50 | 2016-10-24 | Stefan Elfner | BSP listed as outdated technology for Cloud | 9
2.50 | 2016-09-29 | Harald Evers | Rule [OC-UX-11] enhanced | 4.6
2.50 | 2016-09-27 | Renzo Colle | Rule [OC-APP-13] enhanced | 5.14
2.50 | 2016-09-27 | Renzo Colle | Rule [OC-APP-10] enhanced | 5.13
2.50 | 2016-09-27 | Renzo Colle | New Rule [O-CSC-4] for local update task in modifying OData services | 5.12.2
2.50 | 2016-08-18 | Stefan Elfner | New Chapter Cross Release Interoperability for S/4HANA | 4.7
2.50 | 2016-08-17 | Renzo Colle | 4 new chapters including rules: Semantic Compatibility and Fiori Apps (4.5), Transactional FIORI Apps (4.6), Optimistic Concurrency Control (5.13), ABAP Domain Conversion Routines (5.14) | 4.5, 4.6, 5.13, 5.14
2.50 | 2016-07-25 | Stephan Toebben | New Chapter Attachment Services | 15.4
2.40 | 2016-07-01 | Martin Mayer | New Chapter Zero-Downtime-Option of SUM | 17.2.2
2.40 | 2016-06-06 | Danny von Mitschke-Collande | New Rule [OC-AL-4], new Rule [C-BF-8] | 15.6, 5.12.4
2.40 | 2016-05-31 | Stefan Elfner | IGS listed as outdated technology | 9
2.40 | 2016-05-13 | Stefan Elfner | Naming 'OPEN CDS' changed to 'ABAP CDS' | 5.1
Abbreviation Meaning
AL Application Log
AR Analytical Reporting
BF Business Functions
BL Business Logic
CONF Configuration
DA Data Aging
DO Deployment Option
DPL Deployment
EP Event Provisioning
EXT Extensibility
LM Lifecycle Management
MO Mobile Infrastructure
MT Multi Tenancy
OM Output Management
PF Print Forms
SC Service Consumption
SP Service Provisioning
SRCH Search
TA Test Automation
UX User Experience
WF Workflow
The rules presented in this guideline must be followed for all simplified development1. In particular, all local architecture guidelines shall be consistent with these rules. By default, the rules do not automatically imply that existing architecture shall be changed. Wherever a rule is also effective retroactively, i.e. has an impact on an existing architecture, this is explicitly stated.
1.4.1 Deviations
The main purpose of architecture guidelines at SAP is to create a highly homogeneous product with a high degree of conceptual integrity, fulfilling all qualities which are relevant for cloud applications in a cloud deployment infrastructure.
However, when working with guidelines one may find that some of their requirements cannot be fulfilled, be it for technical, strategic, time-to-market or other reasons. Such deviations from the guideline are handled by change management, which encompasses recording deviations and their rationales, finding appropriate workarounds, invoking an approval process, and applying mitigation strategies. Within Architecture Governance this activity is named deviation management.
• All deviations of guidelines, ACDs, and implemented functionality from the S/4HANA Architecture
Guideline must be brought to the attention of the Central AI Architecture represented by Tobias Stein.
• All deviations of guidelines, ACDs, and implemented functionality from the respective Unit Architecture
Guideline must be brought to the attention of the respective Lead Architect.
Deviation management serves multiple purposes; amongst others, it helps to create transparency, to improve the guidelines, and to direct technology investments.
1 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.
• For all deviations from the SAP Architecture Guideline, the lead architect of the development area
must file for approval as described in the Deviation Requests.
• A record of all approved deviations from the S/4HANA Architecture Guideline shall be kept by the
Central AI Architecture.
• Rules Frameworks
• Planning tools
• Workflow engines
• Reporting Tools
• User Management Aspects
• UI Technologies
[OC-ONE-1] The creation of any new redundancy in frameworks or in application data is strictly forbidden.
[OC-ONE-2] Zero redundancy in frameworks shall be achieved during the transition phase – a clear roadmap to zero redundancy shall be available.
[OC-ONE-3] Zero redundancy in application data shall be achieved during the transition phase – a clear roadmap to zero redundancy shall be available. This also includes application indices and aggregation tables.
• Lifecycle Management
• Data and Configuration Migration from On Premise Business Suite to S/4HANA
• Upgrade from S/4HANA Version [n] to Version [n+1]
The advantages of the intended cloud qualities, which are essential for SAP to achieve, are in most cases also advantages for an On Premise deployment.
The following figure describes, in a very simplified way, how the different deployment options with different enabled scope are built out of one so-called sInfinity codeline software stack (see also Chapter 3.5).
The meaning and task of the Development Consolidation Layer will be explained in the next chapters.
The initial definition of the S/4HANA Public Cloud Scope CORE was based on the Business All in One (BAiO) baseline packages, with roughly 100 process scope items across the major application areas like Materials Management, Sales and Distribution, Services, Financial Accounting and Controlling, focusing on ~400 business transactions.
All these 400 business transactions were contained inside the initial stack of S/4HANA, located in one software component named S4CORE for S/4HANA On Premise [sInfinity System ER9] or SAPSCORE for S/4HANA Cloud [sInfinity System ER6 for Cloud-only artefacts]. There are major parts of scope which are still classified as On Premise only scope, compatibility scope or even deprecated scope.
Outdated scope and its artefacts shall be classified as deprecated – [Field TDEVC-PACKTYPE = ‘D’].
3.3.1 Customizing
Customers scope and configure S/4HANA Cloud using the Packaging Framework and Best Practice Content only!
See the rules for availability of scope [OC-DPL-5] and the rule for delivering innovation as a customer option in Public Cloud [OC-PC-11].
All definitions and rules for predefined configuration content, configuration methodology and Fiori based cloud user configurations can be found in the following WIKI: https://wiki.wdf.sap.corp/wiki//x/Zb3bXw
[O-BF-2] For S/4H On Premise Deployment all existing Business Functions shall have a state
predefined by SAP as one of the following:
• Customer Switchable (default behavior if not defined explicitly)
• Always On
• Always Off
Status "Customer Switchable" means the same behavior as in the past: it is up to the customer to
decide whether to activate a BF or not.
Status "Always Off" means that new customers cannot activate this BF in S4H OP. Existing
customers who have already activated this BF cannot upgrade to S4H OP! So use it
with care. This is primarily intended for industries which are not yet released in S4H OP but
are technically part of the stack.
The planned status of a Business Function shall be maintained as an attribute of the BF in SFW2.
In follow-on shipments the following status changes are allowed:
• Always Off -> Always On
• Always Off -> Customer Switchable
• Customer Switchable -> Always On
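The allowed status transitions above can be expressed as a small validation table. The following is an illustrative Python sketch only; in practice the planned status is maintained as a BF attribute in SFW2, and all names used here are assumptions, not SAP APIs:

```python
# Allowed lifecycle transitions for the planned status of a Business Function,
# taken from the list above. Everything else (function name, data structure)
# is illustrative.
ALLOWED_TRANSITIONS = {
    "ALWAYS_OFF": {"ALWAYS_ON", "CUSTOMER_SWITCHABLE"},
    "CUSTOMER_SWITCHABLE": {"ALWAYS_ON"},
    "ALWAYS_ON": set(),  # a BF that is always on can never be restricted again
}

def is_allowed_transition(old_status: str, new_status: str) -> bool:
    """Return True if a follow-on shipment may change the BF status this way."""
    return new_status in ALLOWED_TRANSITIONS.get(old_status, set())
```

Note that all allowed transitions move toward "Always On": functionality may be opened up in follow-on shipments, but never taken away from customers again.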
[OC-BF-3] If a Business Function is defined as "Always On" by SAP in all three S/4H deployment
options (Cloud and On Premise), the Business Function and its switches can be removed in
some cases, so that the functionality behind the Business Function is no longer switchable.
For details see
\\Dwdf213.wdf.sap.corp\aci_lob_soh\49_Simplified_Suite\30_Workstreams\25_Architecture\
BusinessFunctions\Business_Function_Removal.docx
[OC-BF-4] New, S4H-specific development shall not be switched via the Switch Framework. This refers to
further developments in functionality which is designated as the "go-to" functionality in S4H
and which is recommended to be used by customers – typically new, already simplified
applications. This is in contrast to old, redundant functionality which we offer in S4H only in
order to reduce the initial upgrade effort for customers.
[OC-BF-5] New Business Functions introduced in further developments of old functionality in the
context of EHP8 or future Enhancement Packages, which reach S4H via upports, are also
subject to rules [*BF-1-4 and 6-7]. The preferred option in this case – if a status of "Always On"
is possible in all three deployment options in line with rules [*BF-1-2] – is not to introduce the
Business Function in the S4H codeline in the first place.
[C-BF-6] In contrast to On Premise deployments, where consultants on the customer side decide
whether and which non-cascading delivery customizing is cascaded from client 000 to productive
clients, in Cloud deployments this is handled exclusively by SET (SaKP) processes and tools. So if
you declare a Business Function as "Always On" in the Cloud and this Business Function
contains Switch BC Sets with client-specific, non-cascading table content (table delivery
class C, G), you have to get in contact with the SET PO of your development area (see
https://wiki.wdf.sap.corp/wiki/display/SimplSuite/Best+Practice+Content ) to ensure that this
content is handled in line with other best practice content (repackaged or at least added to
the white list for a clean client setup).
[C-BF-7] DDIC shall not be switched in the S/4HANA Core code line. DDIC shall be the same in S/4 On
Premise and S/4 Cloud for the common parts, i.e. software component S4CORE.
As of now (this version of the guideline), feature toggles are in a pilot phase and support hidden code delivery
only. If you are interested in trying feature toggles, please contact Vijaya A Bhaskar or Jochen Boeder.
[OC-FT-1] One "Feature" shall be enabled/disabled by one feature toggle. Arbitrary functional pieces shall not
be bundled as a feature.
[O-FT-2] In on premise releases, feature toggles shall be used only to hide a feature ("hidden code delivery").
There is no beta testing using feature toggles in on premise deployments.
[C-FT-3] Feature toggles are temporary. After final release of a feature to all customers, the feature toggle
shall be removed and the corresponding source code cleaned up.
[OC-FT-4] Feature toggles must not be exposed to customers directly. They can be activated either directly by
SAP or indirectly by the customer selecting scope items.
[OC-FT-5] In case a feature is partly implemented in S/4HANA Core and partly in SAP HCP, each feature
part shall use a separate feature toggle according to its technology platform. The dependency shall be
managed manually until it is supported by the central feature repository.
As a guidance, a feature should be implemented with the least amount of condition statements possible to limit the
effort of removing them afterwards. Often it is sufficient to hide the new feature on the UIs and APIs.
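The "hidden code delivery" pattern with a single condition statement at the UI/API entry point can be sketched as follows. This is an illustrative Python sketch; the toggle registry, feature id and function names are assumptions, and in S/4HANA the toggle state would come from the central feature repository and be activated by SAP or via scope-item selection, never by the customer directly:

```python
# Hidden code delivery with a feature toggle: the new functionality ships in
# the codeline but is invisible until the toggle is activated. One single
# condition statement guards the feature, so removing the toggle later
# (rule [C-FT-3]) is a one-line cleanup.
ACTIVE_TOGGLES = set()  # filled by SAP / scope-item selection in this sketch

def feature_enabled(toggle_id: str) -> bool:
    """Check the (here: in-memory) toggle registry."""
    return toggle_id in ACTIVE_TOGGLES

def available_actions() -> list:
    """Hide the new feature at the UI/API entry point only."""
    actions = ["display", "edit"]
    if feature_enabled("NEW_APPROVAL_FLOW"):  # hypothetical feature id
        actions.append("approve")
    return actions
```

Guarding only the entry point, rather than scattering checks through the business logic, is what keeps the later toggle removal cheap.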
Figure 3-2: 1805 Stack for sInfinity Development – internal Grouping [ER9]
Figure 3-3: 1805 Stack for sInfinity Cloud only Development – [ER6, ER3]
Without massive and sustainable changes inside the software stack (stable interfaces, elimination of
dependencies, version interoperability, 100% clean package definitions…) this one component approach will
remain. This shall also be a role model for a next Suite on HANA release.
https://wiki.wdf.sap.corp/wiki/display/SimplSuite/Mandatory+Developer+Onboarding+Information#MandatoryDeveloperOnboardingInformation-MasterLanguageSoftSwitch
The logon language will be defaulted to English and can't be changed.
All new objects will be created in the new master language. In case of changes to existing objects whose
master language is German, the following popup will be suppressed and internally processed with the option
'Change orig. language'.
Please be aware that if a sub-object, e.g. a function module, is changed with respect to its master language,
this will enforce that the complete main object – here the function group – and all its sub-objects will be
changed concerning the master language.
3 Currently, the usage of Design Studio and Lumira is not recommended in S/4HANA. See Current Restrictions
for details.
[OC-UX-1] All new applications for any purpose of user access (Application, Configuration
and Administration) shall be developed based on the FIORI UX paradigm. All Fiori
design principles and Fiori development guidelines for the Business Suite are also valid
for S/4HANA. In case there is a dedicated deviation, this will be mentioned in the same
Fiori development guidelines.
As mentioned before in the deviation section, we expect that the first rule won't be applicable to all
applications defined for the business scope of S/4HANA. Therefore there is – especially for already
existing applications – a second choice of UI technology allowed in the S/4HANA product:
[C-UX-2] All existing applications inside the defined scope which cannot be converted or
rebuilt into FIORI applications shall make use of WebDynpro ABAP technology
based on the Floor Plan Manager (FPM) concept or WebGUI (HTML GUI).
[O-UX-9] All existing applications inside the defined scope which cannot be converted or
rebuilt into FIORI applications may stay based on SAP GUI.
[OC-UX-3] For use cases where the visualization of the location context (via geocodes) is
needed, the Visual Business component shall be used. For use cases where the
visualization of 2D and/or 3D CAD drawings is needed, the Right Hemisphere
component shall be used. Further information and detailed rules will be linked into
this document as soon as possible.
[OC-UX-4] Rules for the usage of Adobe interactive forms (aka SAP Interactive Forms by
Adobe) can be found here. See also 11.1.1 (Rules for Print Forms)
[C-UX-6] Configuration and Administration UIs for Cloud Managing End Users (Shared
Service Center, SAP-IT Administrators) might have access to existing applications
(e.g. transactions like SPRO, ST*, SM*, SE*). There is no access of Cloud
Customer Users to the IMG (SPRO).
[C-UX-6] Batch-Input is not allowed in any Applications or Processes for S/4HANA Cloud
offering.
[OC-UX-7] Batch-Input is only allowed for Configuration, Administration and especially Test
Automation purposes, in case the executing user belongs to a cloud infrastructure
managing role. The target is to get rid of batch input in applications.
[OC-UX-8] Frontend test automation and unit tests for all S/4HANA applications are mandatory.
1. The FIORI app is very simple, or any business logic necessary based on the user
interaction is provided in the client implementation or orchestrated and invoked from the client
side via functional (non-modifying) services. The modifying request is sent to the server at the
end of the user interaction and invokes further server business logic and stores a consistent state.
Example: Leave Request with very little business logic during user interaction and major logic only
during leave request submission.
2. The FIORI app requires major server logic during user interaction that is not or cannot be made
available via functional services, e.g. due to massive reuse of existing backend functionality. As
the modifying requests do not lead to a consistent state (in the sense of business consistency)
in this approach, the transactional application shall be draft-enabled.
Example: Purchase Order with major business logic invocation during user interaction.
As the draft enablement adds further features like "data-loss prevention", "start now, continue later",
"device switch" and even "collaborative editing", it might be requested (by the product owner) that
FIORI apps without major business logic invocation also apply the draft concept.
Example: a Leave Request where I can plan my next vacation but do not yet submit it to my manager.
[OC-UX-11] FIORI apps shall apply the draft concept by providing a draft-orchestrated OData service
and enabling the underlying application for draft handling if they need:
a) modifying server requests for business logic invocation during user interaction
b) pessimistic locking of active data to protect draft processing against conflicting changes of
existing application logic used by classic UIs or process integration scenarios.
c) Draft features and related UX qualities (data loss, continuous work, shared editing, …)
Conversely, Fiori apps may disregard the draft concept if
d) logic is purely triggered via (parameterized) quick actions, or
e) the entire state can be kept in a single view on the UI and saved at once.
In all cases, the consistent state of draft or active entities needs to be assured by implementing eTags.
In case of e), applications must be aware that these data entry apps only provide optimistic
concurrency control and have a different UX behavior than apps with draft enablement4.
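The eTag-based optimistic concurrency control required for such non-draft data entry apps can be sketched as follows. This is a minimal illustrative Python sketch, not the actual OData implementation; entity layout and function names are assumptions, and in OData the eTag travels in the HTTP If-Match header, with a mismatch answered by HTTP 412:

```python
# Optimistic concurrency via eTags: the client reads the entity together with
# an eTag derived from the persisted state; on save, the server rejects the
# request if the entity changed in the meantime.
import hashlib

def compute_etag(entity: dict) -> str:
    """Derive the eTag from the persisted state (e.g. a change timestamp)."""
    return hashlib.sha1(str(sorted(entity.items())).encode()).hexdigest()

class ConcurrentModificationError(Exception):
    pass

def update_entity(store: dict, key: str, changes: dict, if_match: str) -> str:
    """Apply changes only if the client's eTag still matches the stored state;
    otherwise refuse the save instead of silently overwriting the other user's
    changes. Returns the new eTag on success."""
    current = store[key]
    if compute_etag(current) != if_match:
        raise ConcurrentModificationError("entity was changed by another user")
    current.update(changes)
    return compute_etag(current)
```

Unlike pessimistic locking of draft-enabled apps, the conflict is detected only at save time, which is exactly the different UX behavior the rule warns about.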
Furthermore, as based on this rule many applications and business objects will implement draft-enabled
Fiori apps, it is obvious that any embedded reuse component with its own persistency that can be
maintained together with the using business object also has to support this concept. Otherwise the UI
behavior would not be consistent: data of the using business object would be stored when closing the
browser, whereas data of the reuse component would be lost. Examples are Attachments, Notes, Process Route,
Pricing, etc.
Reuse components that are maintained separately and only based on active data do not necessarily
need to adopt the draft concept. Examples are the Application Log, etc.
4 Limitations of Smart Templates for non-draft enabled apps can be found here:
https://wiki.wdf.sap.corp/wiki/display/fioritech/Non-Draft-App+restrictions+and+limitations
[OC-UX-13] The adoption of the draft concept for reuse components with own persistency that can
be maintained together with their using business object (i.e. with an "aggregate" dependency) is
mandatory.
5 Application Development
5.1 Usage of HANA Content
S/4HANA is clearly and only based on SAP HANA DB. The following rules apply regarding the usage
of HANA specific functionality:
[OC-APP-1] The use of HANA specific features is generally allowed and there is no need to cover
the usage of such features in Optimization BAdIs (as it is needed for the Business
Suite 7 i2013 Quarterly Shipments).
[OC-APP-2] Everything (performance, functionality …) which can be achieved with ABAP, Open
SQL and ABAP managed HANA features (views, procedures, table functions)
should be done using these techniques, as they still have advantages in the areas
of extensibility, supportability, lifecycle management and operations compared to
the corresponding HANA managed objects (calc views, …).
[OC-APP-3] It is not allowed to create new HANA repository objects if they don’t fulfill the lifecycle
requirements of Zero Downtime Management (ZDM). Existing HANA repository
content shall be migrated / converted to ABAP managed artefacts or to HDI
supported content.
[OC-APP-4] New Virtual Data Models (VDM) used e.g. in S/4HANA shall be realized via ABAP
CDS. Existing VDMs which shall be used in S/4HANA shall be migrated / converted
to ABAP CDS.
[OC-APP-8] ABAP managed stored procedures and table functions are fully supported and shall
be used in case they offer better performance than ABAP CDS views or if they offer
functionality which is not available with ABAP CDS.
You will find more information, tips and tricks and best practices about CDS Views and AMDPs in
the following Wiki: https://wiki.wdf.sap.corp/wiki//x/KrQMY
What is shared and how isolation is reached depends on the Multi Tenancy (MT) implementation
concept. Typically, the following areas use and therefore depend on the multi-tenancy concept.
From a customer point of view, isolation and integrity of the tenant is the primary requirement, while it
is in the provider's interest to have a maximum of shared "resources" to minimize cost. Resources
can be hardware and software. For hardware resources, providers typically try to "underprovision" the
tenants' needs, assuming that the maximum allowed usage of the resources seldom occurs and, when
it does, is still manageable (not fatal).
For HEC scenarios or typical IaaS vendors, the datacenter, the network and the virtualized hardware
are shared. The main mechanisms to enforce isolation are in the network and in the virtualization
techniques for hardware.
Other SaaS implementations have chosen to implement multi tenancy via application frameworks,
sometimes supported natively by application programming models and languages. In case of larger
tenant sizes, they use database techniques like DB schemas and/or multiple databases to achieve
data isolation and efficiency.
• The ABAP system does not use database techniques to reach data isolation – it uses a
concept known only to the application server, based on the "client" field.
• We intend to push more execution into the database. To have proper MT support or even
an enforcement of MT, database techniques for isolation must be used.
• The tenant (data) separation is not enforced by the application server – only enabled.
• The multi-client separation rules are very often violated in the current ABAP coding.
• ABAP extension mechanisms are largely ABAP language oriented. They lead to system
artefacts (source code / DDIC) which are shared within one system.
As a consequence of the high implementation costs, it was decided not to introduce an MT
concept based on the ABAP client field in the foreseeable future, as was done in ByD. We still
fulfill the customer requirement for isolation because each tenant gets its own ABAP system.
While database management costs as well as some hardware costs can be reduced by
leveraging the "HANA multiple DB" feature, this does not address the TCO reduction a multi-tenant
application server could provide; the two address different cost drivers. The "HANA multiple DB"
concept does not imply any specific rules for application development as long as we consume
data from one schema of one "tenant DB", which is the case for Model S.
On the other hand, the ABAP client field is also used for purposes other than data separation
between clients; it is also essential for separating "system" artifacts managed by the provider from
the actual "tenant" data. This "separation of concerns between business users and provider IT" is
something we must foster for any SaaS implementation, regardless of the MT concept. The rules
application code must follow to support system-from-tenant isolation are very similar to those for
separation between tenants, which are "must haves" to fulfill the customer requirement of "isolation".
With the introduction of the "managed cloud" and "on premise" delivery models for the S/4HANA
product variants, the on-premise lifecycle management methodology, tools, and processes must be
supported as well – at least in the early product versions, until we have extended the validity of the
"public cloud" procedures to the other delivery types. In classical ERP systems, essential
procedures make use of multiple clients; this is a fact that has to be supported without
compromise. We also make use of multiple (application) clients in public cloud installations, e.g. for
trial systems, where strict isolation requirements are relaxed.
The following rules are motivated by both of these concerns, separation between tenants and
separation of system from tenant:
[C-MT-2] Shall: Business application code should not be dependent on system configuration
System configuration defines system behavior, which is by nature independent of any customer choice
or decision.
The default is that business application code does not depend (directly) on any system
configuration. Therefore, all customizing tables of the application code should be client-dependent.
There are some allowed exceptions to this general rule:
1. Reading system configuration that is defined and managed by system code is allowed, but
should be used sparingly.
2. In cases where a business component delivers out of one code line to Model S and also
to on-premise ERP, the behavior might depend on the system type (Model S or on
premise). To manage the different behaviors, the definition of system configuration is allowed. For
that purpose a central 'System Type Switch' CL_SYS_TYPE_????` (to be implemented)
will be provided.
In SaaS environments, system configuration should be delivered together with the code as part of the
software change process. For Model S, all systems (of the same version) will have identical
system configuration with respect to application behavior.
[OC-MT-3] Should: Eliminate "cross client data access" in business applications
For business application code there is no known valid use case where data from more than one
application client (potential tenants) should be processed in one transaction. Processing data
belonging to different ABAP clients is strictly forbidden, with the exception of the involvement of client
"000".
The ABAP client "000" is used as the "delivery client". Some applications may therefore read the
system default configuration from client "000", while the tenant specifics are persisted in the same
table but in the appropriate ABAP client. For every new implementation we recommend using a
client-independent table of delivery class "S" for shipping the system default and merging the
effective value during read with a value from a client-dependent table of delivery class "C".
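The recommended read-merge pattern can be sketched as follows; the table, field, and method names here are purely illustrative assumptions, not shipped objects:

```abap
" Hypothetical sketch: ZSHIP_DEFAULTS is a client-independent table of
" delivery class 'S' carrying the shipped system default; ZTENANT_CONF
" is a client-dependent table of delivery class 'C' with the tenant value.
METHOD get_effective_value.
  " Start with the shipped system default ...
  SELECT SINGLE low_value FROM zship_defaults
    WHERE param = @iv_param
    INTO @rv_value.
  " ... and let a tenant-specific entry override it; Open SQL adds the
  " client condition for the client-dependent table automatically.
  SELECT SINGLE low_value FROM ztenant_conf
    WHERE param = @iv_param
    INTO @DATA(lv_tenant).
  IF sy-subrc = 0.
    rv_value = lv_tenant.
  ENDIF.
ENDMETHOD.
```

Because the tenant table is read through the normal client handling, no cross-client access is needed to merge the default with the tenant override.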
Copy procedures as part of business applications, like those present in the OP Suite stack,
where data is copied from one (business) client into a different (business) client, are not allowed. If
there is a need for a cross-tenant data copy, use replication techniques or other allowed data-move
approaches. Even for the OP Suite we recommend eliminating these cross-client data copy procedures
– you may use "remote copy" patterns, which ensure that the permissions for reading and writing of
the two users (the sender user and the receiver user) are enforced.
5.3.1 The ABAP Cloud Platform and S/4HANA Cloud Multitenancy Concept
Multitenancy in S/4HANA Cloud is implemented using HANA Multitenant Database Containers,
introducing a Shared Container for tenant-independent data and thus reducing the RAM footprint of
the Tenant Containers.
To reduce the tenant footprint, and to allow optimizations of software change management
procedures, so-called "Shared Containers" are used, which hold common data (software and
configuration artifacts) shared by all tenants.
A set of ABAP “Runtime Containers” sharing the same software version for the entire software stack
is called a “Runtime Cluster” if they all share the same Shared Container. Each Runtime Container
can have one or multiple ABAP Application Server instances. These instances are dedicated to
specific tenants. Of course, Runtime Containers from different tenants can share hardware, e.g. via
server virtualization.
A multitenant HANA System is used per Runtime Cluster, with a dedicated HANA Container per
tenant and a single Shared HANA Container. Changes of table content in the Shared HANA Container
are done by software change management tools, performed only as part of release upgrade and
patching procedures. The ABAP Platform Data Dictionary manages the structure of shared tables
with tenant-local content and SQL union views (see next section) in cooperation with software change
management tools.
You will find more information about the MT concept in the Wiki of the WhiteBird MT and Resource
Sharing Workstream.
The "Sharing" concept of ABAP Platform Cloud and S/4HANA Cloud (1711 onwards)
When we speak of "sharing" in the S/4HANA cloud environment, we mean delivering SAP-owned
artifacts, code and table content, just once for a large set of customer "tenants". This set of SAP-
owned shared artifacts, the "shared container", is used by many tenants and therefore must be read-
only from the tenant perspective. Content of the shared container can consequently only be
modified by software updates, for example patches or upgrades. Any attempt to write a "shared
object" from the tenant results in a dump.
Consequently, any code-generating program that creates or modifies programs, function modules,
interfaces, classes and the like based on customer configuration cannot produce "shared content",
but must write into the respective tenant container only. For this purpose we must distinguish "shared
content" from its "tenant specific" complement. This distinction is based on the object catalog (TADIR)
attribute "GENFLAG", which indicates a generated object. Any object listed in the object catalog with
an initial (space) GENFLAG is a sharable object and will be stored in the shared container, and thus
cannot be modified from an application.
[C-MT-4] IF a generated object has a TADIR entry AND can be (re-)generated in a cloud tenant
context THEN this TADIR entry must be generated with TADIR-GENFLAG <> space.
Remarks:
1. Whether TADIR-GENFLAG has to be set to 'X' or 'T' (which are currently, June 2017, the
only supported values beside space) is dependent on the generating scenario, see F1-
documentation of TADIR-GENFLAG.
2. This refers to code generation in tenant containers only; any "design time generation" of
objects which are then transported 'normally' (just like any other development object) is not
affected.
3. Generated objects without a TADIR entry, and those located in software component
LOCAL or package $TMP, are also not an issue, as these will never be shared.
4. Find more details in wiki Correct code generation programs for shared cloud usage.
See also the General Generation Guidelines [C-GEN-1] in section 5.16.
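As an illustration only (the TADIR table and GENFLAG field are real, the exception class name is a hypothetical placeholder), a generator scenario could verify its own catalog entry against [C-MT-4] like this:

```abap
" Hedged sketch: verify that a generated object's catalog entry is
" flagged as generated (TADIR-GENFLAG <> space), so it ends up in the
" tenant container rather than the read-only shared container.
SELECT SINGLE genflag FROM tadir
  WHERE pgmid    = 'R3TR'
    AND object   = 'CLAS'
    AND obj_name = @iv_generated_class
  INTO @DATA(lv_genflag).
IF sy-subrc = 0 AND lv_genflag IS INITIAL.
  " Violation of [C-MT-4]: a (re-)generatable object with empty GENFLAG
  " would be shared, and any regeneration attempt would dump.
  RAISE EXCEPTION TYPE zcx_genflag_violation.  " hypothetical class
ENDIF.
```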
[C-MT-5] Write accesses to REPOSRC in tenant containers are generally limited to:
• sources that belong to the tenant (i.e. within the name ranges Z* and Y*; generated
sources with TADIR-GENFLAG <> space, see C-MT-4; sources in software component
LOCAL, like objects in package $TMP; sources without a TADIR entry)
• administrative fields of sources which do not belong to the tenant. Administrative fields are
the columns UNAM, UDAT, UTIME, IDATE, ITIME, SDATE and STIME ("last change to
this source"-like fields and "last overall change to this program"-like fields).
Remarks:
1. Any write attempt beyond this limitation will result in a runtime error (in systems which are
setup as multitenancy systems).
2. Find more details in wiki Adoption to REPOSRC sharing.
[C-MT-6] Ensure that the package classification is done correctly.
All tables belonging to (development) packages classified as 'O' or 'D' (package is only
relevant for S/4HANA On Premise, or Deprecated) are considered not relevant for the cloud. In the
cloud such tables will be moved to the Shared Containers, and the corresponding memory is thereby
saved in the Tenant Containers. It is important to know that tables in the Shared Containers are set
to read-only mode! Application programs can no longer change the contents of these tables (it is,
however, possible to move such tables back to the tenant containers if required, see remarks below)!
It is therefore important that the package classification is done properly!
Remarks:
1. Moving a table into and out of the Shared Container is not possible in Extraordinary Patches
but only within Hotfix Collections and Upgrades.
2. Report CALCULATE_STACK (transaction FSCS) can be used to verify the package
classification for a whole set of packages at once.
3. The following Excel file provides up-to-date tracing information from systems CC2 and CCF
about all write accesses to OnPrem-only tables. Such write accesses can be an indication
that the respective tables are used in cloud-relevant business scenarios. The more distinct
users access a table, and the more often it is accessed, the stronger the indication that it is
relevant for the cloud: WriteAccessToOnPremOnlyTables.xlsx
4. In case of wrong package classifications, send the packages to be changed to
[email protected]
Based on the above list of cloud characteristics and the cloud qualities derived from it, the
following "do's and don'ts", expressed as rules, shall be taken into account by development.
Note: This is not a general "Developer Guideline"; it focuses only on aspects of special
importance in a public cloud environment.
[OC-PC-1] In general avoid everything which makes the operation of the system expensive
(system provisioning, monitoring, patching and upgrade)
[OC-PC-2] Do not introduce new functionality which requires monitoring activities by the cloud
provider. If new functionality requires monitoring, you shall deliver an automatic
health check (to be checked with Karolin Laicher) and a recommended action for
the operators. It is essential that such a check creates incidents in SPC without the
need for manual checks in the backend.
[C-PC-3] Technical configuration: it shall be possible to make a system copy when new
functionality is implemented. Therefore it is forbidden to use SIDs, hostnames, etc.
in any configuration.
[OC-PC-4] Avoid system complexity (i.e. additional server, central components, etc.)
[C-PC-5] Functionality shall be 100% web consumable – it is forbidden to develop or use
functionality which requires a VPN connection to the customer
[OC-PC-6] Upgrades shall run fast, securely, and fully automated. XPRAs during patching or
upgrade and incompatible DDIC changes are forbidden.
[OC-PC-7] Manual post processing after patches, updates or upgrades shall be avoided
[C-PC-8] New functionality shall not be shipped with the bi-weekly patching.
[OC-PC-9] New functionality shall be 100% compatible with the previous release – especially
important for hybrid and integration scenarios, which require stable interfaces across
several releases and different software stacks/products.
[OC-PC-10] New configuration and pre-delivered content for new functionality (or existing one)
shall be provided along with the code. Newly developed configuration Fiori apps
shall be assigned to the pre-delivered role content in a segregation of duty compliant
way.
[OC-PC-11] New innovations or new functionality shall be delivered inactive. The customer shall
be able to decide whether to use them or not. Activation shall be done via
configuration Fiori apps for cloud key users.
To ensure that initially excluded software components, which may be added to the S/4HANA stack
at a later point in time, can be prepared for such a deployment, we have to ensure that every
ABAP repository object which has been transferred from the SoH stack to the S/4HANA stack, and
for which a deletion is intended, takes part in the general deprecation process, which is explained here.
[OC-APP-8] Any deletion of former ERP EHP7 Repository Objects shall be processed via the
general Deprecation Execution.
[C-CSC-2] Applications do not need to change their code to obtain a local update task, as this
is done centrally. In addition, the statement CALL FUNCTION <XYZ> IN UPDATE
TASK shall not be changed, to ensure data integrity for all such function module
calls by keeping them in the same Logical Unit of Work (LUW).
With this definition we can only influence the so-called V1 posting. We still have plenty of V2 and
V3 posting calls, where the initial assumption is that these postings contain redundant data for
secondary persistencies.
[OC-CSC-3] V2 and V3 postings shall be reviewed with respect to their potential to create
redundant data, questioned, and disabled where appropriate.
The update behavior of S/4HANA OnPremise systems has not been changed overall. For
modifying OData services it is strongly recommended to perform modifications using a local update
task, to prevent issues with subsequent GET requests (HTTP 404).
To ensure a local update task, the statement can be set in any modifying method or, alternatively,
in one of the following central methods:
• CONSTRUCTOR
• /IWBEP/IF_MGW_APPL_SRV_RUNTIME~CHANGESET_BEGIN
• /IWBEP/IF_MGW_SOST_SRV_RUNTIME->OPERATION_START (if soft state
can be activated for a service)
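A minimal sketch of the second option, the changeset hook named above (the interface and method are real Gateway artifacts; the surrounding data-provider class is assumed):

```abap
" Hedged sketch: switch to a local update task at the beginning of an
" OData changeset, so V1 updates run synchronously in the same work
" process and a subsequent GET already sees the committed data.
METHOD /iwbep/if_mgw_appl_srv_runtime~changeset_begin.
  " Effective until the next COMMIT WORK; existing CALL FUNCTION ...
  " IN UPDATE TASK statements stay unchanged and share the same LUW.
  SET UPDATE TASK LOCAL.
ENDMETHOD.
```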
[O-CSC-4] Applications need to adapt their code to obtain a local update task for modifying
OData services, if not done so far. In addition, the statement CALL FUNCTION
<XYZ> IN UPDATE TASK shall not be changed, to ensure data integrity for all such
function module calls by keeping them in the same Logical Unit of Work (LUW).
o IS_S4H
▪ True, if SIMPLIFY_PUBLIC_CLOUD or SIMPLIFY_ON_PREMISE is
active.
o IS_S4H_CLOUD
▪ True, if SIMPLIFY_PUBLIC_CLOUD is active
o IS_S4H_PUBLIC_CLOUD
▪ True, if SIMPLIFY_PUBLIC_CLOUD is active.
o IS_S4H_ON_PREMISE
▪ True, if SIMPLIFY_ON_PREMISE is active.
o When using these methods, do only a positive check, i.e. check only against the
return value 'true'. If any of the methods returns false, this could mean different
things.
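The positive-check rule can be sketched as follows; the wrapper class name CL_S4H_SWITCH_CHECK is a hypothetical placeholder for whichever class exposes the methods listed above:

```abap
" Hedged sketch: positive check only, per the rule above.
IF cl_s4h_switch_check=>is_s4h_public_cloud( ) = abap_true.
  " Public-cloud-specific behavior, e.g. hide an expert transaction.
ELSE.
  " Do NOT conclude "on premise" here: abap_false can have several
  " causes, so fall back to the common default behavior instead.
ENDIF.
```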
The following rules apply for using these mechanisms:
[OC-DO-1] If no different behavior between deployment options is required, none of these
mechanisms shall be applied.
[OC-DO-2] If different behavior between deployment options is required and can be
achieved by other measures described in the architecture guideline, these other
measures shall take precedence over the usage of the Business Functions.
Specifically this means:
• Where the deployment-specific behavior can be achieved by pure
configuration, this shall be handled via pre-delivered content (SaKP
content).
• On the level of transactions & Web Dynpro applications, the IAM concept
ensures that customers can only see and access functionality which is
explicitly whitelisted for the cloud.
[OC-DO-3] If different behavior between the deployment options is required which cannot
be achieved by other means, this shall be achieved either by assigning own
Switches to one or more of these three Business Functions and putting the
respective functionality behind these Switches (preferred), or by a code switch
checking the status of the Business Functions via the provided methods. Any usage
of the central Business Functions is subject to approval (get in contact with Markus
Goebel) and is to be documented at
https://wiki.wdf.sap.corp/wiki/display/SimplSuite/Usage+Governance+of+Central+S4H+Switches
[OC-DO-4] For functionality which is assigned to existing Business Functions introduced in the
past via an Enhancement Package, the Business Function assignment shall not be
changed. The existing Business Functions shall be kept and are subject to the
rules described in chapter 5.12.4.
• draft handling, to detect whether an edit draft document still fits the related active document
or is outdated.
The calculation of this information depends on the capabilities of an application and might be quite
expensive, from a calculation as well as a data transfer point of view, if much data has to be read
from the database. Furthermore, a naive calculation approach might not be sufficient, and wrong
implementations need to be prevented. A last-changed date and time is not sufficient if its accuracy
is only "seconds". As potential conflicts are quite rare, the application can mitigate this by ensuring
a unique update time, e.g. by adding a second in case of a conflict with sub-second updates. The
same applies to a long timestamp, with even lower probability; due to unsynchronized application
server clocks, uniqueness needs to be ensured by adding a unit in case of a conflict (i.e. a resulting
identical timestamp during an update). Change documents are not sufficient, and not even reliable,
as the customer could deactivate them completely, and not all changes are recorded via change
documents. Last but not least, a document hash of the complete document is an approach often
chosen. As a fallback this is acceptable, but from a performance point of view it is not the best
approach, and it might become critical when (node) enhancements are added. So the only reliable
approach is a timestamp, a UUID, or a document version set on each change. If none of these is
available, the introduction of a timestamp is the recommended approach.
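The "add a unit in case of a conflict" idea for a long timestamp can be sketched like this; the structure field LAST_CHANGED and its surrounding persistency logic are assumptions, while CL_ABAP_TSTMP is a standard class:

```abap
" Hedged sketch: ensure a strictly increasing change timestamp so an
" optimistic-lock / draft-outdated check can rely on simple inequality.
" ls_active-last_changed is assumed to hold the persisted TIMESTAMPL.
DATA lv_now TYPE timestampl.
GET TIME STAMP FIELD lv_now.
IF lv_now <= ls_active-last_changed.
  " Clock skew between application servers, or a sub-second collision:
  " bump by the smallest unit instead of persisting an identical value.
  lv_now = cl_abap_tstmp=>add( tstmp = ls_active-last_changed
                               secs  = '0.0000001' ).
ENDIF.
ls_active-last_changed = lv_now.
```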
representation for sorting and filtering, and the field with the conversion for data entry. To avoid a
negative performance impact, implement reverse exits (e.g. in SADL) for filtering and sorting where
possible, so that these operations are delegated to the original field instead of the calculated one.
[OC-APP-14] If a conversion routine is still needed in a Fiori app and the conversion is complex or
contains customer exits, check whether the external representation can be stored in addition, to
enable [OC-APP-12]. If this is not possible, check the impact of the conversion with regard to the
unsupported features and ensure these features are not used (e.g. "not filterable"), or make the
impact transparent in the documentation of the app.
Ensure that generation of development objects is only done if it is really required and cannot be
avoided at all. Cloud systems, especially those utilizing multi-tenancy, are meant to support volume
business, and volume business partly contradicts individual generation of development artifacts.
If generation of development objects is required and cannot be avoided, the following shall be
considered:
2. Strictly separate generated artifacts from design-time artifacts (by name or
namespace); e.g. do not generate includes into a function group delivered by SAP.
3. Ideally, make sure that generated artifacts are NEVER delivered to customers but are
generated in customer systems only/directly. This can, for example, be achieved by
generating into package $TMP.
4. Generate only into dedicated namespaces which are not used for object delivery,
but only for generation. Best practice is to use dedicated development namespaces
that begin with /1 for code generation.
5. Always ensure that the generator scenario 'detects' when the generator should run
and (re-)generate its generated objects. Do not rely solely on lifecycle management
processes to preserve generated objects during an upgrade.
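Point 3 above can be illustrated with a minimal generation sketch; the program name is invented, and a real generator would additionally catalog the object in $TMP so it is never delivered:

```abap
" Hedged sketch: generate a report directly in the customer system
" instead of delivering a generated artifact.
DATA lt_source TYPE STANDARD TABLE OF string.
APPEND `REPORT zgen_demo.`             TO lt_source.
APPEND `WRITE 'generated at runtime'.` TO lt_source.
" INSERT REPORT creates or overwrites the source in this system only;
" the artifact never travels with an SAP delivery or shared container.
INSERT REPORT 'ZGEN_DEMO' FROM lt_source.
```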
transaction started with CALL TRANSACTION USING runs in its own logical unit of work, so no
complete rollback is possible afterwards.
[OC-APP-20] ABAP source code that does not run in a SAP GUI environment shall check and adapt
or remove the usage of CALL DIALOG and CALL TRANSACTION USING. In particular, the required
addition WITH or WITHOUT AUTHORITY-CHECK of CALL TRANSACTION shall be added explicitly
if the statement is not removed.
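Making the authorization behavior explicit can look like the following sketch; the transaction code and the reaction to a failed check are only examples:

```abap
" Hedged sketch: state the start-authorization behavior explicitly
" instead of relying on the implicit default of CALL TRANSACTION.
TRY.
    " 'VA03' is just an example transaction code.
    CALL TRANSACTION 'VA03' WITH AUTHORITY-CHECK.
  CATCH cx_sy_authorization_error.
    " React to a missing start authorization instead of dumping,
    " e.g. raise an application message (message class is invented).
    MESSAGE e001(zmy_msg) WITH 'VA03'.
ENDTRY.
```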
Furthermore, in the special context of draft handling and OData services, the usage of PERFORM ON
COMMIT and PERFORM ON ROLLBACK is critical and has to be reviewed. If existing code that
works with these statements is called within draft processing, the application needs to be prepared
for the fact that, in order to store the draft data, a COMMIT WORK is always executed by the
infrastructure. Ideally, these statements are therefore not used, or it is ensured that the right things
happen. In particular, if multiple logical units of work are spanned within one OData request ($batch
with multiple change sets), the application cannot rely on the ABAP session being closed after a
COMMIT WORK. The same is true if the Gateway soft state is used (although this has to be enabled
actively). Furthermore, even if the code running during PERFORM ON COMMIT does the right thing,
the application has to verify whether further requests within the same ABAP session register the
required PERFORM ON COMMIT routines again.
[OC-APP-21] ABAP source code that is called during draft processing shall avoid using PERFORM
ON COMMIT and PERFORM ON ROLLBACK. If these statements are used the application has to
ensure that the overall application works as expected.
6 Performance
One major goal of S/4HANA with respect to performance is to reduce the memory footprint and to
increase throughput, by removing aggregates and indexes and by using one data model for OLTP
and analytics. At the same time we want to improve the user experience by offering Fiori UIs
consuming stateless OData services based on a Business Object model exposed via CDS views.
This programming model, together with the fact that we have to reuse large parts of the business
logic code, poses some performance challenges. The following chapter describes what we have to
consider to meet the expectations with respect to the main performance characteristics:
• For user interactions where an immediate response is expected, we shall not wait for a
synchronous request to the backend to return, because typical network times in a WAN
may already exceed the expectation (< 200 ms):
o Typing
o Tabbing to another field
o Static drop-down list boxes
o …
This does not mean that we may not send or request data from the backend triggered by
such an interaction, but we must not hinder the user from continuing with his work. You
should consider carefully which level of "total consistency" is needed for each user
interaction. For example, when entering multiple line items, tabbing from field to field should
normally not interrupt the user just so the Business Object can execute all determinations
and validations needed to be ready for save after each field input! (See also draft
architecture and resource consumption below.)
• For a typical user interaction where data from the backend is needed to continue working,
we have to ensure sub-second response time. As a rule of thumb, this translates into KPIs
for the client, the network, and the backend:
o 350 ms rendering time on the client
o 300 ms network delivery time
o 350 ms backend execution time (including gateway and application)
To ensure that the network time is not exceeded, our goal is to have only one (exceptionally
two) synchronous HTTP requests to the backend per user interaction and only a moderate
data exchange (~10 KB of business data after compression). From this we can derive some
simple rules which have to be applied:
o Enable consumption of a CDN (Content Delivery Network), including the device cache
(browser, mobile apps, …), for metadata and code lists (static drop-down lists) to
minimize access times to static content.
o Bundle data requests for different aspects of one BO (respectively CDS view) into one
OData request.
o Bundle data requests belonging to different BOs (CDS views) by batching them in
one OData request.
o Only request data which is shown to the user (resp. which is needed for
computation on the client).
▪ This means you have to restrict the selected fields to the minimal set
needed for the current screen.
▪ You have to enable paging for all mass data retrieval in lists and in other
hierarchies.
As we are talking about elapsed time here, parallelization may help on all levels:
o Parallel requests from the client to the backend
o Parallel requests from the gateway to the application
o Parallel execution of DB requests within HANA
As this clearly has a significant drawback on resource consumption (discussed below
in detail), we have to carefully balance the gain in responsiveness against the increase in
resource consumption. As a counter-example, we would not recommend retrieving different
aspects of a BO that in the end hit the same CDS view, neither via parallel OData calls nor via
parallel execution of batched calls from the gateway into the application, executing large parts of
identical code (on the application server and on HANA). The best solution here is to
have one OData call retrieving all needed data with one application call. As this problem might
arise due to independent components of the UI on the client, it must be solved by an
appropriate UI pattern framework.
In the past, only transactional data and some master data were accessed from the persistency; all
kinds of metadata (customizing, texts, etc.) were cached on the application server. Using CDS,
the retrieval via a simple direct access may be replaced by a complex (layered) view including joins
with metadata and authorizations.
The second key difference is that in the past the data to be retrieved was determined as part
of the business logic by evaluating attributes of the already processed data (a significant part of our
code consists of IF/ELSE blocks or huge CASE loops where it is decided on attribute level which
data is needed from the persistency for further processing).
This is fundamentally different now: as the execution plan of a database statement is computed on
statement level, not considering the actual attribute values, the execution of the plan at runtime
cannot cut away unnecessary branches as the sequential processing in the past did. This may result
in a dramatic resource increase, in particular if we use nested or stacked views with imperative code
inside. HANA might improve this in the future, but this requires huge efforts and will probably not be
available when we have to ship the product.
One thing we should also keep in mind: with the client/server architecture of R/3 we had the
possibility to scale out on the application server layer. Data which could be consumed without
relying on transactional consistency, like metadata and master data, was distributed to the
application servers or even to the clients, and the infrastructure ensured that data was consumed
most effectively on the right layer. In today's vocabulary this is nothing but an implementation of a
Content Delivery Network (CDN) together with client-side statement routing to ensure optimal
scalability behavior. This strategy is still valid, and we should not apply the principle of bringing
code to the data as the only way to implement business applications, reducing the application layer
to a pass-through of OData requests to the database. We will need both strategies in order to build
a highly performant and highly scalable solution, and we have to decide carefully when to move
code to HANA, as the scalability of HANA for scale-out and scale-up is still quite limited.
Therefore we have to follow some rules in order not to dramatically increase the resource
consumption of the persistence layer and spoil the end-user performance and the scalability limits,
in particular for accesses which are executed inside "high volume" transactions:
• Inside transactional processing (e.g. creation of sales orders) you must not use views to retrieve
data of single business object instances (or small lists of objects accessed via a list of object
keys) if the views access data which is buffered on the application server, or if they are
complex views performing complex expressions and/or calculations. This becomes even more
important if the views are called not only once but several times within a transaction. (In the
future, client-side statement routing may make it possible to use such views while also making
use of a cache infrastructure which would pass only the requests for transactional data to the
persistency.)
• In general, complex views (doing complex calculations and/or accessing buffered tables) should
only be used in code push-down scenarios where the result set that has to be transferred to the
application server can be reduced dramatically. In such a case it is more efficient to do the
calculations and the evaluation of the configuration data directly on the DB. OLAP scenarios
are typical candidates where the usage of such complex views is required and where their
usage makes sense.
• Do not use CDS views where huge aggregates have to be computed within "high volume"
transactions. For those cases where we have removed aggregate tables and shadowed them
by compatibility views, we have to ensure that we do not replace a simple direct access to a
former aggregate with an on-the-fly aggregation within transactional processing. (This will be
even more important looking towards an on-premise shipment of S/4HANA.)
• In many cases we have used read modules to encapsulate and cache accesses to the
persistency. These read modules typically read all aspects (with SELECT *) to be able to serve
different requesters without additional database accesses. As in many scenarios the data
from the aggregates is not needed, we have to redesign these read modules to avoid these
accesses.
• For situations where we need aggregate information within "high volume" transactions, we
need a cache infrastructure to be able to access the needed information with reasonable
resource consumption. This has yet to be built.
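The first rule above can be sketched as a contrast of two reads; the view name is a hypothetical complex CDS view, while VBAK is the sales document header table used here only as an example of a keyed base-table access:

```abap
" Hedged sketch: single-instance read inside high-volume transactional
" processing.

" Avoid: going through a complex, layered view stack for one instance.
SELECT SINGLE * FROM zi_salesdoc_with_pricing   " hypothetical CDS view
  WHERE salesdocument = @iv_vbeln
  INTO @DATA(ls_doc_view).

" Prefer: a keyed access to the base persistency, which can also be
" served from application server buffering / existing read modules.
SELECT SINGLE * FROM vbak
  WHERE vbeln = @iv_vbeln
  INTO @DATA(ls_vbak).
```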
An ongoing debate is the selection of the “right and best” integration infrastructure. In the past, SAP
has introduced various technologies and applications build their integration on top of it. Among others,
the most used technologies are RFC, ALE, SOAP Web Services and newest OData. None of these
technologies and infrastructures where designed, which respect to the ABAP environment, for public
cloud operations. This includes technology, infrastructure and the adaptation by the application. To
lift all technologies, even the oldest, to become public cloud ready, will be a massive investment in all
areas. Therefore we decided to invest only in technologies that are based on internet standards like
SOAP Web Service and OData. Which one of the remaining technologies is the best for all kinds of
integration is not a pure architecture decision. The availability of tools, matureness and market
penetration are important boundaries conditions. Where OData has proven it strength in data centric
scenarios like UI and analytical consumption, there is less experiences within the sSuite, how to use
OData for application-to-application integration as found in common A2A or B2B scenarios, which
were in the past the domain of asynchronous messaging infrastructures.
To give sustainable guidance on whether and how to implement the common business transaction
patterns on top of OData, we will collect input, supported by POCs. This is an ongoing activity; please
visit our wiki and this architecture guideline for updates.
[C-CSI-7] All public remote APIs must be tested. SAP Cloud internal and pre-packaged
integration must be tested end to end. Performance KPIs must be recorded.
[C-CSI-8] To simplify the onboarding of existing customers, existing IDocs and BAPIs/RFCs
can be used in special cases. This moves operations from SAP cloud operations to
the customer's IT, and therefore the integration workstream must be involved to
define all required tasks.
[C-CSI-9] If SAP application development replaces an existing end-to-end integration based
on IDoc or BAPI/RFC by a semantically compatible one based on OData or SOAP
web services, the application must provide pre-packaged integration to simplify and
speed up the re-implementation of this integration by our customers.
[C-CSI-10] Pre-packaged and customer-built integrations should be based on OData or
synchronous SOAP web services.
[C-CSI-11] If an application provides an asynchronous messaging inbound interface, the
application must provide a UI for end users/business experts to resolve potential
errors. NOTE: a simple XML-based message editor is by no means a sufficient
alternative. The Application Interface Framework (AIF), an SAP custom development
product now included in S/4HANA, can be used for this purpose.
[C-CSI-12] All business objects exchanged in an integration scenario must have an explicit link
to their system of record. These objects must be immutable outside the specified
system of record (explicit data ownership).
[C-CSI-13] Integrations requiring an integration middleware shall use SAP's cloud-enabled
products. Options are HANA Cloud Integration (HCI) for process-centric and HANA
EIM for data-centric integrations. Integration with SAP on-premise products can
leverage existing SAP on-premise products like SAP PI & PO and SLT as well.
[C-CSI-14] All integrations must be assigned to a “Communication Scenario” and must use the
“Communication Arrangement”.
[C-CSI-15] Business customizing required by an integration shall be provided via SET content
or customer-grade UIs.
As mentioned, UI configuration is the starting point for business user extensibility. Regardless of which
UI technology is used, UI configuration shall allow the following features:
• Personalization of UIs for end users
• Adaptation of UIs for business experts. Common use cases are the reduction of UIs (e.g. by
hiding parts), the combination of UIs into one UI, rearranging UI elements, and adding new
elements to UIs. The adaptations can be deployed to test and productive systems or tenants.
• Acting as an entry point for the business user extensibility tool.
As several UI technologies might be used for a single application, the following has to be stated:
• Not all UI technologies are expected to trigger extensibility (e.g. Web Dynpro).
• Full coverage of extensibility, including extensibility tool navigation, is granted for Fiori Type
1 – 3 UIs.
• All used UI technologies must allow personalization and extensibility (including adding
extension fields to UIs).
To make the extensibility approach available for other platforms (e.g. HCP) as well, the extension
definition UI will be loosely coupled to the backend. It will follow the same OData and Fiori UI paradigm
as the S/4HANA applications.
Wherever possible, the extensibility registry refers to existing models like HANA Live or the extension
includes, which are the basis for multi-layer DDIC extensibility.
For tool-based extensibility of non-model-driven applications, a central metadata registry is a
precondition. The extensibility tool would query this registry for:
• Application definition with clearly defined relations on node level
(simplified and business user understandable “BO” model)
• Extensibility metadata describing which applications are extensible by which patterns. The
steps to be done for those extensions by the tools are defined here as well.
• Assignment of UIs, forms and reports to extensible applications structures
• A simplified process model defining end-to-end processes on message and application
level.
In future even cross platform extensions will need to be handled, so the registry needs to be
accessible across platforms.
ID Area Guideline
[OC-EXT-GEN-4] Transport: All extensions are created and transported with
ABAP-based mechanisms in a way that a transport
contains an extension completely, without the need
for further LCM activities.
[C-EXT-FIELD-1] Extensibility Model Registration: All applications that are
part of the core scope have to register their relevant
application artifacts to the extensibility tool (as
described in the separate guideline).
[C-EXT-FIELD-2] Extension Includes: All applications that are part of
the core scope have to implement extension includes
for all extensible nodes of an application object and
guarantee their transport from the DB up to the
service implementations. It must be possible to extend
an application end to end (in all layers: backend,
OData, UI). Inside your application, use a
“move-corresponding” logic in all transfers from the
persistency structures to other internal structures.
[C-EXT-FIELD-3] Gateway / OData Extensibility: For Gateway
development you have to make sure that all OData
services are built from CDS views or DDIC structures
that are extensible with extension includes. Every
OData entity must be bound either to a CDS view or
to an ABAP DDIC structure.
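The extension-include chain described above can be sketched as follows. All object names here are invented for illustration and are not shipped artifacts; only the .INCLUDE mechanism and the MOVE-CORRESPONDING statement are standard ABAP:

```abap
* Hypothetical sketch of the extension-include chain (all names invented):
* the persistency table and the service structures each contain a
* customer include, so an appended field travels through all layers.
*
* Table ZMYAPP_HEAD:
*   ... delivered fields ...
*   .INCLUDE CI_MYAPP_HEAD        " customer extension include
*
* Transfers between layers use MOVE-CORRESPONDING so that extension
* fields are copied without the application naming them explicitly:
DATA ls_db  TYPE zmyapp_head.       " persistency structure
DATA ls_api TYPE zmyapp_head_api.   " service/API structure (same include)
MOVE-CORRESPONDING ls_db TO ls_api.
```

Because every layer embeds the same customer include, a field appended by the customer arrives in the OData service and the UI without any change to the application code.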
To be extended…
10 Analytics
10.1 General Architecture
The architecture for S/4HANA Analytics is described in detail in a concept paper which is available
via the S/4HANA Analytics. An overview and the most prominent guidelines are described in this
chapter.
• All S/4HANA scenarios (e.g. transactional and analytical) shall use the same landscape
components to reach a low TCO. In particular, topics like user, role and authorization
management, lifecycle and application management shall be implemented only once via one
infrastructure component, consistently used throughout all scenarios.
• A common meta model, Virtual Data Model (VDM), shall be the basis for the different scenario
types, i.e. Analytics & Planning, Search, and Transactional scenarios. Future integration of
additional scenario types shall be possible on top of the existing model.
• For the most part, the data model of S/4HANA consists of the standard Business Suite DDIC
tables, which remain unchanged. In specific areas these will be changed or newly created in
the context of the simplification efforts.
• Generic UI consumption for Search, Smart Business, UI5 ALV, or Design Studio Ad-Hoc
application shall be enabled.
• The architecture shall support a later pushdown to HANA without any change to the data model.
• The architecture shall support a deployment model in the public cloud, where end users and
key users shall only have web access to the system and be enabled to do all relevant steps
to configure and run the scenarios on their own.
To fulfill these requirements the general S/4HANA architecture relies on a common data model
provided via ABAP CDS Views as depicted in the overall architecture diagram.
[Figure: Overall architecture. UI layer (Lumira / UI5 / Design Studio, Analytical Table) on top of the
ABAP Server (Analytic Engine, CDS View) on top of the HANA DB (SQL View, Table).]
S/4HANA data is exposed via a virtual data model (VDM) for analytical reporting. The VDM is
implemented using ABAP CDS views. At runtime the views are consumed via the Analytic Engine,
which is part of the ABAP server. The Analytic Engine evaluates the metadata of the CDS views,
especially the analytical annotations, to enable the analytic functionality, e.g. formula aggregation,
exception aggregation, or hierarchy handling. For data retrieval the Analytic Engine calls the HANA
SQL views which are generated from the CDS views.
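As an illustration, a minimal analytical CDS view could look as follows. The view name and field selection are invented; only the annotations are the standard analytical annotations evaluated by the Analytic Engine:

```abap
@AbapCatalog.sqlViewName: 'ZSALESVOL'   // name of the generated HANA SQL view
@Analytics.dataCategory: #CUBE          // marks the view as an analytical cube
define view Z_Sales_Volume
  as select from vbap
{
  key vbap.vbeln as SalesOrder,
      vbap.matnr as Material,
      @DefaultAggregation: #SUM         // aggregation behavior for the engine
      vbap.netwr as NetValue
}
```

The Analytic Engine reads the @Analytics and aggregation annotations from the view metadata at runtime; no application ABAP code is involved in answering the analytical request.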
The Analytic Engine exposes the data via different channels for UI consumption. All channels in
S/4HANA must be HTTP-based to support a cloud deployment. For on-premise shipments the usage
of other channels can be considered. Long-term it is desired to consolidate all accesses via OData.
For the time being, additional protocols like InA are used to enable UI tools that cannot consume
OData (like Design Studio).
On the UI level two default consumers are defined for S/4HANA. First, ad-hoc reporting is enabled
via a generic Design Studio application which allows a flexible analytic evaluation based on the VDM.
Second, analytical data can be integrated into UI5 applications, e.g. via the Analytic Table control,
which use OData for access to the views. The OData services are provided in a generic way by the
Analytic Engine.
In more detail, the following diagram shows how the CDS views are modeled and integrated into the
Analytic Engine runtime.
[Figure: CDS views in the Analytic Engine runtime. Design Studio (via a DS application / DS web
application), SAPUI5 and the UI5 ALV, and the Query Builder (Web) consume the NW BW Analytic
Engine via the InfoProv API; CDS views (query and ODP), created in the DDL editor, are exposed
through the NW ODP runtime and read via Open SQL.]
[OC-AR-2] No ABAP coding or BW content shall be used for analytic data access or
metadata description.
[OC-AR-3] No HANA-managed content shall be used for Analytics
In order to allow the later migration to a HANA-based approach it is necessary that all functionality
is contained in the views. This offers the possibility to push the whole evaluation and execution
down to HANA when the necessary view and engine functionality is available in HANA directly. In
particular, this requires that no application-specific ABAP coding is used within the end-to-end
processing of an analytical request from a client. Only generic engine coding must be executed in
this process.
[OC-AR-4] All backend connectivity shall use the ABAP server via integrated Gateway server
(OData) or protocols provided by Analytic Engine (InA). No access is allowed using
HANA XS engine.
To simplify the system landscape it is mandatory to allow only one access channel to the backend.
11.1.2 Printing
Print forms can be displayed in the UI as PDF documents and printed locally via the client (frontend
output). They can also be printed by printers in the customer network without requiring a UI
connection (backend output).
Furthermore, output management addresses the output of simple lists, which were generated in the
past with the help of the WRITE statement and the ABAP spool.
[OC-DA-2] Existing archiving objects can be used further in S/4HANA – however, in Cloud only
for Data Privacy reasons. Reading capabilities for old archives created with ADK
are required if corresponding conversion/migration scenarios are offered.
[OC-DA-3] The development of new archiving objects is also allowed in S/4HANA if this is
justified by the above listed criteria.
For the development of Data Aging in S/4HANA, consult the Development Guide in the Data Aging
Wiki. For Archiving Object development, Product Standard guidelines are available in the SAP Portal
go/ilm.
Note
Even though further deletion techniques exist, we encourage you to implement destruction via
ILM-enabled archiving objects for the sake of a uniform user experience – especially if the data
to be deleted is subject to being “safety copied” beforehand by some customers.
Note
SAP does not provide legal advice in any form. This must be observed explicitly in architecture,
configuration and documentation. In cases where examples are required, they have to be
marked as examples. In documentation, SAP notes, or any other written communication, SAP
software supports the customer by providing functions to:
• Simplify the deletion of personal data.
• Report on existing data to an identified data subject
• Restrict access to personal data
• Log read access to personal data
• Log changes to personal data
• (…)
Some basic requirements that support data protection are often referred to as technical and
organizational measures (TOM). The following topics are related to data protection and require
appropriate TOMs:
• Access control
Authentication features as described in section <link to “13.2 User Authentication
and Single Sign-On (SSO)”>.
• Authorization
Authorization concept as described in section <link to “13.3 Authorizations”>.
• Read access logging
As described in section Read Access Logging.
• Communication Security
As described in section <link to “5.3 Ensuring Multi Tenancy Capabilities”??>.
• Input control
Change logging is required to log changes to personal data.
• Separation by purpose
Is subject to the organizational model implemented and must be applied as part of
the authorization concept.
Caution
Personal data has to be stored with regard to a corresponding legal entity or a
corresponding organizational unit!
Further requirements targeting data-privacy-compliant handling of person-related
data are listed below and described where necessary in the following chapters.
Not all points are covered yet; details and requirements for open topics will be
added in the future.
• Deletion / Blocking
• Documentation of the “purposes of the processing”
• Masking
• Information
• Anonymization
• Consent
• Notification
• The data is “collected for specified, explicit and legitimate purposes and not further
processed in a way incompatible with those purposes.” (Art. 6 Section 1 EU DIR
95/46)
Beyond the end of the primary purpose, for which the personal data was initially stored, personal data
can still be retained for other explicit legal reasons such as retention periods prescribed by law,
statutes or contracts.
• End of Business (EoB) – represents the completion of the business, i.e. the end of
the primary purpose.
• End of Purpose (EoP) – represents the time when further processing of personal
data after end of business ends. This time period until EoP can be permitted or
prescribed by other legal provisions or if the data subject has consented. A typical
example is financial reporting, which allows processing of personal data also after
end of business.
• End of Retention (EoRT) – represents the time when all other retention periods
besides the primary purpose to store the data have expired. Then the data has to
be destroyed.
In conclusion, this results in the following requirements:
The approach (as already implemented in the SAP Business Suite) is to destroy the whole object,
e.g. “Order”, and not only certain fields. This approach should only be challenged based on customer
requirements or in special cases of “sensitive personal data”.
Note
Retention management is not only done to manage data destruction after end of retention in
regard to data privacy. It is also related to volume management, covering data footprint
reduction via Archiving. Note that these topics targeting TCO reduction are covered in chapter
“Information Lifecycle Management”.
Master data such as the business partner is by definition created without a processing purpose. The
purpose can be determined only by considering the application objects referencing the business
partner. This means that end of business, end of purpose and end of retention of a business partner
are derived from the latest point in time as determined by all dependent applications. The following
two figures are examples of blocking and of destruction periods for a business partner with related
business, and show how the phases interrelate.
Figure 1: Information life cycle phases for residence period and important events for blocking, such as
start of residence time, end of purpose (EoP), and end of residence time.
As shown in Figure 1, each business document can have its own residence period with reference to
a business partner. During the residence time, data remains in the database and can be used in case
of subsequent processes such as returns, warranty issues, or even new business. The end of purpose
for a business partner is reached when the longest residence time of all related business documents
is over. After the residence time, expired data has to be blocked so that regular users and processes
cannot access this data any more, but only authorized users, such as data privacy officers or auditors,
can access it.
Figure 2: Information life cycle phases for retention periods and important events for destruction, such as
start of retention time, end of purpose (EoP), and end of retention time.
As shown in Figure 2, the retention time starts with the end of business and ends according to legal
requirements after a certain period of time, such as 7 or 10 years. Business partner data is stored
because of its use in related business documents. Retention periods are usually not defined for the
business partner master data. Instead, the retention period is defined by the retention periods of the
related business documents. After the longest retention time of the related business documents has
expired, the business partner master data has to be destroyed. Therefore, the business partner data
remains available until the last related business document data that uses a particular business partner
is destroyed.
To determine per business partner (including linked ERP customer/vendor) the usage in dependent
applications, to realize blocking after end of purpose, and to store the relevant start-of-retention
information, a central End of Purpose check functionality is implemented. In a nutshell, the blocking
and deletion concept for business partners has to fulfill the following requirements:
• For each business interaction, e.g. order, delivery and invoice, it is possible to maintain
different residence and retention periods based on country- and customer-specific
requirements. In case more than one residence/retention period is valid, the longest is
applied.
• After expiration of the longest residence period the business partner is blocked.
• After expiration of the longest retention period the business partner and all related
data are destroyed.
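The period derivation described above can be sketched as follows, assuming each dependent application reports its own end-of-purpose date to the central check (structure and table names invented):

```abap
" Hypothetical sketch: derive the business partner's end of purpose as
" the latest date reported by all dependent applications (names invented)
DATA lv_eop_bupa TYPE d VALUE '00000000'.
LOOP AT lt_app_eop_results INTO DATA(ls_result).
  IF ls_result-end_of_purpose > lv_eop_bupa.
    lv_eop_bupa = ls_result-end_of_purpose.
  ENDIF.
ENDLOOP.
" After lv_eop_bupa the partner is blocked; destruction follows only
" once the latest end of retention has additionally expired.
```

The same maximum-of-all-applications logic applies to residence and retention periods, which is why the business partner remains available until the last related business document is destroyed.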
Consequently, the blocking of business partner master data requires the following deliverables
from all dependent applications:
[OC-DP-4] Application objects with a reference to the business partner master data have to
provide an End-of-Purpose (EoP) check or at least a Where-Used-Check (WUC) to
enable blocking and deletion of business partners.
[OC-DP-5] Application objects with a reference to the business partner master data have to
consider a blocked business partner by preventing new business and providing
display access only for authorized users.
13.6 Masking
… to be continued
13.7 Information
… to be continued
13.8 Consent
… to be continued
13.9 Notification
… to be continued
• In S/4HANA cloud, SAP owns and operates the systems and to do so, requires users with
dedicated authorizations. The IAM concept in the cloud has to clearly separate SAP’s and
the customer’s “user and authorization scope”. In addition, it has to support automated
content lifecycle handling. For example, after system update, the authorizations of the
business users need to be updated automatically so that the customer business is not
disrupted. To reach these cloud qualities, SAP application development has to provide IAM
content and there is no flexibility for customers to “modify” this content. If customers request
more flexibility in S/4HANA cloud, we have to switch to the on-premise IAM model and run
the system in a hosting approach, like we do in the HANA Enterprise Cloud today.
• On-premise, the customer owns the S/4HANA system, which will integrate with the
customer's existing on-premise landscape. We must support a smooth migration path for
the customer-specific IAM content from older Suite systems; in addition, application
development also has to provide IAM content for new, simplified components. In contrast to
the cloud, we can't limit the granularity and flexibility which customers are used to from older
Suite systems. While we strive for improved content lifecycle handling, a fully automated
handling is not possible and not desired. For example, on-premise admins want control
over changes to the authorizations in their systems after an upgrade.
Considering these different assumptions and requirements, it is obvious that we need two different
IAM concepts for cloud and on-premise. On the other hand, we build both concepts on the same
user management and authorization (SU01/PFCG) runtimes, rely on the same IAM content models
and we target the same or similar improvements, addressing the pain points and TCO drivers that
today’s on-premise customers are facing.
Most SAP-owned users are managed via the Cloud Access Manager (CAM). The tool can be
considered a kind of IDM system for the SAP-owned users required for operating the cloud solution.
User category definition and the associated role content for all user categories is defined in the
product development system and delivered via regular transports.
Customer-owned users include the business users and the customer-owned technical users.
Business users are created for and linked to a real person represented in the system as business
partner. Technical users are created as part of the use-case specific configuration, for example the
configuration of an integration scenario.
In the S/4HANA public cloud scenarios, users are typically created for business partners of type
employee. These users are maintained during employee upload from the leading HCM system.
Customers don’t have access to the full SU01 functionality in cloud scenarios.
All S/4HANA systems have the Identity Model switched on. The Identity Model defines the relation
between business partner and business user. Note that this is an incompatible change compared to
former Suite systems. Classical suite applications that deal with users but did not define the relation
to business partners need to be evaluated and potentially adapted to support the Identity Model in
case these applications are still relevant in context of S/4HANA.
Unlike business users, communication users and SAP-owned users have local credentials in
the S/4HANA system, as they are used for direct point-to-point communication with protocols that
do not support or require the SAML frontend SSO flow. Certificates are the preferred type of local
credentials for communication users and SAP-owned users. Only in exceptional cases is password-
based authentication used for them.
On-premise, we support multiple user authentication and SSO strategies. Customers can choose
among these depending on their existing system landscape and infrastructure.
14.3 Authorizations
14.3.1 Authorizations for S/4HANA cloud
The S/4HANA authorization handling builds on the foundation of the ABAP authorization concept,
both for the cloud and the on-premise versions. With that, we ensure a common runtime and IAM
content model for the S/4HANA product family. To address the different requirements, we enable a
simplified approach for handling authorizations in the cloud, and evolve the existing approach on-
premise.
The simplified approach for handling authorizations in an S/4HANA cloud system is based on a new
design time for business roles that enables the key user to assign authorizations with the following
flow:
• The key user assigns a set of functionality (i.e. apps bundled in business catalogs) to a
business role.
• The key user optionally defines responsibilities as restrictions on business object instances.
• The key user assigns the business roles including restrictions to users.
• Based on these definitions the necessary authorizations are generated in the backend. The
customer key user does not directly deal with authorization objects anymore.
The approach relies on application-delivered (Fiori) IAM content, i.e. the properly maintained SU22
data for each OData service (or WebDynpro or HTMLGUI app), the business catalogs which
combine a set of apps needed by a user in a certain role as part of a business process, and an
authorization role per catalog. The catalogs represent tasks or sub-processes within a business
scenario – comparable to work center views in ByD. They are the most fine-grained unit regarding
the structuring of work and authorization assignment and typically belong to an application area.
All business catalogs together build the “customer scope” in the S/4HANA cloud system and are
explicitly registered so that they show up in the new design time for defining business roles.
Fiori-PFCG integration allows adding catalogs to authorization roles in the backend. Authorization
relevant catalog content (i.e. OData services or WebDynpro apps) is determined per app and
automatically added to the role. The information which OData service is used by an app has to be
available in the frontend server assigned to the S/4HANA system. It’s planned to use the
AppDescriptor for storing this information in context of S/4HANA.
Like the cloud approach, the on-premise authorization handling also relies on application-delivered
content. Properly maintained authorization content (SU22) is most important; all automation and
simplification tools rely on it.
New functionality, both for cloud and on-premise versions of S/4HANA, is built with Fiori UI and
along the Fiori paradigm. Certain Fiori characteristics impact the authorization handling:
• The role-based apps bundled in Fiori catalogs provide a targeted scope for a specific role.
With that it gets possible to derive the functional authorizations from the app assignment.
• The SOA-paradigm for building apps leads to the situation that the backend system does
not know the app, but only the OData service. Authorizing an app means authorizing the
OData Service.
As a consequence, suitable OData services that fit the functional scope of the app, and authorization
content (SU22 data) per service, are key prerequisites for authorization handling. A wrong service cut
or wrong content results in wrong authorizations and thereby in increased TCO, customer tickets and,
even worse, hot-fixes in case of S/4HANA cloud scenarios.
What does “suitable OData services” mean? To illustrate this, let's look at the business object
“Employee” and two hypothetical Fiori apps providing different views on it: The app “Employee Address
Book” provides the company-internal contact information. A second app, “My Employees”, enables the
manager to maintain sensitive data like salary or performance-related attributes for his or her
employees. Building both apps with the same service that includes both the contact data for the
address book and the sensitive data for managers would be technically possible, but would violate the
“least privilege principle” or imply significant additional logic and configuration to strip down the
authorizations for the address book users of the service. Thus, these apps have to be built with two
different services that only include the required data and functions.
A related aspect is the recommendation to design interfaces in a way that the minimum result set
according to the use case is returned, without the need for comprehensive authorization configuration
(“paranoid interfaces”). The above-mentioned “My Employees” app could be realized with a service
“Access Employees” that controls access to the sensitive data of “arbitrary” managers and employees
based on authority checks (“Is the current user allowed to see this employee?”). With that, the admin
would have to maintain a specific role per manager, specifying the employees assigned to the
manager as authorization field values and keeping these values up to date. This administration
nightmare can be avoided with a service “Access My Employees” that internally determines the
current user (which is the user calling the service) and the employees assigned to him or her, and
restricts access to these employees. With this service fitting exactly to the functional scope of the
app, the role admin only needs one single role for all managers and does not have to specify the
employees any more.
(The white paper “How Fiori impacts Authorization Handling” describes what Fiori means for
authorization handling from the perspective of Fiori app developers.)
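The difference between the two service cuts can be sketched like this; the table, method and result parameter names are invented for illustration:

```abap
" Hypothetical sketch of the 'paranoid' service cut (names invented):
" instead of checking authorizations against admin-maintained employee
" lists, the service derives its scope from the calling user itself.
METHOD get_my_employees.
  " sy-uname is the user calling the service; only his or her
  " directly assigned employees are ever selected
  SELECT employee_id
    FROM zmyapp_mgr_assignment        " example assignment table
    WHERE manager_user = @sy-uname
    INTO TABLE @rt_employees.
ENDMETHOD.
```

Because the result set is restricted inside the service, no per-manager authorization values have to be maintained, and one role suffices for all managers.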
14.3.4 Data Control Language (DCL) for Core Data Services (CDS) for data retrieval
While functional authorizations are covered via start authorizations for the app's OData services, we
aim to push instance-based authorizations for data retrieval down to HANA via the Data Control
Language (DCL) for CDS views. The DCL roles provide a mapping to the PFCG authorizations, which
remain the single source of truth from an authorization perspective. Unauthorized data is filtered on
the database already and does not have to be loaded into the application server. Applications shall
realize their mass data retrievals via CDS views with DCL to realize the performance benefits.
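A minimal DCL sketch is shown below. The view and role names are invented; V_VBAK_VKO is the classic sales authorization object, used here only as an example of the PFCG mapping:

```abap
// Hypothetical DCL sketch (view and role names invented): maps the
// PFCG authorization object V_VBAK_VKO to a CDS view so that the
// instance-based filtering happens on the database
@EndUserText.label: 'Access control for Z_Sales_Volume'
@MappingRole: true
define role Z_Sales_Volume_DCL {
  grant select on Z_Sales_Volume
    where ( SalesOrganization ) =
          aspect pfcg_auth( V_VBAK_VKO, VKORG, ACTVT = '03' );
}
```

At runtime the aspect is evaluated against the user's PFCG authorizations, so unauthorized rows never reach the application server.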
14.4 Rules
[C-IAM-1] Applications that require an SAP-owned user, e.g. a technical user for a certain process or
integration, shall contact the IAM workstream to define the required deliverables.
[OC-IAM-2] Applications shall support the Identity Model. Classical Suite applications with their own
definition of the relation between business partner and user need to be adapted in case they are
still relevant in the context of S/4HANA.
[OC-IAM-3] Application development shall provide high-quality authorization content (SU22, …) for
each OData service or application – automation tools rely on it!
[OC-IAM-4] Application development shall provide the IAM content required for productive use in
S/4HANA, i.e. the business catalogs, authorization role per catalog, org-level fields for instance-
based restrictions. A detailed description is available on the Wiki:
https://wiki.wdf.sap.corp/wiki/display/appsec/S4+HANA+Fiori+Security+Guidelines#S4HANAFioriSe
curityGuidelines-Authorizations.
[C-IAM-5] Application development shall provide business roles and restriction values as SAP
proposal for a new system setup.
[OC-IAM-6] Application development shall use a dedicated, suitable OData service for each Fiori
app. Too broad or generic services lead to wrong authorizations. Avoid interfaces that rely on
authorization configuration; instead, use “paranoid interfaces”.
[OC-IAM-7] Application development shall provide AppDescriptors per app (not possible yet, rollout
will follow).
[OC-IAM-8] Application development shall push down instance-based authorizations for data
retrieval to the database via DCLs for CDS views.
15.2 Search
Central Fiori Search is realized with NW Enterprise Search. Enterprise Search models are required
for all relevant Business Objects and all relevant business data that should be found via the central
Fiori Search. The Fiori search UI offers features to search across Business Objects with filtering, text
search and result highlighting (“Why Found”). Fiori search is the main entry point for fact sheets and
object pages.
[OC-SRCH-1] For S/4HANA only table-based models are allowed in order to improve TCO and
avoid data replication.
[OC-SRCH-2] Extractor-based search models MUST NOT be used (as of today, SAPscript long
texts require extraction – a replacement is under discussion).
[OC-SRCH-3] Fact Sheets and Object Pages shall be modeled via CDS and exposed via SADL.
[Figure: The Fiori Search UI and Object Pages read (R) from the ABAP Server, which accesses the
Enterprise Search models and CDS views built on top of the Suite tables in HANA.]
Find more information and guidelines on Search and Object Pages development in this WIKI:
https://wiki.wdf.sap.corp/wiki//x/dAMyYg
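For the CDS-based modeling required by [OC-SRCH-3], search enablement can be sketched with the standard search annotations. The view name and field selection are invented for illustration:

```abap
@AbapCatalog.sqlViewName: 'ZPRODSRCH'     // generated HANA SQL view
@EndUserText.label: 'Product search model (example)'
@Search.searchable: true                  // view participates in search
define view Z_Product_Search
  as select from mara
{
  @Search.defaultSearchElement: true      // searched without a field prefix
  @Search.fuzzinessThreshold: 0.8         // tolerate typos in the search input
  key mara.matnr as Product,
      mara.mtart as ProductType
}
```

Such table-based models read directly from the Suite tables, satisfying [OC-SRCH-1] without data replication.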
place in applications and systems. A core element of the business rule approach is the execution
transparency for functional experts and the ability to implement changes to the business logic without
technical support.
A BRMS includes the following components:
• A repository allowing decision logic to be externalized from core application code
• Tools allowing both technical developers and business experts to define and manage
decision logic
• A runtime environment allowing applications to invoke decision logic managed within the
BRMS and execute it using a business rules engine
[OC-BRMS-1] If business rule capabilities are required, use BRFplus. Do not use any other
comparable technology. Usages of other rule engines or similar tools need to be
migrated to BRFplus.
[C-BRMS-2] HRF must not be used directly but through the BRFplus encapsulation only.
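A BRFplus function is typically invoked from ABAP via the FDT processing API. The sketch below is illustrative and assumes an already modeled BRFplus function whose ID is known; the variable names and the result data type are assumptions.

```abap
" Illustrative call of a BRFplus function via the FDT API
DATA: lv_function_id TYPE if_fdt_types=>id,   " ID of an existing BRFplus function
      lv_timestamp   TYPE timestamp,
      lt_context     TYPE abap_parmbind_tab,  " name/value pairs for the context
      lv_result      TYPE string.             " assumed result data object type

GET TIME STAMP FIELD lv_timestamp.
" ... fill lt_context with the context parameters of the function ...

cl_fdt_function_process=>process(
  EXPORTING iv_function_id = lv_function_id
            iv_timestamp   = lv_timestamp
  IMPORTING ea_result      = lv_result
  CHANGING  ct_name_value  = lt_context ).
```

Because the decision logic lives in the BRFplus repository, functional experts can change it without touching this calling code, which is exactly the execution-transparency benefit described above.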
[Figure: Attachment-Reuse access paths. The browser reaches the Attachment-Reuse component
either via a Fiori URL with X-Token or via a Fiori KPRO URL, in both cases through OData; the
Business Workflow (WF) is connected as well.]
The usage of the application log will therefore be challenged by defining the following clear rules:
[OC-AL-1] The central application log (SBAL) shall be used for storage of application-based
message collections for legal, error and other relevant application events.
[OC-AL-2] Every application using the central application log shall ensure that every written log
has a dedicated ‘Receiver’ who shall take care of the log.
[OC-AL-3] The sub-object entity can be used to declare the role of the ‘Receiver’. The
following roles are recognized / differentiated so far:
• Cloud User
• Cloud Key User
• Internal Cloud System Administrator
[OC-AL-4] Every application using the central application log shall ensure that an expiration
date is set according to legal and/or application-relevant conditions. The
respective field in the Application Log interface is ‘Expiration Date’
[ALDATE_DEL]. If this field is not defined, a default value [31.12.9999] is
set, which leads to unlimited application log growth for those Application Log
Objects. This must be avoided.
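Per [OC-AL-4], the expiration date is supplied in the log header when the log is created. The sketch below uses the standard BAL_* function modules; the log object, sub-object and the 90-day retention period are illustrative assumptions.

```abap
" Illustrative creation of an application log with an explicit expiration date
DATA: ls_log    TYPE bal_s_log,
      lv_handle TYPE balloghndl.

ls_log-object     = 'ZMY_LOG_OBJECT'.  " hypothetical log object
ls_log-subobject  = 'ZCLOUD_USER'.     " sub-object declaring the 'Receiver' role
ls_log-aldate_del = sy-datum + 90.     " expiration date per retention requirements
ls_log-del_before = abap_false.        " flag controlling deletion before expiry

CALL FUNCTION 'BAL_LOG_CREATE'
  EXPORTING
    i_s_log      = ls_log
  IMPORTING
    e_log_handle = lv_handle.
```

Logs created this way are picked up by the periodic cleansing job once ALDATE_DEL is exceeded, instead of accumulating under the 31.12.9999 default.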
To ensure correct usage of the Central Application Log, first via the ‘Expiration Date’ and second,
once technically realized inside the data model, via the ‘Receiver’, we will establish a governance
process to ensure that the SBAL interfaces are called in an appropriate way.
For Cloud Operations a periodic Cleansing Job will be set up to delete all log entries where
the ‘Expiration Date’ [ALDATE_DEL] is exceeded.
It is planned to develop role-based central Fiori user interfaces which shall serve as a reuse
component.
[Figure: BOPF-based business object architecture. Application-specific consumption uses
analytical access (runtime-optimized, direct read-only access via Open SQL, bypassing the BOPF
buffer) and transactional access (read and write through BOPF, with actions, side effects
(determinations) and checks (validations)), both operating on the same application tables.]
Therefore the general rule (as also valid for the Business Suite) is to use BOPF for the implementation
of new or renewed applications and components. The current implementation of business logic is
very heterogeneous, with high TCD and TCO. The implementation of new business logic should avoid
a further increase in heterogeneity, TCD and TCO; therefore the Business Object Processing
Framework (BOPF) should be used as the standardized business logic implementation framework.
An important aspect is that BOPF “only” adds the business logic, whereas the model is always derived
from the VDM CDS model and the compositional hierarchy defined therein. Further data model-
related information, such as static capabilities and field control information, is likewise derived from
CDS and its annotations.
Following this rule, and noting that the draft concept introduces exactly such new objects and a
renewed, stateless-enabled and optimized application, it is obvious that BOPF is a major part of the
draft enablement and has to be used according to this rule when supporting the draft concept.
[OC-BOPF-1] New Business Objects shall be implemented with the Business Object Processing
Framework (BOPF).
[OC-BOPF-2] When creating and implementing new business objects with BOPF, the data model
shall be generated from CDS and only the business logic shall be added via BOPF.
[OC-BOPF-3] When adopting the draft concept the draft handling and business logic shall be
implemented with BOPF.
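Per [OC-BOPF-2], the added business logic typically takes the form of determination, validation or action classes. The skeleton below sketches a determination (side effect); the class name and the logic it would compute are illustrative assumptions.

```abap
" Illustrative BOPF determination (side effect) skeleton
CLASS zcl_d_calc_total DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES /bobf/if_frw_determination.
ENDCLASS.

CLASS zcl_d_calc_total IMPLEMENTATION.
  METHOD /bobf/if_frw_determination~execute.
    " Retrieve the changed node instances via io_read, derive the
    " dependent values and write them back via io_modify.
  ENDMETHOD.
  METHOD /bobf/if_frw_determination~check.
    " Optional pre-check whether the determination needs to run.
  ENDMETHOD.
  METHOD /bobf/if_frw_determination~check_delta.
    " Optional delta check based on the changed fields.
  ENDMETHOD.
ENDCLASS.
```

The data model itself (nodes, associations, field control) stays in the CDS/VDM layer; only this kind of logic class is registered in the BOPF configuration.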
[OC-TCO-1] The requested Version of the On Premise Business Suite Backend System shall not
be higher than Business Suite 7 – this corresponds to SAP ERP EhP4
[OC-TCO-2] Once a hybrid scenario is established, the connectivity and integration inside the
cloud application shall ensure version interoperability via downwards compatible
interfaces
[OC-TCO-3] Publicly Released Interfaces (RFCs, BAPIs, ... ~1,200) and database tables with
the delivery class ‘A’ (application tables for master and transaction data, ~8,700) shall
be kept compatible. A continuous cross-system check from the S/4HANA
development system into the Suite on Premise maintenance system will be
established. At a later point in time this check will be included into the general
Checkman routines.
A positive piece list stored here (ObjectsOfSematicCompatibiliy) shall be kept
compatible.
So called After Import Methods (AIMs) and other application defined procedures (Switch-BAdIs,
switch-XPRAs) need a special ZDM enablement:
[OC-ZDM-1] Declare all tables read or written by the AIM or other procedure (in the AIM
repository). This includes all tables indirectly accessed via APIs that are called by
AIMs.
[OC-ZDM-2] Make sure that the production cannot change data read or written by an AIM or other
procedure.
Background:
• The upgrade has to clone all tables written by an AIM.
• The application must not overwrite any data written by an AIM (upgrade result would be
destroyed)
• The application must not overwrite any data read by an AIM (upgrade result would become
outdated)
[OC-ZDM-3] For any changes to an After Import Method, the corresponding declarations in the
AIM repository must be updated accordingly.
Background:
For each AIM, the ZDM procedure compares the list of DB tables entered in the AIM repository with
the DB tables that are actually accessed by the AIM during its execution. If an AIM requests access
to a DB table that is not listed in the AIM repository, the ZDM procedure will abort to protect the
system from inconsistencies.
[OC-ZDM-4] Support packages must not contain content of the HANA repository (R3TR NHDU),
such as stored procedures, calculation views, etc. Do not use native SQL. Use
DDIC-managed CDS views instead.
Background:
For any HANA managed content it is not possible to establish a second DB schema with aliases and
views, representing a different version of the DB table content. This however is needed for the ZDM
procedure and is the basic approach with ABAP managed content.
The ZDM procedure clones DB tables, renames them and accesses them through a different
database schema. Native SQL access might therefore be incompatible with ZDM.
Impact if you don’t comply with this guideline:
• Your support package cannot be deployed with ZDM.
• Your native SQL code might not work correctly during the ZDM procedure.
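The DDIC-managed alternative demanded by [OC-ZDM-4] is a CDS view, which is transported as a normal ABAP object and is therefore covered by the ZDM schema handling. The sketch below shows a CDS view replacing a repository calculation view that aggregated revenue; all names are illustrative assumptions.

```abap
@AbapCatalog.sqlViewName: 'ZIREVBYCUST'
@EndUserText.label: 'Revenue by customer (illustrative)'
define view Z_I_RevenueByCustomer
  as select from zsd_order_item   // hypothetical application table
{
  key customer_id,
      @Semantics.amount.currencyCode: 'currency_code'
      sum( net_amount ) as TotalNetAmount,
      currency_code
}
group by customer_id, currency_code
```

Because the view is DDIC-managed, ZDM can transparently redirect it to the cloned tables via the alias schema, which is not possible for _SYS_BIC content.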
[OC-ZDM-5] No XPRAs must be applied in releases, product versions, support packages, or other
deliveries.
Background:
• XPRAs would need ZDM enablement.
• XPRAs generally are not allowed in support packages and enhancement packages. No
exceptions have been approved for XPRAs in the past years.
Impact if you don’t comply with this guideline:
• ZDM cannot be applied for a support package or enhancement package containing XPRAs
Background:
By “conversion”, we mean table conversion by the DDIC converter. The DDIC converter is an ABAP
program. For a DB table that is supposed to be converted, the DDIC converter creates a new DB
table for the target release. Subsequently the content is transferred from the old DB table to the new
one. Finally, the table name of the new table is replaced by that of the old table.
Impact if you don’t comply with this guideline:
Database tables affected by a conversion will be set to a strict read-only mode for end users on the
start release. This may harm the availability of functions and applications (i.e. application
transactions may then end with a short dump).
[OC-ZDM-7] No structural changes (complex DDL) to very large or very frequently updated tables
except appending a new nullable non-key field (simple DDL).
Background:
ZDM will clone all structurally changed DB tables. This happens in up-time. Cloning such very large
DB tables, however, takes too much (up-)time. In addition, the cloned tables need to be renamed
while being used by production, which requires an exclusive lock. An exclusive lock on a frequently
updated table can severely affect the productive use of a system.
Impact if you don’t comply with this guideline:
Cloning very large database tables may require all available hardware resources, i.e. CPU and
memory. Also, DB table locks may reach a critical number and eventually lead to deadlocks. This can
lead to an effective downtime experienced by the end users.
[OC-ZDM-8] Avoid import of data through the upgrade or customer transports into very large or
very frequently updated data base tables.
Background:
All DB tables to which data is imported during ZDM are cloned. This happens in up-time. Cloning
such very large DB tables, however, takes too much (up-)time. In addition, the cloned tables need to
be renamed while being used by production, which requires an exclusive lock. An exclusive lock on
a frequently updated table can severely affect the productive use of a system.
Impact if you don’t comply with this guideline:
Cloning very large database tables may require all available hardware resources, i.e. CPU and
memory. Also, DB table locks may reach a critical number and eventually lead to deadlocks. This can
lead to an effective downtime experienced by the end users.
[Figure: HDI runtime schema in the HANA database. Proxies for a published object and for a
CalcView in the HDI runtime schema point to a table and to a legacy CalcView in _SYS_BIC.]
There may be other database schemata in the same database instance which do not belong to the
S/4HANA-ABAP-system but e.g. to XSA applications.
Each database schema can either belong to and be managed by the S/4HANA-ABAP-system or can
belong to any other application. S/4HANA and the lifecycle management tools of S/4HANA do not
provide lifecycle procedures, e.g. upgrades, which can simultaneously upgrade S/4HANA-ABAP-
systems together with other XSA applications.
Since the physical schema names are undefined at design time and since the physical
schema names can be subject to change during the lifecycle of an S/4HANA system,
applications must use the schema mapping when accessing database objects in one of
the HDI schemata.
There are two ways of doing this:
i. AMDP as a proxy: In the source code of an AMDP, logical schema names can
be used to access database objects in other schemata. (The application server
replaces the logical with the physical schema name at compile time and makes
sure that AMDPs are re-compiled whenever the schema mapping is changed).
I.e. the recommendation is to create AMDPs as proxy objects for database
objects in other schemata. This avoids the necessity for applications to
explicitly deal with the schema mapping API.
ii. Schema Mapping: Applications can call the schema mapping API at runtime
and find out the physical schema names belonging to their logical schema
names. With the physical schema names, applications can define DML (data
manipulation language) statements and execute them with ADBC.
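Option (i) can be sketched as an AMDP class in which the logical schema name is resolved by the ABAP server at compile time via the $ABAP.schema macro provided for this purpose; the class, row type, logical schema and object names below are illustrative assumptions.

```abap
" Illustrative AMDP proxy for a database object in another (HDI) schema
CLASS zcl_hdi_proxy DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES if_amdp_marker_hdb.
    TYPES tt_rows TYPE STANDARD TABLE OF zmy_row WITH EMPTY KEY. " hypothetical row type
    METHODS read_from_hdi
      EXPORTING VALUE(et_rows) TYPE tt_rows.
ENDCLASS.

CLASS zcl_hdi_proxy IMPLEMENTATION.
  METHOD read_from_hdi BY DATABASE PROCEDURE FOR HDB
                       LANGUAGE SQLSCRIPT.
    -- The logical schema name is replaced by the physical one at compile time;
    -- the AMDP is recompiled whenever the schema mapping changes.
    et_rows = SELECT * FROM "$ABAP.schema( ZMY_LOGICAL_SCHEMA )"."ZMY_HDI_VIEW";
  ENDMETHOD.
ENDCLASS.
```

Consumers call the ABAP method only; no application code needs to deal with the schema mapping API directly.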
• Cross schema access from an HDI schema to ABAP or other HDI runtime schemata:
Cross schema, only read access is allowed.
For each database object in a schema B to be referenced from HDI schema A, a
projection synonym or view must be created in schema A that points to the object in
schema B. When developing the synonym or view, the developer specifies the logical
name of schema B.
When deploying the projection view or synonym, HDI shall translate the logical name of
schema B to the physical name of schema B and create the projection view or
synonym with the correct physical name of schema B.
Similarly, in upgrade scenarios, HDI together with the upgrade tools will take care of
correctly modifying the projection views if schemata are cloned.
Cross-schema references between two HDI runtime schemata are unidirectional, i.e.
either objects in schema A can refer to objects in schema B or the other way
around. The fact that there are cross-schema references between two HDI runtime
schemata must be declared in the definition of the referencing schema (see 2b).
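In HDI, such a projection synonym is declared in a design-time .hdbsynonym artifact that names the target object and the logical schema; the synonym, object and schema names below are illustrative assumptions, and in practice the schema is often supplied via a separate synonym configuration file.

```json
{
  "Z_SYN_ORDERS": {
    "target": {
      "object": "ORDERS",
      "schema": "LOGICAL_SCHEMA_B"
    }
  }
}
```

At deployment, HDI resolves the logical schema name to the physical one, so the referencing objects in schema A never contain a hard-coded physical schema name.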
[Figure: Cross-schema access in the HANA database. A proxy for a CalcView and a table in one
schema reference a table and a legacy CalcView in _SYS_BIC.]