SAP HANA Modeling Guide For SAP HANA Studio en
This guide explains how to create information models, based on data, that can be used for analytical purposes using the SAP HANA modeler, which includes graphical data modeling tools that allow you to create and edit data models and stored procedures.
Modeling refers to the activity of refining or slicing data in database tables by creating views that depict a business scenario. The views can be used for reporting and decision making.
The modeling process involves the simulation of entities, such as customer, product, and sales, and the
relationships between them. These related entities can be used in analytics applications such as SAP
BusinessObjects Explorer and Microsoft Office. In SAP HANA, these views are known as information views.
Information views use various combinations of content data (that is, non-metadata) to model a business use case. Content data can be classified as attributes, which represent descriptive data such as region and product, and measures, which represent quantifiable data such as revenue and quantity sold.
You can model entities in SAP HANA using the Modeler perspective, which includes graphical data modeling
tools that allow you to create and edit data models (content models) and stored procedures. With these tools,
you can also create analytic privileges that govern the access to the models, and decision tables to model
related business rules in a tabular format for decision automation.
There are three types of information views:
• Attribute Views
• Analytic Views
• Calculation Views
This guide is intended for modelers (also known as business analysts, data analysts, or database experts) who are concerned with defining the models and schemas used in SAP HANA, and with specifying and defining tables, views, primary keys, indexes, partitions, and other aspects of the layout and interrelationship of the data in SAP HANA.
The data modeler is also concerned with designing and defining authorization and access control, through the
specification of privileges, roles, and users.
The modeler uses the Administration Console and Modeler perspectives and tools of the SAP HANA studio.
SAP HANA is an in-memory data platform that can be deployed on premise or on demand. At its core, it is an
innovative in-memory relational database management system.
SAP HANA can make full use of the capabilities of current hardware to increase application performance,
reduce cost of ownership, and enable new scenarios and applications that were not previously possible. With
SAP HANA, you can build applications that integrate the business control logic and the database layer with
unprecedented performance. As a developer, one of your key questions is how to minimize data movement. The more you can do directly on the data in memory next to the CPUs, the better the application will perform. This is the key to development on the SAP HANA data platform.
SAP HANA runs on multi-core CPUs with fast communication between processor cores and terabytes of main memory. With SAP HANA, all data is available in main memory, which avoids the performance penalty of disk I/O. Disk or solid-state drives are still required for permanent persistence in the event of a power failure or some other catastrophe. This does not slow down performance, however, because the required backup operations to disk can take place asynchronously as a background task.
A database table is conceptually a two-dimensional data structure organized in rows and columns. Computer
memory, in contrast, is organized as a linear structure. A table can be represented in row-order or column-
order. A row-oriented organization stores a table as a sequence of records. Conversely, in column storage the
entries of a column are stored in contiguous memory locations. SAP HANA supports both, but is particularly
optimized for column-order storage.
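The difference between the two layouts can be sketched with plain Python lists (an illustrative sketch with hypothetical data; SAP HANA's internal storage formats are more sophisticated):

```python
# Illustrative sketch of row-order vs. column-order layout in linear memory.
# The table and its values are hypothetical examples.
table = [
    ("Alice", "DE", 100),
    ("Bob",   "US", 200),
    ("Carol", "DE", 150),
]

# Row-order: records are stored one after another.
row_store = [value for record in table for value in record]

# Column-order: each column's entries are stored contiguously.
column_store = [[record[i] for record in table] for i in range(3)]

print(row_store)
# ['Alice', 'DE', 100, 'Bob', 'US', 200, 'Carol', 'DE', 150]
print(column_store)
# [['Alice', 'Bob', 'Carol'], ['DE', 'US', 'DE'], [100, 200, 150]]
```

Scanning one column in the column store touches only one contiguous list, while in the row store the same values are interleaved with the other columns.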
Columnar data storage allows highly efficient compression. If a column is sorted, there are often repeated adjacent values. SAP HANA employs highly efficient compression methods, such as run-length encoding, cluster encoding, and dictionary encoding.
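Run-length encoding can be sketched as follows (an illustrative Python sketch, not SAP HANA's actual implementation; the region values are hypothetical):

```python
# Run-length encoding of a sorted column: runs of repeated adjacent
# values collapse into (value, count) pairs.
def run_length_encode(column):
    encoded = []
    for value in column:
        if encoded and encoded[-1][0] == value:
            # Extend the current run.
            encoded[-1] = (value, encoded[-1][1] + 1)
        else:
            # Start a new run.
            encoded.append((value, 1))
    return encoded

sorted_column = ["APJ", "APJ", "APJ", "EMEA", "EMEA", "NA"]
print(run_length_encode(sorted_column))
# [('APJ', 3), ('EMEA', 2), ('NA', 1)]
```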
Columnar storage, in many cases, eliminates the need for additional index structures. Storing data in columns
is functionally similar to having a built-in index for each column. The column scanning speed of the in-memory
column store and the compression mechanisms – especially dictionary compression – allow read operations
with very high performance. In many cases, it is not required to have additional indexes. Eliminating additional
indexes reduces complexity and eliminates the effort of defining and maintaining metadata.
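Dictionary compression, mentioned above, can be sketched like this (an illustrative Python sketch with hypothetical column values):

```python
# Dictionary compression: distinct values go into a dictionary and the
# column stores small integer IDs instead of the original strings.
column = ["red", "blue", "red", "green", "red", "blue"]

dictionary = sorted(set(column))             # ['blue', 'green', 'red']
ids = {v: i for i, v in enumerate(dictionary)}
encoded = [ids[v] for v in column]           # [2, 0, 2, 1, 2, 0]

# Scanning for "red" now compares integers over a contiguous array,
# which is the built-in-index-like behavior described in the text.
target = ids["red"]
matches = [pos for pos, v in enumerate(encoded) if v == target]
print(matches)  # [0, 2, 4]
```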
SAP HANA was designed to perform its basic calculations, such as analytic joins, scans and aggregations in
parallel. Often it uses hundreds of cores at the same time, fully utilizing the available computing resources of
distributed systems.
With columnar data, operations on single columns, such as searching or aggregations, can be implemented as
loops over an array stored in contiguous memory locations. Such an operation has high spatial locality and can
efficiently be executed in the CPU cache. With row-oriented storage, the same operation would be much slower
because data of the same column is distributed across memory and the CPU is slowed down by cache misses.
Compressed data can be loaded into the CPU cache faster. This is because the limiting factor is the data
transport between memory and CPU cache, and so the performance gain exceeds the additional computing
time needed for decompression.
Column-based storage also allows execution of operations in parallel using multiple processor cores. In a
column store, data is already vertically partitioned. This means that operations on different columns can easily
be processed in parallel. If multiple columns need to be searched or aggregated, each of these operations can
be assigned to a different processor core. In addition, operations on one column can be parallelized by
partitioning the column into multiple sections that can be processed by different processor cores.
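The partitioning idea can be sketched as follows (an illustrative Python sketch using a thread pool; SAP HANA parallelizes natively across processor cores, so this is only an analogy):

```python
# Parallel aggregation of one column by partitioning it into sections,
# each section processed independently, then combining the results.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(column, sections=4):
    size = max(1, len(column) // sections)
    # Vertically partitioned sections of the column.
    parts = [column[i:i + size] for i in range(0, len(column), size)]
    with ThreadPoolExecutor() as pool:
        partial_sums = list(pool.map(sum, parts))
    return sum(partial_sums)  # combine per-section results

print(parallel_sum(list(range(1, 101))))  # 5050
```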
With a scanning speed of several gigabytes per millisecond, SAP HANA makes it possible to calculate
aggregates on large amounts of data on-the-fly with high performance. This eliminates the need for
materialized aggregates in many cases, simplifying data models, and correspondingly the application logic.
Furthermore, with on-the fly aggregation, the aggregate values are always up-to-date unlike materialized
aggregates that may be updated only at scheduled times.
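On-the-fly aggregation can be sketched as follows (an illustrative Python sketch with hypothetical sales rows):

```python
# On-the-fly aggregation: totals are computed from the base data at
# query time, so they always reflect the latest rows, unlike a
# materialized aggregate that is refreshed only at scheduled times.
from collections import defaultdict

sales = [("DE", 100), ("US", 200), ("DE", 150)]

def revenue_by_region(rows):
    totals = defaultdict(int)
    for region, revenue in rows:
        totals[region] += revenue
    return dict(totals)

print(revenue_by_region(sales))  # {'DE': 250, 'US': 200}
sales.append(("US", 50))         # new data arrives...
print(revenue_by_region(sales))  # ...and the next query is up-to-date
```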
A running SAP HANA system consists of multiple communicating processes (services). The main SAP HANA database services in a classical application context are described below. Traditional database applications use well-defined interfaces (for example, ODBC and JDBC) to communicate with the database management system functioning as a data source, usually over a network connection.
The main SAP HANA database management component is known as the index server, which contains the
actual data stores and the engines for processing the data. The index server processes incoming SQL or MDX
statements in the context of authenticated sessions and transactions.
The SAP HANA database has its own scripting language named SQLScript. SQLScript embeds data-intensive
application logic into the database. Classical applications tend to offload only very limited functionality into the
database using SQL. This results in extensive copying of data from and to the database, and in programs that
slowly iterate over huge data loops and are hard to optimize and parallelize. SQLScript is based on side-effect
free functions that operate on tables using SQL queries for set processing, and is therefore parallelizable over
multiple processors.
In addition to SQLScript, SAP HANA supports a framework for the installation of specialized and optimized
functional libraries, which are tightly integrated with different data engines of the index server. Two of these
functional libraries are the SAP HANA Business Function Library (BFL) and the SAP HANA Predictive Analytics
Library (PAL). BFL and PAL functions can be called directly from within SQLScript.
SAP HANA also supports the development of programs written in the R language.
SQL and SQLScript are implemented using a common infrastructure of built-in data engine functions that have
access to various meta definitions, such as definitions of relational tables, columns, views, and indexes, and
definitions of SQLScript procedures. This metadata is stored in one common catalog.
The database persistence layer is responsible for durability and atomicity of transactions. It ensures that the
database can be restored to the most recent committed state after a restart and that transactions are either
completely executed or completely undone.
The index server uses the preprocessor server for analyzing text data and extracting the information on which
the text search capabilities are based. The name server owns the information about the topology of the SAP HANA
system. In a distributed system, the name server knows where the components are running and which data is
located on which server.
Before you begin the modeling process in the SAP HANA modeler, ensure that the following prerequisites are met:
• You have installed all the SAP HANA components that are necessary to enable data replication.
• You have installed the SAP HANA studio.
• You have a live SAP HANA system to connect to.
• You have a user on the SAP HANA server that has at least the following roles or their equivalent:
• MODELING: This is used as a template role that can be used to create users to work on content.
• CONTENT_ADMIN: This is used as a template role for users who are responsible for managing
repository content at a higher level, and for managing teams who develop and test the content. Users
with this role can:
• Maintain delivery units
• Import and export content
• Create, update, and delete active native and imported packages and objects in these packages
• Grant these privileges to other users
The following table lists the tasks you can perform in the SAP HANA Modeler perspective:
• Import metadata: Create tables by importing the table definitions from the source systems using the Data Services infrastructure. For more information, see Import Table Definitions [page 25]. You can also create tables from scratch using the SAP HANA Development perspective.
• Load data: Load data into the table definitions imported from the source system using the Load Controller, SAP Sybase Replication Server, or SAP Landscape Transformation, and from flat files. For more information, see Load Data into Tables [page 27]. You can also provision data into the table definitions in the SAP HANA Development perspective.
• Create packages: Logically group objects together in a structured way.
• Create information views: Model various slices of the data stored in the SAP HANA database. Information views are often used for analytical use cases, such as operational data mart scenarios or multidimensional reporting on revenue, profitability, and so on. For more information, see Creating Information Views and Previewing its Output [page 44]. You can also create information views in the SAP HANA Development perspective.
• Create analytic privileges: Control which data individual users sharing the same data foundation or view can see. For more information, see Defining Data Access Privileges [page 162].
• Import SAP BW objects: Import SAP BW objects into SAP HANA, and expose them as information views. For more information, see Import BW Objects [page 242].
• Create decision tables: Create a tabular representation of related rules using conditions and actions. For more information, see Working with Decision Tables [page 248].
Before you create modeling objects, you must first add a system in your SAP HANA studio to establish a connection between your SAP HANA studio and your SAP HANA system.
Note
After you have finished working on an instance, it is recommended that you disconnect the instances of all SAP HANA systems within your SAP HANA studio. You can disconnect a specific SAP HANA instance by following the steps below:
There are three types of information views: attribute view, analytic view, and calculation view. All three types of
information views are non-materialized views. This creates agility through the rapid deployment of changes as
there is no latency when the underlying data changes.
The SAP HANA studio is an Eclipse-based development and administration tool for working with SAP HANA,
including creating projects, creating development objects, and deploying them to SAP HANA. As a developer,
you may want to also perform some administrative tasks, such as configuring and monitoring the system.
There are several key Eclipse perspectives that you will use while developing:
• Modeler: Used for creating various types of views and analytical privileges.
• SAP HANA Development: Used for programming applications, that is, creating development objects that
access or update the data models, such as server-side JavaScript or HTML files.
• Debug: Used to debug code, such as server-side JavaScript or SQLScript.
• Administration: Used to monitor the system and change settings.
In SAP HANA studio, the SAP HANA Modeler perspective helps you create various types of information views, which define your analytic model.
• SAP HANA Systems view: A view of database or modeler objects, which you create from the Modeler
perspective.
• Quick View: A collection of shortcuts to execute common modeling tasks. If you close the Quick View pane, you can reopen it by choosing Help > Quick View.
• Properties pane: A view that displays all object properties.
• Job Log view: A view that displays information related to requests entered for a job, such as validation, activation, and so on.
• Where-Used view: A view that lists all objects where a selected object is used.
The Systems view is one of the basic organizational elements included with the Development perspective.
You can use the Systems view to display the contents of the SAP HANA database that is hosting your
development project artifacts. The Systems view of the SAP HANA database shows both activated objects
(objects with a runtime instance) and the design-time objects you create but have not yet activated.
• Security
Contains the roles and users defined for this system.
• Catalog
Contains the database objects that have been activated, for example, from design-time objects or from
SQL DDL statements. The objects are divided into schemas, which is a way to organize activated database
objects.
• Provisioning
Contains administrator tools for configuring smart data access, data provisioning, and remote data sources.
• Content
Contains design-time database objects, both those that have been activated and those not activated. If you
want to see other development objects, use the Repositories view.
In the SAP HANA Modeler perspective, you use the view editor to work with the information views. This editor is
also known as One View editor. The editor is common for all three types of information views. The editor
components vary based on the view types as follows:
For attribute views, the Scenario pane of the editor consists of the following default nodes:
• Data Foundation - represents the tables that you use for defining your attribute view.
• Semantics - represents the output structure of the view, that is, the dimension.
For analytic views, the Scenario pane of the editor consists of the following default nodes:
• Data Foundation - represents the tables that you use for defining the fact table and related tables of the analytic view.
• Star Join - represents the relationship between the selected table fields (fact table) and attribute views, which you use to create a star schema.
• Semantics - represents the output structure of the analytic view.
The view editor for graphical calculation views consists of the following:
The Scenario pane of the editor consists of the following default nodes:
• Aggregation / Projection node - based on the Data Category value that you choose. If the value is set to Cube, the default node is an aggregation node. If it is set to Dimension, the default node is a projection node. If you are creating a graphical calculation view with star join, the default node is the Star Join node.
Note
You can change the default node in the Scenario pane as required; for example, projection node to
aggregation node using the context menu option Switch to Aggregation.
The view editor for script-based calculation views consists of the following:
The Scenario pane of the editor consists of the following default nodes:
• Script Node - represents the script, which is a series of SQL statements that defines the calculation view logic.
• Semantics - represents the output structure of the view.
Attributes and measures form content data that you use for data modeling. The attributes represent the
descriptive data, such as region and product. The measures represent quantifiable data, such as revenue and
quantity sold.
Attributes
• Simple Attributes: Individual non-measurable analytical elements that are derived from the data sources. For example, PRODUCT_ID and PRODUCT_NAME are attributes of a product data source.
• Calculated Attributes: Derived from one or more existing attributes or constants. For example, deriving the full name of a customer (first name and last name), or assigning a constant value to an attribute that can be used for arithmetic calculations.
• Local Attributes: Local attributes that you use in an analytic view allow you to customize the behavior of an attribute for only that view. For example, if an analytic view or a calculation view includes an attribute view as an underlying data source, then the analytic view inherits the behavior of the attributes from the attribute view.
Note
Local attributes convey the table fields available in the default node of analytic views.
Measures
Measures are measurable analytical elements that are derived from analytic and calculation views.
• Calculated Measures: Defined based on a combination of data from other data sources, arithmetic operators, constants, and functions. For example, you can use calculated measures to calculate the net profit from revenue and operational cost.
• Restricted Measures: Restricted measures or restricted columns are used to filter attribute values based on user-defined rules. For example, you can choose to restrict the value for the REVENUE column only for REGION = APJ and YEAR = 2012.
• Counters: Counters add a new measure to the calculation view definition to count the distinct occurrences of an attribute. For example, to count how many times a product appears and use this value for reporting purposes.
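A restricted measure and a counter can be sketched over plain rows as follows (an illustrative Python sketch; the column names and values echo the examples above and are hypothetical):

```python
# Restricted measure and counter over hypothetical example rows.
rows = [
    {"REGION": "APJ",  "YEAR": 2012, "PRODUCT": "P1", "REVENUE": 100},
    {"REGION": "APJ",  "YEAR": 2011, "PRODUCT": "P2", "REVENUE": 80},
    {"REGION": "EMEA", "YEAR": 2012, "PRODUCT": "P1", "REVENUE": 120},
]

# Restricted measure: REVENUE only for REGION = APJ and YEAR = 2012.
restricted_revenue = sum(
    r["REVENUE"] for r in rows
    if r["REGION"] == "APJ" and r["YEAR"] == 2012
)

# Counter: distinct occurrences of the PRODUCT attribute.
product_counter = len({r["PRODUCT"] for r in rows})

print(restricted_revenue)  # 100
print(product_counter)     # 2
```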
You need a minimum set of permissions to perform modeling activities such as creating, activating, and previewing data for views and analytic privileges.
• Object Privileges
1. _SYS_BI - SELECT privilege
2. _SYS_BIC - SELECT privilege
Note
If you are using front-end tools such as SAP Lumira or Advanced Analysis for Office, see SAP Note 1907696 to grant SQL privileges.
Note
The above permissions do not need to be grantable to other users and roles.
• Analytic Privileges
1. _SYS_BI_CP_ALL
If you want to grant users with full data access to all information views in your SAP HANA system, then
assign the analytic privilege _SYS_BI_CP_ALL to the users, for example, in development systems. If
you want to grant only restricted data access to information views, for example, in production systems,
then create an analytic privilege with filters by including these information views as secured models,
and assign this analytic privilege to the user role. For more information, see Defining Data Access
Privileges.
• Package Privileges
1. Root Package - REPO.MAINTAIN_NATIVE_PACKAGES privilege.
If you want to grant users access to all packages, select the REPO.MAINTAIN_NATIVE_PACKAGES privilege for the root package and assign it to the users, for example, in development systems. Otherwise, it is recommended to assign a more suitable package privilege.
2. <package_used_for_content_objects> - REPO.READ, REPO.EDIT_NATIVE_OBJECTS &
REPO.ACTIVATE_NATIVE_OBJECTS
Note
The above permissions do not need to be grantable to other users and roles.
In SAP HANA Modeler perspective, the SAP HANA Systems view lists both the active and inactive objects
available in default workspace.
• Attribute Views
• Analytic Views
• Calculation Views
• Procedures
• Analytic Privileges
• Decision Tables
• Business Scenarios
Object types not listed above are not fully supported in the SAP HANA Modeler perspective; you can open them only in simple text editors. Use the respective SAP HANA perspectives to open those objects.
This section discusses the various options available for importing table definitions and data from the SAP HANA Modeler perspective.
You need to import the table definitions before you can create information views.
Prerequisites
You have configured the SAP HANA modeler for importing metadata using the Data Services infrastructure.
Context
You can import table definitions in either of the following ways:
• Mass Import: To import all table definitions from a source system. For example, you can use this approach if this is a first import from the given source system.
• Selective Import: To import only selected table definitions from a source system. For example, you can use this approach if only a few table definitions were added or modified in the source system after your last import.
Procedure
1. If you want to import all table definitions from a source system, do the following:
a. In the File menu, choose Import.
b. Expand the SAP HANA Content node.
Note
If the required system is not available from the dropdown list, you need to contact your
administrator.
2. If you want to import only selected table definitions from a source system, do the following:
a. In the File menu, choose Import.
b. Expand the SAP HANA Content node.
c. Choose Selective Import of Metadata, and choose Next.
d. Select the target system where you want to import the table definitions, and choose Next.
e. Select the required source system.
Note
If the required system is not available from the dropdown list, you need to add the new source system using Manage Connections. For more information about installing and using the Manage Connections functionality, see SAP Note 1942414.
f. In the Type of Objects to Import field, select the required type, and choose Next.
g. Add the required objects (tables or extractors) that you want to import.
Note
If you want to add dependent tables of a selected table, select the required table in the Target pane,
and choose Add Dependent Tables in the context menu.
h. Select the schema into which you want to import the metadata.
i. If you selected extractors as the object type, select the package into which you want to place the corresponding objects.
j. Choose Next, then review and confirm the import by choosing Finish.
Before you begin creating information models, you have to import all necessary table definitions into the SAP
HANA database and load them with data.
Prerequisites
• If you are using the Load Controller or Sybase Replication Server infrastructure, make sure that you have
imported all table definitions into the SAP HANA database. For more information, see Import Table
Definitions [page 25].
• If you are using the SLT component, the source systems and the target schema are configured by the administrator during the installation.
Context
Use this procedure to load data into your table definitions. Depending on your requirements, you can perform
the following:
• Initial Load - to load all data from a source SAP ERP system into the SAP HANA database by using Load
Controller or SAP Landscape Transformation (SLT). This is applicable when you are loading data from the
source for the first time.
• Data Replication - to keep the data of selected tables in the SAP HANA database up-to-date with the
source system tables by using SAP Sybase Replication Server or SAP Landscape Transformation (SLT).
Procedure
Note
The Select Source System dropdown list contains all the ERP and non-ERP source systems that are connected to the SLT system.
4. If you are using the SLT-based replication, select the target schema, which is configured for SAP ERP or
non-SAP systems in the Target Schema Configured dropdown list.
5. Choose Load for initial load or Replicate for data replication.
6. Select the required tables to load or replicate data in any of the following ways:
7. If you are using the load controller infrastructure, choose Next and enter the operating system user name
and password.
8. Choose Finish.
Next Steps
Over a period of time, the SAP HANA status tables grow very large with data load action status entries that do not need to be maintained. You can delete these entries from the SAP HANA status tables using the delete button in the Data Load Management view. When you choose this option, a follow-on dialog lets you select which entries to delete from the status tables:
1. Choose the operation for which you want to delete the status table entries, such as load, replicate, or create.
2. In the Entry Type dropdown list, select the required option.
Note
To delete all the entries from the status tables for a particular operation, choose All; otherwise, choose Specific.
3. If the value for Entry Type is Specific, in the Value dropdown list, select the tables for which you want to
delete the entries.
4. If you want to delete the entries for a specific time period, select it using the From and To calendar options.
5. Choose Delete.
When loading data into tables using SLT-based replication, you can choose to stop data replication temporarily for a selected list of tables, and resume the data load for these tables later.
Procedure
You can upload data from flat files in a client file system to the SAP HANA database.
Context
• If the table schema corresponding to the file to be uploaded already exists in the SAP HANA database, the
new data records are appended to the existing table.
• If the required table for loading the data does not exist in the SAP HANA database, create a table structure
based on the flat file.
The application suggests the column names and data types for the new tables, and allows you to edit them.
There is a 1:1 mapping between the file and table columns in the new table. The application does not allow you
to overwrite any columns or change the data type of existing data. The supported file types are: .csv, .xls,
and .xlsx.
Note
By default, the application considers up to 2000 records in the file to determine the data types of columns in the new table. You can modify this value by choosing Window > Preferences > SAP HANA Modeler > Data from Local File > Decision Maker Count.
Procedure
Note
A delimiter is used to determine columns and pick the correct data from them. In a csv file, the
accepted delimiters are ',', ';' and ':'.
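Parsing with one of the accepted delimiters can be sketched with Python's standard csv module (the file content here is a hypothetical inline example, not a file the guide ships):

```python
# Parsing a flat file with one of the accepted delimiters (',', ';', ':').
import csv
import io

data = "ID;NAME;REVENUE\n1;Alpha;100\n2;Beta;200\n"
reader = csv.reader(io.StringIO(data), delimiter=";")
rows = list(reader)

print(rows[0])  # header row: ['ID', 'NAME', 'REVENUE']
print(rows[1])  # first data row: ['1', 'Alpha', '100']
```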
Note
• Only 1:1 column mapping is supported. You can also edit the table definition by changing the
data types, renaming columns, adding or deleting the columns, and so on.
• You can choose to map the source and target columns using the Auto Map option. If you
choose the one to one option, then the first column from the source is mapped to the first
column of the target. If you choose the Map by name option, the source and target columns
with the same name are mapped.
7. Select the Existing option if you want to append the data to an existing table.
a. Choose Next.
b. On the Manage Table Definition and Data Mapping screen, map the source and target columns.
8. Perform the following steps if you want to provide a constant value for a column at the target:
a. Right-click the column. From the context menu, choose Make As Constant.
b. In the Constant dialog box, enter a value, and choose OK.
9. Enter a value in the Default Value column to provide a default value for a column at the target. Choose
Finish.
Copy standard content delivered by SAP or by an SAP partner to a local package in the SAP HANA system, and
use this content for modeling information views. For example, copy content from the package sap.ecc.fin to the
package customer.ecc.fin.
Prerequisites
Context
Copying the content shipped by SAP or an SAP partner to a local package in your SAP HANA system helps you avoid overwriting any changes to the existing content during a subsequent import. You can also copy modeler objects that you have created from one local package to another local package in your SAP HANA system.
Note
If you are copying dependent objects for script-based calculation views or procedures to a local package,
manually change the script or procedure to adjust references in impacted objects after copying.
Procedure
Modeler creates a mapping for the source package and the target package.
Note
You can copy the content delivered by SAP to a target root package (or to sub packages within the root
package), and maintain package mappings accordingly.
6. If you want to create more than one source-target package mapping, then on the next row, choose the
source package and target package as required.
Note
If you are launching this dialog for the first time, modeler fetches the package mapping information
from the Global Catalog Store.
8. Choose Next.
9. Copy selected modeler objects.
If you want to copy only selected modeler objects from the source package to the target package:
a. Expand the source package.
b. Select the modeler objects to copy.
c. Choose Add.
10. Convert analytic views to calculation views in the target package.
If you are copying analytic views from the source package to the target package, you can convert them to calculation views before copying them to the target package:
a. In the Copy as Calculation View(s) section, select the checkbox.
11. Choose Next to view a summary of the copy process.
12. Override existing objects in the target.
Modeler does not copy an object from the source package if it already exists in the target package. If you want to override the existing object in the target package:
a. Select the object in the summary page.
13. Choose Finish to confirm content copy.
Note
If you are copying objects to the target package without copying their dependent objects, the copied objects retain references to the dependent objects in the source package.
Next Steps
After copying modeler objects to the target package, you need to manually activate the copied objects.
Schema mapping is essential when the physical schema in the target system is not the same as the physical
schema in the source system. For example, in a transport scenario, to access and deploy transported objects,
you need to map the authoring schema to the physical schema.
Context
Content object definitions are stored in the repository and contain references to the physical database
schemas. When you copy the content objects to a different system, for example, from an SAP system to a
customer system or between customer systems, the object definition still refers to the physical database
schemas at the source. Modeler uses the schema mapping definitions in the configuration table
"_SYS_BI"."M_SCHEMA_MAPPING" to resolve conflicts.
Schema mappings apply only to references from repository objects to catalog objects. Using them for repository-to-repository references is not recommended.
Note
You need to map the references of script-based calculation views and procedures manually by changing the
script, and by checking if the tables are qualified with the schema. If the tables are not qualified, the default
schema of the view is used, and the schema mapping is also applied to the default schema.
You can map several authoring schemas to the same physical schema. For example, content objects delivered
by SAP refer to different authoring schemas, whereas in the customer system, all these authoring schemas are
mapped to a single physical schema where the tables are replicated.
Note
If a system does not have a schema mapping, the authoring schema is filled 1:1 from the physical schema;
in this case, the default schema cannot be changed.
Procedure
Note
If you are using an SAP HANA system with multiple isolated tenant databases to perform cross-database
access between tenants, you must provide the authoring DB name and the physical DB name.
For each package in your SAP HANA system, you can define and maintain a default authoring schema, which is
specific to that particular package. You maintain all package-specific default schema definitions in the table
M_PACKAGE_DEFAULT_SCHEMA (Schema: _SYS_BI).
Procedure
1. In the context menu of your SAP HANA system, choose SAP HANA Modeler Maintain package specific
default schema .
2. Choose Add.
3. Select Package Name and Default Schema.
The Package Name dropdown list displays all packages available in this SAP HANA system, and the
Default Schema dropdown list displays all authoring schemas with definitions available in the
_SYS_BI.M_SCHEMA_MAPPING table.
4. Choose Finish.
You maintain a package-specific default schema in order to work with a single authoring schema.
If you have mapped multiple authoring schemas to a single physical schema, and you try to create new
views (or change a particular view by adding more catalog objects), the view editor automatically
considers the authoring schema of the catalog objects as its physical schema. This is typically seen in
scenarios in which multiple back-end systems (for example, ERP and CRM) are connected to a single SAP HANA
instance. In such scenarios, in order to maintain a single authoring schema, you can maintain a default schema
for the objects that are defined in specific packages. You define the package-specific default schema, as an
authoring schema, in your schema mapping definition, and maintain it in the table M_PACKAGE_DEFAULT_SCHEMA
(Schema: _SYS_BI). The system creates this table when you update your existing SAP HANA instance or when
you install a new SAP HANA instance. Each time you modify the content of the table, you must restart your
SAP HANA studio instance to update the schema mapping and package-specific default schema information.
Example

Schema Mapping (M_SCHEMA_MAPPING)
Authoring Schema Physical Schema
SAP_ECC CUS_PHY
SAP_FND SAP_TEST
SAP_CRM CUS_PHY
SAP_RET SAP_TEST
SAP_AUTH OTHER
OTHER_AUTH OTHER

Package-Specific Default Schema (M_PACKAGE_DEFAULT_SCHEMA)
Package Name Default Schema
sap.ecc SAP_ECC
sap.ecc SAP_RET
sap.crm CUS_CRM
sap.crm.fnd SAP_ECC
<empty> OTHER_AUTH
Scenario 1:
Consider that you have defined an object in package sap.ecc.fnd, and you are trying to add a catalog table from
physical schema CUS_PHY.
In the above scenario, a lookup for schema mapping definition in M_SCHEMA_MAPPING results in authoring
schemas SAP_ECC, SAP_CRM. A lookup for package specific default schema in
M_PACKAGE_DEFAULT_SCHEMA results in default schemas SAP_ECC, SAP_RET.
The view editor considers SAP_ECC as the default schema and not SAP_RET.
Note
The package matching is done recursively, navigating to the parent package, until a package entry
matches.
Scenario 2:
Consider that you have defined an object in package sap.crm.fnd, and you are trying to add a catalog table from
the physical schema, CUS_PHY.
A lookup for schema mapping definition in M_SCHEMA_MAPPING results in authoring schemas SAP_ECC,
SAP_CRM. A lookup for package specific default schema in M_PACKAGE_DEFAULT_SCHEMA results in one
default schema SAP_ECC.
Scenario 3:
Consider that you have defined an object in package sap.crm, and you are trying to add a catalog table from the
physical schema, CUS_PHY.
Scenario 4:
Consider that you have defined an object in package sap.crm, and you are trying to add a catalog table from the
physical schema, OTHER.
A lookup for the schema mapping definition in M_SCHEMA_MAPPING results in one authoring schema, SAP_AUTH.
A lookup for the package-specific default schema in M_PACKAGE_DEFAULT_SCHEMA results in one default
schema, CUS_CRM.
The view editor does not consider CUS_CRM as the default schema (since it is neither the physical schema nor
is it defined in the list of authoring schemas). As a result, the view editor considers the authoring schema
SAP_AUTH.
Scenario 5:
Consider that you have defined an object in package cus.crm, and you are trying to add a catalog table from
physical schema, OTHER.
A lookup for schema mapping definition in M_SCHEMA_MAPPING results in two authoring schemas
SAP_AUTH, OTHER_AUTH.
A look up for package specific default schema in M_PACKAGE_DEFAULT_SCHEMA results in one default
schema OTHER_AUTH.
The view editor considers OTHER_AUTH as the default schema (since a mapping for it exists in the table
M_SCHEMA_MAPPING with OTHER as its corresponding physical schema).
Scenario 6:
Consider that there are no default schemas found in M_PACKAGE_DEFAULT_SCHEMA (matching default
schemas for package or parent packages, or an entry with <empty> package name). In such scenarios, view
editor considers the authoring schemas matched to the corresponding physical schema from the table
M_SCHEMA_MAPPING.
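The resolution logic illustrated by the scenarios above can be sketched in Python. This is an illustrative reconstruction based on the example tables and scenario outcomes, not the modeler's actual implementation; the function and variable names are ours:

```python
# Illustrative reconstruction of default-schema resolution (not the
# modeler's actual algorithm). The data mirrors the example tables above.
SCHEMA_MAPPING = {  # authoring schema -> physical schema
    "SAP_ECC": "CUS_PHY", "SAP_FND": "SAP_TEST", "SAP_CRM": "CUS_PHY",
    "SAP_RET": "SAP_TEST", "SAP_AUTH": "OTHER", "OTHER_AUTH": "OTHER",
}
PACKAGE_DEFAULTS = {  # package -> default schemas; "" is the <empty> entry
    "sap.ecc": ["SAP_ECC", "SAP_RET"],
    "sap.crm": ["CUS_CRM"],
    "sap.crm.fnd": ["SAP_ECC"],
    "": ["OTHER_AUTH"],
}

def package_defaults(package: str) -> list:
    # Package matching walks up to the parent package until an entry matches.
    while True:
        if package in PACKAGE_DEFAULTS:
            return PACKAGE_DEFAULTS[package]
        if not package:
            return []
        package = package.rpartition(".")[0]

def default_authoring_schema(package: str, physical_schema: str) -> str:
    # Authoring schemas whose mapping points at the table's physical schema.
    candidates = [a for a, p in SCHEMA_MAPPING.items() if p == physical_schema]
    for default in package_defaults(package):
        # A package default wins only if it is the physical schema itself or
        # an authoring schema mapped to that physical schema (Scenario 5).
        if default == physical_schema or default in candidates:
            return default
    # Otherwise fall back to a mapped authoring schema (Scenarios 4 and 6).
    return candidates[0] if candidates else physical_schema
```

Running the scenarios through this sketch reproduces the outcomes stated above: SAP_ECC for Scenarios 1 and 2, SAP_AUTH for Scenario 4, and OTHER_AUTH for Scenario 5.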
You can change the authoring schema of the catalog objects referenced in a model, and also change the
authoring schema of elements of the object.
Context
Each information model points to catalog objects, such as tables from various schemas. In a
transport scenario, the physical schema where these catalog objects are placed may vary when the models are
transported from one system to another. To work with the transported models, the physical schema
references must be mapped accordingly.
As all the information models save authoring schema details, if required, the modeler or content administrator
can change the existing authoring schema of one or more information models to a new one.
Procedure
Note
If you change the authoring schema of an analytic view where underlying objects such as tables also
point to the same authoring schema, the authoring schema for all these elements also changes. The
default schema (containing currency related tables) for the selected analytic view also changes.
4. Select or enter the authoring schema that you want to change for the objects selected above in the Source
dropdown list.
5. Select or enter the authoring schema that you want to associate with the objects selected above in the
Target dropdown list and choose OK.
Note
If you enter an authoring schema as a target that does not exist in the schema mapping defined for the
current system instance, then the specified authoring schema name is set in the information models
irrespective of whether a schema mapping exists. In this case, you need to map the authoring schema
to the physical schema.
Next Steps
If the mapping of the newly associated authoring schema with the correct physical schema (where catalog
objects reside) is not available, you cannot open the objects. In such cases, you need to map the authoring
schema with the correct physical schema, for example:
Schema Mapping
Authoring Schema Physical Schema
AS1 PS1
AS2 PS1
AS3 PS2
If you change the authoring schema of the information models from AS1 to AS2, you can work with the models
as is. But if you change the authoring schema of the information models from AS1 to AS3, you cannot, due to
the current schema mapping (AS3 to PS2). In that case, update the mapping as follows:
Schema Mapping
Authoring Schema Physical Schema
AS1 PS1
AS2 PS1
AS3 PS1
This section describes how you can change the default settings and define certain preferences before you
begin working with the SAP HANA modeling environment.
Related Information
Launch the modeler preferences screen to view and manage the default settings that the system must use
each time you log on to the SAP HANA Modeler perspective.
Procedure
Note
Related Information
You can specify certain default values that the system must use each time you log on to the SAP HANA
Modeler perspective. Use this Modeler Preferences screen to manage the default settings.
Choose your requirement from the table below, and execute the substeps mentioned for your requirement.
Content Presentation: To specify the structure of content packages in the SAP HANA Systems view
Under Package Presentation, select one of the following options:
• Hierarchical - to view the package structure in a hierarchical manner, such that the child folder is inside the parent folder.
• Flat - to view all the packages at the same level, for example, sap, sap.ecc, sap.ecc.ui.
Show Object Type Folders - to group together similar objects in a package, such as attribute views in the Attribute View folder.
Show all objects in the SAP HANA Systems view - to view all the repository objects in the SAP HANA Systems view. If this option is unchecked, only modeler objects are available in the SAP HANA Systems view. Also, if the option is unchecked and the user package has no modeler object, but it contains other hidden repository objects, then the user package is marked with "contains hidden objects". If the option is checked, all the repository objects are shown in the SAP HANA Systems view with their names suffixed with the object type, such as ABC.attributeview.
Note
Select the checkbox to make non-modeler object types visible. However, not all operations are supported for them.

Data from Local File: To set the preferences for loading data using flat files
1. Browse to the location where error log files for flat-file data loads are to be saved.
2. Enter the batch size for loading data. For example, if you specify 2000 and a file has 10000 rows, the data load happens in 5 batches.
3. Enter a decision maker count that the system uses to propose data types based on the file. For example, enter 200 if you want the proposal to be made based on 200 rows of file data.

Default Model Parameters: To set the default value for the client that SAP HANA modeler must use to preview model data
Choose the client from Default Client.

Validation Rules: To enforce various rules on objects
Select the required rules to be applied while performing object validation.
Note
Enforcing validation rules with severity Error is mandatory.

Data Preview: To determine the number of rows that the system must display
Select the number of rows that you require in data preview.

Logs: To specify a location for job log files
Expand the Logs node and choose Job Log. Browse to the location where you want to save the job log files.

Logs: To enable logging for repository calls and specify a location for repository log files
1. Expand the Logs node and select Job Log.
2. Choose True.
3. Browse to the location where you want to save the repository log files.

Case Restriction: To allow lowercase letters in attribute view, analytic view, calculation view, procedure, and analytic privilege names
Deselect the Model name in upper case checkbox.
Use keyboard shortcuts to perform your modeling activities, such as activate, validate, data preview, and so on.
The table below lists the commands and the keyboard shortcuts to execute those commands:
In the SAP HANA Systems view, you can choose to filter the content and view only the packages that you want to
work with. If you apply a filter at the package level, the system displays all the packages, including sub-packages,
that satisfy the filter criteria. You can apply a filter for packages only on the Content node in the SAP HANA
Systems view.
Procedure
Note
If a filter already exists, the new filter will overwrite the existing one. You can also apply the previous
filter on the Content using the Apply Filter '<filter text>' option.
In SAP HANA Systems view, filter the content and view only objects that you want to work with. You can apply a
filter for objects at the package level including sub-packages.
Procedure
Information views are used for analytical use cases such as operational data mart scenarios or
multidimensional reporting on revenue, profitability, and so on. There are three types of information views:
attribute view, analytic view, and calculation view.
All three types of information views that you create in the SAP HANA Modeler perspective are non-materialized
views, which creates agility through the rapid deployment of changes. You can create information views to
depict a business scenario using content data (attributes and measures). This section describes how you can
create and use the different information views that SAP HANA modeler supports.
Related Information
Generate time data into default time-related tables present in the _SYS_BI schema and use these tables in
information views to add a time dimension.
Context
For modeling business scenarios that require a time dimension, you generate time data in the default
time-related tables available in the _SYS_BI schema. You can select the calendar type and granularity, and
generate the time data for a specific time span.
Note
For the granularity level Week, specify the first day of the week.
Note
The variant specifies the number of periods along with the start and end dates.
9. Choose Finish.
Note
For the Gregorian calendar type, modeler generates time dimension data into
M_TIME_DIMENSION_YEAR, M_TIME_DIMENSION_MONTH, M_TIME_DIMENSION_WEEK,
M_TIME_DIMENSION tables and for the Fiscal calendar type, the modeler populates the generated
time dimension data into the M_FISCAL_CALENDAR table. These tables are present in _SYS_BI
schema.
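To picture what generating time dimension data means, the sketch below produces month-granularity rows of the kind stored in M_TIME_DIMENSION_MONTH for the Gregorian calendar type. The column layout here is simplified and assumed for illustration; the real tables are populated by the modeler and have more columns:

```python
from datetime import date

def generate_month_rows(from_year: int, to_year: int):
    # Simplified sketch: one (year, month, first_day) row per calendar month
    # in the requested time span, similar in spirit to what the modeler
    # writes into _SYS_BI.M_TIME_DIMENSION_MONTH (actual columns differ).
    rows = []
    for year in range(from_year, to_year + 1):
        for month in range(1, 13):
            rows.append((year, month, date(year, month, 1)))
    return rows
```

Finer granularities (week, day) multiply the row count accordingly, which is why the modeler restricts the time range per granularity.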
Related Information
The following table provides more information on each of the calendar types.
• Gregorian - Use the Gregorian calendar type if your financial year is the same as the calendar
year, for example, January to December.
• Fiscal - Use the Fiscal calendar type if your financial year is not the same as the calendar
year, for example, April to March.
Related Information
For the Gregorian calendar type, based on the granularity you choose, modeler defines certain restrictions on
the time range for which you can generate time dimension data.
For each granularity level, the following table displays the time range for which you can generate time
dimension data.
Granularity Range
Note
The following restrictions are applicable for generating time dimension data:
Related Information
Attribute views are used to model an entity based on the relationships between attribute data contained in
multiple source tables.
In attribute views you define joins between tables and select a subset or select all the columns and rows of the
table. The rows selected can also be restricted by filters. One application of attribute views is to join multiple
tables together when using star schemas, to create a single dimension view. The resultant dimension attribute
view can then be joined to a fact table via an analytic view to provide meaning to its data. In this use case, the
attribute view adds more columns and also hierarchies as further analysis criteria to the analytic view. In the
star schema of the analytic view, the attribute view is shown as a single dimension table (although it might join
multiple tables) that can be joined to a fact table. For example, attribute views can be used to join employees to
organizational units, which can then be joined to a sales transaction via an analytic view.
You can create hierarchies to arrange the attributes hierarchically. Hierarchies help you to visualize and analyze
the data in a hierarchical fashion. You can create Level hierarchies and Parent Child hierarchies by specifying
the attributes that correspond to different levels, and parent child nodes respectively.
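To illustrate the difference between the two hierarchy types: a level hierarchy assigns attributes to fixed levels, while a parent-child hierarchy stores each node's parent and derives levels by traversal. A hedged sketch of the parent-child case, with invented node names:

```python
# Illustrative parent-child hierarchy: each node carries its parent
# attribute, and a node's level is derived by walking up to the root.
# Node names are invented for illustration.
PARENT = {"EMEA": None, "Germany": "EMEA", "Berlin": "Germany"}

def node_level(node: str) -> int:
    # Root nodes (parent None) are level 0; children are one level deeper.
    level = 0
    while PARENT[node] is not None:
        node = PARENT[node]
        level += 1
    return level
```

In a level hierarchy, by contrast, the levels (for example country, region, city) are fixed at design time rather than derived from the data.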
Related Information
You can create a view to model descriptive attribute data using attributes, that is, data that does
not contain measures. Attribute views are used to define joins between tables and to select a subset or
all of the columns and rows of the tables.
Prerequisites
You have imported the SAP system tables T009 and T009B, which are required to create time attribute views.
Procedure
Note
d. Choose OK.
Note
You can add the same table again in Data Foundation using table aliases in the editor.
Note
If you want to add all columns from the data source to the output, in the context menu of the data
source, choose Add All To Output.
If you want to hide the attributes from the client tools or reporting tools when you execute the attribute
view, then
a. Select the Semantics node.
b. Choose the Columns tab.
c. Select an attribute.
d. Select the Hidden checkbox.
11. Define key attributes.
Define at least one attribute as a key attribute. If there is more than one key attribute, all the key
attributes must point to the same table, also referred to as the central table, in the data foundation.
a. Select the Semantics node.
b. Choose the Columns tab.
c. Select an attribute.
d. Select the Key checkbox.
e. In the Attributes tab page of the Column pane, select the required attribute and select the Type as Key
Attribute.
For auto generated time attribute views, the attributes, and key attributes are automatically
assigned.
Note
You can also activate the current view by selecting the view in the SAP HANA Systems view and
choosing Activate in the context menu. The activation triggers validation check for both the client
side and the server side rules. If the object does not meet any validation check, the object
activation fails.
Note
The activation triggers the validation check only for the server-side rules. Hence, if there are
any errors on the client side, they are skipped, and the object activation goes through if no errors
are found on the server side.
Results
Restriction
• Consider that you have added an object to the editor, and the object was modified after it was added. In
such cases, close and reopen the editor. This helps reflect the latest changes of the modified object in the
editor. For more information, see SAP Note 1783668.
After creating an attribute view, you can perform certain additional tasks to obtain the desired output. The
following table lists the additional tasks that you can perform to enrich the attribute view.
• To filter the output of the data foundation node: Filter Output of Data Foundation Node
• To create new output columns and calculate their values at runtime using an expression: Create Calculated Columns
• To assign semantic types that provide more meaning to attributes in the attribute views: Assign Semantics
• To create level hierarchies to organize data in reporting tools: Create Level Hierarchies
• To create parent-child hierarchies to organize data in reporting tools: Create Parent-Child Hierarchies
• To filter the view data using either a fixed client value or a session client set for the user: Filter Data for Specific Clients
• To execute time travel queries on attribute views: Enable Information Views for Time Travel Queries
• To invalidate or remove data from the cache after specific time intervals: Invalidate Cached Content
• To maintain object label texts in different languages: Maintain Modeler Objects in Multiple Languages
Related Information
SAP HANA modeler supports three types of attribute views. The following table provides information on
attribute view types.
Time: You can select the calendar type as Fiscal or Gregorian and model time attribute views.
In addition, you can also auto-create time attribute views. When you select auto-create, the
modeler creates these attribute views based on the default time tables. It also defines the
appropriate columns or attributes based on the granularity, and creates the required filters.
Derived: Create an attribute view that is derived from an existing attribute view. You cannot modify
derived attribute views; a derived attribute view only acts as a reference to the base attribute view
from which it is derived. Derived attribute views are read-only; the only editable value is the
description of the attribute view.
Note
The tables used in time attribute views for calendar type Gregorian are M_TIME_DIMENSION,
M_TIME_DIMENSION_YEAR, M_TIME_DIMENSION_MONTH, and M_TIME_DIMENSION_WEEK; for
calendar type Fiscal, the table is M_FISCAL_CALENDAR. If you want to do a data preview for the created
attribute view, generate time data into the tables from the Quick View.
Analytic views can contain two types of columns: attributes and measures. Measures are simple, calculated, or
restricted. If analytic views are used in SQL statements, then the measures have to be aggregated.
For example, using the SQL functions SUM(<column name>), MIN(<column name>), or MAX(<column
name>). Normal columns can be handled as regular attributes and do not need to be aggregated.
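The aggregation requirement above can be mimicked outside SQL by grouping rows on their attribute columns and applying SUM (or MIN/MAX) to the measure. A sketch with invented column names and data:

```python
from collections import defaultdict

def aggregate_sum(rows, attr, measure):
    # Group rows by an attribute column and SUM the measure column,
    # mirroring: SELECT attr, SUM(measure) ... GROUP BY attr.
    totals = defaultdict(float)
    for row in rows:
        totals[row[attr]] += row[measure]
    return dict(totals)
```

Attributes pass through unaggregated as grouping columns, which is why normal columns can be handled as regular attributes while measures must carry an aggregation function.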
You can also include attribute views in the analytic view definition. In this way, you can achieve additional depth
of attribute data. The analytic view inherits the definitions of any attribute views that are included in the
definition.
You can assign one or more alternate names (or aliases) to tables. For example, if you want to improve the
readability of a table name or if you want to add the same table again to your data foundation node, then
you can use aliases to avoid name conflicts. From the Details pane, select a table and provide your Alias
Name value in the table properties pane. You can use this alias value for all references to the table.
If you are not able to activate your information view because of name conflicts between shared and local
attributes of a column view, then you can use aliases to resolve such conflicts and activate the information
view. Select the semantics node and in the Shared Attribute pane, provide Alias Name and Alias Label
values.
If you come across errors due to aliases, while trying to open an information view that was already created,
then you can use the Quick Fix option to resolve the error. Select the error message or the problem in the
Problems view, and choose Quick Fix in the context menu. This action resolves the issue by assigning the
right names to the column and alias.
You can choose to hide the attributes and measures that are not required for client consumption by assigning
value true to the property Hidden in the Properties pane, or selecting the Hidden checkbox in the Column view.
The attributes or measures marked as hidden are not available for input parameters, variables, consumers, or
higher level views that are built on top of the analytic view. For old models (before SPS06), if the hidden
attribute is already used, you can either unhide the element or remove the references.
For an analytic view, you can set the property Data Category to Cube or Dimension. If the Data Category
property of the analytic view is set to Dimension, the view will not be available for multidimensional reporting
purposes. If the value is set to Cube, an additional column Aggregation is available to specify the aggregation
type for measures.
You can enable relational optimization for your analytic view by setting the property Optimize stacked SQL.
Setting this property is effective only for analytic views with complex calculations, where deployment of the
analytic view generates a catalog calculation view on top of the generated catalog OLAP view.
Caution
If this flag is set, counters and SELECT COUNT may deliver wrong results.
Related Information
Creating native HANA models can be one way to improve performance compared to development options
outside of the database, or in some cases also compared to pure SQL development.
Native HANA models can be developed in the new XS Advanced (XSA) development environment using SAP
Web IDE for SAP HANA. These models supersede older artifacts like Analytic and Attribute Views; these views
should now be replaced by graphical Calculation Views which can be used to model complex OLAP business
logic. Native HANA modeling provides various options to tune performance by, for example, helping to achieve
complete unfolding of the query by the calculation engine or modeling join cardinalities between two tables
(that is, the number of matching entries (1...n) between the tables) and optimizing join columns.
For more information about modeling graphical calculation views refer to the SAP HANA Modeling Guide for
SAP Web IDE for SAP HANA.
• https://blogs.sap.com/2017/09/01/overview-of-migration-of-sap-hana-graphical-view-models-into-the-
new-xsa-development-environment/ Overview: Migration of Models into the XSA Development
Environment
• https://blogs.sap.com/2017/10/27/join-cardinality-setting-in-calculation-views/ Join cardinality setting
in Calculation Views
• https://blogs.sap.com/2018/08/10/optimize-join-columns-flag/ Optimize Join Columns Flag
Related Information
SAP HANA Modeling Guide for SAP Web IDE for SAP HANA
A calculation view is a flexible information view that you can use to define more advanced slices on the data
available in the SAP HANA database. Calculation views are simple and yet powerful because they mirror the
functionality found in both attribute views and analytic views, and also other analytic capabilities.
Use calculation views when your business use cases require advanced data modeling logic, which cannot be
achieved by creating analytic views or attribute views. For example, you can create calculation views with layers
of calculation logic, which includes measures sourced from multiple source tables, or advanced SQL logic, and
much more. A calculation view can include any combination of tables, column views, attribute views, and
analytic views. You can create joins, unions, projections, and aggregation levels on its data sources.
The Calculation Engine is designed for optimal performance and thus uses a variety of optimization processes.
The Calculation Engine optimizations can sometimes result in a nonrelational behavior. It does not behave in
the same way that a typical SQL user would expect. A key feature in the SAP HANA Calculation Engine is the
instantiation process.
During the instantiation process, the Calculation Engine simplifies the calculation view into a model that fulfills
the requirements of the query. This means that the Calculation Engine will prune a calculation view that has
Field_1 and Field_2 into a model that consists only of Field_1 if no other fields are requested by the query
executed on the calculation view.
The calculation view instantiation happens at runtime when executing a query. Other optimizations for
calculation views include the pruning of joins. For more information on the instantiation process, see SAP Note 1764658.
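The pruning aspect of instantiation can be pictured as reducing the view's column list to what the query actually requests. This is a deliberately simplified sketch; the engine's real instantiation also prunes joins and data sources:

```python
def instantiate(view_columns, requested_columns):
    # Prune the calculation view model to the subset of columns the query
    # asks for; columns that are not requested do not survive instantiation.
    requested = set(requested_columns)
    return [c for c in view_columns if c in requested]
```

For example, a view exposing Field_1 and Field_2 is instantiated down to Field_1 only when the query selects nothing else.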
Related Information
Create script-based calculation views to depict complex calculation scenarios by writing SQL Script
statements. They are a viable alternative for complex business scenarios that you cannot achieve by
creating other information views (attribute, analytic, and graphical calculation views).
Context
For example, if you want to create information views that require certain SQL functions (such as window
functions) or predictive functions (such as R), you use script-based calculation views. Sufficient knowledge
of SQL scripting, including the behavior and optimization characteristics of the different data models, is a
prerequisite for creating script-based calculation views.
Procedure
Note
If you do not select a default schema while scripting, then provide fully qualified names of the
objects used.
Note
The IN function does not work in SQL Script to filter a dynamic list of values. Use the APPLY_FILTER
function instead.
Note
For all duplicate column names in the Target pane, the modeler displays an error. You cannot add
two columns with the same name to your output. If you want to retain both the columns, then
change the name of columns in the Target pane before you add them to the output.
h. If you want to override the existing output structure, select Replace existing output columns in the
Output.
i. Choose Finish.
Note
The defined order and data types of columns and parameters must match with the order and data
types of the columns and parameters in the select query, which is assigned to the output function
var_out.
13. Write the SQL Script statements to fill the output columns.
You can drag information views from the navigator pane to the SQL editor to obtain an equivalent SQL
statement that represents the deployed schema name for the information view.
Note
For information on providing input parameters in script-based calculation views, see SAP Note 2035113
Note
You can also activate the current view by selecting the view in the SAP HANA Systems view and
choosing Activate in the context menu. The activation triggers validation check for both the client
side and the server side rules. If the object does not meet any validation check, the object
activation fails.
Note
The activation only triggers the validation check for the server side rules. If there are any errors
on the client side, they are skipped, and the object activation goes through if no error is found
on the server side.
For more information about the functions available through content assist (press Ctrl +
Space in the SQL console while writing procedures), see the SAP HANA SQLScript Reference.
15. Assign Changes
a. In the Select Change dialog box, either create a new ID or select an existing change ID that you want to
use to assign your changes.
b. Choose Finish.
For more information on assigning changes, see SAP HANA Change Recording of the SAP HANA
Developer Guide.
16. Choose Finish.
Next Steps
After creating a script-based calculation view, you can perform certain additional tasks to obtain the desired
output. The following table lists the additional tasks that you can perform to enrich the calculation view.
• To assign semantic types that provide more meaning to attributes and measures in calculation views: Assign Semantics
• To parameterize calculation views and execute them based on the values users provide at query runtime: Create Input Parameters
• To filter the results based on the values that users provide to attributes at runtime, for example: Assign Variables
• To associate measures with currency codes and perform currency conversions: Associate Measures with Currency
• To associate measures with units of measure and perform unit conversions: Associate Measures with Unit of Measure
• To create level hierarchies to organize data in reporting tools: Create Level Hierarchies
• To create parent-child hierarchies to organize data in reporting tools: Create Parent-Child Hierarchies
• To group related measures together in a folder: Group Related Measures
• To filter the view data using either a fixed client value or a session client set for the user: Filter Data for Specific Clients
• To execute time travel queries on script-based calculation views: Enable Information Views for Time Travel Queries
• To invalidate or remove data from the cache after specific time intervals: Invalidate Cached Content
• To maintain object label texts in different languages: Maintain Modeler Objects in Multiple Languages
• To mark a script-based calculation view as not recommended for use: Deprecate Information Views
Related Information
Context
Script-based calculation views support only one script view node. However, with table functions as a data
source in graphical calculation views, you can use the script (table function) and also combine it with other
view nodes such as union, join, and more.
Note
You cannot create, view, or modify table functions in the SAP HANA Modeler perspective. You use the
project explorer view or the repositories view of the SAP HANA Development perspective to perform these
tasks.
4. Provide the name of the graphical calculation view that uses this table function as a data source in its
default node (aggregation or projection).
5. Provide a name to the new table function.
Note
If a table function with the same name exists within the same package, then overwrite the existing table
function with this new table function by selecting the checkbox Overwrite changes to an existing table
function.
6. Select the checkbox Open the Graphical calculation view to open the graphical calculation view in a new
view editor.
Results
You have now saved your script-based calculation view as a table function and included it as a data source in
the default node of the new graphical calculation view. You can access all three objects (the new graphical
calculation view, the script-based calculation view, and the table function) from within the same package.
Create graphical calculation views using a graphical editor to depict a complex business scenario. You can also
create graphical calculation views that include layers of calculation logic and measures from multiple data
sources.
Context
Graphical calculation views can bring together normalized data that are generally dispersed. You can combine
multiple transaction tables and analytic views, while creating a graphical calculation view.
Note
If you want to execute calculation views in the SQL engine, see SAP Note 1857202.
Procedure
Modeler launches a new graphical calculation view editor with the semantics node and default aggregation
or projection node depending on the data category of the calculation view.
9. Continue modeling the graphical calculation view by dragging and dropping the necessary view nodes from
the tool palette.
10. Add data sources.
You can add one or more data sources depending on the selected view node.
d. Choose OK.
11. Define output columns.
a. Select a view node.
Note
Using the Keep Flag column property: the Keep Flag property retrieves columns from the view node
into the result set even if you do not request them in your query. In other words, use it if you want to
include those columns in the SQL GROUP BY clause even if you do not select them in the query.
If you are creating a calculation view with data category as cube, then to successfully activate the
information view, specify at least one column as a measure.
a. Select the Semantics node.
b. Choose the Columns tab.
c. In the Local section, select an output column.
d. In the Type dropdown list, select Measure or Attribute.
If the value is set to Cube, an additional Aggregation column is available to specify the aggregation type
for measures.
Note
If the default node of the calculation view is aggregation, you can always aggregate the measures
even if no aggregation function is specified in the SQL.
2. In the Properties tab, set the value of the property Always Aggregate Results to True
e. If you want to hide the measure or attribute in the reporting tool, select the Hidden checkbox.
f. If you want to force the query to retrieve selected attribute columns from the database even when not
requested in the query, set the Keep Flag property to True for those attributes.
This means that you are including those columns into the group by clause even if you do not select
them in the query. To set the Keep Flag property of attributes to True, select an attribute in the Output
pane, and in the Properties pane set the Keep Flag property to True.
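The effect of the Keep Flag property on aggregation granularity can be sketched in plain Python (this is an illustration, not SAP HANA code; table and column names are made up). Keeping a column in the GROUP BY clause, even though the query does not request it, changes the level at which measures aggregate:

```python
# Illustrative sketch of Keep Flag semantics: adding a column to the
# GROUP BY clause changes the granularity of the aggregated result.
from collections import defaultdict

rows = [
    {"REGION": "EUR", "COUNTRY": "DE", "SALES": 50},
    {"REGION": "EUR", "COUNTRY": "DE", "SALES": 100},
    {"REGION": "EUR", "COUNTRY": "UK", "SALES": 20},
]

def aggregate(rows, group_cols):
    """Sum SALES grouped by group_cols, like a SQL GROUP BY."""
    totals = defaultdict(int)
    for r in rows:
        key = tuple(r[c] for c in group_cols)
        totals[key] += r["SALES"]
    return dict(totals)

# Query requests only REGION: without Keep Flag, COUNTRY is dropped from
# the GROUP BY and sales aggregate to the region level.
print(aggregate(rows, ["REGION"]))             # {('EUR',): 170}

# With Keep Flag set on COUNTRY, the column stays in the GROUP BY even
# though the query did not request it, so the granularity is finer.
print(aggregate(rows, ["REGION", "COUNTRY"]))  # {('EUR', 'DE'): 150, ('EUR', 'UK'): 20}
```

This is why Keep Flag can change query results: measures that would otherwise collapse into one row per requested attribute stay split by the kept column.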
Note
If you are using any attribute view as a data source to model the calculation view, the Shared
section displays attributes from the attribute views that are used in the calculation view.
You can also activate the current view by selecting the view in the SAP HANA Systems view and
choosing Activate in the context menu. The activation triggers validation checks for both the client-side
and the server-side rules. If the object fails any validation check, the object activation fails.
Note
The activation only triggers the validation check for the server side rules. If there are any errors
on the client side, they are skipped, and the object activation goes through if no error is found
on the server side.
Note
1. For an active calculation view, you can preview the output data of an intermediate node. This helps to
debug each level of a complex calculation scenario (having join, union, aggregation, projection,
and output nodes). Choose the Data Preview option from the context menu of a node.
When you preview the data of an intermediate node, SAP HANA studio activates the intermediate
calculation model with the current user instead of the user _SYS_REPO. The data you preview for a
node is for the active version of the calculation view. If no active version of the object exists, then
activate the object first.
Next Steps
After creating a graphical calculation view, you can perform certain additional tasks to obtain the desired
output. The following table lists the additional tasks that you can perform to enrich the calculation view.
If you want to query data from two data sources and combine records from both data sources based on a join condition, or to obtain language-specific data: Create Joins
If you want to query data from database tables that contain spatial data: Create Spatial Joins
If you want to validate joins and identify whether you have maintained referential integrity: Validate Joins
If you want to combine the results of two or more data sources: Create Unions
If you want to partition the data for a set of partition columns and perform an ORDER BY SQL operation on the partitioned data: Create Rank Nodes
If you want to filter the output of projection or aggregation view nodes: Filter Output of Aggregation or Projection View Nodes
If you want to count the number of distinct values for a set of attribute columns: Create Counters
If you want to create new output columns and calculate their values at runtime using an expression: Create Calculated Columns
If you want to restrict measure values based on attribute restrictions: Create Restricted Columns
If you want to assign semantic types to provide more meaning to attributes and measures in calculation views: Assign Semantics
If you want to parameterize calculation views and execute them based on the values users provide at query runtime: Create Input Parameters
If you want to, for example, filter the results based on the values that users provide for attributes at runtime: Assign Variables
If you want to associate measures with currency codes and perform currency conversions: Associate Measures with Currency
If you want to associate measures with units of measure and perform unit conversions: Associate Measures with Unit of Measure
If you want to create level hierarchies to organize data in reporting tools: Create Level Hierarchies
If you want to create parent-child hierarchies to organize data in reporting tools: Create Parent-Child Hierarchies
If you want to group related measures together in a folder: Group Related Measures
If you want to filter the view data using either a fixed client value or a session client set for the user: Filter Data for Specific Clients
If you want to execute time travel queries on calculation views: Enable Information Views for Time Travel Queries
If you want to invalidate or remove data from the cache after specific time intervals: Invalidate Cached Content
If you want to maintain object label texts in different languages: Maintain Modeler Objects in Multiple Languages
Create graphical calculation views with star joins to join multiple dimensions with a single fact table. In other
words, you use star joins to join a central entity to multiple entities that are logically related.
Context
Star joins in calculation views help you join a fact table with dimensional data. The fact table contains data
that represent business facts such as price, discount values, and number of units sold. Dimension tables
represent different ways to organize data, such as geography, time intervals, and contact names.
Procedure
You can create star joins only in calculation views with data category Cube.
9. Choose Finish.
Modeler launches a new graphical calculation view editor with the semantics node and the star join node.
10. Add data sources.
a. Select the star join node.
b. In the context menu, choose Add Objects.
c. In the Find Data Sources dialog box, enter the name of the data source.
Note
You can only add calculation views with data category types as dimension or blank as a data source
in star join node.
By default, the last data source (first from the bottom) in the star join node is executed first. If you want to
rearrange the data sources after adding them to the star join node:
a. Select the data source.
b. In the context menu, choose Move.
c. Choose Up or Down to rearrange the data sources.
12. Add inputs to star join node.
Continue modeling the graphical calculation view with a cube structure, which includes attributes and
measures. The input to the star join node must provide the central fact table.
13. Maintain star join properties.
a. Select the Star Join node.
b. In the Details tab, create joins by selecting a column from one data source, holding the mouse button
down and dragging to a column in the central fact table.
c. Select the join.
d. In the context menu, choose Edit.
e. In the Properties section, define necessary join properties.
Note
You can assign the Right Outer or Full Outer join type only to the last data source (first from the
bottom); no other data source in the star join node can have the right outer or full outer join type.
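Conceptually, a star join enriches each fact record with attributes looked up from the dimension tables. A minimal plain-Python sketch (not SAP HANA code; all table and column names are illustrative):

```python
# Conceptual sketch of a star join: a central fact table is joined with
# several dimension tables on their key columns.
fact = [  # central fact table: dimension keys plus a measure
    {"PRODUCT_ID": 1, "REGION_ID": 10, "SALES": 100},
    {"PRODUCT_ID": 2, "REGION_ID": 10, "SALES": 200},
]
dim_product = {1: "PROD1", 2: "PROD2"}   # PRODUCT dimension: key -> name
dim_region = {10: "EUR"}                 # REGION dimension: key -> name

# Inner-join each dimension to the fact table on its key column.
result = [
    {"PRODUCT": dim_product[f["PRODUCT_ID"]],
     "REGION": dim_region[f["REGION_ID"]],
     "SALES": f["SALES"]}
    for f in fact
    if f["PRODUCT_ID"] in dim_product and f["REGION_ID"] in dim_region
]
print(result)
```

Each output row carries the measures from the fact table together with descriptive attributes from every joined dimension, which is the cube structure a star join node produces.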
Related Information
Table functions as a data source in graphical calculation views help you build calculation views for complex
calculation scenarios.
Context
Table functions in graphical calculation views are an alternative to script-based calculation views, which are
generally used for building complex calculation views.
Note
The IN function does not work in SQLScript to filter a dynamic list of values. Use the APPLY_FILTER function
instead.
Procedure
Related Information
Table functions are functions that users can query like any other database table, and you can add them as data
sources in your graphical calculation views. You can call a table function in the FROM clause of a SQL
statement.
Context
You can create and edit table functions in both the project explorer view and the repositories view of the SAP
HANA Development perspective.
Procedure
To model graphical calculation views, you can also use SAP HANA CDS entities and SAP HANA CDS views as a
data source. The CDS entity is the core artifact for defining the persistence model using the SAP HANA CDS
syntax, and CDS views are database views created using the SAP HANA CDS syntax.
Prerequisites
• You have upgraded to an SAP HANA 1.0 SPS 10 server where the version of the REST API DU (HANA_DT_BASE) is
1.3.11.
• You have the user role permission sap.hana.xs.dt.base::restapi.
SAP HANA CDS and SAP ABAP CDS are similar but not interchangeable; they are intended for use in
different development scenarios. The information in this topic relates only to SAP HANA CDS. For more
information about SAP ABAP CDS, see SAP - ABAP CDS Development User Guide in Related Information
below.
Procedure
Note
You cannot drag and drop SAP HANA CDS entities or views from the navigation pane.
4. In the Find dialog box, enter the name of the SAP HANA CDS entity or view and select it from the list.
Note
The names of SAP HANA CDS entities and views are case sensitive.
5. Choose OK.
Related Information
Considering different business scenarios and reporting use cases, SAP HANA modeler offers different view
nodes to model graphical calculation views.
The following table lists the view nodes and their descriptions.
Note
You can use data sources, union, join, projection, or aggregation view nodes and the inputs to union, join,
projection, and aggregation view nodes.
Related Information
SAP HANA supports multiple isolated databases in a single SAP HANA system. These are referred to as tenant
databases.
An SAP HANA system always has exactly one system database, used for central system administration, and
any number of tenant databases (including zero). An SAP HANA system is identified by a single system ID
(SID). Databases are identified by a SID and a database name.
In SAP HANA modeler, you can model graphical calculation views in an SAP HANA system with data sources
from any tenant database.
Note
Modeler supports adding only the catalog tables, SQL views, and graphical calculation views as data
sources from the tenant databases. These data sources must be already activated before you use them for
modeling graphical calculation views.
Prerequisites
• You have added the SAP HANA system having multiple isolated databases to the SAP HANA Systems view.
• For activating graphical calculation views with remote data sources, the _SYS_REPO user in the local database
needs a remote identity of (mapped to) a user in the remote database that has the privileges for the
remote tables. The database does not allow the administrator to add a remote identity to _SYS_REPO in any
tenant. Instead, the administrator should create a dedicated user (for example, REPO_DB1_DB2), which
you can use exclusively for privileges in the remote database. This user only needs the privileges for tables
that are used as remote data sources and does not need privileges for all the tables in the remote database that are
used in calculation views.
Note
SAP HANA modeler does not support remote session clients to filter client data. We recommend that you
use cross-client views only, and filter clients using parameters.
• You have enabled and configured cross database access. For more information, refer to the section,
Enable, and Configure Cross-Database Access in the SAP HANA Administration Guide.
SAP HANA modeler supports several data categories to classify calculation views. The following table
provides more information on each of these data categories.
Cube Calculation views with data category Cube are visible to the reporting tools and support data
analysis with multidimensional reporting.
If the data category for a graphical calculation view is set to Cube, modeler provides aggregation
as the default view node. Also, an additional Aggregation column is available that you can
use to specify the aggregation types for measures.
Dimension You can use Dimension views as data sources in other calculation views, which have data category
Cube, for multidimensional reporting purposes.
If the data category is Dimension, you cannot create measures. The output node offers only attributes,
also for numerical data types. For example, you use such calculation views to fill lists, where recurring
attribute values are not a problem, but desired instead. Another typical use case is the use as a
master data dimension in star-join calculation views.
If the data category for a graphical calculation view is set to Dimension, the default view node is a
projection.
<blank> Calculation views with data category <blank>, that is, calculation views that are not classified as
cube or dimension, are not visible to the reporting tools and do not support multidimensional
reporting.
However, you can use these calculation views as data sources in other information views, which
have data category Cube, for multidimensional reporting purposes.
If a graphical calculation view is not classified as cube or dimension, modeler provides projection
as the default view node.
Related Information
View nodes form the building blocks of information views. These view nodes help you build complex, flexible,
and robust analytic models, and each view node type possesses specialized capabilities that trigger advanced
features in the database.
This section describes the different view nodes that you can use within graphical calculation views, their
functionality, and examples of how you can use these view nodes to build calculation views and obtain the
desired output.
Related Information
Use join nodes in graphical calculation views to query data from two data sources. The join nodes help limit the
number of records or to combine records from both the data sources, so that they appear as one record in the
query results.
Procedure
Note
By default, modeler considers the data source that you add first to the join node as the left table and
the data source that you add next as the right table.
Note
If you want to add all columns from the data source to the output, in the context menu of the data
source, choose Add All To Output.
5. Create a join.
a. In the Details pane, create a join by selecting a column from one data source, holding the mouse button
down and dragging to a column in the other data source.
Note
If you want to switch the left and right tables, in the context menu of the join, select Swap Left and
Right Tables.
Related Information
Create spatial joins by using the join nodes in graphical calculation views to query data from data sources that
have spatial data.
Procedure
Note
By default, modeler considers the data source that you add first to the join node as the left table and
the data source that you add next as the right table.
Note
If you want to add all columns from the data source to the output, in the context menu of the data
source, choose Add All To Output.
5. Create a join.
a. In the Details pane, create a join by selecting a column from one data source, holding the mouse button
down and dragging to a column in the other data source.
Note
For spatial joins, you join the two database tables on columns of spatial data types.
Note
If you select Relate as the predicate value, in the Intersection Matrix value help, select a value.
Similarly, if you select Within Distance as the predicate value, in the Distance value help, select a
value. You can use a fixed value or an input parameter to provide the intersection matrix or the
distance values to modeler at runtime.
e. If you want to execute the spatial join only if the predicate condition evaluates to true, then in the
dropdown list, select True.
7. Choose OK.
Related Information
SAP HANA modeler offers spatial data types in its data model and query language for storing and accessing
geospatial data.
Geometries The term geometry means the overarching type for objects such as points, linestrings, and
polygons. The geometry type is the supertype for all supported spatial data types.
Points A point defines a single location in space. A point geometry does not have length or area. A
point always has an X and Y coordinate.
In GIS data, points are typically used to represent locations such as addresses, or geographic
features such as a mountain.
Linestrings A linestring is a geometry with a length, but without any area. ST_Dimension returns 1 for
non-empty linestrings. Linestrings can be characterized by whether they are simple or not
simple, closed or not closed. Simple means a linestring that does not cross itself. Closed
means a linestring that starts and ends at the same point. For example, a ring is an example
of a simple, closed linestring.
In GIS data, linestrings are typically used to represent rivers, roads, or delivery routes.
Multilinestrings In GIS data, multilinestrings are often used to represent geographic features like rivers or a
highway network.
Polygons A polygon defines a region of space. A polygon is constructed from one exterior bounding
ring that defines the outside of the region and zero or more interior rings, which define
holes in the region. A polygon has an associated area but no length.
In GIS data, polygons are typically used to represent territories (counties, towns, states,
and so on), lakes, and large geographic features such as parks.
Multipolygons In GIS data, multipolygons are often used to represent territories made up of multiple regions
(for example, a state with islands), or geographic features such as a system of lakes.
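A spatial join predicate such as Within Distance can be sketched in plain Python (an illustration only; SAP HANA evaluates such predicates natively on its spatial data types, and all names and coordinates below are made up):

```python
# Illustrative sketch: a "Within Distance" spatial predicate joins two
# point sets when their Euclidean distance is below a threshold.
import math

stores = [("S1", (0.0, 0.0)), ("S2", (10.0, 10.0))]      # point geometries
customers = [("C1", (1.0, 1.0)), ("C2", (50.0, 50.0))]

def within_distance(p, q, limit):
    """True if the distance between points p and q is at most limit."""
    return math.dist(p, q) <= limit

# Join each customer to every store within 5 units of distance.
pairs = [(c, s) for c, cp in customers for s, sp in stores
         if within_distance(cp, sp, 5.0)]
print(pairs)  # [('C1', 'S1')]
```

In the modeler, the distance value itself can come from a fixed value or an input parameter, as described in the spatial join procedure above.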
Temporal joins allow you to join the master data with the transaction data (fact table) based on the temporal
column values from the transaction data and the time validity from the master data.
Procedure
1. Open the analytic view or calculation view with star join node in the view editor.
2. Select the Star Join node.
The star join node must contain the master data as a data source. The input to the star join node (the data
foundation node) provides the central fact table.
3. Create a join.
Create a join by selecting a column from one data source (master table), holding the mouse button down
and dragging to a column in the other data source (fact table).
4. Select the join.
5. In the context menu, choose Edit.
6. Define join properties.
Note
For temporal joins in analytic views, you can use Inner or Referential join types only and for temporal
joins in calculation views, you can use Inner join type only.
Related Information
A temporal join indicates the time interval mapping between the master data and the transaction data for
which you want to fetch the records.
A temporal join, between two columns of the fact table and master table, is based on the date field from the
fact table and time interval (to and from fields) of the master table. The date field from the fact table is referred
to as the temporal column.
This means that the tables are joined if the temporal column values in the fact table are within the valid
time interval values from the master table. A time interval is assigned to each record in the result set, and the
records are valid for the duration of the interval to which they are assigned.
The supported data types for Temporal Column, From Column, and To Column are timestamp, date, and
integers only.
Temporal condition values in temporal joins help determine whether to include or exclude the values of the
FROM and TO date fields of the master data while executing the join condition.
The following table lists the temporal conditions and their descriptions.
Include To Exclude From This temporal condition includes the value of the To Column field and excludes the
value of the From Column field while executing the join.
Exclude To Include From This temporal condition excludes the value of the To Column field and includes the
value of the From Column field while executing the join.
Exclude Both This temporal condition excludes the value from both the To Column field and the
From Column field while executing the join.
Include Both This temporal condition includes the value from both the To Column field and From
Column field while executing the join.
Create temporal joins to join the master data with the transaction data (fact table) based on time column
values from the transaction data and the time validity columns from the master data.
For example, consider an attribute view PRODUCT (master data) with attributes PRODUCT_ID,
VALID_FROM_DATE and VALID_TO_DATE, and an analytic view SALES (transactional data) with attributes
PRODUCT_ID, DATE and REVENUE.
Now, you can create a temporal join between the master data and transaction data using the attribute
PRODUCT_ID to analyze the sales of a product for a particular time period.
For creating the temporal join, you can use the DATE attribute from the analytic view as the temporal column
and use the VALID_FROM_DATE and VALID_TO_DATE attributes from the attribute view (master data) to
specify the time period for which the record set is valid.
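The example above can be sketched in plain Python (not SAP HANA code; the data values are invented). With the Include Both temporal condition, a fact record matches a master record when FROM <= DATE <= TO:

```python
# Sketch of the temporal join described above, with dates as ISO strings
# (lexicographic comparison of ISO dates matches chronological order).
master = [  # PRODUCT master data with validity intervals
    {"PRODUCT_ID": 1, "VALID_FROM_DATE": "2023-01-01", "VALID_TO_DATE": "2023-06-30"},
    {"PRODUCT_ID": 1, "VALID_FROM_DATE": "2023-07-01", "VALID_TO_DATE": "2023-12-31"},
]
facts = [  # SALES transaction data; DATE is the temporal column
    {"PRODUCT_ID": 1, "DATE": "2023-03-15", "REVENUE": 100},
    {"PRODUCT_ID": 1, "DATE": "2023-08-01", "REVENUE": 250},
]

def temporal_join(facts, master, include_from=True, include_to=True):
    """Join facts to master on PRODUCT_ID where DATE lies in the validity
    interval; the include_* flags mirror the temporal conditions above."""
    out = []
    for f in facts:
        for m in master:
            if f["PRODUCT_ID"] != m["PRODUCT_ID"]:
                continue
            lo_ok = f["DATE"] >= m["VALID_FROM_DATE"] if include_from else f["DATE"] > m["VALID_FROM_DATE"]
            hi_ok = f["DATE"] <= m["VALID_TO_DATE"] if include_to else f["DATE"] < m["VALID_TO_DATE"]
            if lo_ok and hi_ok:
                out.append({**f, "VALID_FROM_DATE": m["VALID_FROM_DATE"]})
    return out

# Each sales record matches exactly the validity interval it falls into.
for row in temporal_join(facts, master):
    print(row["DATE"], "->", row["VALID_FROM_DATE"])
```

The `include_from`/`include_to` flags correspond to the Include Both, Include To Exclude From, Exclude To Include From, and Exclude Both conditions from the table above.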
A text join helps obtain language-specific data. It retrieves columns from a text table based on the session
language of the user.
The text tables contain descriptions for a column value in different languages. For example, consider a PRODUCT
table that contains PRODUCT_ID and a text table PRODUCT_TEXT that contains the columns PRODUCT_ID,
DESCRIPTION, and LANGUAGE.
PRODUCT
PRODUCT_ID SALES
1 1000
2 2000
3 4000
PRODUCT_TEXT
PRODUCT_ID LANGUAGE DESCRIPTION
1 E Description in English.
1 D Description in German.
2 E Description in English.
3 E Description in English.
Create a text join to join the two tables and retrieve language-specific data using the language column
LANGUAGE. For example, if your session language is E and if you have added all columns to the output of the join
view node, the output of the text join is:
Note
For text joins, always add the text table as the right table.
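The text join above can be sketched in plain Python (an illustration, not SAP HANA code): the text table (the right table) is filtered by the session language before joining, so each product row receives exactly one description:

```python
# Sketch of the text join example: filter the right (text) table by the
# session language, then join on PRODUCT_ID.
product = [{"PRODUCT_ID": 1, "SALES": 1000},
           {"PRODUCT_ID": 2, "SALES": 2000},
           {"PRODUCT_ID": 3, "SALES": 4000}]
product_text = [
    {"PRODUCT_ID": 1, "LANGUAGE": "E", "DESCRIPTION": "Description in English."},
    {"PRODUCT_ID": 1, "LANGUAGE": "D", "DESCRIPTION": "Description in German."},
    {"PRODUCT_ID": 2, "LANGUAGE": "E", "DESCRIPTION": "Description in English."},
    {"PRODUCT_ID": 3, "LANGUAGE": "E", "DESCRIPTION": "Description in English."},
]

def text_join(left, right, session_language):
    """Keep only text rows in the session language, then join by key."""
    texts = {t["PRODUCT_ID"]: t["DESCRIPTION"]
             for t in right if t["LANGUAGE"] == session_language}
    return [{**p, "DESCRIPTION": texts[p["PRODUCT_ID"]]}
            for p in left if p["PRODUCT_ID"] in texts]

# Session language E: every product gets its English description.
for row in text_join(product, product_text, "E"):
    print(row)
```

With session language E, the result contains one row per product with the English description, which matches the behavior described for the LANGUAGE column.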
When you execute a query, the engine evaluates the language setting of your connection. The texts are
selected based on the language setting.
You can set the language when you add a container in the SAP HANA database explorer. If you want to see the
English language being selected, then when adding a container, in the Advanced Options, specify LOCALE=en.
You can further specify the language like en_US. For more information, see 2364550 .
After creating a join between two data sources, you can define the join property as dynamic. Dynamic joins
help improve the join execution process by reducing the number of records processed by the join view node at
runtime.
Dynamic joins are a special type of join. In this join type, two or more fields from two data sources are joined
using a join condition that changes dynamically based on the fields requested by the client. For example,
Table1 and Table2 are joined on Field1 and Field2. But if a client requests only one of them (Field1 or Field2), the
tables (Table1 and Table2) are joined based only on the requested field (Field1 or Field2).
Note
You can set the Dynamic Join property only if the two data sources are joined on multiple columns.
Dynamic join behavior is different from the classical join behavior. In the classical join, the join condition is
static. This means that the join condition does not change, irrespective of the client query. The difference in
behavior can result in different query result sets. Use dynamic joins with caution.
At least one of the fields involved in the join condition must be part of the client query. If you define a join as
dynamic, the engine dynamically defines the join fields based on the fields requested by the client query. But if
none of the join fields is part of the client query, a query runtime error results.
• In static joins, the join condition isn't changed, irrespective of the client query.
• In a dynamic join, if the client query to the join doesn't request a join column, a query runtime error occurs.
This behavior of dynamic join is different from the static joins.
• Dynamic joins enforce aggregation before executing the join, but for static joins the aggregation happens
after the join. This means that, for dynamic joins, if a join column is not requested by the client query, its
values are first aggregated, and the join condition is then executed based on the columns requested in the
client query.
Related Information
Consider that you want to evaluate the sales of a product and also calculate the sales share of each product
using the following data sources.
SALES
REGION COUNTRY SALES
APJ IND 10
APJ IND 10
APJ CHN 20
APJ CHN 50
EUR DE 50
EUR DE 100
EUR UK 20
EUR UK 30
PRODUCT
REGION COUNTRY PRODUCT
EUR DE PROD1
EUR DE PROD2
EUR UK PROD1
EUR UK PROD2
So you use a calculation view to join the above two data sources via two different aggregation view nodes as
inputs to the join view node. The aggregation view node with the data source SALES does not have the PRODUCT
column but contains the total sales for a given region or country.
Now assume that the two aggregation view nodes join dynamically on the columns, REGION and COUNTRY.
The outputs of the join view node are columns REGION, PRODUCT, SALES and the calculated columns,
TOT_SALES, and SALES_SHARE.
When you execute a client query on the calculation view to calculate the sales share of a product at a region
level, the output from the dynamic join and static join is different:
Dynamic Join
Static Join
The dynamic join calculates the sales share at the region level by aggregating the sales values before joining the
data sources. The static join, on the other hand, first calculates the sales share at the region level and the
country level (because the join condition contains both region and country), and then aggregates the resulting
sales share after the join is executed.
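The aggregate-before-join versus aggregate-after-join difference can be demonstrated with a simplified plain-Python sketch (not SAP HANA code; for brevity, both join inputs are derived from the same SALES data, so the per-row share is a ratio of matching totals, but the aggregation order still produces different region-level results):

```python
# Simplified sketch: dynamic joins aggregate to the requested granularity
# before the join; static joins join at the modeled granularity and
# aggregate afterwards. The query below requests only REGION.
from collections import defaultdict

sales = [("APJ", "IND", 10), ("APJ", "IND", 10), ("APJ", "CHN", 20), ("APJ", "CHN", 50),
         ("EUR", "DE", 50), ("EUR", "DE", 100), ("EUR", "UK", 20), ("EUR", "UK", 30)]

def aggregate(cols):
    """Total SALES grouped by the given columns."""
    out = defaultdict(int)
    for region, country, amount in sales:
        row = {"REGION": region, "COUNTRY": country}
        out[tuple(row[c] for c in cols)] += amount
    return dict(out)

def sales_share(join_cols):
    """Join the two aggregated branches on join_cols, compute a share per
    joined row, then sum the shares per REGION (the requested column)."""
    left = aggregate(join_cols)    # measure branch
    right = aggregate(join_cols)   # TOT_SALES branch (same source here)
    shares = defaultdict(float)
    for key, amount in left.items():
        shares[key[0]] += amount / right[key]   # REGION is the first column
    return dict(shares)

# Dynamic join: the condition shrinks to REGION, aggregation happens first.
print(sales_share(["REGION"]))             # {'APJ': 1.0, 'EUR': 1.0}
# Static join: the condition keeps REGION and COUNTRY, so shares are
# computed per country and then summed per region - a different result.
print(sales_share(["REGION", "COUNTRY"]))  # {'APJ': 2.0, 'EUR': 2.0}
```

The point of the sketch is that the same client query yields different region-level numbers depending solely on whether aggregation happens before or after the join, which is the caution raised above.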
While executing the join, by default, the query retrieves join columns from the database even if you don't
specify it in the query. The query automatically includes the join columns into the SQL GROUP BY clause
without you selecting them in the query.
You can avoid this default behavior by using the join property Optimizing Join Columns. When this property for
a join is set to True, only the columns specified in the query are retrieved from the database.
Note
Optimizing join columns is supported only for left outer joins with cardinality 1:1 or N:1, text joins with
cardinality 1:1 or N:1, right outer joins with cardinality 1:1 or 1:N, and referential joins.
If the filters are defined on join columns for which you have enabled Optimize Join Columns, the join optimizer
cannot remove attributes of static filters. In this case, you can optimize the join column by introducing a
dummy projection view node between the join and the input node with static filters.
Note
Optimizing join columns is not supported for non-equi joins.
Prerequisites
Consider a scenario in which a query requests only fields from one join partner, the cardinality to the other
partner (of which no fields are requested) is set to 1, and the join is not an inner join. In such scenarios, the join
execution does not influence the result set. You can use the optimize join columns property to prune the fields
that are not requested in the query.
In general, for scenarios that have multiple join partners on different join fields, if the query requests fields from
only a small subset of join partners, you can use the optimize join columns property to omit various join fields
from the aggregation. This flag helps to explicitly state to the join optimizer to not include the join fields in the
aggregation. Omitting the join fields help reduce the number of records processed by the join optimizer.
This optimization therefore results in a better performance and lower memory consumption. The extent of
optimization depends on the fields that the query request at runtime, and on which fields you have defined the
join.
SAP HANA modeler allows you to validate the join cardinality and identify whether you have maintained the
referential integrity for the join tables.
Prerequisites
You have SELECT privileges on the catalog tables participating in the join to view the join validation status. If the
participating catalog tables are virtual tables, then you can view the join validation status only if you have
SELECT privileges on the virtual table and if your user credential for the remote source has SELECT privileges on
the remote table.
Context
While defining a join, you can validate the cardinality and identify whether you have maintained referential
integrity for the join tables. If you have chosen a cardinality that is not optimal for the join tables, modeler
recommends a cardinality after analyzing the data in the participating join tables.
Note
Choosing a valid cardinality for your data sources is necessary to avoid incorrect results from the engine,
and to achieve better performance. If you are not aware of the optimal cardinality for your join, then it is
recommended not to provide any cardinality value.
Results
After analyzing the participating tables in your join definition, in the Validation Information section of the
Validate Join dialog box, modeler recommends an optimal cardinality and also specifies whether you have
maintained referential integrity for the join tables. You can choose to modify the join definition based on this
recommendation.
Note
The cardinality that SAP HANA modeler recommends is applicable only to the current state of the system.
It becomes invalid if you perform any changes to the data or if you transport your calculation view to
another system with a different data set.
After creating a join, define its properties to obtain a desired output when you execute the join.
SAP HANA modeler allows you to define the following join properties.
Join Type The value of this property specifies the join type used for creating a join. For more
information, see Supported Join Types.
Cardinality The value of this property specifies the cardinality used for creating a join.
By default, the cardinality of the join is empty. If you are not sure about the right
cardinality for the join tables, we recommend not specifying any cardinality. Modeler
determines the cardinality when executing the join.
Language Column The value of this property specifies the language column that modeler must use for
executing text joins. For more information, see Text Joins.
Dynamic Join The value of this property determines whether modeler must dynamically define the
columns of the join condition based on the client query. For more information, see
Dynamic Joins.
Optimize Join Columns The value of this property determines whether modeler must retrieve the columns that
are not specified in the query from the database. For more information, see Optimize
Join Execution.
When creating a join between two tables, you specify the join type. The following table lists the supported join
types in SAP HANA modeler.
Inner This join type returns all rows when there is at least one match in both the database tables.
Left Outer This join type returns all rows from the left table, and the matched rows from the right
table.
Right Outer This join type returns all rows from the right table, and the matched rows from the left
table.
Referential This join type is similar to the inner join type, but assumes that referential integrity is maintained for the join tables.
Text Join This join type is used to obtain language-specific data from the text tables using a language column.
Full Outer This join type displays results from both left and right outer joins and returns all (matched or unmatched) rows from the tables on both sides of the join clause.
Note
The full outer join type is supported only in new calculation views created with SPS 11 or later.
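The join semantics listed above are standard SQL. The following sketch runs an inner and a left outer join on SQLite (not SAP HANA; table and column names are invented for the example) to show the difference between the two:

```python
# Compare INNER and LEFT OUTER join results on a small example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (id TEXT, name TEXT);
    CREATE TABLE sales (product_id TEXT, amount INTEGER);
    INSERT INTO product VALUES ('P1', 'Notebook'), ('P2', 'Monitor');
    INSERT INTO sales VALUES ('P1', 100);
""")
inner = conn.execute(
    "SELECT p.id, s.amount FROM product p "
    "JOIN sales s ON s.product_id = p.id").fetchall()
left_outer = conn.execute(
    "SELECT p.id, s.amount FROM product p "
    "LEFT OUTER JOIN sales s ON s.product_id = p.id").fetchall()
print(inner)       # only rows with a match in both tables
print(left_outer)  # all rows from the left table; NULL where unmatched
```

A right outer join is the mirror image of the left outer join, and a full outer join returns the union of both.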
Use union nodes in graphical calculation views to combine the results of two or more data sources.
Context
A union node combines multiple data sources, which can have multiple columns. You can manage the output of
a union node by mapping the source columns to the output columns or by creating a target output column with
constant values.
Procedure
Note
If you want to add all columns from the data source to the output, in the context menu of the data
source, choose Add All To Output.
If you want to assign a constant value to any of the target columns, then
a. In the Target section, select an output column.
b. In the context menu, choose Manage Mappings.
c. In the Manage Mappings dialog box, set the Source Column value as blank.
d. In the Constant Value field, enter a constant value.
e. Choose OK.
6. Create a constant output column.
If you want to create a new output column and assign a constant value to it, then
Note
Constant Column
Example: Constant Columns [page 90]
Empty Union Behavior
Prune Data in Union Nodes [page 88]
Pruning data in union nodes helps optimize the query execution. You prune data by creating and using a
pruning configuration table that specifies the filter conditions to limit the result set.
Context
To prune data in union nodes, define the pruning configuration table that the modeler must use in the view properties of the calculation view.
The modeler cannot prune the data in union nodes if the queries that you execute on the union node are unfolded and do not perform any aggregation. In such cases, you can switch off the unfolding behavior with the hint NO_CALC_VIEW_UNFOLDING. Unfolding is the normal query execution behavior, where the query execution is passed to the SQL engine or the optimizer after the calculation engine instantiates the query. However, unfolding is not possible for complex calculation views.
Procedure
Note
You can use catalog tables or repository tables or views as pruning configuration tables.
Related Information
Modeler refers to the pruning configuration table while executing queries on the union node. The pruning
configuration table that you create must have the following table structure:
Note
Note
If you have already defined filters on columns outside of the pruning configuration table, for example,
consider:
In the preceding case, U1 is not pruned because the pruning table shows that all records in U1 have the value “5” in Column1.
Similarly, consider:
Here, U1 is pruned because the pruning table shows that all records in U1 have the value “6” in Column1. However, you cannot obtain any result even if the union operation is executed.
If you create multiple filter conditions using the same column, the filter conditions are combined using the logical OR operator. If you use different columns to provide the filter conditions, the filter conditions are combined using the logical AND operator.
At runtime, the table is read with the definer privileges of the view. This means that SYS_REPO must have read access to the pruning configuration table.
Pruning data in union nodes of calculation views helps optimize query execution. The following is an example of
a pruning configuration table and a query that you can possibly execute.
a b u C1 = 2008
a b u C2 < 1998
a b u C2 > 2005
The preceding example is an equivalent of ('2000' <= C1 <= '2005' OR C1 = '2008') AND (C2 < '1998' OR C2 >
'2005')
Note
SQL queries that you use must have numerical constants enclosed within single quotes. For example, the
following query cannot be pruned:
You can prune the preceding query only if the numerical constants are enclosed within single quotes as
shown below:
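The pruning decision in the Column1 example above can be sketched as follows (an assumed simplification in Python, not HANA code): a union input is skipped when the values recorded for it in the pruning table cannot satisfy the query filter.

```python
# Illustrative sketch of union-node pruning: an input whose rows all carry
# pruning_value on the filter column can be skipped when the query asks
# for a different value, because it cannot contribute any rows.
def can_prune(pruning_value, query_value):
    return pruning_value != query_value

# Query filter: Column1 = '5'
print(can_prune("5", "5"))  # False: U1 must still be read
print(can_prune("6", "5"))  # True: U1 is pruned
```

The real modeler additionally combines multiple conditions with OR (same column) and AND (different columns), as described above.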
Constant output columns help identify the underlying data source of each row in the output by marking the rows with constant values. You can also map the unmapped source columns to a constant output column based on the business requirement.
For example, consider that you want to compare the planned sales of each quantity with its actual sales using
two data sources with similar structures, ACTUALSALES and PLANNEDSALES.
ACTUALSALES
SALES_QUANTITY PRODUCT_ID
5000 P1
3000 P2
PLANNEDSALES
SALES_QUANTITY PRODUCT_ID
4000 P1
4000 P2
When you use a union view node to combine the results of the two data sources, you cannot differentiate the
data from these data sources.
SALES_QUANTITY PRODUCT_ID
5000 P1
3000 P2
4000 P1
4000 P2
In such cases, create a constant output column PLANNED_OR_ACTUAL and assign the constant value ACTUAL
to ACTUALSALES and the constant value PLANNED to PLANNEDSALES.
SALES_QUANTITY PRODUCT_ID PLANNED_OR_ACTUAL
5000 P1 ACTUAL
3000 P2 ACTUAL
4000 P1 PLANNED
4000 P2 PLANNED
Now, you can identify the data source and its underlying data.
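The tagging that the constant column performs can be sketched in Python (an illustration of the union result above, not modeler code):

```python
# Tag each data source with a constant before the union so that rows
# stay distinguishable after the results are combined.
actual_sales = [(5000, "P1"), (3000, "P2")]
planned_sales = [(4000, "P1"), (4000, "P2")]

union = ([row + ("ACTUAL",) for row in actual_sales]
         + [row + ("PLANNED",) for row in planned_sales])

for quantity, product, planned_or_actual in union:
    print(quantity, product, planned_or_actual)
```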
This property is useful, for example, for value help queries in applications. You can select either No Row or Row
with Constant as values for the Empty Union Behavior property. Select the data source in the mapping
definition and in the Properties tab define the values for this property based on your business requirement.
Constant values A and B are defined for Projection_1 and Projection_2 using the constant column CONSTANT.
When you execute a query on calculation view with this union node, and if the column CUSTOMER_ID is not
queried, the Empty Union Behavior property for the Projection_2 data source determines whether the constant
column CONSTANT returns the constant value A for Projection_2 in the output.
• If the Empty Union Behavior property is set to No Row, no data from Projection_2 appears in the output data. In other words, only data from Projection_1 appears in the output data.
• If the Empty Union Behavior property is set to Row with Constant, the output data includes one record from Projection_2. In this record, the constant value A appears for the CONSTANT column and the values for all other columns appear as null.
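The two options can be sketched as follows (illustrative Python, not modeler code; the single-column row shape is an assumption made for the example):

```python
# Sketch of the Empty Union Behavior options: when one union input yields
# no rows for a query, either emit nothing for it ("No Row") or a single
# row carrying only the constant ("Row with Constant").
def union_output(rows, constant, behavior):
    if rows:
        return [row + (constant,) for row in rows]
    if behavior == "Row with Constant":
        return [(None, constant)]  # null for every non-constant column
    return []                      # "No Row"

print(union_output([], "A", "No Row"))             # []
print(union_output([], "A", "Row with Constant"))  # [(None, 'A')]
```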
Use rank nodes in graphical calculation views to partition the data for a set of partition columns, and perform
an order by SQL operation on the partitioned data.
Context
For example, consider a TRANSACTION table with two columns, PRODUCT and SALES. If you want to retrieve the top five products based on their sales, use a rank node. The rank node first partitions the TRANSACTION table with PRODUCT as the partition column, and then performs an order by operation on the partitioned table using the SALES column to retrieve the top five products based on sales.
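What a rank node computes can be sketched in plain Python (an illustrative variation, not modeler code: here REGION is the assumed partition column, SALES the order-by column, and the threshold is 1):

```python
# Sketch of a rank node: partition the rows, order each partition
# descending ("Descending (Top N)"), and keep the top N rows per partition.
from itertools import groupby

def rank_node(rows, partition_by, order_by, n):
    rows = sorted(rows, key=lambda r: (r[partition_by], -r[order_by]))
    result = []
    for _, group in groupby(rows, key=lambda r: r[partition_by]):
        result.extend(list(group)[:n])
    return result

transactions = [
    {"REGION": "R1", "PRODUCT": "P1", "SALES": 500},
    {"REGION": "R1", "PRODUCT": "P2", "SALES": 900},
    {"REGION": "R2", "PRODUCT": "P3", "SALES": 300},
]
top1 = rank_node(transactions, "REGION", "SALES", 1)
print([r["PRODUCT"] for r in top1])  # ['P2', 'P3']
```

"Ascending (Bottom N)" is the same operation with the sort order reversed.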
Procedure
Note
If you want to add all columns from the data source to the output, in the context menu of the data
source, choose Add All To Output.
Descending (Top N) Retrieves top N values from the ordered set where N is
the threshold value that you define.
Ascending (Bottom N) Retrieves bottom N values from the ordered set where N is the threshold value that you define.
a. In the Threshold Value value help, select a threshold value type and provide the threshold value
accordingly.
7. In the Order By dropdown list, select a column that modeler must use to perform the order by operation.
8. Partition the data.
a. In the Partition By section, choose Add.
b. In the Partition By Column dropdown list, select a partition column that modeler must use to partition
the data.
Note
You can partition the data using more than one partition column.
9. If you want to partition the data only with the partition by columns that the query requests for processing the rank node, select the Dynamic Partition Elements checkbox.
Note
If you do not select this checkbox, the modeler partitions the data with all partition columns that you have added in the Partition By section, even if they are not requested in the query that processes the rank node.
10. If you want to generate an additional output column to store the column rank value, select the Generate
Rank Column checkbox.
Related Information
Apply filters on columns in the data foundation nodes of attribute views and analytic views to filter the output of these nodes.
Context
You apply filters, for example, to retrieve the sales of a product where (revenue >= 100 AND region = India) OR
(revenue >=50 AND region = Germany). You can also define filters using nested or complex expressions.
Filters on columns are equivalent to the HAVING clause of SQL. At runtime, the modeler executes the filters
after performing all the operations that you have defined in the data foundation nodes. You can also use input
parameters to provide values to filters at runtime.
If you want to define filters on columns of the data foundation node in attribute views or analytic views:
a. Open the analytic view or attribute view in the view editor.
b. Select the Data Foundation node.
c. In the Details pane, select a column.
d. In the context menu, choose Apply Filter.
e. In the Apply Filter dialog box, select an operator.
f. In the Value field, select a fixed value or an input parameter (applicable for analytic views) from the value
help.
Note
You can also use an input parameter to apply filters. However, if you use the attribute view or analytic view in calculation views, map the input parameter to another input parameter with the same name in the calculation view. This allows you to filter the attribute view or analytic view when you execute the calculation view. If you do not map the input parameters, the modeler uses unfiltered data from the attribute or analytic views.
g. Choose OK.
Apply filters on columns in the projection or the aggregation view nodes (except the default aggregation or
projection node) to filter the output of these nodes.
Context
You apply filters, for example, to retrieve the sales of a product where (revenue >= 100 AND region = India) OR
(revenue >=50 AND region = Germany). You can also define filters using nested or complex expressions.
Filters on columns are equivalent to the HAVING clause of SQL. At runtime, the modeler executes the filters
after performing all the operations that you have defined in the aggregation or projection. You can also use
input parameters to provide values to filters at runtime.
Procedure
If you want to define filters on columns of projection or aggregation view nodes in calculation views:
a. Open the calculation view in the view editor.
Note
In the selected view node, if you are using other information views as data sources (and not tables),
then you can use only input parameters to apply filters on columns.
g. Choose OK.
2. Choose OK.
3. If you want to apply filters on columns or at the node level, use expressions.
You can create an expression in the SQL language or the column engine language to apply filters. For example, match("ABC",'*abc*') is an expression in the column engine language.
Note
For expressions in the SQL language, the modeler supports only a limited list of SQL functions.
Related Information
The following table lists the operators and their meanings, which you can use while defining filter conditions.
Not Equal To filter and show data other than the filter value
Between To filter and show data for a particular range specified in the From Value and To Value
List of Values To filter and show data for a specific list of comma-separated values
Not in list To filter and show data for values other than the ones specified. You can provide a comma-separated list of values to be excluded.
Is not NULL To filter and show data of all the rows that have non-NULL values
Less than To filter and show data with values less than the one specified as the filter value
Less than or Equal to To filter and show data with values less than or equal to the one specified as the filter value
Greater than To filter and show data with values greater than the one specified as the filter value
Greater than or Equal to To filter and show data with values greater than or equal to the one specified as the filter value
Contains Pattern To filter and show data that matches the pattern specified in the filter value. You can use the question mark '?' to substitute a single character, and the asterisk '*' to substitute many characters. For example, to filter data for continents that start with the letter A, use the Contains Pattern filter with the value A*. This shows the data for all the continents that start with A, such as Asia and Africa. The Contains Pattern filter is converted to match in the expression editor; for the given example, the corresponding filter expression is (match("CONTINENT",'A*')).
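The wildcard semantics of Contains Pattern ('?' for exactly one character, '*' for any run of characters) behave like shell-style globbing, which the following Python sketch uses to illustrate the continent example from the text:

```python
# Illustrate the Contains Pattern wildcards using Python's fnmatch,
# whose '?' and '*' semantics match the description above.
from fnmatch import fnmatch

continents = ["Asia", "Africa", "Europe", "America"]
print([c for c in continents if fnmatch(c, "A*")])    # starts with A
print([c for c in continents if fnmatch(c, "A?ia")])  # A + one char + "ia"
```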
After modeling information views based on your requirement, deploy them within SAP HANA modeler to
preview and analyze the output data. You can also view the SQL query that modeler generates for the deployed
information view.
Context
Data preview refers to visualizing the output of information views in graphical or tabular format. You can
preview output of information views within SAP HANA modeler using any of the following preview options:
In graphical calculation views, you can also preview output of any of the intermediate nodes in the view. Select
the view node and in the context menu, choose Data Preview.
Note
For data preview on intermediate nodes, you should have EXECUTE privilege on the procedure
SYS.CREATE_INTERMEDIATE_CALCULATION_VIEW_DEV.
This opens a new editor and displays output data in graphical format.
4. Preview raw data.
If you want to view all attributes along with their data in a simple table format,
If you want to analyze or edit the SQL query that modeler generates for a deployed information view,
This opens a new editor to preview the output data. In the Results tab, the SQL console displays the
SQL query that modeler generates for the deployed information view and also the equivalent output
data in simple table format.
d. If you want to modify the SQL query and view results accordingly, in the SQL tab, modify the query and
Note
You can drag information views from the navigator pane to the SQL editor to obtain an equivalent
SQL statement that represents the deployed schema name for the information view.
You can also generate the SQL query for an information view from the navigator pane (SAP HANA systems
view).
Modeler opens an SQL editor with equivalent SQL query for the selected information view.
If the information view has any variables or input parameters, modeler provides placeholders for
them in the SQL query. You can replace the placeholders with values for the input parameters and
variables.
e. If you want to modify the SQL query, enter the new query in the SQL editor and choose to view
output for the modified query.
Note
Invalidated view error: If there are inconsistencies in the runtime information (that is, calculation views in the catalog or in tables related to runtime) of an information view, you get invalidated view errors. In such cases, redeploy the view to correct the inconsistencies in the runtime information.
Related Information
Use the data-preview editor to display and inspect raw data output or to view all attributes and measures in a
graphical format.
• Raw Data
• Distinct Values
• Analysis
Raw Data Displays all attributes along with data in a table format. You can:
• Filter data. For example, define filters on columns and filter the data based on company names.
• Export data to different file formats to analyze them in other reporting tools.
Distinct Values Displays all attributes along with data in a graphical format. You can perform basic data profiling.
Analysis Displays all attributes and measures in a graphical format. You can perform advanced analysis using the labels and value axes. For example, analyze sales based on country by adding Country to the labels axis and Sales to the value axis.
Note
If you refresh data in the Analysis tab, the data modeler clears the data in the Raw Data tab. Refresh the
Raw Data tab to fetch the latest results.
Use the SQL preview editor to analyze the SQL query that the modeler generates for a deployed information view.
• Results
• SQL
SQL Displays the SQL query that the modeler generates for the deployed information view. You can:
• Filter data. For example, define filters on columns and filter the data based on company names.
• Export data to different file formats to analyze them in other reporting tools.
Results Displays the SQL console with the query. You can preview the raw data output.
Attributes and measures form content data that you use for data modeling. The attributes represent the
descriptive data such as city and country and the measures represent quantifiable data such as revenue and
quantity sold.
Information views can contain two types of columns, the measures and the attributes. Measures are columns
for which you define an aggregation. If information views are used in SQL statements, then aggregate the
measures, for example, using the SQL functions SUM(<column name>), MIN(<column name>), or
MAX(<column name>). Attributes can be handled as regular columns, as they do not need to be aggregated.
This section describes the different operations you can perform using the attributes and measures. For
example, you can create calculated attributes or calculated measures.
Related Information
If you want to count the number of distinct values for one or more attribute columns, you create counters, which are a special type of column that displays the distinct count of attribute columns.
Context
You can create counters for multiple attribute columns at a time. For example, if you create a counter for two
columns, then the counter displays the count of distinct combinations of both the columns.
You can create counters for attribute columns in the default star join node or in the default aggregation view
node only.
Procedure
Note
Set the transparent filter flag on attribute columns to True for obtaining correct counter results in the
following scenarios:
• Stacked calculation views on top of other dependent calculation views, and if you have defined
count distinct measures in the dependent views.
• Queries on main calculation views contain filter on a column that you do not want to project.
For the preceding scenarios, set the transparent filter flag to True for the filtered, nonprojected columns. The flag must be set for these columns in all nodes of the upper calculation view and in the default node of the lower dependent calculation view. This helps prevent unexpected counter results.
Related Information
Counter Properties
Example: Counters [page 103]
After creating a counter, you can view its properties or change them based on your business requirements.
Modeler displays the following properties for counters in the Semantics node.
Properties Description
Data Type The value of this property specifies the data type of the counter.
Semantic Type The value of this property specifies the semantics assigned to the counter. For more
information, see Assign Semantics [page 115].
Display Folder If the counter measure is grouped in any of the display folders, the value of this property specifies the display folder that was used to group related measures. For more information, see Group Related Measures [page 146].
Exception Aggregation Type The value of this property specifies the exception aggregation type used for creating counters. SAP HANA modeler supports only the COUNT_DISTINCT exception aggregation type for counters. This exception aggregation type counts the distinct occurrences of values for a set of attribute columns.
Hidden The value of this property determines whether the counter is hidden in reporting
tools.
Columns The attribute columns used in the counter. The modeler counts the distinct combinations of these columns in the data source.
Counters help you count the number of distinct values for one or more attribute columns.
For example, consider a business scenario where you want to count the distinct products in each region.
Consider the sales transaction table, SALES_TRANSACTION with columns PRODUCT_ID and REGION.
SALES_TRANSACTION
PRODUCT_ID REGION
P1 R1
P2 R1
P3 R2
P4 R3
P5 R4
P6 R4
P7 R1
P8 R1
P9 R2
P10 R3
P11 R4
P12 R4
Create a counter, DISTINCT_PRODUCTS, using the attributes REGION and PRODUCT_ID within an aggregation node.
After creating the counter, add the columns PRODUCT_ID and REGION to the output of the aggregation node.
When you execute the aggregation node, the output is:
REGION DISTINCT_PRODUCTS
R1 4
R2 2
R3 2
R4 4
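The counter in this example can be checked outside SAP HANA; the following Python sketch (an illustration, not modeler code) reproduces the distinct count of PRODUCT_ID per REGION for the table above:

```python
# Emulate the COUNT_DISTINCT exception aggregation: count distinct
# PRODUCT_ID values per REGION in the SALES_TRANSACTION rows.
sales_transaction = [
    ("P1", "R1"), ("P2", "R1"), ("P3", "R2"), ("P4", "R3"),
    ("P5", "R4"), ("P6", "R4"), ("P7", "R1"), ("P8", "R1"),
    ("P9", "R2"), ("P10", "R3"), ("P11", "R4"), ("P12", "R4"),
]
distinct_products = {}
for product_id, region in sales_transaction:
    distinct_products.setdefault(region, set()).add(product_id)

for region in sorted(distinct_products):
    print(region, len(distinct_products[region]))
```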
Create new output columns and calculate their values at runtime based on the result of an expression. You can use other column values, functions, input parameters, or constants in the expression.
Context
For example, you can create a calculated column DISCOUNT using the expression if("PRODUCT" =
'NOTEBOOK', "DISCOUNT" * 0.10, "DISCOUNT"). In this sample expression, you use the function if(), the
column PRODUCT and operator * to obtain values for the calculated column DISCOUNT.
You can create calculated attributes or calculated measures using attributes or measures respectively.
Note
If you want to create a calculated measure and enable client-side aggregation for the calculated measure, select the Enable client side aggregation checkbox.
This allows you to propose the aggregation that the client needs to perform on calculated measures.
9. If you want to hide the calculated column in reporting tools, select the Hidden checkbox.
10. Choose OK.
11. Provide an expression.
You can create an expression using the SQL language or the column engine language.
For example, the expression in the column engine language, if("PRODUCT" = 'NOTEBOOK', "DISCOUNT" * 0.10, "DISCOUNT"), is equivalent to: if the attribute PRODUCT equals the string 'NOTEBOOK', then return DISCOUNT multiplied by 0.10; otherwise, return the original value of the attribute DISCOUNT.
Note
You can also create an expression by dragging and dropping the expression elements, operators, and functions from the menus to the expression editor. For expressions in the SQL language, the modeler supports only a limited list of SQL functions.
After creating a calculated attribute or a calculated measure, you can view its properties or change them based
on your business requirements.
Select a calculated column in the Semantics node. Modeler displays the following properties for calculated
columns in the Properties pane.
Properties Description
Data Type The value of this property specifies the data type of the calculated attributes or calculated
measures.
Semantic Type The value of this property specifies the semantics assigned to the calculated attributes or
calculated measures. For more information, see Assign Semantics [page 115].
Hidden The value of this property determines whether the calculated column is hidden in report
ing tools.
Drill Down Enablement The value of this property determines whether the calculated attribute is enabled for drill
down in reporting tools. If it is enabled, the value of this property specifies the drill down
type. For more information, see Enable Attributes for Drilldown in Reporting Tools [page
142].
Display Folder If the calculated measure is grouped in any of the display folders, the value of this property specifies the display folder that was used to group related measures. For more information, see Group Related Measures [page 146].
Create a new measure column and calculate its value at runtime based on the result of an expression.
For example, consider a business scenario where you want to create a new calculated measure column,
PRODUCT_PROFIT_PERCENT within an aggregation node. This measure column stores the profit of a product
in percentage.
PRODUCT_ID PRODUCT_COST_PRICE PRODUCT_SALES_PRICE
P1 30000 32000
P2 32000 24000
P3 40000 41000
P4 10000 11000
P5 14000 13800
P6 18000 17000
Sample Code
(("PRODUCT_SALES_PRICE" - "PRODUCT_COST_PRICE")/"PRODUCT_COST_PRICE")*100
PRODUCT_ID PRODUCT_COST_PRICE PRODUCT_SALES_PRICE PRODUCT_PROFIT_PERCENT
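The sample expression can be checked by evaluating it directly; the following Python sketch applies the same formula to the example rows (the helper name profit_percent is invented for the illustration):

```python
# Evaluate the profit-percent expression
# (("PRODUCT_SALES_PRICE" - "PRODUCT_COST_PRICE")/"PRODUCT_COST_PRICE")*100
# for the example rows.
def profit_percent(cost_price, sales_price):
    return ((sales_price - cost_price) / cost_price) * 100

products = [
    ("P1", 30000, 32000), ("P2", 32000, 24000), ("P3", 40000, 41000),
    ("P4", 10000, 11000), ("P5", 14000, 13800), ("P6", 18000, 17000),
]
for product_id, cost, sales in products:
    print(product_id, round(profit_percent(cost, sales), 2))
```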
Create a new attribute column and calculate its value at runtime based on the results of an expression.
For example, consider a business scenario where you want to create a new calculated attribute column,
PRODUCT_SALES_RATING within an aggregation node. This attribute column stores the rating for sales of a
product as either Good Sales or Poor Sales or Average Sales based on the product quantity sold.
PRODUCT_ID PRODUCT_QUANTITY_SOLD
P1 50
P2 30
P3 20
P4 25
P5 40
P6 10
Sample Code
PRODUCT_ID PRODUCT_QUANTITY_SOLD PRODUCT_SALES_RATING
P1 50 Good Sales
P2 30 Good Sales
P3 20 Average Sales
P4 25 Average Sales
P5 40 Good Sales
P6 10 Poor Sales
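The rating logic can be reproduced in Python. The guide does not state the expression's thresholds, so the cut-offs below (>= 30 Good, >= 20 Average, otherwise Poor) are assumptions chosen to match the example output:

```python
# Sketch of the PRODUCT_SALES_RATING calculated attribute; the numeric
# thresholds are assumed, not taken from the guide.
def sales_rating(quantity_sold):
    if quantity_sold >= 30:   # assumed threshold
        return "Good Sales"
    if quantity_sold >= 20:   # assumed threshold
        return "Average Sales"
    return "Poor Sales"

for product_id, quantity in [("P1", 50), ("P2", 30), ("P3", 20),
                             ("P4", 25), ("P5", 40), ("P6", 10)]:
    print(product_id, quantity, sales_rating(quantity))
```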
Create restricted columns to restrict values of measures based on attribute restrictions. For example, you can
choose to restrict the value for the REVENUE column only for REGION = APJ, and YEAR = 2012.
Context
You can apply restrictions on measures defined in the semantics node by using any of the following
approaches:
• Apply restrictions on attribute values by using values from other attribute columns.
• Apply restriction on attribute values using expressions.
Note
For restricted columns, the modeler applies the aggregation type of the base column. You can create restricted columns in the default aggregation view node or star join node only.
Procedure
Note
You can provide a fixed value or use input parameter to provide values to the condition at runtime.
Note
You can apply restrictions using more than one attribute column.
You can create an expression using the SQL language or the column engine language. If you want to use an
expression to apply restrictions on the base measure, then:
a. Select Expression.
b. In the Language dropdown list, select an expression language.
c. In the Expression Editor, enter your expression.
Note
You can also use input parameters in your expressions to create restricted columns. For expressions in the SQL language, the modeler supports only a limited list of SQL functions.
Related Information
After creating a restricted column, you can view its properties or change them based on your business
requirements.
Select a restricted column in the Semantics node. The modeler displays the following properties for restricted columns in the Properties pane.
Properties Description
Data Type The value of this property specifies the data type of the restricted column.
Hidden The value of this property determines whether the restricted column is hidden in reporting tools.
Display Folder If the restricted measure is grouped in any of the display folders, the value of this property specifies the display folder that was used to group related measures. For more information, see Group Related Measures [page 146].
Restricted columns help you restrict measure values based on attribute restrictions.
For example, consider a business scenario where you want to create a new restricted measure column,
REGION_SALES within an aggregation node. This restricted column is used to restrict values of the measure,
QUANTITY_SOLD using the attribute, REGION.
Consider the sales transaction table, SALES_TRANSACTION with columns PRODUCT_ID, REGION, COUNTRY,
and QUANTITY_SOLD.
SALES_TRANSACTION
PRODUCT_ID REGION COUNTRY QUANTITY_SOLD
P1 Europe DE 3000
P1 Europe UK 4000
P1 Europe GR 5000
Create a restricted column for the measure QUANTITY_SOLD using the attribute restriction REGION = Europe.
After creating the restricted column, add columns, PRODUCT_ID, REGION, QUANTITY_SOLD, and the
restricted measure, REGION_SALES to the output of the aggregation node. When you execute the aggregation
node, the output is:
P1 APJ 8000 ?
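The restriction logic can be sketched in Python (an illustration, not modeler code). The APJ row below is an assumption, since the guide's input table shows only the Europe rows; as in the output above, rows outside the restriction contribute null to the restricted column:

```python
# Sketch of the REGION_SALES restricted column: QUANTITY_SOLD is carried
# over only for rows where REGION = 'Europe'; other rows yield null (None).
def region_sales(region, quantity_sold):
    return quantity_sold if region == "Europe" else None

rows = [
    ("P1", "Europe", "DE", 3000),
    ("P1", "Europe", "UK", 4000),
    ("P1", "Europe", "GR", 5000),
    ("P1", "APJ", "IN", 8000),  # assumed row; not shown in the guide's input table
]
for product_id, region, country, quantity in rows:
    print(product_id, region, quantity, region_sales(region, quantity))
```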
Context
You assign variables to attributes in information views, for example, to filter the results. At runtime, you can
provide values to variables by manually entering a value or by selecting them from the value help dialog.
Procedure
Modeler uses this attribute data to provide values in the value help dialog at runtime.
Note
If you want to use attribute data from another information view as the reference column, in View/
Table for value help dropdown list, select the information view that contains the required attribute.
b. If you want to use a hierarchy to organize the filtered data in reporting tools, in the Hierarchy dropdown
list, select a hierarchy.
Note
The hierarchy must contain the reference column of the variable at the leaf level (in level
hierarchies) or as a parent attribute (in parent-child hierarchies).
Note
If you do not provide a value to the variable at runtime and you have not selected the Is Mandatory checkbox, the modeler displays unfiltered data.
For example, you can assign variables to identify the revenue for the period 2000 to 2005 and 2012, at
runtime.
4. Provide a default value.
Provide a default value that modeler must consider as the variable value when you do not provide any value
to the variable.
a. In the Default Value section, provide default values using constant values or expressions.
Expression If you want to provide the result of an expression as the default value,
1. In the Default Value section, choose Add.
2. In the Type dropdown list, select Expression.
3. Provide the From Value or both From Value and To Value depending on the variable type
and the operator.
For example, if you are using variable type Single Value and operator Equal, then provide
just the From value.
4. In the From Value field or To Value field, choose the value help icon to open the expres
sion editor.
5. In the Expression Editor, provide a valid expression.
6. Choose OK.
For example, you can evaluate the expression date(Now()), and use the result as the default
value.
Note
If you have configured the variable to accept multiple values at runtime by selecting the Multiple Entries checkbox, then you can provide multiple default values for the variable. In the Default Value section, choose Add to add multiple default values. These values appear on the selection screen when you execute the information view.
a. In the Apply the variable filter to section, choose Add to add an attribute.
b. In the Attributes dropdown list, select an attribute.
6. Choose OK.
Type Description
Single Value Use this to filter and view data based on a single attribute value. For example, to view the sales of a
product where the month is equal to January.
Interval Use this to filter and view a specific set of data. For example, to view the expenditure of a company
from March to April.
Range Use this to filter and view data based on conditions that involve operators such as "=" (equal to), ">" (greater than), "<" (less than), ">=" (greater than or equal to), and "<=" (less than or equal to). For example, to view the sales of all products in a month where the quantity sold is >= 100.
After creating a variable, you can view its properties or change them based on your business requirement.
In the Parameters/Variables tab, select a variable. Modeler displays the following variable properties in the
Properties tab.
Properties Description
Attribute The value of this property specifies the attribute data that modeler uses to provide values in the value help at runtime.
Selection Type The value of this property specifies the variable type used for creating the variable.
Multiple Entries The value of this property specifies whether the variable is configured to support multiple values at runtime.
Assigning semantics to attributes and measures helps define the output structure of an information view. The
semantics provide meaning to attributes and measures of an information view.
Procedure
Related Information
Extract and Copy Semantics From Underlying Data Sources [page 115]
Propagate Columns to Semantics [page 117]
Supported Semantic Types for Measures [page 117]
Supported Semantic Types for Attributes [page 118]
Defining semantics for calculation views includes defining the output columns of the calculation views (its
label, its label column, its aggregation type, and its semantic type) and the hierarchies. While defining the
semantics for a calculation view, you can extract and copy the semantic definitions of columns and hierarchies
from their underlying data sources.
Context
For example, consider that you are modeling a complex calculation view with multiple underlying data sources,
and these data sources have their own semantic definitions for their columns and hierarchies. In such cases,
you can extract and copy the semantic definitions of columns and hierarchies from the underlying data sources
to define the semantics of the calculation view. Extracting and copying the semantic definitions saves you the
effort of defining the semantics of the calculation view manually.
In the Extract Semantics dialog box, modeler displays the output columns and hierarchies of underlying
data sources.
5. Select columns and column properties.
If you want to extract and copy semantic definition of columns from their underlying data sources,
a. In the Columns tab, select the columns available in the underlying data sources.
Note
If the same column is available in two or more data sources, specify the data source that modeler
must use to extract and copy the semantic definition. In the Data Sources dropdown list, select the
data source.
b. Select the checkbox of those column properties (Label, Label column, Aggregation Type, and
Semantic Type) that you want to extract and copy to the semantic definition of the calculation view.
6. Select hierarchies.
If you want to extract and copy hierarchies defined in the underlying views to the semantic definition,
a. Select the Hierarchies tab.
Note
You can extract and copy hierarchies only if the nodes in the hierarchies are available as output
columns of the calculation view.
Propagate columns from underlying view nodes to the semantics node and to other view nodes in the
joined path. In other words, you can reuse the output columns of underlying view nodes in other view nodes up
to the semantics node.
Context
Modeler allows you to propagate columns from an underlying view node to all nodes in the joined path up to the
semantics node. This helps you avoid redefining the output columns of each node when the same columns are
available in its underlying node and you also require them as output columns in the nodes above, up to the
semantics node. Propagating columns is useful in complex calculation views with many levels of view nodes.
Procedure
Note
You cannot select the default view node and propagate columns to the semantics node.
3. In the Details tab, select an output column (or target column in union nodes) that you want to propagate to
the semantics node.
Note
You can select more than one column using the CTRL key.
Results
The modeler propagates the columns you select to all view nodes up to the semantics node. If a column is
already present in any of the view nodes in the propagation path, that column is not propagated.
Client tools use semantic types to represent data in an appropriate format. The system supports the following
semantic types for attributes.
Input parameters help you parameterize information views and execute them based on the values you provide
to the input parameters at query runtime. The engine considers input parameters as the PLACEHOLDER
clause of the SQL statement.
Context
You create an input parameter at design time (while creating your information views) and provide a value to the
engine at runtime, and the engine executes the information view accordingly. For example, if you want your
information view to provide data for a specific region, then REGION is a possible input parameter. You provide a
value for REGION at runtime.
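For example, a consumer can pass the REGION value through the PLACEHOLDER clause when querying the view. The following is a sketch only; the view name, column names, and parameter name are hypothetical:

```sql
SELECT "REGION", SUM("SALES") AS "SALES"
FROM "_SYS_BIC"."pkg/CV_SALES"
  ('PLACEHOLDER' = ('$$REGION$$', 'EMEA'))
GROUP BY "REGION";
```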
Procedure
1. If you want to create an input parameter from the Semantics node, then
a. Select the Semantics node.
b. In the Details pane, choose the Parameters/Variables tab.
Input Parameter
Type: Column
Description: At runtime, modeler provides a value help with attribute data. You can choose a value from the attribute data as an input parameter value. You can also choose a hierarchy from the information view to organize the data in reporting tools, but only if the hierarchy contains the variable's reference column at the leaf level (in level hierarchies) or as a parent attribute (in parent-child hierarchies).
Next Steps:
a. In the Reference Column dropdown list, select an attribute.
b. If you want to use attribute data from another information view as the reference column, in the View/Table for value help dropdown list, select the information view that contains the required attribute.
c. If you want to use a hierarchy to organize the data in reporting tools, in the Hierarchy dropdown list, select a hierarchy.
Type: Derived from table
Description: At runtime, modeler uses the value from the table's return column as the input parameter value. This means that you need not provide any values to the input parameter at runtime. Input parameters of this type are typically used to evaluate a formula. For example, you calculate a discount for specific clients by creating an input parameter that is derived from the SALES table and the return column REVENUE, with a filter set on CLIENT_ID.
Next Steps:
a. In the Table Name dropdown list, select a table.
b. For the table you select, in the Return Column dropdown list, select a column value.
c. In the Filters section, define filter conditions to filter the values of the return column.
Type: Direct
Description: Specify the data type, length, and scale of the input parameter value that you want to use at runtime. You can also define an input parameter with semantic type Currency, Unit of Measure, or Date. For example, in currency conversions, you can specify the target currency value at runtime by creating an input parameter of type Direct with semantic type Currency.
Next Steps:
a. Optionally, in the Semantic Type dropdown list, specify the semantic type for your input parameter.
b. In the Data Type dropdown list, select the data type.
c. Provide the Length and Scale for the data type you choose.
Type: Static List
Description: At runtime, modeler provides a value help with the static list. You can choose a value from this list as an input parameter value.
Next Steps:
a. In the Data Type dropdown list, select the data type for the list values.
b. Provide the Length and Scale for the data type you choose.
Type: Derived from Procedure/Scalar function
Description: At runtime, modeler uses the value returned from the procedure or scalar function as the input parameter value.
Next Steps:
a. In the Procedure/Scalar Function textbox, provide the name of the procedure or scalar function.
Note
For input parameters of type Derived from Procedure/Scalar functions or Derived From Table, if you
want to provide a different value to the parameter at runtime (override the default value) and do not
want modeler to automatically use the value returned by the procedure, scalar function, or table as
the input parameter, select the Input Enabled checkbox. If this checkbox is enabled, then at runtime
modeler displays the value returned by the procedure or scalar function as the default value, but you
can override this value based on your requirement.
Note
You cannot configure input parameters of type Derived from table and Derived from Procedure/
Scalar functions to mandatorily accept a value or to accept multiple values at runtime.
Constant If you want to use a constant value as the default input parameter value,
1. In the Default Value section, choose Add.
2. In Type dropdown list section, select Constant.
3. In Value field, provide a constant value.
Expression If you want to use the result of an expression as the default input parameter value:
For example, you can evaluate the expression date(Now()), and use the result as the default
input parameter value at runtime.
Note
Providing multiple default constant values: If you have configured the input parameter to accept
multiple values at runtime by selecting the Multiple Entries checkbox, then you can provide
multiple default constant values for the input parameter. In the Default Value section, choose Add to
add multiple default constant values. These values appear on the selection screen when you
execute the information view.
You cannot use a combination of expressions and constants as default values for input parameters.
7. Choose OK.
Related Information
If you are creating a calculation view by using other calculation views, attribute views, or analytic views that
have input parameters or variables defined on them, then you can map the input parameters or variables of the
underlying data sources to the input parameters or variables of the calculation view that you are creating.
Context
Similarly, mapping input parameters and variables is necessary for the following scenarios:
• If you are creating a calculation view that uses external views as value help references in its
variables or input parameters, then you map the parameters or variables of the external views to the
parameters or variables of the calculation view that you are creating.
• If you are creating a calculation view and, for the attributes in its underlying data sources, you have
defined a value help view or a table that provides values to filter the attribute at runtime, then you map
the parameters or variables of the attribute's value help views to the parameters or variables of the
calculation view.
Note
Only those input parameters that you use in the dependent data sources are available for mapping.
Mapping parameters of the current view to the parameters of the underlying data sources pushes the filters
down to the underlying data sources at runtime, which reduces the amount of data transferred across
them. For value helps from external views, in addition to the parameters, you can also map variables from the
current view to the external views.
Note
If you are using an attribute view with input parameters as an underlying data source, map the attribute view
input parameters only to calculation view input parameters with the same name. For example, consider that you
have defined an attribute view GEO with a filter set on the COUNTRY column such that the filter value is an input
parameter $$IP$$. When you use this attribute view in a calculation view, define an input parameter IP with the
same name and map it to the attribute view parameter. When you perform a data preview on the calculation
view, the runtime value help for the calculation view input parameter is shown. The value selected for the
calculation view parameter serves as input for the attribute view parameter to filter the data.
Procedure
Data Sources If you are using other data sources in your calculation view and you want to map input parameters of these data sources with the input parameters of the calculation view.
Views for value help for variables/input parameters If you are using input parameters or variables that refer to external views for value help references and you want to map input parameters or variables of the external views with the input parameters or variables of the calculation view.
Views for value help for attributes If you are creating a calculation view, and for the attributes in the underlying data sources of this calculation view you have defined a value help view or a table that provides values to filter the attribute at runtime.
5. Manage mappings for the source and target’s input parameters or variables by selecting a value from the
source, holding the mouse button down and dragging to a value in the target.
Note
You cannot map input parameters defined in external views for value help references to the input
parameters of type Derived from table.
6. If you want to auto-map source and target input parameters or variables based on their names, then:
Note
If you choose Auto Map, then for every unmapped source input parameter or variable, the system
creates an input parameter or variable of the same name at the target. If you want to avoid creating
a new target parameter or variable, select a source input parameter or variable and choose Map by
Name in the context menu.
7. If you want to create a constant value at the target calculation view, then:
a. Select Create Constant.
b. Enter constant value.
c. Choose OK.
Note
If you want to map input parameters of type Derived from Table or Derived from Procedure/Scalar
function that are input enabled, then you can only map them to a constant value of the target
calculation view.
Use input parameters to parameterize the view and to obtain the desired output when you run the view.
This means that the engine uses the parameter value that users provide at runtime, for example, to evaluate
the expression defined for a calculated measure. The parameter value is passed to the engine through the
PLACEHOLDER clause of the SQL statement. A parameter can only have a single value, for example, in the
expression in("attr", $$param$$).
The table here summarizes, with some examples, the input parameter expressions at design time and the
corresponding query at runtime. For an input parameter IP_1 that accepts multiple values, the placeholder in
the query can pass the values as:
(placeholder."$$IP_1$$"=>'''test'',''test2''')
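A complete query passing several values to one input parameter through the PLACEHOLDER clause might then look like this. This is a sketch only; the view and column names are hypothetical:

```sql
SELECT "ATTR", SUM("AMOUNT") AS "AMOUNT"
FROM "_SYS_BIC"."pkg/CV_EXAMPLE"
  ('PLACEHOLDER' = ('$$IP_1$$', '''test'',''test2'''))
GROUP BY "ATTR";
```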
You use input parameters as placeholders, for example, during currency conversion, unit of measure
conversion, or in calculated column expressions. When used in formulas, the calculation of the formula is
based on the input that you provide at runtime during data preview.
The expected behavior when no value is provided for the input parameter at runtime is as follows: if no default
value is defined either, the query results in an error. It is therefore mandatory to provide a value for the input
parameter at runtime, or to assign a default value while creating the view, to avoid errors.
After creating an input parameter, you can view its properties or change them based on your business
requirement.
In the Parameters/Variables tab, select an input parameter. Modeler displays the following input parameter
properties in the Properties tab.
Properties Description
Default Value The value of this property specifies the default value that modeler uses if you do not provide any values to the input parameter at runtime.
Parameter Type The value of this property specifies the input parameter type.
Multiple Entries The value of this property specifies whether the input parameter is configured to support multiple values at runtime.
Is Mandatory The value of this property specifies whether the input parameter is configured to mandatorily accept a value at runtime.
SAP HANA modeler helps you create hierarchies to organize data in a tree structure for multidimensional reporting.
Each hierarchy comprises a set of levels with many-to-one relationships between each other, and
collectively these levels make up the hierarchical structure.
For example, a time hierarchy comprises levels such as Fiscal Year, Fiscal Quarter, Fiscal Month, and so on.
You can create the following two types of hierarchies in SAP HANA Modeler:
• Level Hierarchies
• Parent-child Hierarchies
Note
Hierarchies in attribute views are not available in a calculation view that reuses the attribute view.
In level hierarchies each level represents a position in the hierarchy. For example, a time dimension can have a
hierarchy that represents data at the month, quarter, and year levels.
Context
Level hierarchies consist of one or more levels of aggregation. Attributes roll up to the next higher level in a
many-to-one relationship, and members at this higher level roll up into the next higher level, and so on, until
they reach the highest level. A hierarchy typically comprises several levels, and you can include a single level
in more than one hierarchy. A level hierarchy is rigid in nature, and you can access the root and child nodes in a
defined order only.
Procedure
The node style determines the node ID for the level hierarchy.
Note
a. In the Sort Direction dropdown list, select a value that modeler must use to sort and display the
hierarchy members.
10. Define level hierarchy properties.
In the Advanced tab, you can define certain additional properties for your hierarchy.
a. If you want to include the values of intermediate nodes of the hierarchy in the total value of the
hierarchy's root node, in the Aggregate All Nodes dropdown list, select True. If you set the Aggregate All
Nodes value to False, modeler does not roll up the values of intermediate nodes to the root node.
Note
The value of the Aggregate All Nodes property is interpreted only by the SAP HANA MDX engine. In the
BW OLAP engine, the modeler always counts the node values. Whether you select this property
depends on the business requirement. If you are sure that there is no data posted on aggregate
nodes, set the option to False; the engine then executes the hierarchy faster.
b. In the Default Member textbox, enter a value for the default member.
This value helps modeler identify the default member of the hierarchy. If you do not provide any value,
all members of hierarchy are default members.
c. In the Orphan Nodes dropdown list, select a value.
This value helps modeler know how to handle orphan nodes in the hierarchy.
Note
If you select the Stepparent option to handle orphan nodes, enter a value (node ID) for the stepparent
node in the Stepparent text field. The stepparent node must already exist in the hierarchy at the
root level, and you must enter the node ID according to the node style that you selected for the
hierarchy. For example, if you select the node style Level Name, the stepparent node ID can be [Level2].
[B2]. The modeler assigns all orphan nodes under this node.
The value helps modeler know if it needs to add an additional root node to the hierarchy.
e. If you want the level hierarchy to support multiple parents for its elements, select the Multiple Parent
checkbox.
11. Create a Not Assigned Member, if required.
In attribute views or calculation views of type dimension, you can create a new Not Assigned Member that
captures all values in the fact table that do not have corresponding values in the master table. In level
hierarchies, the not assigned member appears at each level of the hierarchy.
By default, modeler does not provide a hierarchy member to capture such values; that is, Not
Assigned Members is disabled. You can either select Enable or Auto Assign to handle not assigned
members.
Note
Selecting Auto Assign to handle not assigned members impacts the performance of your
calculation views. Select Auto Assign with caution.
This label value appears in reporting tools to capture not assigned members.
d. If you want to drill down into this member in reporting tools, select the Enable Drilldown checkbox.
e. If you want to use null convert values to process NULL values in the fact table that do not have any
corresponding records in the master table, select the Null Value Processing checkbox.
By default, modeler uses the string _#_ as the null convert value. You can change this value in the
Name field under the Null Value Member Properties section.
f. Provide a label for the null value member.
Related Information
The node style is applicable to level hierarchies and helps modeler identify the format of the node ID, for
example, whether the node ID must comprise the level name and the node name in the reporting tools.
Level Name For this node style, the node ID comprises the level name and the node name. For example, for a fiscal hierarchy, the Level Name node style implies: MONTH.JAN
Name Only For this node style, the node ID comprises the node name only. For example, for a fiscal hierarchy, the Name Only node style implies: JAN
Name Path For this node style, the node ID comprises the node name and the names of all ancestors apart from the (single physical) root node. For example, for a fiscal hierarchy, the Name Path node style implies: FISCAL_2015.QUARTER_1.JAN
Based on your business requirements, you can define certain properties of level hierarchies. The value of these
properties determines the characteristics of the hierarchy at runtime.
In the Hierarchies tab, select a level hierarchy. Modeler displays the following hierarchy properties in the
Properties tab.
Properties Description
Aggregate All Nodes The value of this property determines whether modeler must roll up the values of intermediate nodes of the hierarchy to the root node of the hierarchy. If the value is set to True, modeler rolls up the values of intermediate nodes into the total value of the hierarchy root node.
Default Member (English) The value of this property helps modeler identify the default member of the hierarchy. If you do not provide any value, all members of the hierarchy are default members.
Root Node Visibility The value of this property helps modeler identify whether to add an additional root node to the hierarchy. For more information, see Root Node Visibility [page 134].
Node Style The value of this property specifies the node ID format of the level hierarchy. For more information, see Node Style [page 128].
In parent-child hierarchies, you use a parent attribute that determines the relationship among the view
attributes. Parent-child hierarchies have elements of the same type and do not contain named levels.
Context
Parent-child hierarchies are value-based hierarchies, and you create a parent-child hierarchy from a single
parent attribute. You can also define multiple parent-child pairs to support compound node IDs.
A parent-child hierarchy is always based on two table columns, and these columns define the hierarchical
relationships among its elements. Other examples of parent-child hierarchies are a bill of materials hierarchy
(parent and child) or an employee master (employee and manager) hierarchy.
Procedure
In the Advanced tab, you can define certain additional properties for your hierarchy.
a. If you want to include the values of intermediate nodes of the hierarchy in the total value of the
hierarchy's root node, in the Aggregate All Nodes dropdown list, select True. If you set the Aggregate All
Nodes value to False, modeler does not roll up the values of intermediate nodes to the root node.
Note
The value of the Aggregate All Nodes property is interpreted only by the SAP HANA MDX engine. In the
BW OLAP engine, the modeler always counts the node values. Whether you select this property
depends on the business requirement. If you are sure that there is no data posted on aggregate
nodes, set the option to False; the engine then executes the hierarchy faster.
This value helps modeler identify the default member of the hierarchy. If you do not provide any value,
all members of hierarchy are default members.
c. In the Orphan Nodes dropdown list, select a value.
This value helps modeler know how to handle orphan nodes in the hierarchy.
Note
If you select the Stepparent option to handle orphan nodes, then in the Node tab, enter a value (node
ID) for the stepparent. The stepparent node must already exist in the hierarchy at the root level.
The value helps modeler know if it needs to add an additional root node to the hierarchy.
e. Handle cycles in the hierarchy.
A parent-child hierarchy is said to contain cycles if the parent-child relationships in the hierarchy have
a circular reference. You can use any of the following options to define the behavior of such hierarchies
at load time.
Options Description
Break up at load time The nodes are traversed until a cycle is encountered. The cycles are broken up at load time.
Traverse completely, then break up the cycles The nodes in the parent-child hierarchy are traversed once completely and then the cycles are broken up.
f. If you want the parent-child hierarchy to support multiple parents for its elements, select the Multiple
Parent checkbox.
If you want to order and sort elements of a parent child hierarchy based on a column value,
Note
If elements in your hierarchy are changing elements (time-dependent elements), you can enable the
parent-child hierarchy as a time-dependent hierarchy. In other words, if you are creating hierarchies that
are relevant for a specific time period, then enable time dependency for such hierarchies. This helps you
display different versions of the hierarchy at runtime.
Not all reporting tools support time-dependent hierarchies. For example, time-dependent hierarchies
do not work with BI clients such as MDX or Design Studio.
a. In the Time Dependency tab, select the Enable Time Dependency checkbox.
b. In the Valid From Column dropdown list, select a column value.
c. In the Valid To Column dropdown list, select a column value.
SAP HANA modeler uses Valid From Column and Valid To Column values as the validity time for the
time dependent hierarchies.
13. If you want to use an input parameter to specify the validity of the time dependent hierarchy at runtime,
a. In the Validity Period section, select Interval.
b. In the From Date Parameter dropdown list, select an input parameter that you want to use to provide
the valid from date at runtime.
c. In the To Date Parameter dropdown list, select an input parameter that you want to use to provide the
valid to date at runtime.
14. If you want to use an input parameter to specify the key date at runtime,
a. In the Validity Period section, select Key Date.
b. In the Key Date Parameter dropdown list, select an input parameter value that you want to use to
provide key date value at runtime.
15. Create a Not Assigned Member, if required.
In attribute views or calculation views of type dimension, you can create a new Not Assigned Member that
captures all values in the fact table that do not have corresponding values in the master table.
a. Select the Not Assigned Member tab.
b. If you want to capture values in the fact tables that do not have corresponding values in the master
table, then in the Not Assigned Members dropdown list, select Enable.
By default, modeler does not provide a hierarchy member to capture such values; that is, Not
Assigned Members is disabled. You can either select Enable or Auto Assign to handle not assigned
members.
Note
Selecting Auto Assign to handle not assigned members impacts the performance of your
calculation views. Select Auto Assign with caution.
This label value appears in reporting tools to capture not assigned members.
d. If you want to drill down into this member in reporting tools, select the Enable Drilldown checkbox.
e. If you want to use null convert values to process NULL values in the fact table that do not have any
corresponding records in the master table, select the Null Value Processing checkbox.
By default, modeler uses the string _#_ as the null convert value. You can change this value in the
Name field under the Null Value Member Properties section.
f. Provide a label for the null value member.
In the Hierarchies tab, select a parent-child hierarchy. Modeler displays the following hierarchy properties in the
Properties tab.
Properties Description
Aggregate All Nodes The value of this property determines whether modeler must roll up the values of intermediate nodes of the hierarchy to the root node of the hierarchy. If the value is set to True, modeler rolls up the values of intermediate nodes into the total value of the hierarchy root node.
Default Member (English) The value of this property helps modeler identify the default member of the hierarchy. If you do not provide any value, all members of the hierarchy are default members.
Root Node Visibility The value of this property helps modeler identify whether to add an additional root node to the hierarchy. For more information, see Root Node Visibility [page 134].
Context
You can enable SQL access to shared hierarchies and query them using SQL statements at runtime. This is
necessary to obtain correct aggregation results for hierarchy nodes.
Note
Not all reporting tool support SQL access to shared hierarchies. For example, this feature does not work
with BI clients such as MDX or Design Studio.
c. Choose .
d. Choose the SQL Access tab.
e. Select the Enable SQL access checkbox.
4. If you want to enable SQL access to all shared hierarchies of the current version of the calculation view,
then:
a. Select the View Properties tab.
b. In the General section, select the Enable Hierarchies for SQL access checkbox.
Results
After you enable SQL access to shared hierarchies, modeler generates a node column and a hierarchy
expression parameter for the shared hierarchy with default names. You can use the node column to filter and
perform SQL GROUP BY operations, and use the hierarchy expression parameter to filter the hierarchy nodes
(for example, if you want to query only the child nodes of a parent-child hierarchy).
For example, the following query uses the node column to filter and perform a SQL GROUP BY operation:
Sample Code
select "HierarchyNodeColumn",
sum("Revenue") as "Revenue"
FROM "_SYS_BIC"."mini/CvSalesCubeHier" group by "HierarchyNodeColumn";
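Building on the sample above, a filter on the node column restricts the aggregation to a single hierarchy node. This is a sketch only; the node value 'EMEA' is a hypothetical member of the hierarchy:

```sql
SELECT "HierarchyNodeColumn",
       SUM("Revenue") AS "Revenue"
FROM "_SYS_BIC"."mini/CvSalesCubeHier"
WHERE "HierarchyNodeColumn" = 'EMEA'
GROUP BY "HierarchyNodeColumn";
```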
Based on your business requirement, choose to add an additional root node to the hierarchy and place all other
nodes as its descendants.
Add Root Node If Defined This is applicable only for parent-child hierarchies. Modeler adds a root node only if you have defined a root node value while creating the parent-child hierarchy.
Add Root Node The modeler adds an additional root node to the hierarchy, and all other nodes are placed as descendants of this node. Select this value if your hierarchy does not have a root node but needs one for reporting purposes. Modeler creates a root node with the technical name ALL.
Do Not Add Root Node The modeler does not add an additional root node to the hierarchy.
For orphan nodes in a hierarchy, SAP HANA modeler provides different options to handle them. For example,
you can treat orphan nodes as root nodes or treat them as errors.
Options Description
If measures in your calculation views or analytic views represent currency or unit values, associate them with
currency codes or unit of measures. This helps you display the measure values along with currency codes or
unit of measures at data preview or in reporting tools.
Associating measures with currency code or unit of measure is also necessary for currency conversion or unit
conversions respectively.
Modeler performs currency conversions based on the source currency value, target currency value, exchange
rate, and date of conversion. Similarly, it performs unit conversions based on the source unit and target unit.
Use input parameters in currency conversion and unit conversion to provide the target currency value, the
exchange rate, the date of conversion or the target unit value at runtime.
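For example, when the target currency is modeled as an input parameter, a consumer can supply it at query runtime through the PLACEHOLDER clause. This is a sketch only; the view, column, and parameter names are hypothetical:

```sql
SELECT "PRODUCT", SUM("NET_AMOUNT") AS "NET_AMOUNT"
FROM "_SYS_BIC"."pkg/CV_SALES"
  ('PLACEHOLDER' = ('$$TARGET_CURRENCY$$', 'USD'))
GROUP BY "PRODUCT";
```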
Currency conversion and unit conversion are not supported for script-based calculation views.
If measures in your calculation views or analytic views represent currency or unit values, associate them with
currency codes or unit of measures. This helps you display the measure values along with currency codes or
unit of measures at data preview or in reporting tools.
Prerequisites
You have imported the currency tables TCURC, TCURF, TCURN, TCURR, TCURT, TCURV, TCURW, and TCURX.
Context
Associating measures with currency codes is also necessary for currency conversions. For example, consider
that you want to generate a sales report for a region in a particular currency code and you have the sales data
in the database table with a different currency code. In such cases, create a calculation view by using the table
column containing the sales data in the different currency as a measure, and associate the measure with your
desired currency to perform the currency conversion. Activate the calculation view to generate the required reports.
Procedure
You can also assign semantics and perform currency conversion on measures in any of the intermediate
aggregation nodes. Select the measure in the Output pane and in the Properties tab assign a value to the
Semantic Type property.
Modeler displays the measure values with this currency code in reporting tools.
a. In the Currency field, choose the value help.
b. In the Type dropdown list, select a value.
Value Description
Fixed Associate the measure with a currency code available in the currency table TCURC.
Column Associate the measure with an attribute column available in the information view.
By default, values in SAP ERP tables are stored with a precision of 2 decimal places. Because some currencies require a different number of decimal places, modeler shifts the decimal point according to the settings in the TCURX currency table. For example, if the source currency has 0 decimal places, each value needs to be multiplied by 100, because SAP ERP systems store values using 2 decimal places.
a. If you want to enable a decimal shift for the source currency that you select, select the Decimal shift
checkbox.
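The decimal shift described above is plain arithmetic: for a currency with 0 decimal places, the 2-decimal stored value is multiplied by 10^(2 - 0). A minimal illustration, with example literals only:

```sql
-- A 0-decimal currency (for example, JPY) stored as 123.45 in a 2-decimal
-- ERP amount field actually represents 12345, so shift by 10^(2 - 0) = 100.
SELECT TO_DECIMAL(123.45 * POWER(10, 2 - 0), 15, 0) AS shifted_value
FROM DUMMY;
```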
9. Enabling conversion.
a. If you want to convert the measure value to another currency, select the Conversion checkbox.
10. Enabling rounding.
a. If you want to round the result value after currency conversion to the number of digits of the target
currency, select the Rounding checkbox.
Note
Use this feature with caution if subsequent aggregations occur on the value, to avoid accumulating rounding errors with each aggregation.
Decimal shift back is necessary if the result of the calculation view is interpreted in ABAP. The ABAP layer, by default, always executes the decimal shift. In such cases, decimal shift back helps avoid wrong numbers due to a double shift.
a. If you want to shift back the result of a currency conversion according to the decimal places that you
use for the target currency, select Decimal shift back.
12. If you have enabled conversion to convert a measure value to another currency, provide the details for the conversion.
a. In the Schema for currency conversion value help, select the required schema that has the currency
tables necessary for conversion.
b. In the Client for currency conversion value help, select the required value that modeler must use for
currency conversion rates.
Value Description
Fixed/Session Client Fixed client value or session client for currency conversions. Provide the required value in the value help.
Column Attribute column available in the calculation view to provide the client value. Select the required value from the value help.
Input Parameter Input parameter to provide the client value to modeler at runtime. Select the required input parameter from the value help.
Value Description
Fixed Select the source currency from the currency table TCURC. Provide the required value in the value help.
Column Attribute column available in the calculation view to provide the source currency value. Select the required value from the value help.
Value Description
Fixed Select the target currency from the currency table TCURC. Provide the required value in the value help.
Column Attribute column available in the calculation view to provide the target currency value. Select the required value from the value help.
Input Parameter Input parameter to provide the target currency value to modeler at runtime. Select the required input parameter from the value help.
Value Description
Fixed Select the exchange rate type from the table TCURV. Provide the required value in the value help.
Column Attribute column available in the calculation view to provide the exchange rate value. Select the required value from the value help.
Input Parameter Input parameter to provide the exchange rate value to modeler at runtime. Select the required input parameter from the value help.
Value Description
Column Attribute column available in the calculation view to provide the date for currency conversion. Select the required value from the value help.
Input Parameter Input parameter to provide the date for currency conversion to modeler at runtime. Select the required input parameter from the value help.
g. If you want to use a value from a column in the information view to specify the exchange rate, in the Exchange Rate value help, select a column value.
Note
The result currency column is not available in reporting tools. You can consume it only in other calculation views to perform further calculations.
In the Upon Conversion Failure dropdown list, select the required value that specifies how modeler must
populate data if conversion fails.
Value Description
Set to NULL Modeler sets the values for corresponding records to NULL at data preview.
Ignore Modeler displays the unconverted value for the corresponding records at data preview.
Related Information
If measures in calculation views or analytic views represent unit values, associate the measures with a unit of measure. This helps you display the measure values along with the units of measure during data preview or in reporting tools.
Prerequisites
You have imported the unit tables T006, T006D, and T006A.
Context
Associating measures with units of measure is also necessary for unit conversions. For example, if you want to convert the unit of a measure from cubic meters to barrels to perform volume calculations, associate the measure with the semantic type Quantity with Unit of Measure and perform the unit conversion.
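The same conversion can be sketched in SQL, assuming the built-in CONVERT_UNIT function is available in your revision; the schema name, client, and unit codes below are illustrative and must match the entries maintained in your T006* tables:

```sql
-- Convert 10 cubic meters to barrels using the factors in the T006 tables.
-- "SAPERP", client '000', and the unit codes 'M3'/'BBL' are assumptions.
SELECT CONVERT_UNIT(
         quantity    => 10,
         source_unit => 'M3',
         target_unit => 'BBL',
         schema      => 'SAPERP',
         client      => '000'
       ) AS converted_quantity
FROM DUMMY;
```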
Procedure
You can also assign semantics and perform unit conversion on measures in any of the intermediate
aggregation nodes. Select the measure in the Output pane and in the Properties tab assign a value to the
Semantic Type property.
7. Select a display unit.
Modeler displays the measure values with this unit in reporting tools.
Value Description
Fixed Associate the measure with a unit of measure available in the unit tables T006, T006A,
or T006D.
Column Associate the measure with an attribute column available in the calculation view.
8. If you want to convert the unit value to another unit, select the Conversion checkbox.
a. In the Schema for Unit Conversion value help, select the required schema that has the unit tables
necessary for conversion.
b. In the Client for Unit Conversion value help, select the required value that modeler must use for unit conversion factors.
Value Description
Fixed/Session Client Fixed client value or session client for unit conversion factors. Provide the required value in the value help.
Column Attribute column available in the calculation view to provide the client value. Select the required value from the value help.
Input Parameter Input parameter to provide the client value to modeler at runtime. Select the required input parameter from the value help.
Value Description
Fixed Select the source unit from the unit tables T006, T006A, or T006D. Provide the required value in the value help.
Column Attribute column available in the calculation view to provide the source unit value. Select the required value from the value help.
Value Description
Fixed Select the target unit from the unit tables T006, T006A, or T006D. Provide the required value in the value help.
Column Attribute column available in the calculation view to provide the target unit value. Select the required value from the value help.
Input Parameter Input parameter to provide the target unit value to modeler at runtime. Select the required input parameter from the value help.
The result unit column is not available in reporting tools. You can consume it only in other calculation views to perform further calculations.
In the Upon Conversion Failure dropdown list, select the required value that specifies how modeler must
populate data if conversion fails.
Value Description
Set to NULL Modeler sets the values for corresponding records to NULL at data preview.
Ignore Modeler displays the unconverted value for the corresponding records at data preview.
Related Information
By default, SAP HANA modeler allows you to drill down on attributes or calculated attributes in reporting tools. For attributes in calculation views with data category Dimension, you can drill down using flat hierarchies in MDX-based tools.
Procedure
Related Information
Enable attributes in information views for drilldown or disable them for drilldown in reporting tools.
SAP HANA modeler supports the following drilldown types for attributes in calculation views.
<blank> Attributes are not available for drilldown operations and the tool does not generate an additional flat hierarchy.
Drill Down with flat hierarchy (MDX) This drilldown option is available for attributes in calculation views (with data category Dimension) and also for attributes in attribute views.
Data lineage in SAP HANA modeler helps you visualize the origin of attributes and measures in information
views.
Context
You can use data lineage to visualize the flow of an attribute or measure within an information view. It is a useful
feature for impact analysis and to trace errors and debug them.
Procedure
Results
In the Scenario pane, modeler highlights the column, its data source, and the flow of the column from its data
source to the Semantics node.
If you are using attribute data to provide values to variables and input parameters at runtime, you can assign a value help to that attribute to use values from other attributes, which are available within the same information view, in other tables, or in other information views.
Context
For example, consider that you have defined an input parameter in calculation view CV1 using the attribute CUSTOMER_ID. If you want to provide values to the input parameter using the attribute CUSTOMER_ID of calculation view CV2, assign a value help to the attribute in CV1 with the reference column CUSTOMER_ID of CV2.
Procedure
Modeler displays attributes that are available in the selected table or information view.
Results
At runtime, modeler provides a value help that has values from the selected attribute. You can use these values
for input parameters and variables.
In an information view, you can associate an attribute or a column containing texts as a label column of another attribute or column.
Context
Based on user settings like KEY, TEXT, KEY(TEXT), or TEXT(KEY), some reporting tools display attribute or dimension values in combination with their texts. For such scenarios, in your information view, you can associate an attribute containing texts, for example, PRODUCT TEXT, as a label column to another attribute, for example, PRODUCT. In data preview, the attribute column and its label column, which contains its descriptions, appear next to each other.
Procedure
If you have created an object using the old editor (which supported the old style of description mapping), and you open it using the new editor, you see a new hidden column <attribute>.description (as an attribute). You can rename this column and use it like other attributes based on your requirements.
In your analytic views or calculation views, if you are using multiple measures and want to organize them, for example, to segregate the planned measures from the actual measures, you can create a folder and group all related measures within this folder.
Context
SAP HANA modeler allows you to create a Display Folder, which is essentially a folder that you can use to group related measures of analytic views and calculation views.
Procedure
Note
You can also create display folder from the Properties pane. For each measure, the value you provide in
its Display Folder property text field determines the name of the folder that the system creates to group
measures.
• If you want to group the measures in just one folder, enter the folder name in the Display Folder property text field.
• If you want to create a hierarchy of folders, provide values for more than one display folder separated by slashes (/) in the Display Folder property text field.
• If you want to associate a measure with multiple display folders, provide values for more than one display folder separated by semicolons (;) in the Display Folder property text field.
7. Choose OK.
SAP HANA modeler allows you to use scalar functions that convert the formats of attribute values, both internal and external values.
Context
After creating a scalar function that converts the formats of attribute values, you can assign it to attributes, input parameters, filters, variables, and more. For example, an ABAP table stores dates in YYYYMMDD format. You can use a scalar function that converts the internal value 20150305 to 2015.03.05 and use the new value for reporting purposes.
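A conversion function of this kind can be sketched as a SQLScript scalar user-defined function; the schema and function names below are illustrative:

```sql
-- Internal-to-external conversion: 20150305 -> 2015.03.05.
-- "MYSCHEMA" and the function name are assumptions.
CREATE FUNCTION "MYSCHEMA"."TO_EXTERNAL_DATE" (IN in_value VARCHAR(8))
RETURNS out_value VARCHAR(10)
LANGUAGE SQLSCRIPT AS
BEGIN
  out_value := SUBSTRING(:in_value, 1, 4) || '.'
            || SUBSTRING(:in_value, 5, 2) || '.'
            || SUBSTRING(:in_value, 7, 2);
END;
```

The matching external-to-internal function would strip the separators again, so that values entered in the client format can be compared against the stored format.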
The following are the scenarios where you can use such scalar functions:
• If you want to display attribute values in reporting tools or client tools in a specific format, and if these
values are stored in the database in a different format.
• If you want to provide values to filters, variables, or parameters in a specific format, but the database accepts formats different from the input format.
Note
This feature is currently not supported by SAP analytic client tools. You can convert attribute values
through external or customer applications that read column metadata from the
BIMC_DIMENSION_VIEW.
Procedure
If you want to format an external value, for example, values of variables or input parameters:
a. In the External to Internal Conversion Functions field, select the dropdown.
Note
You can assign two scalar functions to an attribute value, an internal to external conversion
function and an external to internal conversion function.
9. If the scalar functions preserve the order of values, for example, 20150305 as 2015.03.05, then set the Preserve Order flag.
Note
These functions are not applied in the database layer. They are used to provide hints to analytic clients on how to convert values before displaying them in the client user interface (UI).
SAP HANA modeler allows you to define certain properties for information views. The modeler refers to the
values of these properties, for example, to access the data from the database or identify how to execute the
information view.
This section describes the different calculation view properties, the possible values for each property and how
these values help modeler determine the activation or execution behavior of the information view.
For defining the view properties, select the Semantics node and define the properties in the View Properties tab.
Related Information
Deprecating an information view in SAP HANA modeler indicates that although the information view is still supported in SAP HANA modeler, we recommend not using it in other information views or in analytic privileges.
Context
As a data modeler, you can deprecate information views that you do not recommend for use in other information views, for various reasons based on your business requirements.
Procedure
Results
Modeler displays a warning in the menu bar of the view editor for information views or analytic privileges that use deprecated information views.
Filter the view data either using a fixed client value or using a session client set for the user. You can also
specify and obtain data from all clients.
Context
Filtering data based on specific client values is typically applicable to SAP application tables that have MANDT or CLIENT columns. In addition, you can apply filters to the table (data source) only if the table column meets the following conditions:
Procedure
Related Information
Associate information views in your SAP HANA system with a default client value to filter and view data at
runtime relevant to specific clients.
Procedure
Note
The following are useful SAP Notes related to default client property.
• For handling the Default Client property after SPS 07 servers, see SAP Note 0002079087.
• For the Default Client property behavior for calculation views in SPS 07 and SPS 06 servers, see SAP Note 0002079551.
• For the impact on the Default Client property in currency conversion definitions (for SPS 07 servers and before), see SAP Note 0002079554.
Assign a default client to a calculation view and filter data at runtime based on the default client value. The
following table lists the default client value types you can assign and their description.
<blank> If you do not set any default client value, the tool does not filter the table data against any
client and you see values relevant to all clients.
Session Client If you use session client as the default client value, then at runtime, the tool filters the table
data according to the value you specify as the session client in the user profile.
Fixed Client If you want to use a fixed client value, for example, 001, then the tool filters the table data for
this client value.
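The session client that the Session Client option reads is part of the user profile; it can be set and inspected in SQL. The user name below is illustrative:

```sql
-- Set the session client in the user profile; at runtime the view then
-- filters MANDT/CLIENT columns on '001'. "MODELING_USER" is a placeholder.
ALTER USER MODELING_USER SET PARAMETER CLIENT = '001';

-- Inspect the session client of the current session.
SELECT SESSION_CONTEXT('CLIENT') FROM DUMMY;
```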
Time travel queries are queries against the historical states of the database. When you execute a time travel query on your information view, you can query the data as it was at a specified time in the past.
Context
If you have enabled time travel for information views, you can view data for a specific time in the past using the
AS OF SQL extension. For example, you can execute the following SQL statement on information views as a
timestamp query:
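A timestamp query of this kind takes roughly the following shape; the view name and timestamp are placeholders, and the exact AS OF syntax may vary by revision:

```sql
-- Query the state of the view as of a point in time in the past.
-- The package/view name is illustrative.
SELECT * FROM "_SYS_BIC"."mypackage/MY_CALC_VIEW"
AS OF UTCTIMESTAMP '2015-03-05 10:00:00';
```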
SAP HANA supports creating history tables, which allow you to track changes made to other database tables. These tables help you associate time-related information with your data. For example, you can use a HISTORY table to track changes performed on the CUSTOMER table. When you use history tables as data sources in calculation views, specify an input parameter that you can use to provide the timestamp at runtime, and execute time travel queries on calculation views with history tables.
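A history table like the CUSTOMER example above is created with the HISTORY keyword; the table and column definitions below are illustrative:

```sql
-- Column table with an implicit history, enabling time travel queries
-- against its past states. Table and columns are placeholders.
CREATE HISTORY COLUMN TABLE CUSTOMER (
  CUSTOMER_ID   INTEGER PRIMARY KEY,
  CUSTOMER_NAME NVARCHAR(100)
);
```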
Procedure
Note
Use input parameters with data type DATE, SECONDDATE, TIMESTAMP, or VARCHAR(8) of semantic type Date to specify the timestamp.
To maintain the relevance of the cached data for your calculation views, the modeler supports cache invalidation.
Prerequisites
You have enabled caching support for your SAP HANA system.
Context
The system invalidates or removes the data from the cache after specific time intervals or when the underlying data is modified. Time-based cache invalidation refreshes the data after every specified time period. By default, the cache invalidation period is null, which means that the result of the complex query that you execute resides in the cache until you execute the next query. Similarly, if you set your cache invalidation period to one hour, the result of the query resides in the cache for one hour, and the system does not clear the cache for any other queries that you execute within this period.
Note
Cache invalidation is applicable only to complex SQL queries, which you execute for your calculation views.
Procedure
In the dropdown list, the values Hourly and Daily are for time-based cache invalidation, and the value Transactional invalidates the cache when the underlying data is modified.
Related Information
Enable cache invalidation for your SAP HANA system to invalidate or remove data from the cache after specific
time intervals or when underlying data is modified.
Context
You enable support for cache invalidation on your SAP HANA system. This action, by default, enables cache
invalidation support for all views in the system.
Procedure
Note
You can also enable cache invalidation support for specific information views. Open the information
view in the view editor, and in the View Properties tab, select the Cache checkbox.
You can enable or disable the result set caching using the SAP HANA database explorer.
Use the following SQL queries to enable or disable caching in the system configuration.
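A configuration toggle of this kind typically follows the pattern below; the section and parameter names for the result cache are assumptions to verify against the configuration reference for your revision:

```sql
-- Enable the result cache (section/parameter names are assumptions).
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('cache', 'resultcache_enabled') = 'true' WITH RECONFIGURE;

-- Disable it again.
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('cache', 'resultcache_enabled') = 'false' WITH RECONFIGURE;
```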
SAP HANA modeler supports maintaining object label texts in different languages. For each object label, in addition to the default language text, you can choose additional languages in which to maintain the label.
Prerequisites
You have enabled the Translate property for the information view.
Note
If you deselect the checkbox and save your information view, modeler deletes all existing language texts in the repository text tables.
Context
In a multigeographical business environment, it is necessary to have the flexibility to view object label texts in different languages in reporting tools. For example, as a modeler, if you create labels in the default language English, you can choose an additional language, Chinese, and maintain the equivalent Chinese text for the same label. At runtime, analysts can select the Chinese language and view the labels in Chinese.
Note
You can maintain object labels in multiple languages at the same time.
Modeler loads all the objects and their existing labels in the dialog.
c. If you want to filter and view labels for specific objects, select the objects in the Show dropdown list.
d. In the Translated Label column, enter the translated text value for labels.
e. Choose Save to update changes in active repository text tables.
Note
Maintain labels in multiple languages. In the Language dropdown list, select other languages and follow
the same procedure to maintain labels in different languages at the same time.
Considering different business scenarios, SAP HANA modeler allows you to define certain properties for
information views. The value of these properties determines the characteristics of the view at runtime.
When you are modeling your information views, in the View Properties tab of the Semantics node, SAP HANA
modeler allows you to define the following properties.
Properties Description
Data Category The value of this property determines whether your calculation view supports analysis with multidimensional reporting. For more information, see Supported Data Categories for Information Views [page 71].
Default Client The value of this property determines whether modeler must filter data for a fixed client, a session client, or a cross client (does not filter data). For more information, see Filter Data for Specific Clients [page 150].
Apply Privileges The value of this property specifies the analytic privilege type selected for data access restrictions on the calculation view. For more information, see Defining Data Access Privileges [page 162].
Default Schema The value of this property helps modeler identify the default schema, which contains the tables necessary for currency or unit conversions. For more information, see Using Currency and Unit of Measure Conversions [page 135].
Default Member The value of this property helps modeler identify the default member for all hierarchies in the information views.
Enable History The value of this property determines whether your calculation view supports time travel queries. For more information, see Enable Information Views for Time Travel Queries [page 152].
History Input Parameter Input parameter used to specify the timestamp in time travel queries.
Deprecate The value of this property determines whether an information view is recommended for use in other modeler objects. If the value is set to True, it indicates that although the information view is supported in SAP HANA modeler for modeling activities, it is not recommended for use. For more information, see Deprecate Information Views [page 149].
Translate The value of this property determines whether SAP HANA modeler must support maintaining object label texts in the information view in multiple languages. For more information, see Maintain Modeler Object Labels in Multiple Languages [page 155].
Execute In The value of this property impacts the output data. It determines whether modeler must execute the calculation view in the SQL engine or the column engine. For more information, see SAP Note 1857202.
Cache The value of this property determines whether you have enabled support for cache invalidation. For more information, see Enable Support for Cache Invalidation [page 154].
Cache Invalidation Period The value of this property impacts the output data. It determines whether modeler must invalidate or remove the cached content based on a time interval or when any of the underlying data is changed. For more information, see Invalidate Cached Content [page 153].
Pruning Configuration Table The value of this property determines the pruning configuration table that modeler must use to prune data in union nodes. For more information, see Prune Data in Union Nodes [page 88].
Propagate Instantiation to SQL The value of this property helps modeler identify whether it has to propagate the instantiation handled by the calculation engine to the CDS or SQL views built on top of this calculation view. If the value is set to True, modeler propagates the instantiation to the CDS or SQL views. This means that attributes that a query (on a SQL view built on top of this view) does not request are pruned and not considered at runtime. For information on the calculation engine instantiation process, see SAP Note 1764658.
Analyticview Compatibility Mode The value of this property helps the join engine identify whether it has to ignore joins with N:M cardinality when executing the join. If the value of this property is set to True, the join engine prunes N:M cardinality joins if the left table or the right table in the star join node does not request any field, and if no filters are defined on the join.
Count Star Column The value of this property is set to row.count in calculation views that were created by migrating analytic views having the row.count column. The row.count column was used internally to store the result of SELECT COUNT(*) queries. You can also select a column from the calculation view as Count Star Column. In this case, the column you select is used to store the result of SELECT COUNT(<column_name>).
Properties Description
Data Category The value of this property determines whether your calculation view supports analysis with multidimensional reporting. For script-based calculation views, modeler supports only data categories of type Cube or blank. For more information, see Supported Data Categories for Information Views [page 71].
Default Client The value of this property determines whether modeler must filter data for a fixed client, a session client, or a cross client (does not filter data). For more information, see Filter Data for Specific Clients [page 150].
Apply Privileges The value of this property specifies the analytic privilege type selected for data access restrictions on the calculation view. For more information, see Defining Data Access Privileges [page 162].
Default Schema The value of this property helps modeler identify the default schema, which contains the tables used in the script-based calculation view.
Deprecate The value of this property determines whether an information view is recommended for use in other modeler objects. If the value is set to True, it signifies that the information view is supported in SAP HANA modeler, but is not recommended for use. For more information, see Deprecate Information Views [page 149].
Translate The value of this property determines whether SAP HANA modeler must support maintaining object label texts in the information view in multiple languages. For more information, see Maintain Modeler Object Labels in Multiple Languages [page 155].
Enable History The value of this property determines whether your calculation view supports time travel queries. For more information, see Enable Information Views for Time Travel Queries [page 152].
History Input Parameter Input parameter used to specify the timestamp in time travel queries.
Run With The value of this property helps modeler identify the authorization it has to use while selecting the data from the database and executing the calculation view or procedure. If the property is set to Definer's rights, modeler uses the authorizations of the user who defined the view or procedure. Similarly, if the property is set to Invoker's rights, modeler uses the authorizations of the current user to access data from the database.
Cache The value of this property determines whether you have enabled support for cache invalidation. For more information, see Enable Support for Cache Invalidation [page 154].
Cache Invalidation Period The value of this property impacts the output data. It determines whether modeler must invalidate or remove the cached content based on a time interval or when any of the underlying data is changed. For more information, see Invalidate Cached Content [page 153].
Properties Description
Data Category The value of this property determines whether your information view supports analysis with multidimensional reporting. For analytic views, modeler supports only data categories of type Cube or blank. For more information, see Supported Data Categories for Information Views [page 71].
Default Client The value of this property determines whether modeler must filter data for a fixed client, a session client, or a cross client (does not filter data). For more information, see Filter Data for Specific Clients [page 150].
Apply Privileges The value of this property specifies the analytic privilege type selected for data access restrictions on the information view. For more information, see Defining Data Access Privileges [page 162].
Default Schema The value of this property helps modeler identify the default schema used for currency or unit conversions. For more information, see Using Currency and Unit of Measure Conversions [page 135].
Deprecate The value of this property determines whether an information view is recommended for use in other modeler objects. If the value is set to True, it signifies that the information view is supported in SAP HANA modeler, but is not recommended for use. For more information, see Deprecate Information Views [page 149].
Translate The value of this property determines whether SAP HANA modeler must support maintaining object label texts in the information view in multiple languages. For more information, see Maintain Modeler Object Labels in Multiple Languages [page 155].
Enable History The value of this property determines whether your information view supports time travel queries. For more information, see Enable Information Views for Time Travel Queries [page 152].
History Input Parameter Input parameter used to specify the timestamp in time travel queries.
Cache The value of this property determines whether you have enabled support for cache invalidation. For more information, see Enable Support for Cache Invalidation [page 154].
Cache Invalidation Period The value of this property impacts the output data. It determines whether modeler must invalidate or remove the cached content based on a time interval or when any of the underlying data is changed. For more information, see Invalidate Cached Content [page 153].
Allow Relational Optimization The value of this property determines whether the engine must perform relational query optimizations. If set to True, the engine performs relational optimizations, for example, optimizing stacked SQL statements. This property can impact the results of counters and SELECT COUNT queries.
Generate Concat Attributes The value of this property determines whether modeler must generate additional concat attributes to improve the performance of multiple-column joins. If set to True, modeler generates additional concat attributes for the columns involved in the multiple-column joins of physical tables.
Properties Description
Data Category The value of this property determines whether your information view supports analysis with multidimensional reporting. For attribute views, modeler supports only the data category Dimension. For more information, see Supported Data Categories for Information Views [page 71].
Default Client The value of this property determines whether modeler must filter data for a fixed client, a session client, or a cross client (does not filter data). For more information, see Filter Data for Specific Clients [page 150].
Type The value of this property specifies the attribute view type.
Base Attribute View The value of this property specifies the base attribute view used in the derived attribute view type.
Apply Privileges The value of this property specifies the analytic privilege type selected for data access restrictions on the information view. For more information, see Defining Data Access Privileges [page 162].
Default Member The value of this property helps modeler identify the default member for all hierarchies in the information views.
Deprecate The value of this property determines whether an information view is recommended for use in other modeler objects. If the value is set to True, it signifies that the information view is supported in SAP HANA modeler, but is not recommended for use. For more information, see Deprecate Information Views [page 149].
Translate The value of this property determines whether SAP HANA modeler must support maintaining object label texts in the information view in multiple languages. For more information, see Maintain Modeler Object Labels in Multiple Languages [page 155].
Enable History The value of this property determines whether your information view supports time travel queries. For more information, see Enable Information Views for Time Travel Queries [page 152].
History Input Parameter Input parameter used to specify the timestamp in time travel queries.
Cache The value of this property determines whether you have enabled support for cache invalidation. For more information, see Enable Support for Cache Invalidation [page 154].
Cache Invalidation Period The value of this property impacts the output data. It determines whether modeler must invalidate or remove the cached content based on a time interval or when any of the underlying data is changed. For more information, see Invalidate Cached Content [page 153].
Generate Concat Attributes The value of this property determines whether modeler must generate additional concat
attribute to improve the performance of multiple column joins. If set to True, then mod
eler generates additional concat attributes for those columns involved in the multiple
column joins of physical tables.
This section describes how to create analytic privileges and assign them to different users to provide selective
data access control to activated information views.
Analytic privileges grant different users access to different portions of data in the same view based on their business role. Within the definition of an analytic privilege, the conditions that control which data users see are either contained in an XML document or defined using SQL.
Standard object privileges (SELECT, ALTER, DROP, and so on) implement coarse-grained authorization at object level only. Users either have access to an object, such as a table, view, or procedure, or they don't. While this is often sufficient, there are cases when access to data in an object depends on certain values or combinations of values. Analytic privileges are used in the SAP HANA database to provide such fine-grained control, at row level, of which data individual users can see within the same view.
Example
Sales data for all regions is contained within one analytic view. However, regional sales managers should
only see the data for their region. In this case, an analytic privilege could be modeled so that they can all
query the view, but only the data that each user is authorized to see is returned.
SAP HANA modeler supports two types of analytic privileges: classical XML-based analytic privileges and SQL-based analytic privileges.
Before you implement row-level authorization using analytic privileges, decide which type of analytic privilege
is suitable for your scenario. In general, SQL-based analytic privileges allow you to more easily formulate
complex filter conditions that might be cumbersome to model using XML-based analytic privileges.
The following are the main differences between XML-based and SQL-based analytic privileges:
• Attribute views
• Analytic views
• Calculation views
Design-time modeling in the Editor tool of the SAP HANA Web-based Development Workbench: Yes (XML-based) / Yes (SQL-based)
All column views modeled and activated in the SAP HANA modeler and the SAP HANA Web-based
Development Workbench automatically enforce an authorization check based on analytic privileges. XML-
based analytic privileges are selected by default, but you can switch to SQL-based analytic privileges.
Column views created using SQL must be explicitly registered for such a check by passing the relevant parameter. SQL views must always be explicitly registered for an authorization check based on analytic privileges by passing the STRUCTURED PRIVILEGE CHECK parameter.
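As a sketch (schema, view, and column names are hypothetical), an SQL view can be registered for this check when it is created:

```sql
-- Hypothetical schema, view, and column names. The WITH STRUCTURED
-- PRIVILEGE CHECK clause registers the view for an authorization
-- check based on analytic privileges.
CREATE VIEW "MYSCHEMA"."V_SALES" AS
  SELECT "REGION", "AMOUNT"
  FROM "MYSCHEMA"."SALES"
  WITH STRUCTURED PRIVILEGE CHECK;
```

Without this clause, the SQL view is protected only by standard object privileges.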
Note
It is not possible to enforce an authorization check on the same view using both XML-based and SQL-based
analytic privileges. However, it is possible to build views with different authorization checks on each other.
Related Information
Create analytic privileges for information views and assign them to different users to provide selective access based on certain combinations of data.
Prerequisites
If you want to use a classical XML-based analytic privilege to apply data access restrictions on information
views, set the Apply Privileges property for the information view to Classical Analytic Privileges.
Context
Analytic privileges help restrict data access to information views based on attributes or procedures. You can
create and apply analytic privileges for a selected group of models or apply them to all models across
packages.
After you create analytic privileges, assign them to users. This restricts users to accessing data only for certain combinations of dimension attributes.
Procedure
Use attributes from the secured models to define data access restrictions.
Note
Select a model if you want to use all attributes from the model to define restrictions.
c. Choose OK.
11. Define attribute restrictions
Modeler uses the restrictions defined on the attributes to restrict data access. Each attribute restriction is
associated with only one attribute, but can contain multiple value filters. You can create more than one
attribute restriction.
If you have enabled SQL access for calculation views, the modeler generates a node column. You can use the node column to filter and to perform SQL group by operations. For analytic privileges, you can maintain a filter expression using this node column.
For example, if the node column is SalesRepHierarchyNode for a parent-child hierarchy, then you can create a hierarchical analytic privilege for a calculation view that filters the subtree of the node at runtime: "SalesRepHierarchyNode" = 'MAJESTIX'
Note
You can create hierarchical analytic privileges only if all secured models are shared dimensions
used in star join calculation views and if the view property of the calculation views is enabled for
SQL access.
Note
Activate the analytic privilege only if you have defined at least one restriction on attributes in the
Associated Attributes Restrictions section.
Related Information
Analytic privileges are intended to control read-only access to SAP HANA information models (attribute views, analytic views, and calculation views).
The attribute restriction of an analytic privilege specifies the value range that the user is permitted to access
using value filters. In addition to static scalar values, stored procedures can be used to define filters. This allows user-specific filter conditions to be determined dynamically at runtime, for example, by querying specified tables or views. As a result, the same analytic privilege can be applied to many users, while the filter values for authorization can be updated and changed independently in the relevant database tables.
After activation, an analytic privilege must be assigned to a user before it takes effect. The user sees the filtered data based on the restrictions defined in the analytic privilege. If no analytic privilege applicable to the models is assigned to a user, he or she cannot access the model. If a user is assigned multiple analytic privileges, the privileges are combined with OR conditions.
Remember
In addition to the analytic privileges, a user needs SQL Select privileges on the generated column views.
For a view “MyView” in package “p1.p2” (that is, subpackage p2 of package p1), the generated column view lies
in the deployment schema. Ensure that the users who are allowed to see the view have select privileges on the
view (or the entire deployment schema).
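For example (privilege and user names are hypothetical; the view name follows the example above), both grants can be issued as follows:

```sql
-- Hypothetical names: AP_SALES_REGION is an activated analytic
-- privilege, REPORT_USER is the consuming database user.
GRANT STRUCTURED PRIVILEGE "AP_SALES_REGION" TO REPORT_USER;
-- SELECT on the generated column view (or the entire deployment schema):
GRANT SELECT ON "_SYS_BIC"."p1.p2/MyView" TO REPORT_USER;
```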
Note
Multiple restrictions applied on the same column are combined by OR. However, restrictions across several
columns are always combined by AND.
An analytic privilege consists of a set of restrictions against which user access to a particular attribute view,
analytic view, or calculation view is verified. In an XML-based analytic privilege, these restrictions are specified
in an XML document that conforms to a defined XML schema definition (XSD).
Note
As objects created in the repository, XML-based analytic privileges are deprecated as of SAP HANA SPS
02. For more information, see SAP Note 2465027.
Each restriction in an XML-based analytic privilege controls the authorization check on the restricted view
using a set of value filters. A value filter defines a check condition that verifies whether or not the values of the
view (or view columns) qualify for user access.
• View
• Activity
• Validity
• Attribute
The following operators can be used to define value filters in the restrictions.
Note
The activity and validity restrictions support only a subset of these operators.
All of the above operators, except IS_NULL and NOT_NULL, accept empty strings ("") as filter operands. IS_NULL and NOT_NULL do not allow any input value. Empty strings can be used as follows:
• For the IN operator: IN ("", "A", "B") to filter on these exact values
• As a lower limit in comparison operators, such as:
  • BT ("", "XYZ"), which is equivalent to NOT_NULL AND LE "XYZ"
  • GT "", which is equivalent to NOT_NULL
• LE "", which is equivalent to EQ ""
• LT "", which will always return false
• CP "", which is equivalent to EQ ""
The filter condition CP "*" will also return rows with an empty string as the value in the corresponding attribute.
View Restriction
This restriction specifies to which column views the analytic privilege applies. It can be a single view, a list of
views, or all views. An analytic privilege must have exactly one cube restriction.
Example
IN ("Cube1", "Cube2")
Note
When an analytic view is created in the SAP HANA modeler, automatically generated views are included in the cube restriction.
Note
The SAP HANA modeler uses a special syntax to specify the cube names in the view restriction:
_SYS_BIC:<package_hierarchy>/<view_name>
For example:
<cubes>
<cube name="_SYS_BIC:test.sales/AN_SALES" />
<cube name="_SYS_BIC:test.sales/AN_SALES/olap" />
</cubes>
Activity Restriction
This restriction specifies the activities that the user is allowed to perform on the restricted views, for example,
read data. An analytic privilege must have exactly one activity restriction.
Example
EQ "read", or EQ "edit"
Currently, all analytic privileges created in the SAP HANA modeler are automatically configured to restrict access to the READ activity only. This corresponds to SQL SELECT queries. This is because attribute, analytic, and calculation views are read-only views. This restriction is therefore not configurable.
Validity Restriction
This restriction specifies the validity period of the analytic privilege. An analytic privilege must have exactly one
validity restriction.
Example
GT 2010/10/01 01:01:00.000
Attribute Restriction
This restriction specifies the value range that the user is permitted to access. Attribute restrictions are applied
to the actual attributes of a view. Each attribute restriction is relevant for one attribute, which can contain
multiple value filters. Each value filter represents a logical filter condition.
Note
The SAP HANA modeler uses different ways to specify attribute names in the attribute restriction
depending on the type of view providing the attribute. In particular, attributes from attribute views are
specified using the syntax "<package_hierarchy>/<view_name>$<attribute_name>", while local
attributes of analytic views and calculation views are specified using their attribute name only. For example:
<dimensionAttribute name="test.sales/AT_PRODUCT$PRODUCT_NAME">
<restrictions>
<valueFilter operator="IN">
<value value="Car" />
<value value="Bike" />
</valueFilter>
</restrictions>
</dimensionAttribute>
• A static value filter consists of an operator and either a list of values as the filter operands or a single value as the filter operand. All data types are supported except the LOB data types (CLOB, BLOB, and NCLOB).
For example, a value filter (EQ 2006) can be defined for an attribute YEAR in a dimension restriction to
filter accessible data using the condition YEAR=2006 for potential users.
Note
Only attributes, not aggregatable facts (for example, measures or key figures) can be used in
dimension restrictions for analytic views.
It is possible to combine static and dynamic value filters as shown in the following example.
Example
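The combination can be sketched as follows. This is a hypothetical illustration only: the element and attribute names follow the pattern of the earlier XML snippets in this section but are not taken from the official XSD, and the procedure name is invented.

```xml
<!-- Hypothetical sketch: a static value filter combined with a dynamic,
     procedure-based value filter on the same attribute. Names are
     illustrative, not from the official XSD. -->
<dimensionAttribute name="YEAR">
  <restrictions>
    <valueFilter operator="EQ" value="2006" />
    <valueFilter operator="IN" procedureName="_SYS_BIC:test.sales/GET_YEARS_FOR_USER" />
  </restrictions>
</dimensionAttribute>
```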
An analytic privilege can have multiple attribute restrictions, but it must have at least one attribute restriction.
An attribute restriction must have at least one value filter. Therefore, if you want to permit access to the whole
content of a restricted view, then the attribute restriction must specify all attributes.
Similarly, if you want to permit access to the whole content of the view with the corresponding attribute, then
the value filter must specify all values.
The SAP HANA modeler automatically implements these two cases if you do not select either an attribute
restriction or a value filter.
Example
<dimensionAttributes>
<allDimensionAttributes/ >
</dimensionAttributes>
Example
<dimensionAttributes>
<dimensionAttribute name="PRODUCT">
<all />
</dimensionAttribute>
</dimensionAttributes>
The result of user queries on restricted views is filtered according to the conditions specified by the analytic
privileges granted to the user as follows:
• Multiple analytic privileges are combined with the logical operator OR.
• Within one analytic privilege, all attribute restrictions are combined with the logical operator AND.
• Within one attribute restriction, all value filters on the attribute are combined with the logical operator OR.
Example
You create two analytic privileges AP1 and AP2. AP1 has the following attribute restrictions:
• Restriction R11 restricting the attribute Year with the value filters (EQ 2006) and (BT 2008, 2010)
• Restriction R12 restricting the attribute Country with the value filter (IN ("USA", "Germany"))
Given that multiple value filters are combined with the logical operator OR and multiple attribute restrictions
are combined with the logical operator AND, AP1 generates the condition:
((Year = 2006) OR (Year BT 2008 and 2010)) AND (Country IN ("USA", "Germany"))
AP2 has the following attribute restriction:
• Restriction R21 restricting the attribute Country with the value filter (EQ "France")
AP2 therefore generates the condition:
(Country = "France")
Any query of a user who has been granted both AP1 and AP2 will therefore be appended with the following
WHERE clause:
(((Year = 2006) OR (Year BT 2008 and 2010)) AND (Country IN ("USA", "Germany"))) OR
(Country = "France")
Related Information
The attribute restriction of an XML-based analytic privilege specifies the value range that the user is permitted to access using value filters. In addition to static scalar values, stored procedures can be used to define filters. By using stored procedures to define filters, user-specific filter conditions can be determined dynamically at runtime, for example, by querying specified tables or views. As a result, the same analytic privilege can be applied to many users, while the filter values for authorization can be updated and changed independently in the relevant database tables.
Procedures used to define filter conditions must have the following properties:
In static value filters, it is not possible to specify NULL as the operand of the operator. The operators IS_NULL
or NOT_NULL must be used instead. In dynamic value filters where a procedure is used to determine a filter
condition, NULL or valid values may be returned. The following behavior applies in the evaluation of such cases
during the authorization check of a user query:
Filter conditions of operators with NULL as the operand are disregarded, in particular the following:
• The BT operator has as input operands a valid scalar value and NULL, for example, BT 2002 and NULL or BT NULL and 2002
• The IN operator has as input operand NULL among the value list, for example, IN (12, 13, NULL)
If no valid filter conditions remain (that is, they have all been disregarded because they contain the NULL operand), the user query is rejected with a "Not authorized" error.
Example
Dynamic analytic privilege 1 generates the filter condition (Year >= NULL) and dynamic analytic privilege 2 generates the condition (Country EQ NULL). The query of a user assigned these analytic privileges (combined with the logical operator OR) will return a "Not authorized" error.
Example
Dynamic analytic privilege 1 generates the filter condition (Year >= NULL) and dynamic analytic privilege 2 generates the condition (Country EQ NULL AND Currency = "USD"). The query of a user assigned these analytic privileges (combined with the logical operator OR) will be filtered with the filter Currency = 'USD'.
If you want to allow the user to see all the values of a particular attribute, instead of filtering for certain values, the procedure must return "*" as the operand for the CP operator or "" (empty string) as the operand for the GT operator. These are the only operators that support the specification of all values.
Implementation Considerations
When the procedure is executed as part of the authorization check in runtime, note the following:
• The user who must be authorized is the database user who executes the query accessing a secured view.
This is the session user. The database table or view used in the procedure must therefore contain a column
to store the user name of the session user. The procedure can then filter by this column using the SQL
function SESSION_USER. This table or view should only be accessible to the procedure owner.
Caution
Do not map the executing user to the application user. The application user is unreliable because it is
controlled by the client application. For example, it may set the application user to a technical user or it
may not set it at all. In addition, the trustworthiness of the client application cannot be guaranteed.
• The user executing the procedure is the _SYS_REPO user. In the case of procedures activated in the SAP
HANA modeler, _SYS_REPO is the owner of the procedures. For procedures created in SQL, the EXECUTE
privilege on the procedure must be granted to the _SYS_REPO user.
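For a procedure created in SQL (schema and procedure names hypothetical), the required grant looks like this:

```sql
-- Hypothetical names. _SYS_REPO executes the procedure during the
-- authorization check, so it needs the EXECUTE privilege.
GRANT EXECUTE ON "MYSCHEMA"."GET_AUTHORIZED_VALUES" TO _SYS_REPO;
```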
• If the procedure fails to execute, the user’s query stops processing and a “Not authorized” error is
returned. The root cause can be investigated in the error trace file of the indexserver,
indexserver_alert_<host>.trc.
When designing and implementing procedures as filter for dynamic analytic privileges, bear the following in
mind:
• To avoid a recursive analytic privilege check, the procedures should only select from database tables or
views that are not subject to an authorization check based on analytic privileges. In particular, views
activated in the SAP HANA modeler are to be avoided completely as they are automatically registered for
the analytic privilege check.
• The execution of procedures in analytic privileges slows down query processing compared to analytic
privileges containing only static filters. Therefore, procedures used in analytic privileges must be designed
carefully.
When a user requests access to data stored in an attribute, analytic, calculation, or SQL view, an authorization check based on analytic privileges is performed and the data returned to the user is filtered accordingly. The EFFECTIVE_STRUCTURED_PRIVILEGES system view can help you to troubleshoot authorization problems.
Access to a view and the way in which results are filtered depend on whether the view is independent or
associated with other views (dependent views).
Independent Views
The authorization check for a view that is not defined on another column view is as follows:
Note
The user does not require SELECT on the underlying base tables or views of the view.
1. The user has the SELECT privilege on the view and has been granted an analytic privilege that is applicable to the view.
Applicable analytic privileges are those that meet all of the following criteria:
For XML-based analytic privileges:
• A view restriction that includes the accessed view
• A validity restriction that applies now
• An action in the activity restriction that covers the action requested by the query
Note
All analytic privileges created and activated in the SAP HANA modeler and SAP HANA Web-based Development Workbench fulfill this condition. The only action supported is read access (SELECT).
• An attribute restriction that includes some of the view's attributes
For SQL-based analytic privileges:
• An ON clause that includes the accessed view
• If the filter condition specifies a validity period (for example, WHERE (CURRENT_TIME BETWEEN ... AND ...) AND <actual filter>), it must apply now
• An action in the FOR clause that covers the action requested by the query
Note
All analytic privileges created and activated in the SAP HANA Web-based Development Workbench fulfill this condition. The only action supported is read access (SELECT).
• A filter condition that applies to the view
Note
When the analytic privilege is created, the filter is checked immediately to ensure that it applies to the view. If it doesn't, creation will fail. However, if the view definition subsequently changes, or if a dynamically generated filter condition returns a filter string that is not executable with the view, the authorization check will fail and access is rejected.
If the user has the SELECT privilege on the view but no applicable analytic privileges, the user’s request is
rejected with a Not authorized error. The same is true if the user has an applicable analytic privilege but
doesn't have the SELECT privilege on the view.
2. The value filters specified in the dimension restrictions (XML-based) or filter condition (SQL-based) are
evaluated and the appropriate data is returned to the user. Multiple analytic privileges are combined with
the logical operator OR.
For more information about how multiple attribute restrictions and/or multiple value filters in XML-based
analytic privileges are combined, see XML-Based Analytic Privileges.
Dependent Views
The authorization check for a view that is defined on other column views is more complex. Note the following
behavior.
A user can access a calculation or SQL view based on other views if both of the following prerequisites are met:
• The user has been granted the SELECT privilege on the view or the schema that contains the view.
• The user has been granted analytic privileges that apply to the view itself and all the other column views in the hierarchy that are registered for a structured privilege check.
If a user requests access to a calculation view that is dependent on another view, the authorization check and result filtering behave as follows:
• Individual views in the hierarchy are filtered according to their respective analytic privileges, which use the logical OR combination.
• The filtered result of the calculation view is derived from the filtered result of its underlying views. This corresponds to a logical AND combination of the filters generated by the analytic privileges for the individual views.
Calculation views and SQL views can be defined by selecting data from other column views, specifically
attribute views, analytic views, and other calculation views. This can lead to a complex view hierarchy that
requires careful design of row-level authorization.
This represents a view hierarchy for which the prerequisites described above for calculation views also apply.
If an analytic view designed in the SAP HANA modeler contains one of the elements listed below, it will
automatically be activated with a calculation view on top. The name of this calculation view is the name of the
analytic view with the suffix /olap.
The EFFECTIVE_STRUCTURED_PRIVILEGES system view can help you determine the following:
• Which analytic privileges apply to a particular view, including the dynamic filter conditions that apply (if relevant)
• Which filter is being applied to which view in the view hierarchy (for views with dependencies)
• Whether or not a particular user is authorized to access the view
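A sketch of such a troubleshooting query, using the view name from the earlier examples and a hypothetical user name:

```sql
-- ROOT_SCHEMA_NAME and ROOT_OBJECT_NAME identify the view being
-- checked; USER_NAME (hypothetical here) is the user whose access
-- is analyzed.
SELECT * FROM EFFECTIVE_STRUCTURED_PRIVILEGES
  WHERE ROOT_SCHEMA_NAME = '_SYS_BIC'
    AND ROOT_OBJECT_NAME = 'test.sales/AN_SALES'
    AND USER_NAME = 'REPORT_USER';
```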
Related Information
Examples that explain how you can use analytic privileges to control access to data in SAP HANA information models.
Example
• Consider an analytic view (without fields coming from attribute views) or a calculation view SALES, which is added as part of an analytic privilege's secured models, having the following data:
1 GRP1 1000
2 GRP2 1500
3 GRP3 1200
1 GRP4 1300
If you create a restriction on column CUST_ID to filter data for CUST_ID 1 and 2, the conditions are
combined with OR and the data available for a user is:
1 GRP1 1000
2 GRP2 1500
1 GRP4 1300
If you create restrictions on columns CUST_ID and CUST_GROUP such as CUST_ID = 1 and CUST_GROUP
= 1, the conditions are combined with AND, and the data available for a user is:
1 GRP1 1000
Note
• The technical name used for attributes of calculation views and for local attributes of analytic views is the same as the attribute name. Hence, any restriction applied to a local attribute of an analytic or calculation view is also applied to any other local attribute of an analytic view or calculation view that has the same name.
In the preceding example, if there is any other analytic view or calculation view that is part of a privilege's secured list of models and has a field called "CUST_ID" (not coming from any attribute view), the data for these privileges also gets restricted.
• If Applicable to all information models is selected, any analytic view or calculation view (even if not part of the secured models) that has a (private) attribute called "CUST_ID" is also restricted.
• The behavior for the calculation view is the same as that of the analytic view described above.
• Consider an attribute view CUSTOMER, which is part of an analytic privilege’s secured list of models having
the following data.
1 IN 1
2 IN 1
3 US 1
1 DE 2
If you create a restriction on column CUST_ID to filter data for CUST_ID 1 and 2, the conditions are
combined with OR and the data is shown as follows:
1 IN 1
2 IN 1
1 DE 2
If you create restrictions on columns CUST_ID and COUNTRY such as CUST_ID = 1 and COUNTRY = IN, the
conditions are combined with AND, and the data available for a user is:
1 IN 1
Note
• The technical name used for an attribute view attribute is <package name>/<attribute view
name>$<attribute name>. In the preceding example, the technical name for CUST_ID is
mypackage/CUSTOMER$CUST_ID. This implies that if there is any other attribute view “STORE”
which is a part of the analytic privilege and has CUST_ID as its attribute, it is restricted.
• Any analytic view that is part of the privilege's secured list of models and has this attribute view as its required object is also restricted. In the example above, if an analytic view contains the attribute views CUSTOMER and STORE, both CUST_ID attributes are handled independently, because the internal technical names used for the privilege check are mypackage/CUSTOMER$CUST_ID and myotherpackage/STORE$CUST_ID.
• If Applicable to all information models is selected, any analytic view (even if it is not part of the
secured models) having this attribute view as its required object, is also restricted.
Use the CREATE STRUCTURED PRIVILEGE statement to create an XML-based analytic privilege that contains a
dynamic procedure-based value filter and a fixed value filter in the attribute restriction.
Context
Note
The analytic privilege in this example is created using the CREATE STRUCTURED PRIVILEGE statement.
Under normal circumstances, you create XML-based analytic privileges using the SAP HANA modeler or
the SAP HANA Web-based Development Workbench. Analytic privileges created using CREATE
STRUCTURED PRIVILEGE are not owned by the user _SYS_REPO. They can be granted and revoked only by
the actual database user who creates them.
To be able to implement the second filter condition, you need to create a procedure that will determine which
products a user is authorized to see by querying the table PRODUCT_AUTHORIZATION_TABLE.
Procedure
1. Create the table type for the output parameter of the procedure:
2. Create the table that the procedure will use to check authorization:
3. Create the procedure that will determine which products the database user executing the query is
authorized to see based on information contained in the product authorization table:
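These three steps can be sketched as follows. Only PRODUCT_AUTHORIZATION_TABLE is named in the text; the table type and procedure names here are hypothetical:

```sql
-- 1. Table type for the procedure's output parameter (name hypothetical)
CREATE TYPE PRODUCT_TABLE_TYPE AS TABLE ("PRODUCT" NVARCHAR(20));

-- 2. Table that the procedure uses to check authorization; the
--    USER_NAME column stores the session user to filter on
CREATE TABLE PRODUCT_AUTHORIZATION_TABLE (
  "USER_NAME" NVARCHAR(256),
  "PRODUCT"   NVARCHAR(20)
);

-- 3. DEFINER-mode, read-only procedure that returns the products the
--    querying session user is authorized to see
CREATE PROCEDURE DETERMINE_AUTHORIZED_PRODUCTS
  (OUT VALID_PRODUCTS PRODUCT_TABLE_TYPE)
  LANGUAGE SQLSCRIPT SQL SECURITY DEFINER READS SQL DATA AS
BEGIN
  VALID_PRODUCTS = SELECT "PRODUCT" FROM PRODUCT_AUTHORIZATION_TABLE
                   WHERE "USER_NAME" = SESSION_USER;
END;
```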
Note
The session user is the database user who is executing the query to access a secured view. This is
therefore the user whose privileges must be checked. For this reason, the table or view used in the
procedure should contain a column to store the user name so that the procedure can filter on this
column using the SQL function SESSION_USER.
Caution
Do not map the executing user to the application user. The application user is unreliable because it is
controlled by the client application. For example, it may set the application user to a technical user or it
may not set it at all. In addition, the trustworthiness of the client application cannot be guaranteed.
Results
Now when a database user requests access to a secured view containing product information, the data
returned will be filtered according to the following condition:
Define data access restrictions on information views using fixed restrictions or dynamic restrictions.
Fixed Value: A fixed value or static value filter consists of an operator and either a list of values as the filter operands or a single value as the filter operand. All data types are supported except the LOB data types (CLOB, BLOB, and NCLOB). For example, a value filter (EQ 2006) can be defined for an attribute YEAR in a dimension restriction to filter accessible data using the condition YEAR=2006 for potential users.
Catalog Procedure or Repository Procedure: Catalog procedures or repository procedures are dynamic value filters, which consist of an operator and a stored procedure call that determines the operand value at runtime. For example, a value filter (IN (GET_MATERIAL_NUMBER_FOR_CURRENT_USER())) is defined for the attribute MATERIAL_NUMBER. This filter indicates that a user with this analytic privilege is only allowed to access material data with the numbers returned by the procedure GET_MATERIAL_NUMBER_FOR_CURRENT_USER.
SQL-based analytic privileges provide you the flexibility to create analytic privileges within the familiar SQL environment. You can create and apply SQL analytic privileges for a selected group of models or apply them to all models across packages.
Prerequisites
If you want to use a SQL analytic privilege to apply data access restrictions on information views, set the Apply
Privileges property for the information view to SQL Analytic Privileges.
Context
SAP HANA modeler supports two types of SQL analytic privileges: static SQL analytic privileges with predefined static filter conditions, and dynamic SQL analytic privileges with filter conditions determined dynamically at runtime using a database procedure.
Procedure
Note
You can also use the attribute editor to create the analytic privilege using the attribute restrictions and
then switch to the SQL editor to deploy the same privilege as SQL analytic privilege.
Note
If you have enabled SQL access for calculation views (of type dimension used in a star join calculation view), the modeler generates a node column. For analytic privileges, you can maintain a filter expression using this node column. For example, if SalesRepHierarchyNode is the node column that the modeler generates for a parent-child hierarchy, then "SalesRepHierarchyNode" = 'MAJESTIX' is a possible filter expression.
Note
Activate the analytic privilege only if you have defined at least one restriction on attributes in the
Associated Attributes Restrictions section.
Static SQL analytic privileges, or fixed analytic privileges, allow you to combine one or more filter conditions on the same attribute or on different attributes using the logical AND and OR operators.
Static SQL analytic privilege conditions typically have the structure <attribute> <operator> <scalar_operands_or_subquery>. For example, "country IN (scalar_operands_or_subquery) AND product = (scalar_operands_or_subquery)". The supported operators are IN, LIKE, BETWEEN, <=, >=, <, and >.
If you want to create static SQL analytic privileges using subqueries, the user creating the analytic privileges must have the corresponding privileges on the database objects (tables and views) involved in the subqueries.
In dynamic analytic privileges, you use a database procedure to obtain the filter condition string dynamically at runtime. You provide the database procedure within the CONDITION PROVIDER clause.
Only procedures that satisfy the following conditions can be used to define dynamic SQL analytic privileges:
• DEFINER procedures
• Read-only procedures
• Procedures with no input parameters
• Procedures with only one output parameter of type VARCHAR or NVARCHAR for the filter condition string
• Procedures executable by _SYS_REPO. This means that _SYS_REPO is either the owner of the procedure, or the owner of the procedure has all privileges on the underlying tables/views with GRANT OPTION and has granted the EXECUTE privilege on the procedure to the _SYS_REPO user.
Note
Modeler supports only simple filter conditions in dynamic SQL analytic privileges; you cannot use subqueries in dynamic analytic privileges.
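A procedure meeting the conditions above might be sketched as follows; the schema, table, and column names are assumptions for illustration only:

```sql
-- Hypothetical DEFINER, read-only procedure with no input parameters
-- and a single NVARCHAR output parameter for the filter condition string.
CREATE PROCEDURE "MYSCHEMA"."GET_COUNTRY_FILTER" (OUT out_filter NVARCHAR(512))
    LANGUAGE SQLSCRIPT
    SQL SECURITY DEFINER
    READS SQL DATA
AS
BEGIN
    -- Build a simple filter condition based on the session user;
    -- USER_COUNTRY_MAPPING is an illustrative mapping table.
    SELECT '"country" = ''' || "COUNTRY" || ''''
        INTO out_filter
        FROM "MYSCHEMA"."USER_COUNTRY_MAPPING"
        WHERE "USER_NAME" = SESSION_USER;
END;
```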
An analytic privilege consists of a set of restrictions against which user access to a particular attribute view,
analytic view, calculation view, or SQL view is verified. In an SQL-based analytic privilege, these restrictions are
specified as filter conditions that are fully SQL based.
SQL-based analytic privileges are created using the CREATE STRUCTURED PRIVILEGE statement. The FOR clause is used to restrict the type of access (only the SELECT action is supported). The ON clause is used to restrict access to one or more views with the same filter attributes.
A fixed filter clause consists of a WHERE clause that is specified in the definition of the analytic privilege itself.
You can express fixed filter conditions freely using SQL, including subqueries.
By incorporating SQL functions into the subqueries, in particular SESSION_USER, you can define an even more
flexible filter condition.
Example
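As a sketch of a fixed filter that uses a SESSION_USER subquery (the privilege, view, and mapping table names are illustrative assumptions):

```sql
-- Hypothetical fixed-filter SQL-based analytic privilege: each user sees
-- only the regions mapped to their session user name.
CREATE STRUCTURED PRIVILEGE "AP_SALES_BY_REGION"
    FOR SELECT ON "MYSCHEMA"."SALES_VIEW"
    WHERE "region" IN (SELECT "REGION"
                       FROM "MYSCHEMA"."USER_REGION_MAPPING"
                       WHERE "USER_NAME" = SESSION_USER);
```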
Note
A calculation view cannot be secured using an SQL-based analytic privilege that contains a complex filter
condition if the view is defined on top of analytic and/or attributes views that themselves are secured with
an SQL-based analytic privilege with a complex filter condition.
Remember
If you use a subquery, you (the creating user) must have the required privileges on the database objects
(tables and views) involved in the subquery.
Comparative conditions can be nested and combined using AND and OR (with corresponding brackets).
Tip
To create an analytic privilege that allows either access to all data or no data in a view, set a fixed filter
condition such as 1=1 or 1!=1.
With a dynamically generated filter clause, the WHERE clause that specifies the filter condition is generated every time the analytic privilege is evaluated. This is useful in an environment in which the filter clause changes frequently.
Sample Code
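As a sketch (object names are illustrative), a privilege with a dynamically generated filter names the procedure in the CONDITION PROVIDER clause:

```sql
-- Hypothetical analytic privilege with a dynamically generated filter;
-- GET_SALES_FILTER must return a valid WHERE-condition string at runtime.
CREATE STRUCTURED PRIVILEGE "AP_DYNAMIC_SALES"
    FOR SELECT ON "MYSCHEMA"."SALES_VIEW"
    CONDITION PROVIDER "MYSCHEMA"."GET_SALES_FILTER";
```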
Procedures in the CONDITION PROVIDER clause must have the following properties:
Tip
A procedure that returns the filter condition 1=1 or 1>1 can be used to create an analytic privilege that
allows access to all data or no data in a view.
• The procedure must be executable by either the user _SYS_REPO (default) or the owner of the analytic
privilege. This means that:
• _SYS_REPO or the owner of the analytic privilege must be the owner of the procedure, or
• The owner of the procedure has all privileges on the underlying tables/views WITH GRANT OPTION and has granted the EXECUTE privilege on the procedure to _SYS_REPO or the owner of the analytic privilege
The parameter [authorization] execute_dynamic_analytic_privilege_by_sys_repo_user in the indexserver.ini configuration file determines which user must be able to execute the procedure.
Caution
Changing the value of this parameter while there are views already secured using analytic privileges
with a dynamically generated filter may result in the procedure no longer being executable.
• The procedure must return a valid filter string. In particular, the filter string must not be empty and must
represent a valid WHERE condition for the view.
If errors occur during procedure execution, or an invalid filter string (empty or not applicable) is returned, the user receives a Not authorized error, even if they have the analytic privileges that would grant access.
This section describes the different migration activities that users can perform within the SAP HANA modeler tool. The migration activities involve converting analytic views, attribute views, or script-based calculation views to graphical calculation views, or converting classical XML-based analytic privileges to SQL analytic privileges.
You can perform a migration activity at the package level or at the object level. That is, you can select a package and convert all objects within it to the new object type, or select individual objects and convert them. After a migration activity, the modeler converts and replaces the target objects with new objects of the same name.
Note
Migrating from one object type to another is an optional task that we recommend; it is required only if you want to move to the XSA advanced model (HDI). Before you perform the migration activity, we recommend that you read the section Best Practice: Migrating an Object Type to a Different Object Type [page 199].
Related Information
Convert Attribute Views and Analytic Views to Graphical Calculation Views [page 188]
Convert Script-based Calculation Views to Graphical Calculation Views [page 192]
Convert Classical XML-based Analytic Privileges to SQL-based Analytic Privileges [page 194]
Simulate a Migration Activity [page 196]
Undo Migration Changes [page 197]
Activate Migrated Objects [page 198]
Migration Log [page 199]
In the future, as graphical calculation views become the standard for creating information views, we recommend that you create graphical calculation views for all analytical use cases and also convert existing analytic views and attribute views to graphical calculation views.
Prerequisites
You have the permissions to perform modeling activities such as creating, activating, and previewing data for information views and analytic privileges.
Before you perform the migration activity, you have read the section, Best Practice: Migrating an Object Type to
a Different Object Type [page 199].
Context
SAP HANA modeler (SPS 11 onwards) allows you to perform a migration activity within the modeler tool to convert analytic views or attribute views to graphical calculation views. The migration activity converts and replaces the target attribute views or analytic views with new graphical calculation views of the same name.
Note
Converting analytic views and attribute views to graphical calculation views is an optional task that SAP
recommends and is required only if you want to move to the XSA advanced model (HDI).
Procedure
The migration log records the changes that modeler performs during this migration activity and provides information on the status of each migrated object.
a. Select Create Migration Log.
b. Browse to the folder location where you want to save the migration log.
If you are converting analytic views to graphical calculation views, then the join engine prunes N:M
cardinality joins if either the left table or the right table does not request any field and if no filters are
defined on the join path.
10. If you want to use the hidden columns of analytic views or attribute views and if you want to perform any
operations using these columns in the new graphical calculation views, select Unhide Hidden Columns.
Note
The behavior of hidden columns is different in graphical calculation views. If you convert the views without selecting the checkbox, the hidden columns appear as proxies in the new graphical calculation views. If you want to use these hidden columns in the graphical calculation views and avoid activation errors due to missing columns, you must unhide them. The columns then appear in client tools for end users, because they are no longer hidden.
If you want to automatically activate the new object types after the migration activity, select Activate
objects after migration.
Note
You can also use workspace activation if you want to activate all migrated content that is in the inactive state and delete the objects that modeler has marked for delete.
Note
Simulate the migration activity. If you want to perform a migration activity without impacting any of the existing objects, for example, to preview the impact of the migration activity, you can simulate it using the Copy and migrate feature. Simulating a migration activity does not adjust the references of impacted objects. For more information, see Simulate a Migration Activity [page 196].
In the Impacted Objects dialog, modeler displays the list of objects that you have selected for the migration
activity, the impacted objects, and the references of impacted objects that modeler can automatically
adjust.
a. After verifying the changes, choose Finish to proceed and complete the migration activity.
Note
If the impacted objects are in a different package and not in the package that you have selected for migration, modeler cannot automatically adjust the references of those impacted objects. In such cases, we recommend that you also migrate the impacted attribute and analytic views. You can refer to the migration log to identify the impacted objects for which modeler could not automatically adjust the references.
An analytic view or attribute view comprises entities such as joins, columns, analytic privileges, view properties, and more. The behavior of some of these entities changes after you convert the analytic view or attribute view to a graphical calculation view. To convert an attribute view or analytic view to a graphical calculation view, it is necessary to modify the behavior of some of its entities. The following table describes these entities and the impact of migration on their behavior.
row.count Analytic views contain a row.count column that modeler uses internally to calculate the result of select count(*) queries. You can also use row.count in SQL statements, SQL script, and calculation views built on top of such analytic views.
The migration activity that converts the analytic view to a graphical calculation view updates the calculation view property Row Counter with the value row.count. This ensures that modeler generates the column in the catalog calculation view with the same semantics as in the analytic view. If you do not want to use this column in SQL or in upper views, remove it by setting the view property Row Counter to <BLANK>. If the property is set to <BLANK>, the column is no longer visible in the calculation view, but is still used internally to calculate the result of select count(*) queries.
Referential Joins All referential joins in the analytic views and attribute views are preserved after converting these views to new graphical calculation views. However, if a filter is applied on any of the tables in the join, then all referential joins that exist in the join path between this table and the fact table are converted to inner joins in the new graphical calculation views.
Filters All column filters in the attribute views and analytic views are converted to an equivalent filter expres
sion in the new graphical calculation views.
Input Parameters If you have created any input parameters in the analytic views or attribute views, then the input param
eter definitions are preserved in the new graphical calculation views.
.description columns If you are using .description columns in analytic views or attribute views to store attribute descriptions, then these columns are converted to Label Columns in the new graphical calculation views after migration.
Translation Texts All translation texts available in the attribute views and analytic views are preserved in the new graphi
cal calculation views.
Derived Attribute Views Consider that you are converting a derived attribute view, V_DERIVED_ATTRIBUTE, which is derived from the base attribute view V_BASE_ATTRIBUTE. Modeler converts these views to graphical calculation views by creating two identical calculation views, V_DERIVED_ATTRIBUTE and V_BASE_ATTRIBUTE. These views are independent of each other; any change to V_BASE_ATTRIBUTE does not impact V_DERIVED_ATTRIBUTE.
Client Dependent Views In analytic views or attribute views, if the session client for a user is not set to NULL, the system does not filter the data and displays the data of all clients. However, after converting such analytic views or attribute views to graphical calculation views, the session client value is preserved, but the system does not return or display any data.
Hidden Columns If you are converting analytic views or attribute views that have hidden columns, these columns are not visible and are available only as proxies in the new graphical calculation views. The migration tool allows you to unhide the hidden columns before converting the views. This helps you avoid activation errors due to missing columns and lets you use the hidden columns in the graphical calculation views. The columns then appear in client tools for end users, as they are no longer hidden.
Generate Concat Attribute Property The Generate concat attribute property is not supported in graphical calculation views.
Temporal Joins Temporal joins are supported only in calculation views with star join. Hence, during migration the analytic view is transformed into a calculation view with star join. From SPS 03 onward, you can model temporal joins using non-equi join nodes to cover more modeling scenarios than temporal joins alone. For more information on non-equi joins, see the related link or refer to the SAP HANA Modeling Guide for SAP Web IDE for SAP HANA.
Search Attributes Search attributes are not supported in graphical calculation views.
Analytic Privileges The value of the view property Apply Privileges of analytic views or attribute views is preserved in the new graphical calculation views. For example, if you are converting an analytic view with the property Apply Privileges set to Classical Analytic Privileges, then the new graphical calculation view also contains the same value for its Apply Privileges property.
Related Information
In the future, as graphical calculation views become the standard for creating information views, we recommend that you create graphical calculation views for all analytical use cases and also convert existing script-based calculation views to new graphical calculation views that use table functions (containing the scripts) as data sources.
Prerequisites
You have the permissions to perform modeling activities such as creating, activating, and previewing data for information views and analytic privileges.
Before you perform the migration activity, you have read the section, Best Practice: Migrating an Object Type to
a Different Object Type [page 199].
Context
SAP HANA modeler (SPS 11 onwards) allows you to perform a migration activity within the modeler tool to convert script-based calculation views to table functions and graphical calculation views. Modeler creates the new graphical calculation views by first converting a script-based calculation view to an equivalent table function and then including the table function as a data source in a new graphical calculation view.
The migration activity converts and replaces the target script-based calculation views with new table functions
and new graphical calculation views. The table functions have the prefix TABLE_FUNCTION_ and the new
graphical calculation views have the same name as that of the target script-based calculation views.
Note
Converting script-based calculation views to graphical calculation views is an optional task that SAP
recommends and is required only if you want to move to the XSA advanced model (HDI).
Procedure
The migration log records the changes that modeler performs during this migration activity and provides information on the status of each migrated object.
a. Select Create Migration Log.
b. Browse to the folder location where you want to save the migration log.
7. Choose Next.
8. Select an object or a package that contains the objects that you want to convert to graphical calculation
views and table functions.
9. Choose Add.
10. Activate the objects, if required.
If you want to automatically activate the new object types after the migration activity, select Activate
objects after migration.
Note
You can also use workspace activation if you want to activate all migrated content that is in the inactive state and delete the objects that modeler has marked for delete.
After migration, the security mode of the new graphical calculation views is DEFINER, even if the target script-based calculation view had INVOKER as its security mode.
Note
Simulate the migration activity. If you want to perform a migration activity without impacting any of the existing objects, for example, to preview the impact of the migration activity, you can simulate it using the Copy and migrate feature. Simulating a migration activity does not adjust the references of impacted objects. For more information, see Simulate a Migration Activity [page 196].
Related Information
In the future, as SQL-based analytic privileges become the standard for creating analytic privileges, we recommend that you convert existing classical XML-based analytic privileges to SQL-based analytic privileges and use SQL analytic privileges to provide restricted access to information views.
Prerequisites
You have the permissions to perform modeling activities such as creating, activating, and previewing data for information views and analytic privileges.
Before you perform the migration activity, you have read the section, Best Practice: Migrating an Object Type to
a Different Object Type [page 199].
Context
SAP HANA modeler (SPS 11 onwards) allows you to perform a migration activity within the modeler tool to convert classical XML-based analytic privileges to SQL-based analytic privileges. These new SQL analytic privileges provide essentially the same restricted access to information views that was previously enforced by the classical XML-based analytic privileges.
Note
Converting XML-based analytic privileges to SQL analytic privileges is an optional task that SAP recommends and is required only if you want to move to the XSA advanced model (HDI). Depending on the selected classical XML-based analytic privilege, the modeler may convert a single classical XML-based analytic privilege into multiple SQL analytic privileges. Together, these SQL analytic privileges provide the restricted access to the information views. In such cases, where modeler creates multiple SQL analytic privileges, you must manually reassign the users or roles to the new SQL analytic privileges. You can refer to the migration log or the job log for information on SQL analytic privileges that do not have any roles or users assigned to them.
Procedure
The migration log records the changes that modeler performs during this migration activity and provides information on the status of each migrated object.
a. Select Create Migration Log.
b. Browse to the folder location where you want to save the migration log.
7. Choose Next.
8. Select an object or a package that contains the objects that you want to convert to SQL analytic privileges.
Note
If the package already contains SQL analytic privileges, then modeler skips converting these objects. Modeler does not convert a classical XML-based analytic privilege in the selected package to an SQL analytic privilege:
• If the analytic privilege is used to apply data access restrictions to all models in the system using the Apply to all information models property.
• If the analytic privilege is of read-only type.
• If the analytic privilege is defined using stored procedures to apply data access restrictions on models.
9. Choose Add.
10. Activate the objects, if required.
If you want to automatically activate the new object types after the migration activity, select Activate
objects after migration.
Note
You can also use workspace activation if you want to activate all migrated content that is in the inactive state and delete the objects that modeler has marked for delete.
In the Impacted Objects page, modeler displays the list of objects that you have selected for the migration
activity.
a. Choose Finish to convert the selected object to SQL analytic privileges.
Note
Activate the impacted objects. While modeling an information view, you can set the view property Apply Privileges to either Classical Analytic Privilege or SQL Analytic Privilege. If the Apply Privileges value is Classical Analytic Privilege for information views impacted during this migration activity, modeler automatically changes the view property of these impacted information views to SQL Analytic Privilege after the migration.
Related Information
Simulating a migration activity helps you perform a migration activity without impacting any of the existing objects. For example, you can simulate a migration simply to preview its impact.
Context
Previewing the impact of a migration activity helps you decide whether to proceed with it. By default, a migration activity converts modeler objects to different object types with the same names and replaces the old objects in the package with the new objects. This means that the old objects are no longer available in the selected package. However, if you do not want the migration activity to impact any of the existing objects and instead want to preview the possible impact of a migration activity, you can simulate the migration activity. The simulation operation allows you to select a target package in which to save the new object types.
Procedure
Note
You can perform the simulation operation only for converting analytic views and attribute views to graphical calculation views and for converting script-based calculation views to table functions and graphical calculation views.
6. Choose Next.
7. Select a package that contains the objects that you want to convert.
8. Choose Next.
9. Select a package that contains the objects that you want to convert to graphical calculation views.
Related Information
Convert Attribute Views and Analytic Views to Graphical Calculation Views [page 188]
Convert Script-based Calculation Views to Graphical Calculation Views [page 192]
Convert Classical XML-based Analytic Privileges to SQL-based Analytic Privileges [page 194]
Best Practice: Migrating an Object Type to a Different Object Type [page 199]
SAP HANA modeler allows you to undo changes made to modeler objects by a migration activity.
Context
After a migration activity, modeler creates new object types that remain in the inactive state and marks the old object types for delete. Modeler helps you undo the migration changes by restoring all inactive objects in the workspace to their last active versions and by restoring the objects that were marked for delete to their last active state.
Note
We recommend that you start the migration process with a clean workspace. You can undo the changes
only if you have not yet activated the new object types that modeler has created after a migration activity.
In other words, if you have performed a workspace activation after a migration activity, you cannot undo
the changes to the modeler objects.
Procedure
Context
For example, after converting analytic views to graphical calculation views, the new graphical calculation views remain in the inactive state and modeler marks the target analytic views for delete. SAP HANA modeler provides workspace activation to activate all migrated content that is in the inactive state and to delete the objects that modeler has marked for delete.
Note
Start the migration with a clean workspace. Before you perform the migration activity, we recommend that you have only objects in the active state. After a migration is complete, the new object types are in the inactive state. Starting with a clean workspace helps you identify the objects impacted by the migration activity and activate all inactive objects at the same time.
Procedure
The Activate dialog displays all inactive objects in the workspace and the objects that are marked for delete
due to migration activity. You can activate selected objects or activate all the inactive objects.
a. Choose Finish to complete activation.
Note
If you are activating selected objects, first delete the objects that are marked for delete and then activate the inactive objects. For example, if you are converting an analytic view to a graphical calculation view, first delete the analytic view that the modeler has marked for delete and then activate the graphical calculation view.
SAP HANA modeler allows you to create a migration log while performing a migration activity. This log records
critical changes that modeler performs during the migration activity and provides information on the status of
the migration activity.
The migration log is an .html file that you can save on your local system. The following are some of the details
that a migration log records during a migration activity.
• The total number of objects that you have selected for migration.
• The total number of objects successfully converted to new object types.
• The objects that need user actions before you can successfully activate those objects.
• The list of impacted objects whose references modeler could automatically adjust.
• The list of impacted objects whose references modeler could not automatically adjust. Manually select such impacted objects and convert them to the new object types in a new migration activity.
• For the conversion of classical XML-based analytic privileges, the log records the roles and users associated with the analytic privileges.
Note
While converting attribute views and analytic views to graphical calculation views, if the impacted objects are in different packages and not in the package selected for the migration, modeler cannot automatically adjust the references of those impacted objects. In such cases, you can refer to the migration log to identify these impacted objects and convert them to graphical calculation views.
The best practices and recommendations described in this section, if adopted, help you perform a smooth migration from one modeler object type to another.
Related Information
Convert Attribute Views and Analytic Views to Graphical Calculation Views [page 188]
Convert Script-based Calculation Views to Graphical Calculation Views [page 192]
Convert Classical XML-based Analytic Privileges to SQL-based Analytic Privileges [page 194]
After modeling information views, or at design time, you can perform certain additional functions that help improve the efficiency of modeling information views. This section describes the additional functions that SAP HANA modeler offers and how you can use them to model views efficiently.
Related Information
Open information views in performance analysis mode and obtain information on the catalog tables. This
information helps you analyze the possible performance impacts on information views at runtime. For example,
you can obtain information on table partitions, number of rows in tables, and more.
In performance analysis mode, you obtain the following key information, and much more, depending on the data sources in your information view.
Identify data sources that have a number of rows above a certain threshold value. You can configure this threshold value in the SAP HANA studio preferences.
1. In the menu bar, choose Window > Preferences > SAP HANA Modeler > Performance Analysis.
2. In the Threshold values for rows in a table textbox, provide a threshold value.
3. Choose Apply.
4. Choose OK.
If you have partitioned tables, identify the partitioned tables along with partition type (Hash, Range, Round
Robin) and columns used for partitioning the tables. In addition, you can also obtain information on the table
type. For example, if you are using virtual tables, then modeler provides information on the virtual table
properties (remote DB, remote source, remote owner, and remote object) and its values.
If you have enabled performance analysis mode, modeler displays a warning icon on catalog tables that have more rows than the threshold value you have defined. Similarly, modeler also displays indicators on the tables that help you identify the partition type of those tables.
Related Information
The number of rows in a data source and table partitions impact the performance of your queries. When you
open an information view in performance analysis mode, you can obtain information on join tables, table
partitions, table types, and other useful information, which helps you analyze your calculation view and its
possible performance impacts at runtime.
Procedure
4. In the menu bar, choose the Switch to Performance Analysis Mode icon to enable performance analysis mode.
The modeler displays the following information in the Performance Analysis tab.
• Join Details
• Data Source Details
You can choose the same icon to hide performance analysis mode for an information view.
Note
If you want to always open an information view in performance analysis mode by default, configure the
SAP HANA studio preferences.
1. In the menu bar, choose Window > Preferences > SAP HANA Modeler > Performance Analysis.
2. In the Threshold values for rows in a table textbox, provide a threshold value.
3. Choose Apply.
4. Choose OK.
Related Information
Open a calculation view in performance analysis mode and select a join view node that has catalog tables as
data sources.
If you have defined a join for the catalog tables, then the JOIN DETAILS section in Performance Analysis tab
provides the following information:
• Catalog tables participating in the join. The tables include the left table and right table.
• The cardinality and join type that you have selected for each join.
• Information on whether you have maintained the referential integrity for the join table.
• If the cardinality that the tool proposes differs from the cardinality that you selected, or if you have not maintained referential integrity, the tool displays a warning.
Note
Only users with SELECT privileges on the catalog tables participating in the join can view join validation
status.
Restriction
The tool does not support performance analysis and debug queries for join view nodes with multi join
definitions.
Open a calculation view in performance analysis mode and select a view node that has catalog tables as data
sources.
For a selected view node, the DATA SOURCE DETAILS section in the Performance Analysis tab provides the following information:
Note
Only users with system privilege INIFILE ADMIN can identify whether a system is using a scale-out
architecture.
SAP HANA modeler provides a debugger editor, which allows you to write and use SQL queries for debugging
calculation views. This debugging operation helps you analyze the runtime behavior of calculation views and to
improve the performance of calculation views.
Context
You can use the SAP HANA debugger editor to write and execute SQL queries that debug and perform a runtime analysis of objects. This editor offers several debug actions. For example, you can write a SQL query to debug a calculation view and identify the attributes or data sources that the engine consumes while executing the query.
Note
The debugger editor only supports debugging calculation views that you have activated at least once after
upgrading to SPS 09 server. Only users with SELECT and CREATE ANY privilege on _SYS_BIC can debug
calculation views.
This operation launches a SQL editor. By default, the SQL editor proposes a query that you can execute to debug the calculation view. The modeler proposes this query after analyzing the existing version of your calculation view. If you have changed the existing version, make sure that you reactivate the information view before debugging it.
You can modify the SQL query to debug the information view. The system always saves this query for that editor session. In other words, if you close this debug session and start a new debug session for the same information view, you can still see your last saved query. At any point, you can refer to the query that the modeler proposed and compare it with your own query.
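A typical proposed debug query is a plain SELECT against the generated column view in the _SYS_BIC schema. The sketch below is a hypothetical example; the package name ("mypackage"), view name ("SALES_CV"), and column names are assumptions, not taken from this guide:

```sql
-- Hypothetical debug query against the activated column view in _SYS_BIC;
-- adjust the package, view, and column names to your own model.
SELECT "REGION", SUM("REVENUE") AS "REVENUE"
FROM "_SYS_BIC"."mypackage/SALES_CV"
GROUP BY "REGION";
```

Narrowing the query to a few columns lets you observe which attributes and data sources the engine prunes during execution.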
Related Information
Once you have identified the query to debug your object, you can begin the debugging process and analyze the runtime behavior of calculation views. This analysis can provide you with the information to improve the performance of calculation views.
Procedure
From the toolbar of the SQL editor, select (Execute Query) to execute the query and start debugging.
This operation launches a debugger editor in read-only mode. The debugger editor does not allow you to
modify the content of the calculation view. You can only modify it in the view editor outside of this debug
session.
You use the debugger editor in SAP HANA modeler for the runtime performance analysis of an information
view.
Based on the query you execute, the debugger editor displays the following debugging information:
• For each node in the target column view, the debugger editor displays an icon for the view node type (join, projection, aggregation, or union). This means that the engine may convert the node to be able to execute your query. For example, if you have used a join node, the engine may convert it and execute it as a projection node, based on the query you execute. This information helps you to use optimal nodes in your column views.
• Pruned and unpruned data sources in your target information view. Pruned data sources are those that the engine does not require for executing the query on your target information view; unpruned data sources are those involved in the execution of the view. For all unpruned data sources of type calculation view, choose for further analysis. Similarly, the output pane provides information on pruned and unpruned attributes and measures in your target information view.
• The details pane provides more information for each node. You can use the query area in the details pane for a simple data preview. For the default node, the query area displays the query that you used for debugging your information view; for all other underlying nodes, the query area displays the subqueries.
Note
The Planviz editor provides the execution plan of the engine in a graphical editor. This editor helps you
analyze the performance impacts of the calculation view at runtime. You can then use this analysis
information to rectify the calculation view at design time. Choose on the debugger editor toolbar
to launch its relevant Planviz editor.
SAP HANA modeler provides certain validation rules that, when executed, validate the calculation view and help you identify any design-time factors that affect the performance of your calculation views.
Context
SAP HANA modeler provides the following validation rules for performance validations. Execute these rules to identify the impact on the performance of the calculation view, and correct the view accordingly.
Note
You can view these validation rules and their definitions under the Performance_Workbench category in Window > Preferences > SAP HANA Modeler > Validation Rules. Select these rules if you want the modeler to execute them when you Save and Validate or when you Save and Activate your calculation view.
Procedure
2. In the menu bar, open the (Switch to Performance Analysis Mode) dropdown list.
3. Choose Run Performance Validations for this View.
SAP HANA modeler executes the validation rules and provides the validation results in the job log.
When you are modeling an information view, you can also maintain comments for the view or for its objects
such as parameters, hierarchies, view nodes, and more. The comments can include, for example, information
that provides more clarity on the information view or its objects for data modelers accessing the same view or
its objects.
Context
Maintaining comments helps you store more information related to the information view, or store and provide reference information for other data modelers working on the same information view. You can also use the comments for documentation purposes:
Note
You cannot translate comments that you maintain for the modeler objects.
e. In the footer region of Calculated Column dialog box, choose (Maintain Comment).
f. Enter a new comment or edit an existing comment.
e. In the footer region of Edit Restricted Column dialog box, choose (Maintain Comment).
f. Enter a new comment or edit an existing comment.
Note
When you choose to generate object documentation using the Auto Documentation menu option in the Quick View, the modeler also includes any comments that you have maintained in the document that it generates.
Replace a view node with another view node, or replace a data source with other available data sources in the
HDI container.
A calculation view may contain multiple levels of view nodes. If you manually delete a node in a calculation view (without using the Replace function), and add a new node, you lose the semantic information for the deleted node.
However, if you want to replace the deleted view node with another view node, you can retain at least some of
the semantic information by using the Replace function to map, for example, columns or input parameters.
Related Information
Replace a view node in a calculation view with any of its underlying nodes or with available data sources in the catalog object. This operation removes the view node from the calculation view, but the node is still visible as an individual entity (an orphaned node) in the scenario pane.
Context
For example, in the calculation view below, if you want to replace the node Union_1 with the node Projection_1, execute the following procedure.
Procedure
Note
Remove and replace a view node in a calculation view with any of its underlying nodes or with available data
sources in the catalog object. This operation removes the node from the calculation view and the Scenario
pane.
Context
For example, in the calculation view below, if you want to remove and replace the node Union_1 with the node Projection_1, execute the following procedure.
Procedure
Replace a data source in calculation views with other available data sources in the catalog object, without
losing the semantic information of the replaced data source.
Procedure
Rename information views or their columns without losing their existing behavior. If these information views or columns are referenced in other modeler objects, SAP HANA modeler automatically adjusts the references to these information views or columns in the impacted objects.
Related Information
Rename information views without losing their existing behavior. If there are any impacted objects, the modeler automatically adjusts the references to these information views in the impacted objects.
Procedure
In the Summary page, modeler displays the existing name and the new name of the information view.
Expand the node to view impacted objects.
Note
Modeler cannot adjust references if the impacted objects are script-based calculation views, table
functions, or procedures. In such cases, manually adjust the references.
7. If you want to copy the information view references to the clipboard, choose (Copy References to
Clipboard).
8. Choose Finish.
Note
After you rename an information view, manually activate the impacted objects and activate your
workspace to delete the objects with earlier names.
Rename columns available in the information views without losing their existing behavior. If there are any impacted objects, the modeler automatically adjusts the references to these columns in the impacted objects.
Procedure
Note
In the Rename and Adjust References wizard, modeler displays all columns available in the information
view output.
Note
Modeler cannot adjust references if the impacted objects are analytic privileges, script-based calculation views, table functions, expressions (for calculated columns, restricted columns, and so on), or procedures. In such cases, manually adjust the references.
7. If you want to copy column references to the clipboard, choose (Copy References to Clipboard).
8. If you want to skip the rename operation for a particular impacted object, deselect the impacted object.
Note
If you deselect an impacted object, you have to manually adjust the column references in the impacted
objects after rename.
9. Choose Finish.
Note
After you rename columns, you must manually activate the impacted objects.
You can create expressions, for example in calculated columns, using the column engine (CS) language or the SQL language.
Note
Related SAP Notes: SAP Note 2252224 describes the differences between CS and SQL string expressions with respect to Unicode or multibyte encoding. SAP Note 1857202 describes the SQL execution of calculation views.
Data type conversion functions are used to convert arguments from one data type to another, or to test
whether a conversion is possible.
fixed  fixed(arg, int, int)
Converts arg to a fixed type of either 8, 12, or 16 byte length, depending on intDigits and fractDigits. arg2 and arg3 are the intDigits and fractDigits parameters, respectively.
Example: fixed(3.2, 8, 2) + fixed(2.3, 8, 3)

date  date(stringarg), date(fixedarg), date(int, int), date(int, int, int), date(int, int, int, int), date(int, int, int, int, int), date(int, int, int, int, int, int)
Converts arg to date type. The first version parses a string in the format "yyyy-mm-dd hh:mi:ss", where trailing components except for the year may be omitted. The version with one fixed number argument strips the digits behind the comma and tries to make a date from the rest. The other versions accept the individual components to be set.
Examples: date(2009) -> date('2009'); date(2009, 1, 2) -> date('2009-01-02'); date(fixed(20000203135026.1234567, 10, 4)) -> date('2000-02-03 13:50:26')

secondtime  secondtime(string, string)
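The conversion functions above can be combined in a calculated-column expression. The following lines restate the examples from the table in expression form (these are column engine expressions, not standalone SQL statements):

```sql
fixed(3.2, 8, 2) + fixed(2.3, 8, 3)         -- fixed-point arithmetic
date(2009, 1, 2)                            -- same as date('2009-01-02')
date(fixed(20000203135026.1234567, 10, 4))  -- date('2000-02-03 13:50:26')
```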
String functions are scalar functions that perform an operation on a string input value and return a string or
numeric value.
midstr  string  midstr(string, int, int)  Returns a part of the string starting at arg2, arg3 bytes long.
leftstr  string  leftstr(string, int)  Returns arg2 bytes from the left of arg1. If arg1 is shorter than the value of arg2, the complete string is returned.
rightstr  string  rightstr(string, int)  Returns arg2 bytes from the right of arg1. If arg1 is shorter than the value of arg2, the complete string is returned.
leftstru  string  leftstru(string, int)  Returns arg2 characters from the left of the string. If arg1 is shorter than arg2 characters, the complete string is returned.
rightstru  string  rightstru(string, int)  Returns arg2 characters from the right of the string. If arg1 is shorter than arg2 characters, the complete string is returned.
- trim(s) = ltrim(rtrim(s))
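Based on the definitions above, a few illustrative expressions follow. The input strings are hypothetical, and the byte-based results assume a single-byte encoding (use the *stru variants when working with multibyte characters):

```sql
leftstr('Hello World', 5)      -- 'Hello'
rightstr('Hello World', 5)     -- 'World'
midstr('Hello World', 7, 5)    -- 5 bytes starting at byte 7: 'World'
trim('  Hello  ')              -- 'Hello', same as ltrim(rtrim('  Hello  '))
```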
Scalar math functions perform a calculation, based on input values that are provided as arguments, and return
a numeric value.
time abs(time)
round(-123.456, 1) = -123.5
Date and time functions are scalar functions that perform an operation on a date and time input value and
returns either a string, numeric, or date and time value.
component  component(date, int)  The int argument may be in the range 1..6; the values mean year, month, day, hour, minute, second, respectively. If a component is not set in the date, the component function returns a default value: 1 for the month or the day, 0 for other components. You can also apply the component function to longdate and time types.
adddays(longdate, int)
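The component behavior described above can be illustrated as follows (expression sketches; the results follow the default rules stated in the description):

```sql
component(date('2009-01-02'), 1)   -- 2009 (year)
component(date('2009-01-02'), 2)   -- 1 (month)
component(date('2009'), 3)         -- 1 (day not set, so the default 1)
component(date('2009-01-02'), 4)   -- 0 (hour not set, so the default 0)
```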
The following table lists the miscellaneous functions that you can use while creating expressions.
Note
If the combination of cmp, value, and default is not given, the default value returned if no match is found is determined by the following:
• An odd number of parameters returns arg1 as the default. For example, case(arg1) or case(arg1, cmp1, value1) returns arg1 as the default.
• An even number of parameters returns the last value as the default. For example, case(arg1, x) or case(arg1, cmp1, value1, x) returns x as the default.
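The two default rules can be illustrated with hypothetical expressions (the column name "STATUS" is an assumption for illustration):

```sql
case("STATUS", 'A', 'Active', 'I', 'Inactive', 'Unknown')
-- even number of parameters: 'Unknown' is returned if no cmp matches
case("STATUS", 'A', 'Active')
-- odd number of parameters: "STATUS" itself is returned if no match
```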
max  max(arg1, arg2, arg3, ...)  Returns the maximum value of the passed argument list. An arbitrary number of arguments is allowed. Arguments must be at least convertible into a common type.  Example: max(0, 5, 3, 1)
min  min(arg1, arg2, arg3, ...)  Returns the minimum value of the passed argument list. An arbitrary number of arguments is allowed. Arguments must be at least convertible into a common type.  Example: min(1, 2, 3, 4)
The following table lists the supported spatial functions for expressions in the column engine language.
ST_Buffer  ST_Geometry  Returns the ST_Geometry value that represents all points whose distance from any point of a ST_Geometry value is less than or equal to a specified distance in the given units.
ST_Difference  ST_Geometry  Returns the geometry value that represents the point set difference of two geometries.
ST_Distance  ST_Geometry  Returns the distance between two geometries in the given unit, ignoring z- and m-coordinates in the calculations.
ST_Envelope  ST_Geometry  Returns the bounding rectangle for the geometry value.
ST_GeometryType  ST_Geometry  Returns the name of the type of the ST_Geometry value.
ST_Intersection  ST_Geometry  Returns the geometry value that represents the point set intersection of two geometries.
ST_IsEmpty  ST_Geometry  Determines whether the geometry value represents an empty set.
ST_SRID  ST_Geometry  Retrieves or modifies the spatial reference system associated with the geometry value.
ST_SRID(INT)  ST_Geometry  Changes the spatial reference system associated with the geometry without modifying any of the values.
ST_SymDifference  ST_Geometry  Returns the geometry value that represents the point set symmetric difference of two geometries.
ST_Transform  ST_Geometry  Creates a copy of the geometry value transformed into the specified spatial reference system.
ST_Union  ST_Geometry  Returns the geometry value that represents the point set union of two geometries.
Geometry Serialization
Geometry Transformation
Geometry Inspection
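In expressions, the spatial functions are invoked as methods on a geometry value. The following sketch assumes a geometry column "SHAPE" with SRID 4326 on a system with spatial support; the column name, SRID, and coordinates are illustrative assumptions only:

```sql
"SHAPE".ST_Distance(NEW ST_Point('POINT (8.64 49.29)', 4326), 'meter')
"SHAPE".ST_Envelope()          -- bounding rectangle of the geometry
"SHAPE".ST_IsEmpty()           -- 1 if the geometry is an empty set
```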
The following table lists the supported spatial predicates for expressions in the column engine language.
You use performance tracing to investigate why certain Modeler operations take a long time to complete. For example, you may have problems with the mass activation of objects and want to troubleshoot this problem.
Context
The performance trace contains the information about the total time spent in performing a specific operation,
which includes various function calls and their execution. The performance trace log file supports two formats:
• CSV: the log file contains entries in comma-separated format. Developers can use external tools such as Microsoft Excel to read the trace and troubleshoot the problem.
• Simple: the log file contains information in a simple, user-readable text format.
We recommend keeping the log file as small as possible by recording only the activity or action you want to troubleshoot. The recommended format for the log file is CSV.
Procedure
You use this procedure to enable an attribute search for an attribute used in a view. Various properties related
to attribute search are as follows:
• Freestyle Search: Set to True if you want to enable the freestyle search for an attribute. You can exclude
attributes from freestyle search by setting the property to False.
• Weights for Ranking: To influence the relevancy of items in the search results list, you can vary the
weighting of the attribute. You can assign a higher or lower weighting (range 0.0 to 1.0). The higher the
weighting of the attribute, the more influence it has in the calculation of the relevance of an item. Items
with a higher relevance are located higher up the search results list. Default value: 0.5.
Note
To use this setting the property Freestyle Search must be set to True.
• Fuzzy Search: This parameter enables the fault-tolerant search. Default: False.
• Fuzziness Threshold: If you have set the parameter Fuzzy Search to True, you can fine-tune the threshold for the fault-tolerant search between 0 and 1. Default: 0.8
Note
We recommend using the default values for Weights for Ranking and Fuzziness Threshold to start with.
Later on, you can fine-tune the search settings based on your experiences with the search. You can also
fine-tune the search using feedback collected from your users.
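At query time, these fuzzy search settings correspond to the CONTAINS predicate with a FUZZY score in SQL. The sketch below is an assumption for illustration; the package, view, and column names are hypothetical:

```sql
-- Fault-tolerant search on a view column with fuzziness threshold 0.8;
-- the misspelled term 'modler' can still match 'modeler'.
SELECT * FROM "_SYS_BIC"."mypackage/PRODUCTS_AT"
WHERE CONTAINS("PRODUCT_NAME", 'modler', FUZZY(0.8));
```

A lower threshold returns more, less exact matches; starting from the default 0.8 and adjusting based on user feedback matches the recommendation above.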
Configure tracing support for SAP HANA modeler to generate trace files, which you can use to report modeling-related issues to the SAP support team.
Context
If you want to configure logging and tracing, then execute the following steps:
Procedure
1. In SAP HANA studio, choose Window > Preferences > Tracing > SAP HANA Modeler.
2. Set the debug plug-in value to true.
3. Select the Output to file radio button.
4. In the Output to file field, provide the path to save the trace file.
5. Choose OK.
Note
We recommend stopping the trace recording as soon as you finish replicating and reporting the issue.
The job log displays information related to requests entered for a job. A job log consists of two tab pages as
follows:
Note
You can perform the following operations using the job log:
• Open Job Details: Use this to view the job summary in the current tab page.
• Open Job Log File: Use this to view the information pertaining to a job in detail using the internal
browser.
• Clear Log Viewer: Use this to delete all the jobs from the current tab page.
• Export Log File: Use this to export the log file to a target location other than the default location for
further reference.
You can check if there are any errors in an information object and if the object is based on the rules that you
specified as part of preferences. For example, the "Check join: SQL" rule checks that the join is correctly
formed.
Procedure
You use this procedure to arrange the data foundation and logical view layout, including user interface controls such as tables and attribute views, in a more readable manner. This functionality is supported for attribute views and analytic views.
Highlight related tables in Data Foundation: Use this option if you want to view only those tables that are related to a table selected in the editor.
1. In the editor, right-click the selected table.
2. From the context menu, choose Highlight related tables.

Display: Use this option if you have a table with a large number of columns in the editor, and you want to view them in a way that meets your needs: for example, only the table name, or only joined columns, or the expanded form with all the columns.
1. In the editor, right-click the relevant table.
2. From the context menu, choose Display.
3. If you want to view only the table name, choose Collapsed.
4. If you want to view all the columns of the table, choose Expanded.
5. If you want to view only the joined columns of the table, choose Joins only.

Show Complete Name: Use this option to view the complete name of a truncated column.
1. In the Scenario pane, choose a view node.
2. In the Details pane, choose the required input.
3. In the context menu, choose Show Complete Name.

Show Description: Use this option to view the column description.
1. In the Scenario pane, choose a view node.
2. In the Details pane, choose the required input.
3. In the context menu, choose Show Description.
Context
SAP HANA modeler supports search operations, which you can use for the following:
• Searching tables available in SAP HANA system to either view table definitions or to add them to your
information view editor.
• Searching models available in your SAP HANA system to either open the models in the view editor, or if you
want to add them to your analytic view or calculation view editors.
• Searching column views available in your SAP HANA system to either open column view definitions, or if
you want to add them to your calculation view editor.
Note
You can add matching search entities (tables, models, column views) to your information views only if your
view editor is open in edit mode and supports adding the search entity. For example, you cannot search for
an attribute view and add it to the view editor of another attribute view.
1. In the SAP HANA search bar, enter the name of the table you want to search.
2. Choose (Search).
3. Choose Tables or Models or Column Views tabs based on the search entity.
4. In the search list, select the required entity.
5. If you want to open a matching search entity, choose Open.
6. If you want to add a table to the information view, choose Add.
Note
Search in specific SAP HANA systems by selecting the required system from the dropdown list of
(Search).
This section describes how you can manage objects within the SAP HANA systems. In addition, it also
describes a few important functions that you can perform on these objects.
You activate objects available in your workspace to expose the objects for reporting and analysis.
• Activate and ignore the inconsistencies in affected objects - Activates the selected objects even if the activation results in inconsistent affected objects. For example, if you choose to activate an object A that is used by B and C, and the activation causes inconsistencies in B and C, you can still choose to go ahead with the activation of A. This is the default activation mode.
• Stop activation in case of inconsistencies in affected objects - To activate the selected objects only if there
are no inconsistent affected objects.
Note
If even one of the selected objects fails (either during validation or during activation), the complete
activation job fails and none of the selected objects is activated.
Depending on where you start the activation, redeployment or cascade activation, the behavior is as follows:
Quick View pane - Activation: a dialog box appears with a preselected list of all your inactive objects. Redeployment: a dialog box appears with a list of active objects in your workspace.
Package context menu - Activation: a dialog box appears with a preselected list of all your inactive objects. Redeployment: a dialog box appears with a list of active objects in your workspace.
Object context menu - Activation: a dialog box appears with a preselected list of the selected object along with all the required objects. Redeployment: a redeployment job is submitted for the selected object.
Note
• If an object is the only inactive object in the workspace, the activation dialog box is skipped and the
activation job is submitted.
• If an object is inactive and you want to revert to the active version, choose Revert To Active from the editor or object context menu.
• In the Activate dialog, you can select the Bypass validation checkbox to skip validation before activation
to improve the activation time. For example, if you have imported many objects and want to activate
them without spending time on validation.
Note
During delivery unit import, full server-side activation is enabled, and the objects are activated after the import. In this case all the imported objects are activated (moved to the active table), even if there are errors in activated or affected objects. However, the objects for which activation results in an error are considered broken or inconsistent objects, which means that the current runtime representation of these objects is not in sync with the active design-time version. The broken objects are shown in the Navigator view with an ‘x’ alongside.
Note
• The status (completed, completed with warnings, and completed with errors) of the activation job
indicates whether the activation of the objects is successful or failed.
• In case of failure, that is, when the status is completed with errors, the process is rolled back. This means that even if individual objects were activated successfully, the activation job is rolled back and none of the objects is activated.
• If you redeploy a repository view, all privileges that you granted for it are dropped. For the main view,
the system remembers the users and roles that had these privileges, and grants them again at the end
of the activation phase. However, the system does not support this for the hierarchy views. So after
activating each view, you must again grant the privileges for its hierarchy views. For more information,
see SAP Note 1907697 .
The following table describes the availability and behavior of take over and activate options for an object from
the view editor in the SAP HANA Modeler perspective.
Scenario 1: OBJ1, versions Inactive / Inactive / Inactive. Take over: Not Applicable. Activate: Allowed.
If an object has multiple inactive versions, and the object version in Modeler is also inactive, for example, through delivery unit import or another workspace in Project Explorer, the user can activate his own inactive object. After activation, the object is in scenario 2, as in the next row.
Note: If the logged-in user and the user to whom the object belongs are different, the activation is not allowed. For example, if the object is inactive in the SYSTEM user’s workspace and the MB user opens the object, the object opens in read-only mode, and the activation is not allowed.

Scenario 2: OBJ1, versions Inactive / Inactive / Active. Take over: Not Allowed. Activate: Not Allowed.
If an object has multiple inactive versions in the Project Explorer and the object version in Modeler is active, neither the activation nor the take-over option is enabled.

Scenario 3: OBJ1, versions Inactive / Active / Active. Take over: Allowed. Activate: Not Allowed.
If an object has a single inactive version in the Project Explorer, and the object version in Modeler is active, only the take-over option is enabled.

Scenario 4: OBJ1, versions Inactive / Active / Inactive. Take over: Not Applicable. Activate: Allowed.
If an object has inactive versions in the Project Explorer and Modeler, only the activation option is enabled.

Scenario 5: OBJ1, versions Active / Inactive / Active. Take over: Allowed. Activate: Not Allowed.
If an object has multiple active versions, such as one in the Project Explorer and one in the Modeler, only the take-over option is enabled.

Scenario 6: OBJ1, versions Active / Active / Inactive. Take over: Not Applicable. Activate: Allowed.
If an object has a single inactive version, and the object version in Modeler is inactive, only the activation option is enabled.

Scenario 7: OBJ1, versions Active / Inactive / Inactive. Take over: Not Allowed. Activate: Allowed.
If an object has a single active version, and the object version in Modeler is inactive, only the activation option is enabled.

Scenario 8: OBJ1, versions Active / Active / Active. Take over: Not Applicable. Activate: (Redeploy).
If an object has multiple active versions, and the object version in Modeler is active, only the take-over activation (redeploy) option is enabled.
You can copy an object in the SAP HANA Systems view and paste it to a required package.
Context
You must have write permissions on the target package where you are pasting the object. The copy-paste feature is supported for all Modeler objects, that is, attribute views, analytic views, calculation views, procedures, and analytic privileges. The object that is copied to the target package is always inactive, even if the source object is in an active state.
By default, the keyboard shortcuts for copy and paste are CTRL + C and CTRL + V, respectively. To enable these keyboard shortcuts, apply the Modeler keyboard shortcuts from Window > Preferences > General > Keys by selecting Modeler as the scheme.
1. In the SAP HANA Systems view, select an object and in the context menu, choose Copy.
Note
If you have applied the keyboard shortcuts then you can also press CTRL + C to copy an object.
2. Navigate to the package where you want to paste the object, and choose Paste.
Note
If you have applied the keyboard shortcuts then you can also press CTRL + V to paste an object.
If objects within an information view are missing, for example, if the objects or their references are deleted, the information view is referred to as a broken model. By using proxies, SAP HANA modeler helps you work with broken models and fix inconsistencies.
When you open broken models, the system displays red decorators for all missing objects, which are essential
to activate the information view.
Example
If you have defined an attribute view ATV1 on table T1 (C1, C2, C3) such that attributes A1, A2, A3 are defined on columns C1, C2, C3 respectively, and you then remove columns C2 and C3 from table T1, the attributes A2 and A3 become inconsistent. In such cases, the system injects proxies for the missing columns, and when you open the attribute view in the editor, the system displays a red decorator for C2 and C3 and an error marker for the inconsistent attributes.
Note
If the connection to the SAP HANA system is not available and you try to open a view, the system uses proxies for all required objects and opens the view in read-only mode. However, since the model is not broken, the red decorators and the error markers are not shown.
You can resolve inconsistencies in analytic views or attribute views or calculation views by performing one of
the following:
• Deleting the missing objects that the information view requires. This clears all references to the missing objects.
Note
The system logs inconsistencies within information view in the Problems view of SAP HANA Development
perspective.
For a selected object, identify all other objects that use this object. Checking object references is helpful while editing or deleting an object in a distributed development environment.
Context
For example, you can select an information view and identify all other objects, which use this information view,
or you can identify where you are using an input parameter within a calculation view.
Procedure
1. If you want to check the references of an information view, then perform the following substeps:
a. In the SAP HANA Systems view, expand a system node.
b. Expand the Content node.
c. Expand the required package node.
d. Select the required object.
e. In the context menu, choose Where-Used.
2. If you want to check the references of view elements like, input parameters, columns, then perform the
following substeps:
a. Select the element.
b. In the context menu, choose References.
Note
You can also check object references for elements in Output pane. Select an object, and choose
References from the context menu. In Details pane, you can select an element from Parameters/
Use this procedure to capture the details of an information model or a package in a single document. This helps
you view the necessary details from the document, instead of referring to multiple tables.
Context
The following table specifies the details that you can view from the document.
Type Description
Procedure
Option Description
Refactoring content objects restructures your content objects in the SAP HANA systems view (SAP HANA
modeler perspective) or the repository workspace (SAP HANA development perspective) without changing the
object behavior.
While refactoring the objects, the system automatically adjusts all object references. The modeler objects
available for refactoring are attribute views, analytic views, graphical calculation views, and analytic privileges.
Related Information
Refactoring objects means moving objects from one package to another within the SAP HANA Systems view,
without losing the behavior of these objects.
Context
The refactoring process deletes the object that you move from the source package, and creates an object with
the same name and behavior in the destination package.
The following table provides information on the activation status of objects before and after the refactoring
process:
Note
An impacted object is the one that uses or has references to the base object. For example, an analytic view
using an attribute view is an impacted object for that attribute view.
Procedure
3. Select the required objects and, in the context menu, choose Refactor > Move.
4. In the Move dialog box, select the target package.
5. Choose Next.
6. If you want to skip any of the refactor steps, then in the Changes to be performed pane, deselect the
required steps.
7. Choose Finish.
8. Assign changes
a. In the Select Change dialog box, either create a new ID or select an existing change ID that you want to
use to assign your changes.
b. Choose Finish.
For more information on assigning changes, see the SAP HANA Change Recording chapter of the SAP
HANA Developer Guide.
9. Choose Finish.
Refactoring objects in the SAP HANA Development perspective means moving modeler objects within or across
projects in the same repository workspace, without losing the behavior of these objects.
Context
You can refactor objects from both the Project Explorer and the Repositories view. The refactoring process
deletes the object that you move from the source package, and creates an object with the same name and
behavior in the destination package. If the object that you move has references to other objects, then the
refactoring process automatically updates all references in the affected objects residing in the same repository
workspace. You must manually activate all moved objects and their affected objects.
Note
You can use the keyboard shortcut Ctrl + Z to undo the refactoring process.
This section describes how you can import and use SAP BW objects, within the modeling environment, for
reporting purposes.
Additional information is available for using Multidimensional Expressions (MDX) in the SAP HANA Developer
Guide (for SAP HANA Studio). The link is included in Related Information.
Related Information
You can import SAP Business Warehouse (SAP BW) models, that is, SAP HANA-optimized InfoCubes,
standard DataStore objects, Query Snapshot InfoProviders, and InfoObjects of type Characteristic, into the
SAP HANA modeling environment.
Prerequisites
• You have implemented SAP Notes 1703061, 1759172, 1752384, 1733519, 1769374, 1790333, 1870119,
1994754, and 1994755.
• You have installed SAP HANA 1.0 SPS 05 Revision 50 or above.
• You have added the BW schema to the SQL privileges of the modeler user who imports the BW models.
• The _SYS_REPO user has the SELECT privilege with GRANT OPTION on the schema that contains the BW tables.
Context
Import SAP BW objects to expose them as SAP HANA models to the reporting tools.
Note
• You can only import those Standard DataStore objects that have SID Generation set to During
Activation.
Procedure
Note
To add new connection details, select the New Connection option from the Connection dropdown list. The
connection details are saved and are available as dropdown options on subsequent logons.
You can use an SAProuter string to connect to the SAP BW system over the internet. You can obtain the
SAProuter string of your SAP BW system from SAP Logon. In the SAP Logon screen, choose your SAP BW
system, then Edit > Connection.
7. Optional Step: Activate Secure Network Connections (SNC)
Select Activate Secure Network Connections and provide the SNC Name of your communication partner.
You can use SNC to encrypt the data communication paths between SAP HANA studio and your SAP BW
system. You can obtain the SNC name of your SAP BW system from SAP Logon. In the SAP Logon screen,
choose your SAP BW system, then Edit > Network.
8. Select the target system (an SAP BW on SAP HANA) to which you want to import the models, and choose
Next.
9. Select the BW InfoProviders that you want to import and expose as SAP HANA information models.
Remember
To import a Query Snapshot InfoProvider, make sure that the BW query is unlocked in transaction
RSDDB, and that an index is created via the same transaction, before it is used as an InfoProvider.
10. Select the target package where you want to place the generated models, and analytic privileges.
Note
Your package selection is saved for subsequent imports. The next time you open the wizard, the
previously selected package is displayed. You can, however, change the package into which you want to
import the objects.
11. If you want to import the selected models along with the display attributes for IMO Cube and Standard
DSO, select Include display attributes.
For InfoObjects, all the attributes are added to the output and joined to their text tables, if these exist.
Note
While importing your SAP BW models, the SAP HANA system imports the column labels of these
models in the language that you specify in its properties. However, if column labels are not maintained
in your SAP BW system in the language specified in your SAP HANA system properties, those column
labels appear as blank after the import. If you want to check the default language of your SAP HANA
system, then:
1. In the Systems View, select the SAP HANA system in which you are importing the models.
2. In the context menu, choose Properties.
3. In the Additional Properties tab, the Locale dropdown list specifies the language of objects that
you create in the SAP HANA repository.
Results
The generated information models and analytic privileges are placed in the package selected above. To view
the data of the generated models, you need to assign to the user the associated analytic privileges that are
generated as part of the model import. If these privileges are not assigned, the user is not authorized to view the data.
Related Information
If you select a DataStore object, the resultant SAP HANA model is an analytic view with the same name as that
of the DataStore object. If you select an InfoCube, two objects are created: analytic view and calculation view.
If you select an InfoObject Characteristic, the resultant SAP HANA model is an attribute view with the same
name as that of the InfoObject. Both Display and Navigational attributes are included in the generated attribute
view. If the selected characteristic contains time dependent attributes or time dependent text, then two
additional fields DATETO and DATEFROM have the following filters:
The filter value $$keydate$$ is a placeholder for an input parameter. When this attribute view is used in an
analytic view or calculation view, this parameter can be mapped to an input parameter of the same name in
the analytic or calculation view to filter data based on the key date. The input parameter in the analytic view
or calculation view should be named keydate.
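The runtime effect of the keydate filter can be sketched in plain Python. This is an illustration only, not SAP HANA code: the record layout, the field names, and the helper filter_by_keydate are invented here, and the sketch assumes the generated filters keep exactly those records whose DATEFROM/DATETO validity interval contains the key date.

```python
from datetime import date

# Invented sample of time-dependent attribute records, mimicking the
# DATEFROM/DATETO validity columns of a characteristic's attribute table.
records = [
    {"customer": "C1", "region": "EMEA",
     "datefrom": date(2010, 1, 1), "dateto": date(2012, 12, 31)},
    {"customer": "C1", "region": "APJ",
     "datefrom": date(2013, 1, 1), "dateto": date(9999, 12, 31)},
]

def filter_by_keydate(rows, keydate):
    """Keep only the rows whose validity interval contains the key date,
    mirroring the filter that is driven by the keydate input parameter."""
    return [r for r in rows if r["datefrom"] <= keydate <= r["dateto"]]

# With key date 2014-06-01, only the second validity interval applies.
print(filter_by_keydate(records, date(2014, 6, 1))[0]["region"])  # APJ
```

Mapping the attribute view's $$keydate$$ placeholder to an identically named input parameter of the consuming view corresponds to passing keydate into this filter.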
The SAP HANA Modeler imports BW analysis authorizations as analytic privileges. You can associate these
privileges with the InfoProviders or roles.
• You can choose to import only InfoProvider-specific analysis authorizations. In this case, for all the
authorization objects specific to the InfoProvider (having 0TCAIPROV = <InfoProvider name>), the
corresponding analytic privileges are generated. The name of the analytic privilege is the same as that
of the BW analysis authorization object.
• You can choose to import analysis authorizations associated with the BW roles for the InfoProviders. In
this case, all the analysis authorizations assigned to the selected roles are merged as one or more
analytic privileges. The name of the generated analytic privilege is <InfoProvider
name>_BWROLE_<number>, such as, MyCube_BWROLE_1.
These analysis authorizations set on the InfoProviders are applicable at runtime for reporting. For example,
consider a user who has the following authorizations in BW:

AO1
• 0CUSTOMER: 1000 - 2000
• 0PRODUCT: ABC*

AO2
• 0TCAIPROV: CUBE1, CUBE2
• 0TCAACTVT: 03 (display)
Note
• In the case of Query Snapshot, all the BW Analysis Authorization objects that are applicable for the
underlying InfoProvider of the query, will also be applicable for the Query Snapshot.
• These BW analysis authorization objects will be imported as analytic privileges when importing the
query snapshot.
You can choose to place the generated models and analytic privileges in any of the user-defined packages in
the import wizard, where you can enhance the generated models. However, with a subsequent import of the
same objects, these changes are overwritten. Also, changes made to the models on the BW side are not
automatically reflected in the generated models. This may lead to inconsistent generated models if the
underlying physical tables change. To avoid this, you need to reimport the models.
Caution
• The calculated key figures (CKFs) and restricted key figures (RKFs) defined on the SAP BW models are
not created for the generated SAP HANA models. In this case, you can create an RKF as a restricted
measure in the generated analytic view. For a CKF, you can create calculated measures in the generated
calculation view or analytic view. These CKFs and RKFs are retained during a subsequent import.
Additionally, the calculated attributes created on the generated analytic views (in the case of InfoCubes,
DSOs, and Query Snapshots) are also retained during a subsequent import. If a change is made to the
characteristics or key figures on which these restricted measures and calculated measures are based,
this may lead to inconsistency in the generated models. In this case, you need to manually
adjust these restricted measures and calculated measures.
• The hierarchies defined on the selected InfoObjects are not created for the generated SAP HANA
models. However, you can create calculated attributes and hierarchies on the generated attribute view.
These calculated attributes and hierarchies are not retained during a subsequent import.
• The BW analysis authorization objects are not always mapped 1:1 with the generated analytic privileges
on the SAP HANA Modeler side. If the BW Analysis Authorization object does not include 0TCAIPROV,
the authorization is not moved to SAP HANA. Also, restrictions created in the BW analysis
authorization are skipped if they do not match with the restrictions supported by the SAP HANA
Modeler. In such cases, the data available for reporting for an SAP HANA Modeler user differs from the
SAP BW user with the assigned restrictions.
• For a DSO generated analytic view, all the data in the active table is available for reporting.
• For an InfoCube generated calculation view, only successfully loaded requests are available for reporting
(these are the green requests in Manage InfoCube section).
Restriction
• The following features are not supported on the generated SAP HANA models:
• DSO without any key figure
• Currency and unit of measure conversion
This section describes how you can create and manage decision tables within the SAP HANA modeling
environment.
Related Information
Perform the migration activity to convert decision tables to different object types. Based on the decision table
definition, the modeler converts it to a suitable object type.
Context
Migrating decision tables is an optional step that SAP recommends; it is required if you want to move to the
XS advanced model.
Note
Procedure
Example
5. Select the generated artifacts and, in the context menu, choose Team > Activate.
Note
You use this procedure to create a decision table to model related business rules in a tabular format for
decision automation. You can use decision tables to manage business rules, data validation, and data quality
rules, without needing any knowledge of technical languages such as SQL Script or MDX. A data architect or a
developer creates the decision table and activates it. The active version of the decision table can be used in
applications.
Prerequisites
This task describes how to create a decision table. Before you start this task, note the following prerequisites:
Note
For more information about projects, repository workspaces, and sharing of projects, see Using
SAP HANA Projects in the SAP HANA Developer Guide for SAP HANA Studio.
You can create a decision table by using one of the following options:
• If you are in the SAP HANA Modeler perspective, perform the following steps:
1. In the SAP HANA Modeler perspective, expand <System Name> > Content > <Package Name>.
2. In the context menu of the package, choose New > Decision Table.
3. In the New Decision Table dialog box, enter a name and description for the decision table.
4. To create a decision table from scratch or from an existing decision table, perform the following
substeps:
Scenario: Create a decision table from an existing decision table
Substeps:
1. Choose Copy From.
2. Browse to the required decision table.
3. Choose Finish.
• If you are in the SAP HANA Development perspective, perform the following steps:
1. Go to the Project Explorer view in the SAP HANA Development perspective, and select the project.
2. In the context menu of the selected project, choose New > Other...
Note
You can also create a decision table from the File menu: choose New > Other...
3. In the popup wizard, open SAP HANA and expand Database Development > Modeler.
1. Select Decision Table.
Note
You can also search for the decision table directly by using the search box in the wizard.
2. Choose Next.
1. In the New Decision Table dialog, choose Browse to select the project under which you want
to create your decision table. Enter a name and description.
If the project is shared, the Package field specifies the package that is associated with the
project.
2. Choose Finish.
The decision table editor opens. It consists of three panes: Scenario, Details, and Output.
• The Scenario pane of the editor consists of the Decision Table and Data Foundation nodes. Selecting any of
these nodes shows the specific node information in the Details pane.
• The Details pane of the Data Foundation node displays the tables or information models used for defining
the decision table. The Details pane of the Decision Table node displays the modeled rules in tabular
format.
• The Output pane displays the vocabulary, conditions, and actions, and allows you to perform edit
operations. Expand the vocabulary node to display the parameters, attributes, and calculated attributes
sub-nodes. In the Output pane, you can also view properties of the selected objects within the editor.
Related Information
You can add tables, a table type, or an information view to the decision table in any of the following ways:
Procedure
1. In the Scenario pane, drag the required table or table type to the Data Foundation node.
Note
• In the SAP HANA Development perspective, you can view the physical tables and table types under
Catalog in the SAP HANA System Library. The information views are displayed under the package;
check out the views to use them in decision tables.
• In the SAP HANA Modeler perspective, you can view tables and table types under Catalog, while
information views are under Content in the package.
2. Hover over the Data Foundation node and choose the + icon next to the node to search for the object you
want to add.
3. In the context menu of the Scenario pane, choose Add Objects and search for the object you want to add.
4. In the Scenario pane, select the Data Foundation node. In the context menu of the Details pane, choose
Add... and search for the object you want to add.
• You can create a decision table by using an analytic view only if it has a calculated attribute.
• You can model a decision table on one table type or information view only, or on multiple physical
tables.
• You can mark table type columns and information view columns only as conditions and not as
actions. You can use only parameters as actions.
Remember
You can set the decision table property Mutually Exclusive to True or False. The value of this property is
set to True by default.
• If the value is set to True, the search stops as soon as any condition row is partially matched in the
decision table, even though there might be later rows that would have been fully matched.
• If the value is set to False, all the condition rows are checked from top to bottom, and the
action value is taken from the first fully matched row.
For example, consider a scenario where you would like to give a 10% discount on all cold drinks in the
summer season for the country India. The decision table has been modeled as follows:

Country   Season   Discount (in %)
India     Summer   10
India     Winter   9
Any       Any      8

If the Mutually Exclusive property is set to True, the discount on cold drinks is calculated as
follows:

Country   Season   Discount (in %)
India     Winter   9
India     Autumn   0

If the Mutually Exclusive property is set to False, the discount on cold drinks is calculated as
follows:

Country   Season   Discount (in %)
India     Winter   9
India     Autumn   8
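One way to model the two evaluation behaviors, consistent with the example above, is the following Python sketch. The evaluate function and the rule representation are invented for illustration; the actual SAP HANA engine logic may differ.

```python
# Decision rows: (country, season, discount); "Any" acts as a wildcard.
RULES = [
    ("India", "Summer", 10),
    ("India", "Winter", 9),
    ("Any",   "Any",    8),
]

def evaluate(rules, inputs, mutually_exclusive, default=0):
    """Return the action value for the given condition inputs."""
    if not mutually_exclusive:
        # Flat search: the first row whose every condition matches
        # (exactly or via "Any") supplies the action value.
        for *conds, action in rules:
            if all(c == "Any" or c == v for c, v in zip(conds, inputs)):
                return action
        return default
    # Mutually exclusive: narrow the rows condition by condition. Once a
    # specific value has matched, rows outside that block (including "Any"
    # fallbacks) are discarded, so a partial match can end the search
    # without producing an action value.
    candidates = rules
    for i, value in enumerate(inputs):
        exact = [r for r in candidates if r[i] == value]
        candidates = exact if exact else [r for r in candidates if r[i] == "Any"]
        if not candidates:
            return default
    return candidates[0][-1]

print(evaluate(RULES, ("India", "Winter"), mutually_exclusive=True))   # 9
print(evaluate(RULES, ("India", "Autumn"), mutually_exclusive=True))   # 0
print(evaluate(RULES, ("India", "Autumn"), mutually_exclusive=False))  # 8
```

With Mutually Exclusive set to True, the India/Autumn lookup never reaches the Any/Any fallback row, which reproduces the 0 result in the example.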
Procedure
1. If you want to model a decision table based on multiple tables, go to the Data Foundation node. In the
context menu of the Details pane, choose Create Join.
Note
You can also join two tables by linking a column of one table to a column in another table.
Procedure
1. In the Details pane of the Data Foundation node, select the required table column.
2. From the context menu of the table column, choose Add as Attribute.
Note
• Attributes contain a subset of columns that can be used as conditions, actions, and in calculated
attributes.
• To delete attributes from the Attributes node, choose Remove from the context menu of the Output
pane. However, you cannot delete the attributes that are already used as actions or conditions.
You can add different conditions and actions on the attributes present in the table.
Procedure
In the Output pane, expand the Attributes node and perform the following substeps:
• To add conditions from the attributes, select the required attributes and choose Add as Conditions
from the context menu.
• To add actions for the selected conditions, select the required attributes and choose Add as Actions
from the context menu.
Note
• To delete conditions and actions, choose Remove from the context menu of the Conditions/Actions
node in the Output pane.
• You can provide an alias name for a condition or an action by editing the value of the Alias name
property.
• You can choose to create parameters and use them as conditions or actions. If you are using
parameters as conditions, the values you provide for the parameters at runtime determine which rules
are followed when updating the action values. For more information on how to use parameters, see Use
Parameters in a Decision Table [page 260].
• You can arrange the condition and action columns of the decision table depending on how you want
them to appear. For more information, see Change Layout of a Decision Table [page 259].
Procedure
Note
The syntax of the In operator has different formats for enumerations, strings, and other data types.
Note
A range condition such as 20 - 30 includes the boundary values 20 and 30. Date conditions can also use
comparison operators, for example < 2012-12-12 or > 2012-12-12.
Note
• The And and Or operators are supported for all data types except the string data type.
• If the data type of the condition column is CHAR-based, you must put IN and the
associated value in quotation marks. This ensures that IN is not interpreted as an operator. For
example, “IN PROCESS” is a value, whereas IN PROCESS without quotation marks is interpreted as
the IN operator with the value PROCESS.
Note
• If a database table column is used as a condition, you can use the value help dialog to select
the condition values. You can select multiple values at one time. You can edit a condition value
by selecting the condition and entering a value.
• You can enter a pattern for condition values that have the data type VARCHAR. The pattern
must be prefixed with the LIKE or NOT LIKE operator, for example, LIKE a*b or NOT LIKE a?b.
If the LIKE or NOT LIKE operator is not present, the pattern is treated as a plain string.
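The wildcard semantics of these patterns can be approximated with Python's fnmatch module, whose * and ? wildcards mirror the pattern syntax described here. The helper matches_condition is invented for illustration and is not part of any SAP HANA API.

```python
from fnmatch import fnmatchcase

def matches_condition(value, condition):
    """Approximate the LIKE / NOT LIKE pattern handling described above:
    only values prefixed with LIKE or NOT LIKE are treated as patterns;
    anything else is compared as a plain string."""
    if condition.startswith("NOT LIKE "):
        return not fnmatchcase(value, condition[len("NOT LIKE "):])
    if condition.startswith("LIKE "):
        return fnmatchcase(value, condition[len("LIKE "):])
    return value == condition  # no operator: plain string comparison

print(matches_condition("alphab", "LIKE a*b"))   # True
print(matches_condition("axb", "NOT LIKE a?b"))  # False
print(matches_condition("a*b", "a*b"))           # True (literal comparison)
```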
Note
Note
You can use parameters and attributes of the same data type as that of the action or condition in
expressions. For example, if the data type of the condition is integer, then all parameters and
attributes of the integer data type can be used in the condition expression.
Note
If you do not provide a value for the search and choose Find, all the data corresponding to the
selected column is shown.
Note
You can export decision table data to an Excel sheet by using the context menu option Export Data
to Excel in the Details pane of the Decision Table node. You can also import decision table data from
Excel by using the context menu option Import Data from Excel in the Details pane of the
Decision Table node.
Procedure
1. To set the rules that you want to use for validation, do the following:
Note
In the Job Log section, you can see the validation status and detailed report of the decision table.
Note
In the SAP HANA Development perspective, only client-side validation occurs. However, in the SAP
HANA Modeler perspective, both client- and server-side validation occurs.
• If you are in the SAP HANA Modeler perspective, do the following as required:
• Save and Activate - Activates the current decision table.
• Save and Activate All - Activates the current decision table along with the required objects.
• If you are in the SAP HANA Development perspective, do the following:
1. In the Project Explorer view, select the required object.
2. From the context menu, select Team Commit .
3. From the context menu, select Team Activate .
Note
• You can choose to save and activate the view directly from the editor toolbar.
• The activation always triggers the validation check for the server-side rules. However, if you have
selected validation rules in the Preferences dialog box, then the client-side validation is also triggered.
Result: Upon successful activation, a procedure corresponding to the decision table is created in the _SYS_BIC
schema. The name of the procedure is in the format <package name>/<decision table name>. In addition, if a
parameter is used as an action in the decision table, the corresponding table type is created in the _SYS_BIC
schema. The name of the table type is in the format <package name>/<decision table name>/TT.
Remember
If parameters are used as conditions in a decision table, corresponding IN parameters are generated. Also,
if the parameters are used as actions, an OUT parameter is generated.
To execute the decision table procedure, perform the following steps as required. For example, if the decision
table is modeled on physical tables, call the generated procedure as follows:
call "<schema name>"."<procedure name>";
Remember
The order of the parameters while executing the procedure must be the same as in the Output panel, and
not as used in the decision table.
Tip
You can view the procedure name by using the Open Definition context menu option for the selected
procedure.
Result: Upon execution of the procedure, the physical table data is updated (if no parameters are used), based
on the data that you enter in the form of condition values and action values.
Remember
If parameters are being used as actions in a decision table, the physical table is not updated.
You use this procedure to change the decision table layout by arranging the condition and action columns. By
default, all the conditions appear as vertical columns in the decision table. You can choose to mark a condition
as a horizontal condition, and view the corresponding values in a row. The evaluation order of the conditions is
such that the horizontal condition is evaluated first, and then the vertical ones.
Note
You can only change the layout of a decision table if it has more than one condition. You can mark only one
condition as a horizontal condition.
Procedure
Note
4. Choose OK.
5. Save the changes.
Note
You can also set a condition as horizontal from the context menu of the condition in the Output pane.
You can also arrange the conditions and actions in the desired sequence in the Output pane by using
the navigation buttons in the toolbar.
Note
You can also arrange the sequence by using the navigation buttons at the top of the Output pane.
You use this procedure to create a parameter that can be used to simulate a business scenario.
Procedure
1. Create a Parameter
a. In the Output pane, select the Parameters node.
b. From the context menu, choose New, and enter a name and description.
c. Select the required data type from the dropdown list.
d. Enter the length and scale as required.
e. Choose the required Type from the dropdown list.
Note
If you have selected Static List for Type, choose Add in the List of Values section to add values. You
can also provide an alias for the enumeration value.
f. Choose OK.
2. Use Parameter as Condition or Action
a. In the Output pane, select the Parameters node.
b. From the context menu of the parameter, choose Add as Conditions/ Add as Actions.
Parameters are used to simulate a business scenario. You can use parameters as conditions and actions in the
decision table at design time. Parameters used as conditions determine the set of physical table rows to be
updated based on the parameter value that you provide at runtime during the procedure call. Parameters used
as actions simulate the physical table without updating it.
Static List: Use this type if the value of a parameter comes from a user-defined list of values.
Example
If you want to evaluate the discount based on the quantity and order amount, you can create two parameters:
Order Amount and Discount. Use Quantity and Order Amount as the conditions, and Discount as the action.
The sample decision table could look like this:

Quantity   Order Amount   Discount (in %)
>5         50000          10
>=10       100000         15
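A decision table with operator-prefixed condition cells like this one can be evaluated along the following lines. This is a plain-Python sketch: the cell parsing, the helper names, and the treatment of a bare number as an equality condition are assumptions made for illustration.

```python
import operator
import re

OPS = {">=": operator.ge, "<=": operator.le, ">": operator.gt,
       "<": operator.lt, "=": operator.eq}

def cell_matches(cell, value):
    """Evaluate one condition cell such as '>5' or '>=10' against a value;
    a bare number is treated as an equality condition."""
    m = re.match(r"\s*(>=|<=|>|<|=)?\s*(-?\d+)\s*$", cell)
    op, number = m.group(1) or "=", int(m.group(2))
    return OPS[op](value, number)

def discount(rules, quantity, amount, default=0):
    """Return the discount of the first rule whose condition cells both match."""
    for qty_cell, amount_cell, action in rules:
        if cell_matches(qty_cell, quantity) and cell_matches(amount_cell, amount):
            return action
    return default

RULES = [(">5", "50000", 10), (">=10", "100000", 15)]
print(discount(RULES, quantity=7, amount=50000))    # 10
print(discount(RULES, quantity=12, amount=100000))  # 15
```

As in the decision table itself, the rule order matters: the first matching row supplies the action value.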
You use this procedure to create calculated attributes that can be used as conditions in a decision table. You
can create a calculated attribute to perform a calculation using the existing attributes, parameters, and SQL
functions.
Procedure
Note
3. Choose OK.
4. Add the required calculated attribute as a condition.
This section describes how you can manage active and inactive versions of objects in the SAP HANA modeling
environment.
Related Information
You use this procedure to take over the ownership of the inactive version of an object from the workspace that
belongs to another user.
Prerequisites
Context
Objects that are in edit mode in other workspaces are not available for modification. To modify such objects,
you need to own the inactive object. The options for changing the ownership of an inactive object are as follows:
Option  Purpose
Switch Ownership  To take over multiple inactive objects from other users. Inactive objects that do not have
an active version are also available for takeover using this option.
Take Over  To take over a single inactive object from another workspace that you wish to edit using
the editor.
Note
Using this functionality, you can only own the inactive version of the object. The active version remains owned
by the user who created and activated the object.
1. If you want to own multiple inactive objects from other workspaces, do the following:
a. In the Quick View pane, choose Switch Ownership.
b. Select a system where you want to perform this operation.
c. In the Source User field, select the user who owns the inactive objects.
d. Add the required inactive objects to the Selected Models section.
e. Choose Finish.
2. If an object opens in read only mode and you want to edit it, do the following:
a. In the editor toolbar, select Switch Version.
b. Choose Take Over.
Note
You can choose to save the changes made by the other user (previous owner of the inactive
version) to the inactive version of the object.
You use this procedure to view the active version of an information object while working with its inactive version,
for example, to view the changes made to the active version.
Procedure
1. In the SAP HANA Modeler perspective, expand the Content node of the required system.
2. Select the required object from a package.
3. In the context menu, choose Open.
4. In the editor pane, choose Show Active Version.
5. Compare inactive and active versions of the object.
6. Choose OK.
You use this procedure to view the version details of an information model for tracking purposes.
1. In the Modeler perspective, expand the Content node of the required system.
2. Select the required object from a package.
3. From the context menu, choose History.
For information about the capabilities available for your license and installation scenario, refer to the Feature
Scope Description for SAP HANA.
Hyperlinks
Some links are classified by an icon and/or a mouseover text. These links provide additional information.
About the icons:
• Links with the icon : You are entering a Web site that is not hosted by SAP. By using such links, you agree (unless expressly stated otherwise in your
agreements with SAP) to this:
• The content of the linked-to site is not SAP documentation. You may not infer any product claims against SAP based on this information.
• SAP does not agree or disagree with the content on the linked-to site, nor does SAP warrant the availability and correctness. SAP shall not be liable for any
damages caused by the use of such content unless damages have been caused by SAP's gross negligence or willful misconduct.
• Links with the icon : You are leaving the documentation for that particular SAP product or service and are entering an SAP-hosted Web site. By using such
links, you agree that (unless expressly stated otherwise in your agreements with SAP) you may not infer any product claims against SAP based on this
information.
Example Code
Any software coding and/or code snippets are examples. They are not for productive use. The example code is only intended to better explain and visualize the syntax
and phrasing rules. SAP does not warrant the correctness and completeness of the example code. SAP shall not be liable for errors or damages caused by the use of
example code unless damages have been caused by SAP's gross negligence or willful misconduct.
Bias-Free Language
SAP supports a culture of diversity and inclusion. Whenever possible, we use unbiased language in our documentation to refer to people of all cultures, ethnicities,
genders, and abilities.
SAP and other SAP products and services mentioned herein as well as
their respective logos are trademarks or registered trademarks of SAP
SE (or an SAP affiliate company) in Germany and other countries. All
other product and service names mentioned are the trademarks of their
respective companies.