TIBCO EBX 5.9.0 Documentation - Advanced
Documentation
EBX Version 5.9.0
Introduction
CHAPTER 1
How EBX works
This chapter contains the following topics:
1. Product overview
2. EBX architecture
Data services help integrate EBX with third-party systems (middleware), by allowing external systems
to access data in the repository, or to manage dataspaces and workflows through web services.
See also
Data modeling [p 22]
Datasets [p 24]
Dataspaces [p 26]
Workflow modeling [p 27]
Data workflows [p 28]
Data services [p 29]
CHAPTER 2
Using the EBX user interface
This chapter contains the following topics:
1. Overview
2. Advanced perspective
3. Perspectives
4. User pane
5. User interface features
6. Where to find EBX help
2.1 Overview
The general layout of EBX workspaces is entirely customizable by a perspective administrator.
If several customized perspectives have been created, the tiles icon 'Select perspective' allows the user
to switch between available perspectives.
2.2 Advanced perspective
The advanced perspective is accessible by default.
Note
Unlike other perspectives, the advanced perspective can only be "hidden" in the
user interface so that users cannot select it themselves. It remains accessible
through explicit selection (for example, through a Web component).
• Header: Displays the avatar of the user currently logged in and the perspective selector. Clicking
on the user's avatar gives access to the user pane.
• Menu bar: The functional categories accessible to the current user.
• Navigation pane: Displays context-dependent navigation options. For example: selecting a table
in a dataset, or a work item in a workflow.
• Workspace: Main context-dependent work area of the interface. For example, the table selected
in the navigation pane is displayed in the workspace, or the current work item is executed in the
workspace.
The following functional areas are displayed according to the permissions of the current user: Data,
Dataspaces, Modeling, Data Workflow, Data Services, and Administration.
2.3 Perspectives
The EBX perspectives are highly configurable views with a target audience. Perspectives offer a
simplified user interface to business users and can be assigned to one or more profiles. This view is
split into several general areas, referred to as the following in the documentation:
• Header: Displays the avatar of the user currently logged in and the perspective selector (when
more than one perspective is available). Clicking on the user's avatar gives access to the user pane.
• Navigation pane: Displays the hierarchical menu as configured by the perspective administrator.
It can be expanded or collapsed to access relevant entities and services related to the user's activity.
• Workspace: Main context-dependent work area of the interface.
Perspectives are configured by authorized users. For more information on how to configure a
perspective, see perspective administration [p 358].
Favorite perspectives
When more than one perspective is available to a user, it is possible to define one as their favorite
perspective so that, when logging in, this perspective will be applied by default. To do so, an icon is
available in the perspective selector next to each perspective:
• A full star indicates the favorite perspective. Clicking it clears the favorite.
• An empty star indicates that the associated perspective is not the favorite. Clicking it
sets that perspective as the favorite.
Attention
The logout button is located on the user pane.
Avatar
An avatar can be defined for each user. The avatar consists of either a picture, defined using a URL
path, or two letters (the user's initials by default). The background color is set automatically and
cannot be modified. The image must be in a square format, but there is no size limitation.
Note
Avatars appear in the user pane, history and workflow interfaces.
The feature is also available through the Java API method UIComponentWriter.addUserAvatar.
The avatar layout can be customized in the 'Ergonomics and layout' section of the 'Administration'
area. It is possible to choose between the display of the avatar only, user name only, or to display both.
Density
Users can choose between the 'Compact' and 'Comfortable' display density modes. The display
mode can be modified from the user pane.
Context-sensitive help
When browsing any workspace in EBX, context-specific help is available by clicking on the question
mark located to the right side of the second header. The corresponding chapter from the product
documentation will be displayed.
When a permalink to the element is available, a link button appears in the upper right corner of the
panel.
CHAPTER 3
Glossary
This chapter contains the following topics:
1. Governance
2. Data modeling
3. Datasets
4. Data management life cycle
5. History
6. Workflow modeling
7. Data workflows
8. Data services
9. Cross-domain
3.1 Governance
repository
A back-end storage entity containing all the data managed by EBX. The repository is organized into
dataspaces.
See also dataspace [p 26].
profile
The generic term for a user or a role. Profiles are used in data workflows and for defining permission
rules.
See also user [p 21], role [p 22].
Related Java API: Profile.
user
An entity created in the repository in order for physical users or external systems to authenticate and
access EBX. Users may be assigned roles and have other account information associated with them.
See also user and roles directory [p 22], profile [p 21].
Related concept User and roles directory [p 373].
role
A user classification, used for permission rules and data workflows, which can be assigned to users.
Each user may belong to multiple roles.
Whenever a role profile is specified in EBX, the behavior resulting from that designation is applied
to all users that are members of that role. For example, in a workflow model, a role may be specified
when defining to whom work items are offered. As a result, all users belonging to that role can receive
the same work item offer.
See also user and roles directory [p 22], profile [p 21].
Related concept User and roles directory [p 373].
Related Java API: Role.
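For illustration, profiles can also be manipulated programmatically. The following is a minimal
sketch, assuming the EBX 5.x Java API classes Profile, Role, Session, and UserReference, and a
hypothetical 'validators' role; exact signatures should be checked against the Java API reference
for your version.

import com.orchestranetworks.service.Profile;
import com.orchestranetworks.service.Role;
import com.orchestranetworks.service.Session;
import com.orchestranetworks.service.UserReference;

public class ProfileExample {
    // Returns true if the session's user belongs to the hypothetical
    // 'validators' role defined in the directory.
    public static boolean isValidator(Session session) {
        Role validators = Profile.forSpecificRole("validators");
        return session.isUserInRole(validators);
    }

    // Builds a reference to a specific user, for example to assign
    // a work item programmatically.
    public static UserReference userJohn() {
        return Profile.forUser("john.doe");
    }
}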
administrator
A predefined role that has access to the technical administration and configuration of EBX.
user session
A repository access context that is associated with a user once that user has been authenticated
against the user and roles directory.
Related concept User and roles directory [p 373].
Related Java API: Session.
3.2 Data modeling
data model
A structural definition of the data to be managed in the EBX repository. A data model includes detailed
descriptions of all included data, in terms of organization, data types, and semantic relationships. The
purpose of data models is to define the structure and characteristics of datasets, which are instances
of data models that contain the data being managed by the repository.
See also dataset [p 24].
Related concept Data models [p 32].
field
A data model element that is defined with a name and a simple datatype. A field can be included in the
data model directly or as a column of a table. In EBX, fields can be assigned basic constraints, such as
length and size, as well as more complex validation rules involving computations. Automated value
assignment using field inheritance or computations based on other data can also be defined for fields.
Aggregated lists can be created by setting the cardinality of a field to allow multiple values in the
same record. Fields can be arranged into groups to facilitate structural organization in the data model.
By default, fields are denoted by a dedicated icon.
See also record [p 24], group [p 23], table (in data model) [p 23], validation rule [p 24],
inheritance [p 25].
Related concepts Structure elements properties [p 51], Controls on data fields [p 65].
Related Java API: SchemaNode.
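As an illustration of how field values are accessed programmatically, the following minimal sketch
reads a field value from a record through the Java API. It assumes an Adaptation instance obtained
elsewhere (for example, from a trigger or user service context) and a hypothetical './name' field path.

import com.onwbp.adaptation.Adaptation;
import com.orchestranetworks.schema.Path;

public class FieldReadExample {
    // Reads the value of the hypothetical 'name' field from a record.
    public static String readName(Adaptation record) {
        return record.getString(Path.parse("./name"));
    }
}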
primary key
A field or a composition of multiple fields used to uniquely identify the records in a table.
Primary keys are denoted by a dedicated icon.
Related concept Tables definition [p 459].
foreign key
A field or a composition of multiple fields in one table whose field values correspond to the primary
keys of another table. Foreign keys are used to reference records in one table from another table.
Foreign keys are denoted by a dedicated icon.
See also primary key [p 23].
Related concept Foreign key [p 464].
group
A classification entity used to facilitate the organization of a data model. A group can be used to
collect fields, other groups, and tables. If a group contains tables, the group cannot be included within
another table, as the constraint that tables cannot be nested must be respected. A group can be used to
create a reusable type based on the group's structure, which can then be used to create other elements
of the same structure in the data model.
Groups are represented by a dedicated icon.
reusable type
A shared simple or complex type definition that can be used to define other elements in the data model.
validation rule
An acceptance criterion defined on a field or a table. Data is considered invalid if it does not comply
with all imposed validation rules.
The former name (prior to version 5) of "validation rule" was "constraint".
3.3 Datasets
Main documentation section Datasets [p 106]
record
A set of field values in a table, uniquely identified by a primary key. A record is a row in the table.
Each record follows the data structure defined in the data model. The data model drives the data types
and cardinality of the fields found in records.
See also table (in dataset) [p 24], primary key [p 23].
The former name (prior to version 5) of "record" was "occurrence".
dataset
A data-containing instance of a data model. The structure and behavior of a dataset are based upon
the definitions provided by the data model that it is implementing. Depending on its data model, a
dataset contains data in the form of tables, groups, and fields.
Datasets are represented by a dedicated icon.
See also table (in dataset) [p 24], field [p 23], group [p 23], views [p 25].
Related concept Datasets [p 106].
The former name (prior to version 5) of "dataset" was "adaptation instance".
inheritance
A mechanism by which data can be acquired by default by one entity from another entity. In EBX,
there are two types of inheritance: dataset inheritance and field inheritance.
When enabled, dataset inheritance allows a child dataset to acquire default data values from its parent
dataset. This feature can be useful when designing a data model where data declared in a parent scope
will be used with the same value by default in nested child scopes. Values that are inherited from the
parent can be overridden by the child. By default, dataset inheritance is disabled. It can be enabled
during the data model definition.
Inheritance from the parent dataset is represented by a dedicated icon.
Field inheritance is defined in the data model to automatically fetch a field value from a record in
another table.
Inherited fields are represented by a dedicated icon.
Related concept Inheritance and value resolution [p 268].
views
A customizable display configuration that may be applied to viewing tables. A view can be defined
for a given user or role, in order to specify whether records are displayed in a tabular or hierarchical
format, as well as to set record filtering criteria.
The hierarchical view type offers a tree-based representation of the data in a table. Nodes in the tree can
represent either field values or records. A hierarchical view can be useful for showing the relationships
between the model data. When creating a view that uses the hierarchical format, dimensions can
be selected to determine the structural representation of data. In a hierarchical view, it is possible
to navigate through recursive relationships, as well as between multiple tables using foreign key
relationships.
See also
Views [p 113]
Hierarchies [p 115]
recommended view
A recommended view can be defined by the dataset owner for each target profile. When a user logs
in with no view specified, their recommended view (if any) is applied. Otherwise, the default view
is applied.
The 'Manage recommended views' action allows defining assignment rules for recommended views
depending on users and roles.
Related concept Recommended views [p 116].
favorite view
When displaying a table, the user can define the current view as their favorite through the
'Manage views' sub-menu.
Once it has been set as the favorite, the view will be automatically applied each time this user accesses
the table.
Related concept Manage views [p 117].
3.4 Data management life cycle
dataspace
A container entity comprised of datasets. It is used to isolate different versions of datasets or to
organize them.
Child dataspaces may be created based on a given parent dataspace, initialized with the state of the
parent. Datasets can then be modified in the child dataspaces in isolation from their parent dataspace
as well as each other. The child dataspaces can later be merged back into their parent dataspace or
compared against other dataspaces.
See also inheritance [p 25], repository [p 21], dataspace merge [p 26].
Related concept Dataspaces [p 88].
The former name (prior to version 5) of "dataspace" was "branch" or "snapshot".
reference dataspace
The root ancestor dataspace of all dataspaces in the EBX repository. As every dataspace merge must
consist of a child merging into its parent, the reference dataspace is never eligible to be merged into
another dataspace.
See also dataspace [p 26], dataspace merge [p 26], repository [p 21].
dataspace merge
The integration of the changes made in a child dataspace since its creation into its parent dataspace.
The child dataspace is closed after the merge has completed successfully. To perform a merge, all the
differences identified between the source dataspace and the target dataspace must be reviewed, and
conflicts must be resolved. For example, if an element has been modified in both the parent and child
dataspace since the creation of the child dataspace, the conflict must be resolved manually by deciding
which version of the element should be kept as the result of the merge.
Related concept Merge [p 96].
snapshot
A static copy of a dataspace that captures its state and all of its content at a given point in time for
reference purposes. A snapshot may be viewed, exported, and compared to other dataspaces, but it
can never be modified directly.
Snapshots are represented by a dedicated icon.
Related concept Snapshot [p 101].
The former name (prior to version 5) of "snapshot" was "version" or "home".
3.5 History
Main documentation section History [p 249]
historization
A mechanism that can be enabled at the table level to track modifications in the repository. Two history
views are available when historization is activated: table history view and transaction history view.
In all history views, most standard features for tables, such as export, comparison, and filtering, are
available.
Activation of historization requires the configuration of a history profile. The historization of tables
is not enabled by default.
See also table history view [p 27], transaction history view [p 27], history profile [p 27].
history profile
A set of preferences that specify which dataspaces should have their modifications recorded in the
table history, and whether transactions should fail if historization is unavailable.
See also historization [p 27].
3.6 Workflow modeling
workflow model
A procedural definition of operations to be performed on data. A workflow model describes the
complete path that the data must follow in order to be processed, including its states and associated
actions to be taken by human users and automated scripts.
Related concept Workflow models [p 130].
The former name (prior to version 5) of "workflow model" was "workflow definition".
Workflow models are represented by a dedicated icon.
script task
A data workflow task performed by an automated process, with no human intervention. Common
script tasks include dataspace creation, dataspace merges, and snapshot creation.
Script tasks are represented by a dedicated icon.
See also workflow model [p 27].
user task
A data workflow task that is made up of one or more work items performed concurrently by human
users. User task work items are offered or assigned to users, depending on the workflow model. The
progression of a data workflow beyond a user task depends on the satisfaction of the task termination
criteria defined in the workflow model.
User tasks are represented by a dedicated icon.
See also workflow model [p 27].
workflow condition
A decision step in a data workflow. A data workflow condition describes the criteria used to decide
which step will be executed next.
Workflow conditions are represented by a dedicated icon.
sub-workflow invocation
A step in a data workflow that pauses the current data workflow and launches one or more other data
workflows. If multiple sub-workflows are invoked by the same sub-workflow invocation step, they
will be executed concurrently, in parallel.
wait task
A step in a data workflow that pauses the current workflow and waits for a specific event. When the
event is received, the workflow is resumed and automatically goes to the next step.
data context
A set of data that may be shared between steps throughout a data workflow to ensure continuity
between steps.
workflow publication
An instance of a workflow model that has been made available for execution to users with the
appropriate permissions.
The former name (prior to version 5) of "workflow publication" was "workflow".
3.7 Data workflows
data workflow
An executed instance of a workflow model, which runs the data processing steps that are defined in
the model, including user tasks, script tasks, and conditions.
See also workflow model [p 27].
Related concept Data workflows [p 158].
The former name (prior to version 5) of "data workflow" was "workflow instance".
work list
A list of all published data workflows that the current user has the permissions to view. Users with
the permissions to launch data workflows do so from their 'Work List'. All outstanding work items
requiring action from the user appear under their published workflows in the work list. Additionally,
if the user is the administrator of data workflows, they are able to view the state of execution of those
data workflows in their 'Work List', and may intervene if necessary.
work item
An action that must be performed by a human user as a part of a user task.
Allocated work items are represented by a dedicated icon.
See also user task [p 28].
token
Tokens are used during data workflow management, and are visible to repository administrators.
3.8 Data services
data service
EBX shares master data using XML web services, in line with service-oriented architecture (SOA).
Since data services are generated directly from models or provided as built-in services, they give
access to part of the features available from the user interface.
Data services offer:
• a model-driven and built-in WSDL generator to build a communication interface. The WSDL can
be produced through the user interface or through the HTTP(S) connector for a client application.
XML messages are communicated to the EBX entry point.
• a SOAP connector, or entry point component for SOAP messages, which allows external systems
to interact with the EBX repository. This connector responds to requests conforming to the WSDL
produced by EBX and accepts all SOAP XML messages corresponding to the EBX WSDL
generator.
• a RESTful connector, or entry point for select operations, which allows external systems to query
the EBX repository. After authentication, it accepts the request defined in the URL and executes
it according to the permissions of the authenticated user.
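As a purely illustrative sketch of calling the RESTful entry point, the following plain Java client
performs an authenticated HTTP GET. The URL is hypothetical; the actual path depends on your
deployment and on the dataspace, dataset, and table being queried.

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RestSelectExample {
    public static void main(String[] args) throws IOException {
        // Hypothetical select URL; adapt the host, port, and resource path.
        URL url = new URL(
            "http://localhost:8080/ebx-dataservices/rest/data/v1/myDataspace/myDataset/root/MyTable");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // EBX authenticates the caller and applies that user's permissions.
        String credentials = Base64.getEncoder()
                .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + credentials);
        try (InputStream in = conn.getInputStream()) {
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
    }
}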
lineage
A mechanism by which access rights profiles are implemented for data services. Access rights profiles
are then used to access data via WSDL interfaces.
Related concept: Generating a WSDL for lineage [p 183].
3.9 Cross-domain
node
A node is an element of a tree view or a graph. In EBX, 'Node' can carry several meanings depending
on the context of use:
• In the workflow model [p 27] context, a node is a workflow step or condition.
• In the data model [p 22] context, a node is a group, a table or a field.
• In the hierarchy [p 25] context, a node represents a value of a dimension.
• In an adaptation tree [p 25], a node is a dataset.
• In a dataset [p 24], a node is the node of the data model evaluated in the context of the dataset
or the record.
Data models
CHAPTER 4
Introduction to data models
This chapter contains the following topics:
1. Overview
2. Using the Data Models area user interface
4.1 Overview
What is a data model?
The first step towards managing data in EBX is to develop a data model. The purpose of a data model
is to provide the detailed structural definition of the data that will be managed in the repository, in
terms of organization, data types, and semantic relationships.
In order to implement a data model in the repository, you will first create a new data model, then
define the details of the structure of its component table, field, and group elements, as well as their
behavior and properties. When you have completed the entry or import of your data model structure in
the repository, you will publish it to make it available for use by datasets. Once you have a publication
of your data model, you and other users can create datasets based upon it to contain the data that is
managed by the EBX repository.
4.2 Using the Data Models area user interface
Note
This area is available only to authorized users in the 'Advanced perspective'.
Included data models: Defines the data models included in the current model. All types defined in
included data models can be reused in the current model.
Java bindings: The bindings specify which Java types are to be generated from the model.
Component library: Defines the Java components available in the model. These provide
programmatic features for the model, such as programmatic constraints, functions, and UI beans.
User services: Declares the user services using the API available before release 5.8.0. From release
5.8.0, it is advised to use the new UserService API (these services are directly registered through the
Java API, hence no declaration is required for them in the data model assistant).
Data services: Specifies the WSDL operation suffixes that allow referring to a table in data service
operations using a unique name instead of its path.
Replications: Defines the replication units of the data model. A replication unit allows the replication
of a source table in the relational database, so that external systems can access this data by means of
plain SQL requests and views.
Data structure: The structure of the data model. Defines the relationships between the elements of
the data model and provides access to the definition of each element.
Simple data types: Simple reusable types defined in the current data model.
Complex data types: Complex reusable types defined in the current data model.
Included simple data types: Simple reusable types defined in an included external data model.
Included complex data types: Complex reusable types defined in an included external data model.
See also
Implementing the data model structure [p 45]
Configuring the data model [p 39]
Reusable types [p 47]
Related concepts
Dataspaces [p 88]
Datasets [p 106]
CHAPTER 5
Creating a data model
This chapter contains the following topics:
1. Creating a new data model
2. Selecting a data model type
Semantic models
Semantic models enable the full use of the data management features provided by EBX, such as life
cycle management using dataspaces. This is the default type of data model.
Relational models
Relational models are used when the tables created from a data model will eventually be mapped to
a relational database management system (RDBMS). The primary benefit of using a relational model
is the ability to query the tables created from the data model using external SQL requests. However,
employing a relational model results in the loss of functionalities in EBX such as inheritance, multi-
valued fields, and the advanced life cycle management provided by dataspaces.
Note
A relational data model can only be used by a single dataset, and the dataspace containing
the dataset must also be declared as being relational.
See also
Relational mode [p 243]
Dataspaces [p 88]
CHAPTER 6
Configuring the data model
This chapter contains the following topics:
1. Information associated with a data model
2. Permissions
3. Data model properties
4. Included data models
5. Data services
6. Replication of data to relational tables
7. Add-ons used by the data model
Note
This area is available only to authorized users in the 'Advanced perspective'.
6.1 Information associated with a data model
Unique name: The unique name of the data model. This name cannot be modified once the data
model has been created.
Owner: Specifies the data model owner, who will have permission to edit the data model's
information and define its permissions.
Localized documentation: Localized labels and descriptions for the data model.
6.2 Permissions
To define the user permissions on your data model, select 'Permissions' from the data model 'Actions'
[p 33] menu for your data model in the navigation pane.
The permission options for a data model are identical to those for a dataset, as explained in
Permissions [p 121].
6.3 Data model properties
Module name: Defines the module that contains the resources used by this data model. This is also
the target module used by the data model publication when publishing to a module.
Module path: Physical location of the module on the server's file system.
Sources locations: The source paths used when configuring Java components in the 'Component
library'. If a path is relative, it is resolved using the 'Module path' as the base path.
Dataset inheritance: Specifies whether dataset inheritance is enabled for this data model. Dataset
inheritance is disabled by default. See Dataset inheritance [p 125] for more information.
Enable user services (old API): Specifies whether user services using the API available before
release 5.8.0 can be declared. If 'No', the section 'Configuration > User services' is not displayed
(unless at least one service has already been declared in this section). From release 5.8.0, it is advised
to use the new UserService Java API (these services are directly registered through the Java API,
hence no declaration is required in the data model assistant). See UserServiceDeclaration in the Java
API for more information.
6.5 Data services
It is possible to refer to tables in Data Service operations using unique names instead of their paths
by defining suffixes for WSDL operations. A WSDL suffix is the association between a table path
and a name.
To define a WSDL suffix through the user interface, create a new record in the 'Data services' table
under the data model configuration in the navigation pane. A record of this table defines the following
properties:
Table path: Specifies the path of the table in the current data model that is to be referred to by the
WSDL operation suffix.
WSDL operation suffix: This name is used to suffix all the operation names of the concerned table.
If undefined for a given table, the last element of the table path is used instead. This name must be
unique in the context of this data model.
6.6 Replication of data to relational tables
A replication unit associates a set of replicated tables with a particular dataset in a given dataspace.
A single replication unit can cover multiple tables, as long as they are in the same dataset. A
replication unit defines the following information:
Aggregated lists: Specifies the properties of the aggregated lists in the table that are replicated to
the database.
Path: Specifies the path of the aggregated list in the table that is to be replicated to the database.
Table name in database: Specifies the name of the table in the database to which the data of the
aggregated list will be replicated. This name must be unique among all replication units.
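For illustration, once a replication unit is active, external systems can read the replicated table with
plain SQL. The following JDBC sketch is hypothetical: the connection settings and the replica table
name 'CUSTOMER_REPLICA' depend entirely on your database and replication unit configuration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReplicaQueryExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection; use the data source of the EBX repository.
        try (Connection cnx = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/ebx", "reader", "secret");
             Statement stmt = cnx.createStatement();
             // 'CUSTOMER_REPLICA' is the hypothetical 'Table name in database'
             // declared in the replication unit.
             ResultSet rs = stmt.executeQuery("SELECT * FROM CUSTOMER_REPLICA")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}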
CHAPTER 7
Implementing the data model structure
To work with the structural definition of your data model, select the data model you are working with
in the navigation pane.
You can then access the structure of your data model in the navigation pane under 'Data structure', to
define the structure of fields, groups, and tables.
This chapter contains the following topics:
1. Common actions and properties
2. Reusable types
3. Data model element creation details
4. Modifying existing elements
Add a new element relative to any existing element in the data structure by clicking the down arrow
to the right of the existing entry, and selecting an element creation option from the menu. Depending
on whether the existing element is a field, group, or table, you have the choice of creating the new
element as a child of the existing element, or before or after the existing element at the same level.
You can then follow the element creation wizard to create the new element.
Note
The element root is always added upon data model creation. If this element must be
renamed, it can be deleted and recreated with a new name.
Note
If you duplicate a primary key field, the properties of the field are maintained, but the
new field is not automatically added to the primary key.
Moving elements
To reorder an element within its current level of the data structure, click the down arrow
corresponding to its entry and select 'Move'. Then, select the left-arrow button corresponding to the
field before which you want to move the current element.
Note
It is not possible to move an element to a position outside of its level in the data structure.
Note
If you modify the definition of a reusable type in the 'Simple data types' or 'Complex
data types' section, you modify the structure of all elements based on that reusable
type. The structure of a group or table using a reusable type is shown as read-only. To
edit the structure of the associated reusable type, you have to access the type from the
'Simple data types' or 'Complex data types' section.
Note
As the names of data types must be unique across all locally defined as well as all included
types, you cannot create new reusable types with the same name as a data type in an
included data model. Similarly, you cannot include an external data model that defines a
data type with the same name as a locally defined reusable type or a data type in another
included data model.
Included data types appear in the sections 'Included simple data types' and 'Included complex data
types' in the navigation panel. You can view the details of these included reusable types; however,
they can only be edited locally in their original data models.
See Included data models [p 41] for more information.
Creating tables
While creating a table, you have the option to create the new table based on an existing reusable type.
See Reusable types [p 47] for more information.
Every table requires specifying at least one primary key field, which you can create as a child element
of the table from the navigation pane.
Creating groups
While creating a group, you have the option to create the new group based on an existing reusable
type. See Reusable types [p 47] for more information.
Creating associations
An association allows defining semantic links between tables. You can create an association by
creating it as a child element under the table's entry in the 'Data structure' tree and by selecting
'association' in the form for creating a new element. An association can only be defined inside a table.
It is not possible to convert an existing field to an association.
When creating an association, you must specify the type of association. Several options are available:
• Inverse relationship of a foreign key. In this case, the association element is defined in a source
table and refers to a target table. It is the counterpart of the foreign key field, which is defined in
the target table and refers back to the source table. You must define the foreign key that references
the parent table of the association.
• Over a link table. In this case, the association element is defined in a source table and refers
to a target table that is inferred from a link table. This link table defines two foreign keys: one
referring to the source table and another one referring to the target table. The primary key of the
link table must also refer to auto-incremented fields and/or the foreign key to the source or target
table of the association. You must define the link table and these two foreign keys.
• Using an XPath predicate. In this case, the association element is defined in a source table and
refers to a target table that is specified using a path. An XPath expression is also defined to specify
the criteria used to associate a record of the current table to records of the target table. You must
define the target table and an XPath expression.
In all types of association, we call associated records the records in the target table that are
semantically linked to records in the source table.
Once you have created an association, you can specify additional properties. For an association, it is
then possible to:
• Filter associated records by specifying an additional XPath filter. Only fields from the source
and the target table can be used when defining an XPath filter; that is, for an association over
a link table, it is not possible to use fields of the link table in the XPath filter. You can use
the available wizard to select the fields that you want to use in your XPath filter.
• Configure a tabular view to define the fields that must be displayed in the associated table. It is
not possible to configure or modify an existing tabular view if the target table of the association
does not exist. If a tabular view is not defined, all columns that a user is allowed to view according
to the granted access rights are displayed.
• Define how associated records are to be rendered in forms. You can specify that associated records
are to be rendered either directly in the form or in a specific tab. By default, associated records
are rendered in the form at the position of the association in the parent table.
• Hide or show associated records in the data service 'select' operation. By default, associated
records are hidden in this operation.
• Specify the minimum and maximum numbers of associated records that are required. In associated
datasets, a validation message of the specified severity is added if an association does not comply
with the required minimum or the maximum numbers of associated records. By default, the
required minimum and the maximum numbers of associated records are not restricted.
• Add validation constraints using XPath predicates to restrict associated records. It is only possible
to use fields from the source and the target table when defining an XPath predicate. That is, if
it is an association over a link table it is not possible to use fields of the link table in the XPath
predicate. You can use the available wizard to select the fields that you want to use in your XPath
predicate. In associated datasets, a validation message of the specified severity is added when an
associated record does not comply with the specified constraint.
CHAPTER 8
Properties of data model elements
After the initial creation of an element, you can set additional properties in order to complete its
definition.
Maximum number of values: Maximum number of values for an element. When set to a value
greater than '1', the element becomes multi-valued.
As primary keys cannot be multi-valued, they must have this property set to '1' or 'undefined'.
For tables, the maximum number of values is automatically set to 'unbounded' upon creation. The
maximum number of values is automatically set to '0' when defining the field as a selection node.
Validation rules: This property is available for tables and for fields in tables, except Password fields,
reusable types, fields in complex reusable types, and selection nodes. Used to define powerful and
complex validation rules with the help of the provided XPath 1.0 criteria editor.
See Criteria editor [p 291] for more information.
This can be useful if the validation of the value depends on complex criteria or on the values of
other fields.
It is also possible to indicate that a rule defines a verification for a value that is mandatory under
certain circumstances; in this case, a value is mandatory if the rule is not satisfied. See Constraint
on 'null' values [p 487] for more information.
Using the associated wizard, you can define localized labels for the validation rule, as well as a
localized message with a severity to be displayed if the criteria are not met.
When defining the severity of the validation message, it is possible to indicate whether an input that
would violate the validation rule is rejected when submitting a form. This error management policy
is only available on validation rules defined on a field and when the severity is set to 'error'. If the
validation rule must remain valid, any input that would violate the rule is rejected and the values
remain unchanged. If errors are allowed, input that would violate the rule is accepted and the values
change. If not specified, the validation rule blocks errors upon form submission by default.
If a validation rule is defined on a table, it is considered a 'constraint on table' and each record of the
table is evaluated against it at runtime. See Constraints on table [p 487] for more information.
Default value: Default value assigned to this field. In new data creation forms, the default value
appears automatically in the user input field. The default value must comply with the defined type
of the field.
See Default value [p 503] for more information.
Default view and tools > Visibility: Specifies whether or not this element is shown in the default
view of a dataset, in the text search of a dataset, or in the data service 'select' operation.
• Model-driven view: Specifies whether or not the current element is shown in the default tabular
view of a table, in the default record form of a table, and, if the current element is a table, in the
default view of a dataset. The default dataset view, tabular view, and record form are generated
from the structure of the data model. If the current element is inside a table, setting the property
to 'Hidden' hides the element from the default tabular view and default record form of the table
without having to define specific access permissions. The current element is still displayed in the
view configuration wizard, so that a custom view displaying this element can be created. If the
current element is a table, setting the property to 'Hidden' hides the table from the default view
of a dataset without having to define specific access permissions. This property is ignored if it is
set on an element that is neither a table nor in a table.
• All views: Specifies whether or not the current element is shown in all views of a table in a dataset.
Setting the property to 'Hidden in all views' hides the element in all views of the table, whether
tabular (default tabular view included) or hierarchical, without having to define specific access
permissions. The current element is also hidden in the view configuration wizard; that is, it is
not possible to create a custom view that displays this element. This property is ignored if it is
set on an element that is not in a table. This property is not applied to forms; that is, setting the
property to 'Hidden in all views' does not hide the element in a record form, only in views.
• Search tools: Specifies whether or not the current element is shown in a dataset search tool. Setting
the property to 'Hidden in all searches' hides the element in both the text and typed search tools
of a dataset. Setting the property to
Default view and tools > Widget: Defines the widget to be used. A widget is an input component
displayed in forms in associated datasets. If undefined, a default widget is displayed in associated
datasets according to the type and properties of the current element. It is possible to use a built-in
widget or a custom widget. A custom widget is defined using a Java API that allows the development
of rich user interface components for fields or groups. Built-in and custom widgets cannot be defined
on a table or an association. It is forbidden to define both a custom widget and a UI bean, and to
define both a custom widget and a combo-box selector on a foreign key field.
See UIWidgetFactory in the Java API for more information.
Default view and tools > Combo-box selector: Specifies the name of the published view that will
be used in the combo-box selection of the foreign key. A selection button is displayed at the bottom
right corner of the drop-down list. When defining a foreign key, this feature gives access to an
advanced selection view through the 'Selector' button, from where sorting and searching options
can be used. If no published view is defined, the advanced selection view is disabled. If the path
of the referenced table is absolute, only the published views corresponding to this table will be
displayed. If the path of the referenced table is relative, all the published views associated with the
data model containing the target table will be displayed. This property can only be set if no custom
widget is defined.
See Defining a view for the combo box selector of a foreign key [p 507] in the Developer Guide.
UI bean
Transformation on export: This property is available for fields and for groups that are terminal
nodes. Specifies a Java class that defines transformation operations to be performed when exporting
an associated dataset as an archive. The input of the transformation is the value of this element.
See NodeDataTransformer in the Java API for more information.
Access properties: Defines the access mode for the current element, that is, whether its data can be
read and/or written.
• 'Read & Write' corresponds to the mode RW in the data model XSD.
• 'Read only' corresponds to the mode R- in the data model XSD.
• 'Not a dataset node' corresponds to the mode CC in the data model XSD.
• 'Non-terminal node' corresponds to the mode -- in the data model XSD.
See Access properties [p 503] in the Developer Guide.
Comparison mode: Defines the comparison mode associated with the element, which controls how
its differences are detected in a dataset.
• 'Default' means the element is visible when comparing associated data.
• 'Ignored' implies that no changes will be detected when comparing two versions of modified
content (records or datasets).
During a merge, the data values of ignored elements are not merged even if the content has been
modified; however, values of ignored datasets or records created during the operation are merged.
During an archive import, values of ignored elements are not imported when the content has been
modified; however, values of ignored datasets or records created during the operation are imported.
Apply last modifications policy: Defines whether this element must be excluded from the service
that applies the last modifications performed on a record to the other records of the same table.
• 'Default' means that the last modification on this element can be applied to other records.
• 'Ignored' implies that the last modification on this element cannot be applied to other records.
This element will not be visible in the apply last modifications service.
See Apply last modifications policy [p 508] in the Developer Guide.
Note
A value is considered mandatory if the 'Minimum number of values' property is set to
'1' or greater. For terminal elements, mandatory values are only checked in activated
datasets. For non-terminal elements, the values are checked regardless of whether the
dataset is activated.
Trim whitespaces
Implements the property osd:trim. This property is used to indicate whether leading and trailing white
spaces must be trimmed upon user input. If this property is not set, leading and trailing white spaces
are removed upon user input.
See Whitespace handling upon user input [p 492] in the Developer Guide.
UI bean
See Common advanced properties [p 55].
Disable validation
Specifies if the constraints defined on the field must be disabled. This property can only be defined
on computed fields. If true, cardinalities, simple and advanced constraints defined on the field won't
be checked when validating associated datasets.
Transformation on export
See Common advanced properties [p 56].
Access properties
See Common advanced properties [p 56].
Auto-increment
This property is only available for fields of type 'Integer' that are contained in a table. When set, the
value of the field is automatically calculated when a new record is created. This can be useful for
primary keys, as it generates a unique identifier for each record. Two attributes can be specified:
Disable auto-increment checks: Specifies whether to disable the check of the auto-incremented
field value in associated datasets against the maximum value in the table being updated.
That is, the uniqueness of the allocation spans all datasets based upon this data model, in any
dataspace in the repository. The uniqueness across different dataspaces facilitates the merging of
child dataspaces into parent dataspaces while reasonably avoiding conflicts when a record's primary
key includes the auto-incremented value.
Despite this policy, a specific limitation exists when a mass update transaction assigning specific
values is performed concurrently with a transaction that allocates an auto-incremented value on
the same field. It is possible that the latter transaction will allocate a value that has already been
set in the former transaction, as there is no locking between different dataspaces.
See Auto-incremented values [p 495] in the Developer Guide.
Default view
See Common advanced properties [p 54].
Node category
See Common advanced properties [p 57].
Inherited field
Defines a relationship from the current field to a field in another table in order to automatically fetch
its field value.
Source element: XPath of the element in the source record from which to inherit this field's value.
The source element must be terminal, belong to the record described by 'Source record', and its
type must match the type of this field. This property is mandatory when using field inheritance.
Table
Primary key: A list of fields in the table that compose the table's primary key. You can add or
remove primary key fields here, as in the 'Data structure' view.
Each primary key field is denoted by its absolute XPath notation that starts under the table's root
element.
If there are several elements in the primary key, the list is white-space delimited. For example,
"/name /startDate".
Presentation > Record labeling: Defines the fields that provide the default and localized labels for
records in the table.
Can also specify a Java class to set the label programmatically, or to set the label in a hierarchy.
This Java class must implement the UILabelRenderer interface of the Java API.
Presentation > Default rendering for groups in forms: Specifies the default display rendering
mode of the groups contained in this table. If nothing is defined here, the default policy set in the
Administration area will be used to display groups.
See Record form: rendering mode for nodes [p 364] in the Administration Guide.
Enabled rendering for groups
Specifies the display rendering modes to be enabled for groups in the table, in addition to the modes
'expanded' and 'collapsed', which are always available. Tabs must be enabled on the table to have
the option to display groups as tabs. Similarly, links must be enabled to have the option to display
groups as links.
Default rendering for groups
Specifies the default display rendering mode to use for the groups contained in this table. If a group
does not specify a default mode, the default mode defined for this table is used. Links must be
enabled to define the default rendering mode as 'Link'. Select a rendering mode according to network
and browser performance; link mode is lighter.
Presentation > Specific rendering of forms: Defines a specific rendering for customizing the
record form in a dataset.
See UIForm and UserServiceRecordFormFactory in the Java API for more information.
Uniqueness constraints
Indicates which field or set of fields must be unique across the table.
Triggers
Specifies Java classes that define methods to be automatically executed when modifications are
performed on the table, such as record creation, update, or deletion.
A built-in trigger for starting data workflows is included by default.
See Triggers [p 494] in the Developer Guide.
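As an illustration, a table trigger is a Java class such as the following minimal sketch; it assumes
the EBX 5.x trigger API, so class and method names should be checked against the Java API
reference for your version.

import com.orchestranetworks.schema.trigger.AfterCreateOccurrenceContext;
import com.orchestranetworks.schema.trigger.TableTrigger;
import com.orchestranetworks.schema.trigger.TriggerSetupContext;
import com.orchestranetworks.service.OperationException;

public class AuditTrigger extends TableTrigger {
    @Override
    public void setup(TriggerSetupContext context) {
        // No configuration to validate in this example.
    }

    @Override
    public void handleAfterCreate(AfterCreateOccurrenceContext context)
            throws OperationException {
        // Log every record creation; the context gives access to the
        // record that was just created.
        System.out.println("Record created: "
                + context.getAdaptationOccurrence().toXPathExpression());
    }
}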
Access properties
See Common advanced properties [p 56].
Default view
See Common advanced properties [p 54].
Node category
See Common advanced properties [p 57].
UI bean
See Common advanced properties [p 55].
Transformation on export
See Common advanced properties [p 56].
Access properties
See Common advanced properties [p 56].
Default view
Rendering in forms: Defines the rendering mode of this group. If this property is not set, the default
view for groups specified by the containing table is used. 'Tab' and 'Link' are each available only
when the containing table enables them.
Tab position
This attribute specifies the position of the tab with respect
to all the tabs defined in the model. This position is used for
determining tab order. If a position is not specified, the tab
will be displayed according to the position of the group in
the data model.
Node category
See Common advanced properties [p 57].
CHAPTER 9
Data validation controls on elements
After the initial creation of an element, you can set additional controls in order to complete its
definition.
Fixed length: The exact number of characters required for this field.
Minimum length: The minimum number of characters allowed for this field.
Maximum length: The maximum number of characters allowed for this field.
Pattern: A regular expression pattern that the value of the field must match. It is not possible to
simultaneously define a pattern for both a field and its data type.
Decimal places: The maximum number of decimal places allowed for this field.
Maximum number of digits: The maximum total number of digits allowed for this integer or
decimal field.
Greater than [constant]: Defines the minimum value allowed for this field.
Less than [constant]: Defines the maximum value allowed for this field.
Referenced table: XPath expression describing the location of the table. For example,
/root/MyTable.
Label: Defines the fields that provide the default and localized labels for records in the table.
Can also specify a Java class to set the label programmatically if 'XPath expression' is set to 'No'.
This Java class must implement the TableRefDisplay interface of the Java API.
Greater than [dynamic]: Defines a field that provides the minimum value allowed for this field.
Less than [dynamic]: Defines a field that provides the maximum value allowed for this field.
Fixed length [dynamic]: Defines a field that provides the exact number of characters required for
this field.
Excluded values: Defines a list of values that are not allowed for this field.
Excluded segment: Defines an inclusive range of values that are not allowed for this field.
Minimum excluded value: the lowest value not allowed for this field.
Maximum excluded value: the highest value not allowed for this field.
Specific constraint (component): Specifies one or more Java classes that implement the Constraint
interface of the Java API (a hedged sketch is given after this table).
Enumeration filled by another node: Defines the possible values of this enumeration using a
reference to another list or enumeration element.
Dataspace set configuration: Defines the dataspaces that can be referenced by a field of type
Dataspace identifier (osd:dataspaceKey). If no configuration is set, only open branches can be
referenced by this field by default.
• Includes: Specifies the dataspaces that can be referenced by this field.
Pattern: Specifies a pattern that filters dataspaces. The pattern is checked against the names of
the dataspaces.
Type: Specifies the type of dataspaces that can be referenced by this field. If not defined, this
restriction is applied to branches.
Include descendants: Specifies whether children or descendants of the dataspaces that match the
specified pattern are included in the set. If not defined, this restriction is not applied to child
dataspaces. If "None"
Dataset set configuration: Defines the datasets that can be referenced by a field of type Dataset
identifier (osd:datasetName).
• Includes: Specifies the datasets that can be referenced by this field.
Pattern: Specifies a pattern that filters datasets. The pattern is checked against the names of the
datasets.
Include descendants: Specifies whether children or descendants of the datasets that match the
specified pattern are included in the set.
• Excludes: Specifies the datasets that cannot be referenced by this field. Excludes are ignored if
no includes are defined.
Pattern: Specifies a pattern that filters datasets. The pattern is checked against the names of the
datasets.
Include descendants: Specifies whether children or descendants of the datasets that match the
specified pattern are included in the set.
• Filter: Specifies a filter to accept or reject datasets in the context of a dataset or record. This filter
is only used in the dedicated input component associated with this field; that is, the filter is not
used when validating this field. A specific constraint can be used to perform specific controls on
this field. A filter is defined by
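The 'Specific constraint (component)' control above relies on the Constraint interface. The following
minimal sketch assumes the EBX 5.x Java API signatures; check them against the Java API reference
for your version.

import java.util.Locale;
import com.orchestranetworks.instance.ValueContext;
import com.orchestranetworks.instance.ValueContextForValidation;
import com.orchestranetworks.schema.Constraint;
import com.orchestranetworks.schema.ConstraintContext;
import com.orchestranetworks.schema.InvalidSchemaException;

public class EvenNumberConstraint implements Constraint<Integer> {
    @Override
    public void setup(ConstraintContext context) {
        // Could check here that the constraint is attached to an integer field.
    }

    @Override
    public void checkOccurrence(Integer value, ValueContextForValidation context)
            throws InvalidSchemaException {
        // Reject odd values; null values are left to the cardinality checks.
        if (value != null && value % 2 != 0) {
            context.addError("The value must be an even number.");
        }
    }

    @Override
    public String toUserDocumentation(Locale locale, ValueContext context)
            throws InvalidSchemaException {
        return "The value must be an even number.";
    }
}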
Validation properties
Each constraint not using a specific Java class can define localized validation messages with a severity
using the following properties:
CHAPTER 10
Toolbars
This chapter contains the following topics:
1. Definition
10.1 Definition
A toolbar allows customizing the buttons and menus that are displayed when viewing tables or records
in a dataset. Toolbars are customized in the data model via the 'Configuration' section.
Add a toolbar from the Toolbars section of the navigation pane, by clicking on the arrow located
to the right of [ All elements ], then selecting the Create toolbar option. Follow the creation wizard
to create a toolbar. A toolbar defines the following information:
Add one of these elements under a toolbar or to an existing element by clicking on the arrow
located to the right of the existing element, and by selecting a creation option in the menu. Then,
follow the creation wizard to create an element.
Action button
This type of element associates an action with a button in a toolbar. The action is triggered
when the user clicks on the associated button in one of the toolbars. An Action button type element
defines the following information:
Service: Defines the service that will be executed when the user clicks on the button. It is possible
to select a built-in service, or a user service defined in a module or in the current data model. If the
'Web component' target is selected, the service must be declared as available as a web component
for toolbars.
Modal size: Size of the modal window containing the web component.
Note
An Action button type element can only be created under a toolbar type element.
Menu button
This type of element defines a menu that is displayed when the user clicks on the associated button
in a toolbar. An element of the Menu button type defines the following information:
Note
An element of the Menu button type can only be created under an element of the toolbar
type.
Separator
This type of element inserts a separator, in the form of spacing, between two elements of a toolbar.
Note
An element of the Separator type can only be created under an element of the toolbar
type.
Menu group
This type of element defines a group of elements in a menu. An element of the Menu group type
defines the following information:
Group type: Specifies the type of menu group to create:
• 'Local' creates an empty, fully customizable menu group.
• 'Service group' assigns an existing service group to this menu group.
• 'Menu builder' assigns predefined menu content to this menu group.
Once created, it is not possible to change the type of this menu group.
Menu builder name: Specifies the predefined menu content to assign to this menu group:
• 'Default menu "Actions"' has the same content as the default toolbar 'Actions' menu. Standard
and custom services are displayed without distinction.
• 'Default menu "Actions" (with separator)' has the same menu content as above, but displays
differently: standard and custom services are separated (standard services first, then custom
services).
Excluded services Indicates the services to exclude from the group of reused
services. These services will not be displayed to end users
in associated datasets.
Excluded service groups Indicates the groups to exclude from the group of services
to reuse. Services in excluded groups will not be displayed
to end users in associated datasets.
Note
An element of the Menu group type can only be created under the following elements:
• Menu button
Action menu item
Service Defines the service that will be executed when the user clicks on the item. It is possible to select a built-in service, or a user service defined in a module or in the current data model. If the 'Web component' target is selected, the service will have to be declared as available as a web component for toolbars.
Modal size Size of the modal window containing the web component.
Note
An element of the Action menu item type can only be created under a Menu group type
element.
Sub menu item
Note
An element of the Sub menu item type can only be created under an element of the Menu group type.
Deleting elements
Any element of a toolbar can be deleted by using the arrow located to the right of the element to be deleted.
If an element containing other elements is deleted, then the deletion is recursively performed on all
elements located below the deleted element.
Moving elements
To move an element, click on the arrow and select the desired move option.
Note
A selection of toolbars can be exported by selecting the toolbars to be exported in the 'Toolbars' section, then selecting the XML export option available in the Actions menu.
The toolbars can also be exported by using the data model export service. It can be found
in the Data model 'Actions' [p 33] menu in the navigation pane.
See also XML Schema Document (XSD) import and export [p 81]
Importing toolbars
It is possible to import existing toolbars from an XML document. To do so, select the XML import
option available in the Actions menu of the 'Toolbars' section. Then follow the wizard to import the
toolbars.
Note
The toolbars can also be imported by using the data model import service accessible via
the Data model 'Actions' [p 33] menu in the navigation pane.
See also XML Schema Document (XSD) import and export [p 81]
CHAPTER 11
Working with an existing data model
Once your data model has been created, you can perform a number of actions that are available from
the data model 'Actions' [p 33] menu in the workspace.
This chapter contains the following topics:
1. Validating a data model
2. XML Schema Document (XSD) import and export
3. Duplicating a data model
4. Deleting a data model
Note
The validation process checks basic data model integrity, but more complex checks are
only performed at publication time. More messages may be reported when you try to
publish your data model.
be able to import the XSD file. See Data model properties [p 40] for more information on declaring
modules.
To perform an import select 'Import XSD' from the data model 'Actions' [p 33] menu of the data model
into which you are importing.
You can import an XML Schema Document (XSD) from the local file system. To do so, click the
'From a local document' button in the import wizard and follow the next step:
• Document name: path on the local file system of the XSD file to import.
You can also import a data model from an XSD that is packaged in a module. The import of a data model in XSD format from a module uses the following properties:
Source path Path to Java source used to configure business objects and
rules.
This property is required if the data model being imported
defines programmatic elements.
Note
Imported XSD files must be encoded in 'UTF-8'. Exported XSD files are always encoded
in 'UTF-8'.
Note
Only an administrator can clean up the publications of deleted data models in the
'Administration' area.
See Publishing data models [p 83] for more information on the publication process.
CHAPTER 12
Publishing a data model
This chapter contains the following topics:
1. About publications
2. Publication modes
3. Embedded publication mode
Note
The Publish button is only displayed to users who have permission to publish the data
model. See Data model permissions [p 39] for more information.
As datasets are based on publications, any modifications you make to a data model will only take effect
on existing datasets when you republish to the publication associated with those datasets. When you
republish a data model to an existing publication, all existing datasets associated with that particular
publication are updated.
Note
Snapshots, which are static archives of the state of the data model, must not be confused
with data model versions, which act instead as parallel evolving branches of the data
model. See Versioning embedded data models [p 85] for more information on data
model versions.
CHAPTER 13
Versioning a data model
This chapter contains the following topics:
1. About versions
2. Accessing versions
3. Working with versions
4. Known limitations on data model versioning
Access data model version Opens the corresponding version of the data model.
Create version Creates a new version based on the contents of the selected version. The new version is added as a child of the selected version; after creation, its contents evolve independently of those of its parent version.
Set as default version Sets the selected version as the default version opened when
users access the data model.
Import archive Imports the content of an archive into the selected version.
The archive to import must contain a data model with the
same name as the data model associated with the version.
A version can be deleted by clicking the X button to the right of its entry. A version cannot be deleted
if it is linked to a publication or if it has child versions. The root version of a data model also cannot
be deleted.
Two versions of the same data model can be compared in the workspace by selecting their checkboxes,
then selecting Actions > Compare selected versions. The side-by-side comparison shows structural
differences between the versions of the data model, with the older version on the left and the newer
version on the right.
Dataspaces
CHAPTER 14
Introduction to dataspaces
This chapter contains the following topics:
1. Overview
2. Using the Dataspaces area user interface
14.1 Overview
What is a dataspace?
The life cycle of data can be complex. It may be necessary to manage a current version of data while
working on several concurrent updates that will be integrated in the future, including keeping a trace
of various states along the way. In EBX, this is made possible through the use of dataspaces and
snapshots.
A dataspace is a container that isolates different versions of datasets and organizes them. A dataspace
can be branched by creating a child dataspace, which is automatically initialized with the state of
its parent. Thus, modifications can be made in isolation in the child dataspace without impacting its
parent or any other dataspaces. Once modifications in a child dataspace are complete, that dataspace
can be compared with and merged back into the parent dataspace.
Snapshots, which are static, read-only captures of the state of a dataspace at a given point in time,
can be taken for reference purposes. Snapshots can be used to revert the content of a dataspace later,
if needed.
Note
This area is available only to authorized users in the 'Advanced perspective'.
The navigation pane displays all existing dataspaces, while the workspace displays information about
the selected dataspace and lists its snapshots.
See also
Creating a dataspace [p 91]
Snapshots [p 101]
CHAPTER 15
Creating a dataspace
This chapter contains the following topics:
1. Overview
2. Properties
3. Relational mode
15.1 Overview
By default, dataspaces in EBX are in semantic mode. This mode offers full-featured data life cycle
management.
To create a new dataspace in the default semantic mode, select an existing dataspace on which to base
it, then click the Create a dataspace button in the workspace.
Note
This area is available only to authorized users in the 'Advanced perspective'.
The new dataspace will be a child dataspace of the one from which it was created. It will be initialized
with all the content of the parent at the time of creation, and an initial snapshot will be taken of this
state.
Aside from the reference dataspace, which is the root of all semantic dataspaces in the repository,
semantic dataspaces are always a child of another dataspace.
15.2 Properties
The following information is required at the creation of a new dataspace:
CHAPTER 16
Working with existing dataspaces
This chapter contains the following topics:
1. Dataspace information
2. Dataspace permissions
3. Merging a dataspace
4. Comparing a dataspace
5. Validating a dataspace
6. Dataspace archives
7. Closing a dataspace
Child merge policy This merge policy only applies to user-initiated merge
processes; it does not apply to programmatic merges, for
example, those performed by workflow script tasks.
The available merge policies are:
• Allows validation errors in result: Child dataspaces
can be merged regardless of the validation result. This
is the default policy.
• Pre-validating merge: A child dataspace can only be
merged if the result would be valid.
Child dataspace sort policy Defines the display order of child dataspaces in dataspace
trees. If not defined, the policy of the parent dataspace is
applied. Default is 'by label'.
Allowable actions
Users can be allowed to perform the following actions:
Create a child dataspace Whether the profile can create child dataspaces.
Create a snapshot Whether the profile can create snapshots from the
dataspace.
Initiate merge Whether the profile can merge the dataspace with its parent.
Close snapshot Whether the profile can close snapshots of the dataspace.
Permissions of child dataspaces Specifies the default access permissions for child dataspaces that are created from the current dataspace.
Initiating a merge
To merge a dataspace into its parent dataspace:
1. Select that dataspace in the navigation pane of the Dataspaces area.
2. In the workspace, select Merge dataspace from the Actions menu.
Note
This change set review and acceptance stage is bypassed when performing merges
using data services or programmatically. For automated merges, all changes in the child
dataspace override the data in the parent dataspace.
The change acceptance process uses a side-by-side comparison interface that recapitulates the changes
that require review. Two change set columns are obtained by taking the relevant changes from the
following dataspace state comparisons:
• The current child dataspace compared to its initial snapshot.
• The parent dataspace compared to the initial snapshot of the child dataspace.
By default, all detected changes are selected to be merged. You may deselect any changes that you
want to omit from the merge. You can view the changes relevant to different scopes in your data model
by selecting elements in the navigation pane.
In order to detect conflicts, the merge involves the current dataspace, its initial snapshot and the parent
dataspace, because data is likely to be modified both in the current dataspace and its parent.
The merge process also handles modifications to permissions on tables in the dataspace. As with other
changes, access control changes must be reviewed for inclusion in the merge.
When you have decided which changes to merge for a given scope, you must click the button Mark
difference(s) as reviewed to indicate that you have reviewed all the changes in that scope. All changes
must be reviewed in order to proceed with the merge.
Types of modifications
The merge process considers the following changes as modifications to be reviewed:
• Record and dataset creations
• Any changes to existing data
• Record, dataset, or value deletions
• Any changes to table permissions
Types of conflicts
This review interface also shows conflicts that have been detected. Conflicts may arise when the same
scope contains modifications in both the source and target dataspaces.
Conflicts are categorized as follows:
• A record or a dataset creation conflict
• An entity modification conflict
• A record or dataset deletion conflict
• All other conflicts
Finalizing a merge
Once you have reviewed all changes and decided which to include in the merge result, click on the
Merge >> button in the navigation pane.
Depending on the child merge policy that is configured on the parent dataspace in your merge, the
subsequent process may differ. By default, merges are finalized even if the result would contain
validation errors. The administrator of the parent dataspace in your merge can set its child merge policy
so that merges of its children are only finalized if the result would not contain any validation errors.
If, however, the administrator of the parent dataspace has set its child merge policy to 'Pre-validating
merge', a dedicated dataspace is first created to hold the result of the merge. When the result is valid,
this dedicated dataspace containing the merge result is automatically merged into the parent dataspace,
and no further action is required.
In the case where validation errors are detected in the dedicated merge dataspace, you only have access
to the original parent dataspace and the dataspace containing the merge result, named "[merge] <
name of child dataspace >". The following options are available to you from the Actions > Merge
in progress menu in the workspace:
• Cancel, which abandons the merge and restores the child dataspace to its pre-merge state.
• Continue, which you can use to reattempt the merge after you have made the necessary
corrections to the dedicated merge dataspace.
Note
When the merge is performed through a Web Component, the behavior of the child
merge policy is the same as described; the policy defined in the parent dataspace is
automatically applied when merging a child dataspace. However, this setting is ignored
during programmatic merge, which includes script tasks in data workflows.
Abandoning a merge
Merges are performed in the context of a user session, and must be completed in a single operation.
If you decide not to proceed with a merge that you have initiated, you can click the Cancel button
to abandon the operation.
If you navigate to another page after initiating a merge, the merge will be abandoned, but the locks
on the parent and child dataspaces will remain until you unlock them in the Dataspaces area.
You may unlock a dataspace by selecting it in the navigation pane, and clicking the Unlock button
in the workspace. Performing the unlock from the child dataspace unlocks both the child and parent
dataspaces. Performing the unlock from the parent dataspace only unlocks the parent dataspace, thus
you need to unlock the child dataspace separately.
Note
This service is only available in the user interface if you have permission to validate
every dataset contained in the current dataspace.
Exporting
To export a dataspace to an archive, select that dataspace in the navigation panel, then select Actions
> Export in the workspace. Once exported, the archive file is saved to the file system of the server,
where only an administrator can retrieve the file.
Note
See Archives directory [p 350] in the Administration Guide for more information.
Datasets to export The datasets to export from this dataspace. For each
dataset, you can export its data values, permissions, and/or
information.
Importing
To import content into a dataspace from an archive, select that dataspace in the navigation panel, then
select Actions > Import in the workspace.
If the selected archive does not include a change set, the current state of the dataspace will be replaced
with the content of the archive.
If the selected archive includes the whole content as well as a change set, you can choose to apply the
change set in order to merge the change set differences with the current state. Applying the change
set leads to a comparison screen, where you can then select the change set differences to merge.
If the selected archive only includes a change set, you can select the change set differences to merge
on a comparison screen.
CHAPTER 17
Snapshots
This chapter contains the following topics:
1. Overview of snapshots
2. Creating a snapshot
3. Viewing snapshot contents
4. Snapshot information
5. Comparing a snapshot
6. Validating a snapshot
7. Export
8. Closing a snapshot
Loading strategy Only administrators can modify this setting. See Loading
strategy [p 94].
Note
In order to use this service, you must have permission to validate every dataset contained
in the snapshot.
17.7 Export
To export a snapshot to an archive, open that snapshot, then select Actions > Export in the workspace.
Once exported, only an administrator can retrieve the archive.
Note
See Archives directory [p 350] in the Administration Guide for more information.
Datasets to export The datasets to export from this snapshot. For each
dataset, you can choose whether to export its data values,
permissions, and information.
Datasets
CHAPTER 18
Introduction to datasets
This chapter contains the following topics:
1. Overview
2. Using the Data user interface
18.1 Overview
What is a dataset?
A dataset is a container for data that is based on the structural definition provided by its underlying data
model. When a data model has been published, it is possible to create datasets based on its definition. If
that data model is later modified and republished, all its associated datasets are automatically updated
to match.
In a dataset, you can consult actual data values and work with them. The views applied to tables allow
representing data in a way that is most suitable to the nature of the data and how it needs to be accessed.
Searches and filters can also be used to narrow down and find data.
Different permissions can also be accorded to different roles to control access at the dataset level.
Thus, using customized permissions, it would be possible to allow certain users to view and modify
a piece of data, while hiding it from others.
Select or create a dataset using the 'Select dataset' menu in the navigation pane. The data structure of
the dataset is then displayed in the navigation pane, while record forms and table views are displayed
in the workspace.
When viewing a table of the dataset in the workspace, the button displays searches and filters that
can be applied to narrow down the records that are displayed.
Operations at the dataset level are located in the Actions menu in the navigation pane (services are
available at the bottom of the list).
See also
Creating a dataset [p 109]
Searching and filtering data [p 112]
Working with records in the user interface [p 119]
Inheritance [p 25]
Related concepts
Data model [p 32]
Dataspace [p 88]
CHAPTER 19
Creating a dataset
This chapter contains the following topics:
1. Creating a root dataset
2. Creating an inheriting child dataset
Note
This area is available only to authorized users in the 'Advanced perspective' or from a
specifically configured perspective.
The wizard allows you to select one of three data model packaging modes on which to base the new
dataset: packaged, embedded, or external.
• A packaged data model is a data model that is located within a module, which is a web application.
• An embedded data model is a data model that is managed entirely within the EBX repository.
• An external data model is one that is stored outside of the repository and is referenced using its
URI.
After locating the data model on which to base your dataset, you must provide a unique name, without
spaces or special characters. Optionally, you may provide localized labels for the dataset, which will
be displayed to users in the user interface depending on their language preferences.
Attention
Table contents are not copied when duplicating a dataset.
To create a child dataset, select the 'Select dataset [p 107]' menu in the navigation pane, then click
the button next to the desired parent dataset.
As the dataset will automatically be based on the same data model as the parent dataset, the only
information that you need to provide is a unique name, and optionally, localized labels.
CHAPTER 20
Viewing table data
EBX offers a customization mechanism for tables via the 'Views' feature. A view allows specifying
which columns should be displayed as well as the display order. Views can be managed by profile
thanks to the recommended views [p 116] concept.
This chapter contains the following topics:
1. 'View' menu
2. Sorting data
3. Searching and filtering data
4. Views
5. Views management
6. Grid edit
7. History
When a criterion is highlighted, you can set its sort direction by clicking on the 'ASC' or 'DESC' button to its right.
To change the priority of a sort criterion, highlight it in the list, then use the up and down arrow buttons
to move it.
To remove a custom sort order that is currently applied, select View > Reset.
Note
Applying a view resets and removes all currently applied searches and filters.
Search
In simple mode, the 'Search' tool allows adding type-contextual search criteria to one or more fields.
Operators relevant to the type of a given field are proposed when adding a criterion.
By enabling the advanced mode, it is possible to build sub-blocks containing criteria for more complex
logical operations to be involved in the search results computation.
Note
In advanced mode, the criteria with operators "matches" or "matches (case sensitive)"
follow the standard regular expression syntax from Java.
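To illustrate what those operators mean in practice, here is a minimal, standalone Java sketch (standard library only; the field values and patterns are invented) showing how such criteria behave under Java regular expression semantics:

```java
import java.util.regex.Pattern;

public class RegexCriterionDemo {
    public static void main(String[] args) {
        // A "matches (case sensitive)" criterion with pattern "EU-\d{3}"
        // accepts values such as "EU-042" but rejects "US-042".
        Pattern cs = Pattern.compile("EU-\\d{3}");
        System.out.println(cs.matcher("EU-042").matches()); // true
        System.out.println(cs.matcher("US-042").matches()); // false

        // A case-insensitive "matches" criterion also accepts "eu-042".
        Pattern ci = Pattern.compile("EU-\\d{3}", Pattern.CASE_INSENSITIVE);
        System.out.println(ci.matcher("eu-042").matches()); // true
    }
}
```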
Text search
The text search is intended for plain-text searches on one or more fields. The text search does not take into account the types of the fields being searched.
• If the entered text contains one or more words without wildcard characters (* or ?), matching fields
must contain all specified words. Words between quotes, for example "aa bb", are considered to
be a single word.
• Standard wildcard characters are available: * (any text) or ? (any character). For performance
reasons, only one of these characters can be entered in each search.
• Wildcard characters themselves can be searched for by escaping them with the character '\'. For
example '\*' will search for the asterisk character.
Examples:
• aa bb: field contains 'aa' and 'bb'.
• aa "bb cc": field contains 'aa' and 'bb cc'.
• aa*: field label starts with 'aa'.
• *bb: field label ends with 'bb'.
• aa*bb: field label starts with 'aa' and ends with 'bb'.
Note
This filter only applies to records of the table that have been validated at least once by
selecting Actions > Validate at the table level from the workspace, or at the dataset level
from the navigation pane.
20.4 Views
It is possible to customize the display of tables in EBX according to the target user. There are two
types of views: tabular [p 114] and hierarchical [p 115].
A view can be created by selecting View > Create a new view in the workspace. To apply a view,
select it in View > name of the view.
Two types of views can be created:
• 'Simple tabular view': A table view to sort and filter the displayed records.
• 'Hierarchical view': A tree view that links data in different tables based on their relationships.
View description
When creating or updating a view, the first page allows specifying general information related to the
view.
Owner Name of the owner of the view. This user can manage and
modify it. (Only available for administrators and dataset
owners)
Share with Other profiles allowed to use this view from the 'View'
menu.
Note
Requires a permission, see Views permissions
[p 371].
Displayed columns Specifies the columns that will be displayed in the table.
Sorted columns Specifies the sort order of records in the table. See Sorting
data [p 111].
Grid edit If enabled, users of this view can switch to grid edit, so that
they can edit records directly from the tabular view.
Disable create and duplicate If 'Yes', users of this view can neither create nor duplicate records from the grid edit.
Hierarchical views
A hierarchy is a tree-based representation of data that allows emphasizing relationships between
tables. It can be structured on several relationship levels called dimension levels. Furthermore, filter
criteria can be applied in order to define which records will be displayed in the view.
Hierarchy dimension
A dimension defines dependencies in a hierarchy. For example, a dimension can be specified to display
products by category. You can include multiple dimension levels in a single view.
Display records in a new window If 'Yes', a new window will be opened with the record.
Otherwise, it will be displayed in a new page of the same
window.
Prune hierarchy If 'Yes', hierarchy nodes that have no children and do not
belong to the target table will not be displayed.
Display root node If 'No', the root node of the hierarchy will not be displayed
in the view.
Toolbar on top of hierarchy Displays the toolbar at the top of the hierarchy.
Display non-matching children In a recursive case, when a search filter is applied, allows
the display of non-matching children of a matching node
during a search.
Remove recursive root leaves In a recursive case, when a search filter is applied or if the mode is 'pruned', removes the root leaves from the display.
Labels
For each dimension level that references another table, it is possible to define localized labels for the
corresponding nodes in the hierarchy. The fields from which to derive labels can be selected using
the built-in wizard.
Filter
The criteria editor allows creating a record filter for the view.
Ordering field
In order to enable specifying the position of nodes in a hierarchical view, you must designate an
eligible ordering field defined in the table on which the hierarchical view is applied. An ordering field
must have the 'Integer' data type and have a 'Hidden' default view mode in its advanced properties
in the data model definition.
Except when the ordering field is in 'read-only' mode or when the hierarchy is filtered, any node can be repositioned.
By default, if no ordering field is specified, child nodes are sorted alphabetically by label.
Attention
Do not designate a field that is meant to contain data as the ordering field, as its data will be overwritten by the hierarchical view.
View sharing
Users having the 'Share views' permission on a view are able to define which users can display this
view from their 'View' menu.
To do so, simply add profiles to the 'Share with' field of the view's configuration screen.
View publication
Users having the 'Publish views' permission can publish views present in their 'View' menu.
A published view is then available to all users via Web components, workflow user tasks, data services
and perspectives. To publish a view, go to View > Manage views > name of the view > Publish.
Available actions on recommended views are: change order of assignment rules, add a rule, edit
existing rule, delete existing rule.
Thus, for a given user, the recommended views are evaluated according to the user's profile: the applied
rule will be the first that matches the user's profile.
Note
The 'Manage recommended view' feature is only available to dataset owners.
Manage views
The 'Manage views' sub-menu offers the following actions:
Define this view as my favorite Only available when the currently displayed view is NOT the recommended view. The favorite view will be automatically applied when accessing the table.
Define recommended view as my favorite Only available when a favorite view has already been defined. This will remove the user's current favorite view. A recommended view, similarly to a favorite view, will be automatically applied when accessing the table. This menu item is not displayed if no favorite view has been defined.
Copy/paste
The copy/paste of one or more cells into another one in the same table can be done through the Edit
menu. It is also possible to use the associated keyboard shortcuts Ctrl+C and Ctrl+V.
This system does not use the operating system clipboard, but an internal mechanism. As a consequence, a cell copied in EBX cannot be pasted into an external file. Conversely, a value copied from an external file cannot be pasted into a table cell.
All simple type fields using built-in widgets are supported, except:
• foreign keys targeting non-string fields;
• non-string enumerations.
20.7 History
The history feature allows tracking changes on master data.
The history feature must have been previously enabled at the data model level. See Advanced
properties for tables [p 59] for more information.
To view the history of a dataset, select Actions > History in the navigation pane.
To view the history of a table or of a selection of records, select Actions > View history in the
workspace.
Several history modes exist, which allow viewing the history according to different perspectives:
History in current dataspace The table history view displays operations on the current branch. This is the default mode.
History in current dataspace and ancestors The table history view displays operations on the current branch and on all its ancestors.
History in current dataspace and merged children The table history view displays operations on the current branch and on all its merged children.
History in all dataspaces The table history view displays operations on the whole branch hierarchy.
In the history view, use the VIEW menu in order to switch to another history mode.
CHAPTER 21
Editing data
This chapter contains the following topics:
1. Working with records in the user interface
2. Importing and exporting data
3. Restore from history
Note
This action is available only to authorized users in the 'Advanced perspective' or from
a specifically configured perspective.
Creating a record
In a tabular view, click the button located above the table.
In a hierarchical view, select 'Create a record' from the menu of the parent node under which to create
the new record.
Next, enter the field values in the new record creation form. Mandatory fields are indicated by
asterisks.
Duplicating a record
To duplicate a selected record, select Actions > Duplicate.
A new record creation form pre-populates the field values from the record being duplicated. The
primary key must be given a unique value, unless it is automatically generated (as is the case for auto-
incremented fields).
Deleting records
To delete one or more selected records, select Actions > Delete.
Note
The content of complex terminal nodes, such as aggregated lists and user-defined attributes, is excluded from the comparison process. That is, the compare service ignores any differences between the values of the complex terminal nodes in the records.
See also
CSV Services [p 219]
XML Services [p 213]
Note
This feature has limitations linked to the limitations of the history feature:
• the 'restore from history' feature is not available on tables containing lists that are not
supported by history. See Data model limitations [p 254].
• computed values, encrypted values and fields on which history has been disabled are
ignored when restoring a record from history, since these fields are not historized.
CHAPTER 22
Working with existing datasets
This chapter contains the following topics:
1. Validating a dataset
2. Duplicating a dataset
3. Deactivating a dataset
4. Managing dataset permissions
Permissions are defined using profile records. To define a new permissions profile, create a new record
in the 'Access rights by profile' table.
Restriction policy If 'Yes', indicates that when the permissions defined here are stricter than permissions defined elsewhere, these permissions are respected. This is contrary to the default behavior, where the most permissive rights take precedence.
See Resolving user-defined rules [p 281].
Create a child dataset Indicates whether the profile can create a child dataset.
Inheritance also must be activated in the data model.
Duplicate the dataset Indicates whether the profile can duplicate the dataset.
Delete the dataset Indicates whether the profile can delete the dataset.
Activate/deactivate the dataset Indicates whether the profile can modify the 'Activated' property in the dataset information. See Deactivating a dataset [p 121].
Create a view Indicates whether the profile can create views and
hierarchies in the dataset.
Tables policy Specifies the default permissions for all tables. Specific
permissions can also be defined for a table by clicking the
'+' button.
Create a new record Indicates whether the profile can create records in the table.
Overwrite inherited record Indicates whether the profile can override inherited records
in the table. This permission is useful when using dataset
inheritance.
Occult inherited record Indicates whether the profile can occult inherited records
in the table. This permission is useful when using dataset
inheritance.
Delete a record Indicates whether the profile can delete records in the table.
Values access policy Specifies the default access permissions for all the nodes of the dataset, and allows the definition of permissions for specific nodes.
Rights on services This section specifies the access permissions for services. A service is not accessible to a profile if it is crossed out.
CHAPTER 23
Dataset inheritance
Using the concept of dataset inheritance, it is possible to create child datasets that branch from a parent
dataset. Child datasets inherit values and properties by default from the parent dataset, which they can
then override if necessary. Multiple levels of inheritance can exist.
An example of using dataset inheritance is to define global default values in a parent dataset, and
create child datasets for specific geographical areas, where those default values can be overridden.
Note
By default, dataset inheritance is disabled. It must be explicitly activated in the
underlying data model.
Note
• A dataset cannot be deleted if it has child datasets. The child datasets must be deleted first.
• If a child dataset is duplicated, the newly created dataset will be inserted into the existing
dataset tree as a sibling of the duplicated dataset.
Record inheritance
A table in a child dataset inherits the records from the tables of its ancestor datasets. The table in the
child dataset can add, modify, or delete records. Several states are defined to differentiate between
types of records.
When the inheritance button is toggled on, it indicates that the record or value is inherited from the
parent dataset. This button can be toggled off to override the record or value. For an occulted record,
toggle the button on to revert it to inheriting.
The following summarizes the behavior of records when creating, modifying, or deleting a record, depending on its initial state.
Root
• Creation: Standard new record creation. The newly created record will be inherited in the child datasets of the current dataset.
• Modification: Standard modification of an existing record. The modified values will be inherited in the child datasets of the current dataset.
• Deletion: Standard record deletion. The record will no longer appear in the current dataset or in the child datasets of the current dataset.
Inherited
• Creation: If a record is created using the same primary key as an existing inherited record, that record will be overwritten and its value will be the one submitted at creation.
• Modification: An inherited record must first be marked as overwritten in order to modify its values.
• Deletion: Deleting an inherited record changes its state to occulted.
Overwritten
• Creation: Not applicable. A new record cannot be created if the primary key is already used in the current dataset.
• Modification: An overwritten record can be returned to the inherited state, but its modified values will be lost. Individual values in an overwritten record can be set to inheriting or can be modified.
• Deletion: Deleting an overwritten record changes its state to occulted.
Occulted
• Creation: If a record is created using the primary key of an existing occulted record, the record state will be changed to overwritten and its value modified according to the one submitted at creation.
• Modification: Not applicable. An occulted record cannot be modified.
• Deletion: Not applicable. An occulted record is already considered to be deleted.
Workflow models
CHAPTER 24
Introduction to workflow models
This chapter contains the following topics:
1. Overview
2. Using the Workflow Models area user interface
3. Generic message templates
4. Limitations of workflows
24.1 Overview
What is a workflow model?
Workflows in EBX facilitate the collaborative management of data in the repository. A workflow can
include human actions on data and automated tasks alike, while supporting notifications on certain
events.
The first step of realizing a workflow is to create a workflow model that defines the progression of
steps, responsibilities of users, as well as other behavior related to the workflow.
Once a workflow model has been defined, it can be validated and published as a workflow publication.
Data workflows can then be launched from the workflow publication to execute the steps defined in
the workflow model.
See also
Workflow model (glossary) [p 27]
Data workflow (glossary) [p 29]
Note
This area is available only to authorized users in the 'Advanced perspective'.
• Label & Description: Specifies the localized labels and descriptions associated with the template.
• Message: Specifies the localized subjects and bodies of the message.
The message template can include data context variables, such as ${variable.name}, which are
replaced when notifications are sent. System variables that can be used include:
Example
Generic template message:
Today at ${system.time}, a new work item was offered to you
Resulting email:
Today at 15:19, a new work item was offered to you
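As an illustration of how such ${...} placeholders resolve, the following standalone Java snippet substitutes data context variables into a message. This is a generic sketch of the substitution technique, not EBX's actual template engine:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TemplateDemo {
    // Replaces each ${name} with its value from the map; unknown
    // placeholders are left untouched.
    static String resolve(String template, Map<String, String> vars) {
        Matcher m = Pattern.compile("\\$\\{([^}]+)}").matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb, Matcher.quoteReplacement(
                vars.getOrDefault(m.group(1), m.group(0))));
        }
        return m.appendTail(sb).toString();
    }

    public static void main(String[] args) {
        String msg = "Today at ${system.time}, a new work item was offered to you";
        // Prints: Today at 15:19, a new work item was offered to you
        System.out.println(resolve(msg, Map.of("system.time", "15:19")));
    }
}
```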
CHAPTER 25
Creating and implementing a
workflow model
This chapter contains the following topics:
1. Creating a workflow model
2. Implementing the steps
3. User tasks
4. Script tasks
5. Conditions
6. Sub-workflow invocations
7. Wait tasks
8. Visualizing the workflow diagram
Mode
For backward compatibility reasons, two user task modes are available: the default mode and the
legacy mode.
By default, a user task generates a single work item. This mode offers more features, such as offering
a work item to a list of profiles or directly displaying the avatars in the workflow progress view.
In the legacy mode, a user task can generate several work items.
By default, the user task creation service is hidden in legacy mode. To display it, a property should be
enabled in the ebx.properties file. For more information, see Disabling user task legacy mode.
List of profiles
The definition of the profiles of a user task may vary depending on the user task mode.
Service
EBX includes the following built-in services:
• Access a dataspace
• Access data (default service)
• Access the dataspace merge view
• Compare contents
• Create a new record
• Duplicate a record
• Export data to a CSV file
• Export data to an XML file
• Import data from a CSV file
• Import data from an XML file
• Merge a dataspace
• Validate a dataspace, a snapshot or a dataset
Configuration
Note
If you specify a service extension overriding the method UserTask.handleWorkItemCompletion to handle the determination of the user task's completion, it is up to the developer of the extension to verify, from within their code, that the task termination criteria defined through the user interface have been met. See UserTask.handleWorkItemCompletion in the JavaDoc for more information.
In order to change this default behavior, it is possible to define a number of work item rejections to tolerate. While within the limit of tolerated rejections, no error will occur, and the task termination criteria determine when to end the user task.
The following task termination criteria automatically tolerate all rejections:
• 'When all work items have been either accepted or rejected'
• 'Either when all work items have been accepted, or as soon as one work item has been rejected'
Extension
A custom class can be specified in order for the task to behave dynamically in the context of a given
data workflow. For example, this can be used to create work items or complete user tasks differently
than the default behavior.
The specified rule is a JavaBean that must extend the UserTask class (see the Java API).
Attention
If a rule is specified and the handleWorkItemCompletion method is overridden, the completion
strategy is no longer automatically checked. The developer must check for completion within this
method.
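A minimal sketch of such an extension follows. It relies on the UserTask extension point described above; the class name is invented, and the context parameter type and exact signature of handleWorkItemCompletion are assumptions to be verified against the EBX JavaDoc for your version:

```java
import com.orchestranetworks.service.OperationException;
import com.orchestranetworks.workflow.UserTask;
// The context type name below is assumed; check the EBX JavaDoc.
import com.orchestranetworks.workflow.UserTaskWorkItemCompletionContext;

/**
 * Illustrative user task extension (hypothetical class name).
 */
public class AllWorkItemsAcceptedUserTask extends UserTask
{
    @Override
    public void handleWorkItemCompletion(UserTaskWorkItemCompletionContext aContext)
        throws OperationException
    {
        // Because this method is overridden, the completion strategy defined
        // in the user interface is no longer checked automatically: this code
        // must itself decide when the user task terminates, typically by
        // inspecting the work items exposed by the context and completing
        // the task once all of them have been accepted.
    }
}
```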
Notification
A notification email can be sent to users when specific events occur. For each event, you can specify
a content template.
It is possible to define a monitor profile that will receive all emails that are sent in relation to the
user task.
Reminder
Reminder emails for outstanding offered or allocated work items can be periodically sent to the
concerned users. The recipients of the reminder are the users to whom the work item is offered or
allocated, as well as the recipients on copy.
The content of the reminder emails is determined by the current state of the work item. That is, if the
work item is offered, the notification will use the "Offered work items" template; if the work item is
allocated, the notification will use the "Allocated work items" template.
Deadline
Each user task can have a completion deadline. If this date passes and the associated work items are not completed, a notification email is sent to the concerned users. This same notification email will then be sent daily until the task is completed.
There are two deadline types:
• Absolute deadline: A calendar date.
• Relative deadline: A duration in hours, days or months. The duration is evaluated based on the
reference date, which is the beginning of the user task or the beginning of the workflow.
Library script task EBX includes a number of built-in library script tasks,
which can be used as-is.
Any additional library script tasks must be declared in a
module.xml file. A library script task must define its label,
description and parameters. When a user selects a library
script task for a step in a workflow model, its associated
parameters are displayed dynamically.
Specific script task Specifies a Java class that performs custom actions. The
associated class must belong to the same module as
the workflow model. Its labels and descriptions are not
displayed dynamically to users in workflow models.
Besides the built-in library script tasks, additional library script tasks can be defined for use in workflow models. Their labels and descriptions can be localized.
The method ScriptTaskBean.executeScript is called when the data workflow reaches the corresponding step.
Attention
The method ScriptTaskBean.executeScript must not create any threads, because the data workflow moves on as soon as the method has executed. Each operation in this method must therefore be synchronous.
It is possible to dynamically set variables of the library script task if its implementation follows the
Java Bean specification. Variables must be declared as parameters of the bean of the library script task
in module.xml. The workflow data context is not accessible from a Java bean.
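As a sketch, a hypothetical library script task could look like the following. It follows the ScriptTaskBean pattern named above; the class name and the 'message' bean parameter are invented, and the context type name should be verified against the EBX JavaDoc:

```java
import com.orchestranetworks.service.OperationException;
import com.orchestranetworks.workflow.ScriptTaskBean;
import com.orchestranetworks.workflow.ScriptTaskBeanContext;

/**
 * Hypothetical library script task. The 'message' bean parameter would be
 * declared in module.xml so that the workflow engine can set it.
 */
public class LogMessageScriptTask extends ScriptTaskBean
{
    private String message;

    public String getMessage()
    {
        return this.message;
    }

    public void setMessage(String aMessage)
    {
        this.message = aMessage;
    }

    @Override
    public void executeScript(ScriptTaskBeanContext aContext) throws OperationException
    {
        // Everything here must stay synchronous: the data workflow moves on
        // as soon as this method returns (see the Attention note above).
        System.out.println("Script task executed with message: " + this.message);
    }
}
```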
Note
Some built-in library script tasks are marked as "deprecated" because they are not
compatible with internationalization. It is recommended to use the new script tasks that
are compatible with internationalization.
The method ScriptTask.executeScript is called when the data workflow reaches the corresponding step.
Attention
The method ScriptTask.executeScript must not create any threads, because the data workflow moves on as soon as the method has executed. Each operation in this method must therefore be synchronous.
25.5 Conditions
Conditions are decision steps in workflows.
Two types of conditions exist, which, once defined, can be used in workflow model steps:
Library conditions
EBX includes the following built-in library conditions:
• Dataspace modified?
• Data valid?
• Last user task accepted?
• Value null or empty?
• Values equals?
Library conditions are classes that extend the class ConditionBean. Besides the built-in library conditions, additional library conditions can be defined for use in workflow models. Their labels and descriptions can be localized.
See the example [p 519].
It is possible to dynamically set variables of the library condition if its implementation follows the
Java Bean specification. Variables must be declared as parameters of the bean of the library condition
in module.xml. The workflow data context is not accessible from a Java bean.
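For instance, a minimal library condition might look like this sketch. It extends the ConditionBean class named above; the class name and the 'valueToCheck' bean parameter are invented, and the context type name should be verified against the EBX JavaDoc:

```java
import com.orchestranetworks.service.OperationException;
import com.orchestranetworks.workflow.ConditionBean;
import com.orchestranetworks.workflow.ConditionBeanContext;

/**
 * Hypothetical library condition. The 'valueToCheck' bean parameter would be
 * declared in module.xml; the workflow data context is not accessible here.
 */
public class IsValueEmptyCondition extends ConditionBean
{
    private String valueToCheck;

    public String getValueToCheck()
    {
        return this.valueToCheck;
    }

    public void setValueToCheck(String aValue)
    {
        this.valueToCheck = aValue;
    }

    @Override
    public boolean evaluateCondition(ConditionBeanContext aContext) throws OperationException
    {
        // The value arrives through the bean parameter set by the engine.
        return this.valueToCheck == null || this.valueToCheck.trim().isEmpty();
    }
}
```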
Specific conditions
Specific conditions are classes that extend the class Condition.
The workflow data context is directly accessible from the Java bean.
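By contrast, a specific condition can read the data context directly. The sketch below assumes a getVariableString accessor on the context and uses a hypothetical 'approvalStatus' workflow variable; both should be checked against the EBX JavaDoc:

```java
import com.orchestranetworks.service.OperationException;
import com.orchestranetworks.workflow.Condition;
import com.orchestranetworks.workflow.ConditionContext;

/**
 * Hypothetical specific condition. Unlike a library condition, the workflow
 * data context is directly accessible from the context.
 */
public class IsApprovedCondition extends Condition
{
    @Override
    public boolean evaluateCondition(ConditionContext aContext) throws OperationException
    {
        // 'approvalStatus' is an invented data context variable name.
        String status = aContext.getVariableString("approvalStatus");
        return "approved".equals(status);
    }
}
```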
Wait task beans must be declared in a module.xml file.
The wait task bean is first called when the workflow starts waiting. At this time, the generated resume identifier is available, for example, to pass to a web service. The wait task bean is called again when the wait task is resumed; at this point, the data context may be updated according to the received parameters.
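A wait task bean sketch following this two-phase contract is shown below; the onStart/onResume method names and the context types are assumptions based on the description above and must be verified against the EBX JavaDoc:

```java
import com.orchestranetworks.service.OperationException;
import com.orchestranetworks.workflow.WaitTaskBean;
// The context type names below are assumed; check the EBX JavaDoc.
import com.orchestranetworks.workflow.WaitTaskOnStartContext;
import com.orchestranetworks.workflow.WaitTaskOnResumeContext;

/**
 * Illustrative wait task bean (hypothetical class name).
 */
public class ExternalCallbackWaitTask extends WaitTaskBean
{
    @Override
    public void onStart(WaitTaskOnStartContext aContext) throws OperationException
    {
        // Called when the workflow starts waiting: the generated resume
        // identifier is available here and can be handed to an external
        // system, for example by calling a web service.
    }

    @Override
    public void onResume(WaitTaskOnResumeContext aContext) throws OperationException
    {
        // Called when the wait task is resumed: received parameters can be
        // used to update the workflow data context at this point.
    }
}
```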
Note
The built-in administrator always has the right to resume a workflow.
Actions
View
Layout > Default layout Applies the default layout to the diagram.
Grid > Show/Hide grid Shows the grid if the grid is not visible, hides it otherwise.
Buttons
Save layout and close Saves the current layout and closes the service.
Additional features
The diagram view offers useful additional features:
Zoom in/Zoom out Mouse middle button then mouse wheel, or CTRL then mouse wheel.
Multiple selection Click on the nodes or links while holding down the CTRL key, or draw a selection rectangle (hold down the left mouse button for 1 second before drawing the area).
Customizing link drawing When clicking on a link, you can either move the segments by dragging the squares that appear at the corners, or split a specific segment by moving the circle in its middle.
CHAPTER 26
Configuring the workflow model
This chapter contains the following topics:
1. Information associated with a workflow model
2. Workflow model properties
3. Data context
4. Custom workflow execution views
5. Permissions on associated data workflows
6. Workflow model snapshots
7. Deleting a workflow model
Owner Specifies the workflow model owner, who will have the
rights to edit the workflow model's information and define
its permissions.
Localized documentation Localized labels and descriptions for the workflow model.
Notification of error The list of profiles that will receive notifications, based on
a template, when a data workflow is in error state.
See Generic message templates [p 131].
Automatically open the first step Determines the navigation behavior after a workflow is launched. By default, once a workflow is launched, the current table (workflow launchers or Monitoring > Publications) is automatically displayed.
The data context is defined by a list of variables. Each variable has the following properties:
Default value If defined, the variable will be initialized with this default
value.
Workflow administration > Replay a step Defines the profile that is allowed to replay a workflow step. In order to perform this action, this profile is automatically granted the "Visualize workflows" permission. A button in the "Monitoring > Active workflows" section is available to replay a step. A profile with the "Workflow administration" permission is automatically allowed to perform this specific action. The built-in administrator always has the rights to replay a step.
Workflow administration > Terminate workflow Defines the profile that is allowed to terminate and clean a workflow. In order to perform this action, this profile is automatically granted the "Visualize workflows" permission. A button in the "Monitoring > Active workflows" section is available to terminate and clean an active workflow. A button in the "Completed workflows" section is available to delete a completed workflow. A profile with the "Workflow administration" permission is automatically allowed to perform this specific action. The built-in administrator always has the rights to terminate a workflow.
Workflow administration > Force a workflow to resume Defines the profile that is allowed to force resuming a waiting workflow. In order to perform this action, this profile is automatically granted the "Visualize workflows" permission. A button in the "Monitoring > Active workflows" section is available to resume a workflow. A profile with the "Workflow administration" permission is automatically allowed to perform this specific action. The built-in administrator always has the right to resume a workflow.
Workflow administration > Disable a publication Defines the profile that is allowed to disable a workflow publication. In order to perform this action, this profile is automatically granted the "Visualize workflows" permission. A button in the "Monitoring > Publications" section is available to disable a publication. It is only displayed on enabled publications.
Allocation management Defines the profile that is allowed to manage work item allocation. The allocation actions include the following: allocate work items, reallocate work items and deallocate work items. In order to perform these actions, this profile is automatically granted the "Visualize workflows" permission. The built-in administrator always has the allocation management rights.
Allocation management > Allocate work items Defines the profile that is allowed to allocate work items. In order to perform this action, this profile is automatically granted the "Visualize workflows" permission. A button in the "Monitoring > Work items" section is available to allocate a work item. It is only displayed on offered work items. A profile with the "Allocation management" permission is automatically allowed to perform this specific action. The built-in administrator always has the work item allocation rights.
Allocation management > Reallocate work items Defines the profile that is allowed to reallocate work items. In order to perform this action, this profile is automatically granted the "Visualize workflows" permission. A button in the "Monitoring > Work items" section is available to reallocate a work item. It is only displayed on allocated work items. A profile with the "Allocation management" permission is automatically allowed to perform this specific action. The built-in administrator always has the work item reallocation rights.
Allocation management > Deallocate work items Defines the profile that is allowed to deallocate work items. In order to perform this action, this profile is automatically granted the "Visualize workflows" permission. A button in the "Monitoring > Work items" section is available to deallocate a work item. It is only displayed on allocated work items. A profile with the "Allocation management" permission is automatically allowed to perform this specific action. The built-in administrator always has the work item deallocation rights.
Launch workflows Defines the profile that is allowed to manually launch new
workflows. This permission allows launching workflows
from the active publications of the "Workflow launchers"
section. The built-in administrator always has the launch
workflows rights.
Visualize workflows > The workflow creator can visualize it If enabled, the workflow creator has the permission to view the workflows he has launched. This restricted permission grants access to the workflows he launched and to the associated work items in the "Monitoring > Active workflows", "Monitoring > Work items" and "Completed workflows" sections. The default value is 'No'.
Visualize workflows > Visualize completed workflows Defines the profile that is allowed to visualize completed workflows. This permission allows visualizing completed workflows in the "Completed workflows" section and accessing their history. A profile with the "Visualize workflows" permission is automatically allowed to perform this action. The built-in administrator always has the visualize completed workflows rights.
Note
A user who has no specific privileges assigned can only see work items associated with
this workflow that are offered or allocated to that user.
CHAPTER 27
Publishing workflow models
This chapter contains the following topics:
1. About workflow publications
2. Publishing and workflow model snapshots
3. Sub-workflows in publications
The system computes the dependencies to workflow models used as sub-workflows, and automatically
creates one publication for each dependent model. These technical publications are dedicated to the
workflow engine to launch sub-workflows, and are not available in the Workflow Data area.
Multiple publication is not available for a workflow model containing sub-workflow invocation steps. For this reason, the first step of the publication process (selection of the workflow models to publish) is not offered in this case.
Republishing the main workflow model automatically updates the invoked sub-workflow models.
Although a sub-workflow model can be published separately as a main workflow model, this will not
update the version used by an already published main workflow model using this sub-workflow.
Data workflows
CHAPTER 28
Introduction to data workflows
This chapter contains the following topics:
1. Overview
28.1 Overview
A data workflow is an executed step-by-step data management process, defined using a workflow
model publication. It allows users, as well as automated procedures, to perform actions collaboratively
on a set of data. Once a workflow model has been developed and published, the resulting publication
can be used to launch a data workflow to execute the defined steps.
Depending on the workflow user permissions defined by the workflow model, a user may perform
one or more of the following actions on associated data workflows:
• As a user with default permissions, work on and complete an assigned work item.
• As a user with workflow launching permissions, create a new data workflow from a workflow
model publication.
• As a workflow monitor, follow the progress of ongoing data workflows and consult the history
of completed data workflows.
• As a manager of work item allocation, modify work item allocations manually for other users
and roles.
• As a workflow administrator, perform various administration actions, such as replaying steps,
terminating workflows in progress, or rendering publications unavailable for launching data
workflows.
See also
Work items [p 165]
Launching and monitoring data workflows [p 171]
Administration of data workflows [p 173]
Permissions on associated data workflows [p 151]
CHAPTER 29
Using the Data Workflows area user
interface
This chapter contains the following topics:
1. Navigating within the interface
2. Navigation rules
3. Custom views
4. Specific columns
5. Filtering items in views
6. Graphical workflow view
Note
This area is available only to authorized users in the 'Advanced perspective' or from a specifically configured perspective.
The navigation pane is organized into several entries. These entries are displayed according to their
associated global permission. The different entries are:
Work items inbox All work items either allocated or offered to you, for which
you must perform the defined task.
Workflow launchers List of workflow model publications from which you are
allowed to launch data workflows, according to your user
permissions.
Monitoring Monitoring views on the data workflows for which you have
the necessary viewing permissions.
Work items Work items for which you have the necessary viewing
permissions. If you have additional administrative
permissions, you can also perform actions relevant to
work item administration, such as allocating work items to
specific users or roles from this view.
Completed workflows Data workflows that have completed their execution, for
which you have the necessary viewing permissions. You can
view the history of the executions of the data workflows.
If you have additional administrative permissions, you can
also clean completed workflows from the repository from
this view.
Note
Each section can be accessed through Web Components, for example for portal integration, or programmatically using the ServiceKey class in the Java API.
See also
Using EBX as a Web Component [p 191]
ServiceKey (Java API)
Workflow launchers
By default, once a workflow has been launched, the workflow launchers table is displayed.
This behavior can be modified in the model configuration, which can allow the first step to be opened directly, without displaying the workflow launchers table.
See the automatic opening of the first workflow step [p 148] in workflow modeling.
CHAPTER 30
Work items
This chapter contains the following topics:
1. About work items
2. Working on work items as a participant
3. Work item priorities
Legacy mode
By default, for each user defined as a participant of the user task, the data workflow creates a work
item in the allocated state.
By default, for each role defined as a participant of the user task, the data workflow creates a work
item in the offered state.
Note
The default behavior described above can be overridden by a programmatic extension
defined in the user task. In this case, work items may be generated programmatically and
not necessarily based on the user task's list of participants.
Note
If you interrupt the current session in the middle of a started work item, for example by
closing the browser or by logging out, the current work item state is preserved. When
you return to the work item, it continues from the point where you left off.
CHAPTER 31
Launching and monitoring data
workflows
This chapter contains the following topics:
1. Launching data workflows
2. Monitoring activities
3. Managing work item allocation
Select 'Work items' in the 'Monitoring' section of the navigation pane. The actions that you are able
to perform appear in the Actions menu of the work item's entry in the table, depending on the current
state of the work item.
Deallocate: Reset a work item in the allocated state to the offered state.
See also
Work items [p 165]
Permissions on associated data workflows [p 151]
CHAPTER 32
Administration of data workflows
If you have been given permissions for administration activities associated with data workflows, any relevant publications, active data workflows, and work items will appear under the entries of the 'Monitoring' section in the navigation pane. From these monitoring views, you can directly perform administrative tasks from the Actions menus of the table entries.
Note
When a workflow model gives you administrative rights, you automatically have
monitoring permissions on all of the relevant aspects of data workflow execution, such
as publications, active data workflows, and work items.
• To execute: The token is in the process of progressing to the next step, based on the workflow model.
• Executing: The token is positioned on a script task or a condition that is being processed.
• User: The token is positioned on a user task and is awaiting a user action.
• Waiting for sub-workflows: The token is positioned on a sub-workflow invocation and is
awaiting the termination of all launched sub-workflows.
• Waiting for event: The token is positioned on a wait task and is waiting for a specific event to
be received.
• Finished: The token has reached the end of the data workflow.
• Error: An error has occurred.
Note
Once a publication has been disabled, it cannot be re-enabled from the Data Workflows
area. Only a user with the built-in repository 'Administrator' role can re-enable a disabled
publication from the Administration area, although manually editing technical tables
is not generally recommended, as it is important to ensure the integrity of workflow
operations.
Note
When you choose to unpublish a workflow publication, you will be prompted to confirm
the termination and cleaning of any data workflows in progress that were launched from
this workflow publication, and any associated work items. Any data that is lost as a result
of forcefully terminating a data workflow cannot be recovered.
Replaying a step
In the event of an unexpected failure during a step, for example, an access rights issue or unavailable
resources, you can "replay" the step as a data workflow administrator. Replaying a step cleans the
associated execution environment, including any related work items and sub-workflows, and resets
the token to the beginning of the current step.
To replay the current step in a data workflow, select Actions > Replay the step from the entry of the
workflow in the 'Active workflows' table.
Note
This action is not available on workflows in the 'Executing' state, nor on sub-workflows launched from another workflow.
Note
Workflow history data is not deleted.
Note
This action is available for sub-workflows, and for workflows in error blocked on the
last step.
Note
Workflow history data is not deleted.
Note
This action is only available for workflows in the 'waiting for event' state.
Note
This action is not available on sub-workflows launched from another workflow.
Data services
CHAPTER 33
Introduction to data services
This chapter contains the following topics:
1. Overview
2. Using the Data Services area user interface
33.1 Overview
What is a data service?
A data service [p 29] is:
• a standard Web service that interacts with EBX.
SOAP data services can be dynamically generated based on data models from the 'Data Services'
area.
• a REST service that allows querying the EBX repository.
The built-in RESTful service does not require a service interface; it is self-descriptive through the returned metadata.
They can be used to access some of the features available through the user interface.
See also
WSDL/SOAP [p 562]
REST [p 616]
Lineage
Lineage [p 30] is used to establish user permission profiles for non-human users, namely data services.
When accessing data using WSDL interfaces, data services use the permission profiles established
through lineage.
Glossary
See also Data services [p 29]
Note
This area is available only to authorized users in the 'Advanced perspective'.
Related concepts
Dataspace [p 88]
Dataset [p 106]
Data workflows [p 158]
Introduction [p 562]
CHAPTER 34
Generating data service WSDLs
This chapter contains the following topics:
1. Generating a WSDL for operations on data
2. Generating a WSDL for dataspace operations
3. Generating a WSDL for data workflow operations
4. Generating a WSDL for lineage
5. Generating a WSDL for administration
6. Generating a WSDL to modify the default directory
Operations on datasets
The following operations can be performed using the WSDL generated for operations at the dataset
level:
• Select dataset content for a dataspace or snapshot
• Get dataset changes between dataspaces or snapshots
• Refresh a replication unit
Operations on tables
The following operations, if selected, can be performed using the WSDL generated for operations at
the table level:
• Insert record(s)
• Select record(s)
• Update record(s)
• Delete record(s)
• Count record(s)
• Get changes between dataspaces or snapshots
• Get credentials
• Run multiple operations on tables in the dataset
See also
WSDL download using an HTTP request [p 575]
Operations generated from a data model [p 581]
Operations on dataspaces
The following operations can be performed using the WSDL generated for operations at the dataspace
level:
• Create a dataspace
• Close a dataspace
• Create a snapshot
• Close a snapshot
• Merge a dataspace
• Lock a dataspace
• Unlock a dataspace
• Validate a dataspace or a snapshot
• Validate a dataset
See also
WSDL download using an HTTP request [p 575]
Operations on datasets and dataspaces [p 602]
See also
WSDL download using an HTTP request [p 575]
Operations on data workflows [p 608]
See also
WSDL download using an HTTP request [p 575]
User interface operations [p 612]
System information operation [p 612]
See also
WSDL download using an HTTP request [p 575]
Directory services [p 611]
Integration
CHAPTER 35
Overview of integration and
extension
Several service and component APIs allow you to develop custom extensions for EBX and integrate
it with other systems.
This chapter contains the following topics:
1. Using EBX as a Web Component
2. User interface customization
3. Data services
4. XML and CSV import/export services
5. Programmatic services
Custom widgets
A custom widget is a graphical component developed specifically to customize the look and feel of
groups and fields in data models or in programmatically defined schemas.
Ajax
EBX supports Ajax asynchronous exchange of data with the server without refreshing the currently
displayed page.
See also
User service Ajax callbacks [p 526]
Ajax component UIAjaxComponent API
See also
WSDL/SOAP data services [p 562]
REST data services [p 616]
See also
XML import and export [p 213]
CSV import and export [p 219]
CHAPTER 36
Using EBX as a Web Component
This chapter contains the following topics:
1. Overview
2. Integrating EBX Web Components into applications
3. Repository element and scope selection
4. Combined selection
5. Request specifications
6. Example calls to an EBX Web Component
36.1 Overview
EBX can be used as a user interface Web Component, called through the HTTP protocol. An EBX
Web Component can be integrated into any application that is accessible through a supported web
browser. This method of access offers the major benefits of EBX, such as user authentication, data
validation, and automatic user interface generation, while additionally providing the ability to focus
user navigation on specific elements of the repository.
Typical uses of EBX Web Components include integrating them into the intranet frameworks of
organizations or into applications that manage the assignment of specific tasks to users.
• An EBX User service [p 187] or a Custom widget [p 188], in which case the new session will automatically inherit from the parent EBX session.
Note
In Java, the recommended method for building HTTP requests that call EBX web components is to use the class UIHttpManagerComponent of the Java API.
The parameter firstCallDisplay may change this automatic display according to its value.
The repository elements that can be selected are as follows:
• Dataspace or snapshot
• Dataset
• Node
• Table or a published view
• Table record
The scope determines how much of the user interface is displayed to the user, thus defining where the
user is able to navigate in the session. The default scope that the Web component uses is the smallest
possible depending on the entity or service being selected or invoked by the request.
Specific case
If the target entity is a record and an action exists on the table that contains this record, then this action will be selected and the record will be opened inside it.
In the same way, if a workflow work item is targeted by the web component and an action on 'inbox' exists in the perspective, then this action will be selected and the work item will be opened inside it.
Known limitations
If the Web component specifies a predicate to filter a table, the perspective action must specify the
exact same predicate to be selected.
In the same way, if the perspective action specifies a predicate to filter a table, the Web component
must specify the exact same predicate to establish the match.
Note
The base URL must refer to the servlet FrontServlet, defined in the deployment
descriptor /WEB-INF/web.xml of the web application ebx.war.
login and password, or a user directory-specific token: Specifies user authentication properties. If neither a login and password pair nor a user directory-specific token is provided, the user will be required to authenticate through the repository login page. See Directory in the Java API for more information. Required: No.
trackingInfo: Specifies the tracking information of the new session. Tracking information is logged in history tables. Additionally, it can be used to programmatically restrict access permissions. See AccessRule in the Java API for more information. Required: No.
redirect: The URL to which the user will be redirected at the end of the component session, when they click on the 'Close' button. The close button is always displayed for record selections, but whether or not it is displayed in all other cases must be specified using the parameter closeButton. For more information, see Exit policy [p 361]. Required: No.
locale: Specifies the locale to use. Value is either en-US or fr-FR. Required: No; the default is the locale registered for the user.
instance: Selects the specified dataset. The value must be the reference of a dataset that exists in the selected dataspace or snapshot. Required: Only if xpath or viewPublication is specified.
Layout parameters
scope: Specifies the scope to be used by the web component. Value can be full, data, dataspace, dataset or node. See UIHttpManagerComponent.Scope in the Java API for more information. Required: No; the default is computed to be the smallest possible scope according to the target selection.
firstCallDisplay: Specifies which display must be used instead of the one determined by the combination of the selection and scope parameters. Possible values are:
• auto: The display is automatically set according to the selection.
• view: Forces the display of the tabular view or of the hierarchical view.
• record: If the predicate matches at least one record, forces the display of the first record in the list.
For example:
firstCallDisplay=view
firstCallDisplay=view:hierarchyExpanded
firstCallDisplay=record
firstCallDisplay=record:{predicate}
See UIHttpManagerComponent.setFirstCallDisplayRecord in the Java API for more information. Required: No; the default is computed according to the target selection.
closeButton: Specifies how to display the session close button. Value can be logout or cross. See UIHttpManagerComponent.CloseButtonSpec in the Java API for more information. Required: No; if the scope is not full, no close button will be displayed by default.
dataSetFeatures: Specifies which features to display in a UI service at the dataset level or in a form outside of a table. These options pertain only to features in the workspace. It is recommended to use this property with the smallest scope possible, namely dataset or node. Required: No.
Syntax:
<prefix> ":" <feature> [ "," <feature>]*
where
• <prefix> is hide or show,
• <feature> is services, title, save, or revert.
For example:
hide:title
show:save,revert
viewFeatures: Specifies which features to display in a tabular or a hierarchy view (at the table level). Required: No.
Syntax:
<prefix> ":" <feature> [ "," <feature>]*
where
• <prefix> is hide or show,
• <feature> is create, views, selection, filters, services, refresh, title, or breadcrumb.
For example:
hide:title,selection
show:services,title,breadcrumb
recordFeatures: Specifies which features must be displayed in a form at the record level. These options pertain only to features in the workspace. It is recommended to use this property with the smallest scope possible, namely dataset or node. Required: No.
Syntax:
<prefix> ":" <feature> [ "," <feature>]*
where
• <prefix> is hide or show,
• <feature> is services, title, breadcrumb, save, saveAndClose, close, or revert.
For example:
hide:title
show:save,saveAndClose,revert
pageSize: Specifies the number of records that will be displayed per page in a table view (either tabular or hierarchical). Required: No.
startWorkItem: Specifies whether a work item must be automatically taken and started. Value can be true or false. See ServiceKey.WORKFLOW in the Java API for more information. Required: No; the default value is false, in which case the target work item state remains unchanged.
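To make the parameters above concrete, the following is a minimal sketch that assembles a Web Component URL by hand in Java. The host, port, dataset reference, and table path are hypothetical, and the UIHttpManagerComponent class mentioned earlier remains the recommended way to build such URLs from Java code.
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class WebComponentUrlSketch {

    public static void main(String[] args) {
        // Hypothetical base URL; it must refer to the FrontServlet of ebx.war.
        String base = "http://localhost:8080/ebx/";
        String url = base
            + "?trackingInfo=" + encode("portalIntegration") // logged in history tables
            + "&instance=" + encode("productCatalog")        // hypothetical dataset reference
            + "&xpath=" + encode("/root/domain1/tableA")     // hypothetical table path
            + "&scope=dataset"                               // limit navigation to the dataset
            + "&firstCallDisplay=view"                       // force the tabular or hierarchical view
            + "&pageSize=50";                                // records displayed per page
        System.out.println(url);
    }

    private static String encode(String value) {
        try {
            return URLEncoder.encode(value, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always supported
        }
    }
}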
CHAPTER 37
Built-in user services
EBX includes a number of built-in user services. Built-in user services can be used:
• when defining workflow model tasks [p 136]
• when defining perspective action menu items [p 16]
• as extended user services when used with service extensions [p 553]
• when using EBX as a Web Component [p 191]
This reference page describes the built-in user services and their parameters.
Input parameters
firstCallDisplay (First call display mode): Defines the display mode that must be used when displaying a filtered table or a record upon first call. Default (value 'auto'): the display is automatically set according to the selection. View (value 'view'): forces the display of the tabular view or of the hierarchical view. Record (value 'record'): if the predicate matches at least one record, forces the display of the record form.
Input parameters
xpath (Dataset table to export, XPath): The value must be a valid absolute location path of a table in the selected dataset. The notation must conform to a simplified XPath, in its abbreviated syntax. This field is required for this service.
xpath (Dataset table to import, XPath): The value must be a valid absolute location path of a table in the selected dataset. The notation must conform to a simplified XPath, in its abbreviated syntax. This field is required for this service.
Input parameters
compare.xpath (Table or record to compare, XPath): The value must be a valid absolute location path of a table or a record in the selected dataset to compare. The notation must conform to a simplified XPath, in its abbreviated syntax.
Note
This service is for perspectives only.
CHAPTER 38
XML import and export
This chapter contains the following topics:
1. Introduction
2. Imports
3. Exports
4. Handling of field values
5. Known limitations
38.1 Introduction
XML imports and exports can be performed on tables through the user interface using the Actions
menu in the workspace.
Both imports and exports are performed in the context of a dataset.
Imports and exports can also be done programmatically.
Default import and export option values can be set in the Administration area, under User interface
> Graphical interface configuration > Default option values > Import/Export.
38.2 Imports
Attention
Imported XML documents must be encoded in UTF-8, and their structure must conform to the underlying data model of the target dataset.
Import mode
When importing an XML file, you must specify one of the following import modes, which will dictate
how the import procedure handles the source records.
Update or insert mode: If a record with the same primary key as the source record already exists in the target table, that record is updated. Otherwise, a new record is created.
Replace (synchronization) mode: If a record with the same primary key as the source record already exists in the target table, that record is updated. Otherwise, a new record is created. If a record exists in the target table but is not present in the source XML file, that record is deleted from the table.
See the data services operations update [p 589] and insert [p 591], as well as the method ImportSpec.setByDelta of the Java API.
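For illustration, here is a minimal sketch of a programmatic import inside an EBX Procedure. Only the ImportSpec class and its setByDelta method are named on this page; the other accessor names, the doImport call, and the way the table and file are obtained are assumptions and should be checked against the Java API reference.
import java.io.File;

import com.onwbp.adaptation.AdaptationTable;
import com.orchestranetworks.service.ImportSpec;
import com.orchestranetworks.service.ImportSpecMode;
import com.orchestranetworks.service.Procedure;
import com.orchestranetworks.service.ProcedureContext;

/** Sketch: imports an XML document into a table in 'update or insert' mode. */
public class XmlImportSketch implements Procedure {

    private final AdaptationTable targetTable; // obtained elsewhere (hypothetical)
    private final File sourceFile;             // the XML document to import (hypothetical)

    public XmlImportSketch(AdaptationTable targetTable, File sourceFile) {
        this.targetTable = targetTable;
        this.sourceFile = sourceFile;
    }

    @Override
    public void execute(ProcedureContext pContext) throws Exception {
        ImportSpec spec = new ImportSpec();
        spec.setTargetAdaptationTable(targetTable);          // accessor name is an assumption
        spec.setSourceFile(sourceFile);                      // accessor name is an assumption
        spec.setImportMode(ImportSpecMode.UPDATE_OR_INSERT); // mode constant is an assumption
        spec.setByDelta(false);  // setByDelta is mentioned above; false is the default
        pContext.doImport(spec); // method name is an assumption
    }
}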
The following rules describe how field values are handled during import:
Element does not exist in the source document: If 'by delta' mode is disabled (default), the target field value is set to one of the following:
• If the element defines a default value, the target field value is set to that default value.
• If the element is of a type other than a string or list, the target field value is set to null.
• If the element is an aggregated list, the target field value is set to an empty list.
• If the element is a string that distinguishes null from an empty string, the target field value is set to null. If it is a string that does not distinguish between the two, it is set to an empty string.
• If the element (simple or complex) is hidden in data services, the target value is not changed.
Element exists but is empty (for example, <fieldA/>):
• For nodes of type xs:string (or one of its sub-types), the target field's value is set to null if it distinguishes null from an empty string. Otherwise, the value is set to an empty string.
• For non-xs:string type nodes, an exception is thrown, in conformance with XML Schema.
Element is present and null (for example, <fieldA xsi:nil="true"/>): The target field is always set to null, except for lists, for which it is not supported. In order to use the xsi:nil="true" attribute, you must import the namespace declaration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance".
Optimistic locking
If the technical attribute ebxd:lastTime exists in the source XML file, the import mechanism performs a verification to prevent an update operation on a record that may have changed since the last read. In order to use the ebxd:lastTime attribute, you must import the namespace declaration xmlns:ebxd="urn:ebx-schemas:deployment_1.0". The timestamp associated with the current record will be compared to the imported timestamp. If they differ, the update is rejected.
38.3 Exports
Note
Exported XML documents are always encoded in UTF-8.
When exporting to XML, if the table has filters applied, only the records that correspond to the filter
are included in the exported file.
Download file name: Specifies the name of the XML file to be exported. This field is pre-populated with the name of the table from which the records are being exported.
Include technical data: Specifies whether internal technical data will be included in the export. Note: if this option is selected, the exported file cannot be re-imported.
Omit XML comment: Specifies whether the generated XML comment that describes the location of the data and the date of the export should be omitted from the file.
Selection nodes
The XML import and export services do not support selection values.
Exporting such fields will not cause any error; however, no value will be exported.
Importing such fields will cause an error, and the import procedure will be aborted.
CHAPTER 39
CSV import and export
This chapter contains the following topics:
1. Introduction
2. Exports
3. Imports
4. Handling of field values
5. Known limitations
39.1 Introduction
CSV imports and exports can be performed on tables through the user interface using the Actions
menu in the workspace.
Both imports and exports are performed in the context of a dataset.
Imports and exports can also be done programmatically.
Default import and export option values can be set in the Administration area, under User interface
> Graphical interface configuration > Default option values > Import/Export.
39.2 Exports
When exporting to CSV, if the table has filters applied, only the records that correspond to the filter
are included in the exported file.
Download file name: Specifies the name of the CSV file to be exported. This field is pre-populated with the name of the table from which the records are being exported.
File encoding: Specifies the character encoding to use for the exported file. The default is UTF-8.
Include technical data: Specifies whether internal technical data will be included in the export. Note: if this option is selected, the exported file cannot be re-imported.
Field separator: Specifies the field separator to use for exports. The default separator is the comma; it can be modified under Administration > User interface.
List separator: Specifies the separator to use for lists of values. The default separator is the line return; it can be modified under Administration > User interface.
Programmatic CSV exports are performed using the classes ExportSpec and ExportImportCSVSpec of the Java API.
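As an illustration, a hedged sketch of preparing a programmatic CSV export follows. Only the ExportSpec and ExportImportCSVSpec class names come from this page; every accessor name below is an assumption that should be checked against the Java API reference.
import java.io.File;

import com.onwbp.adaptation.AdaptationTable;
import com.orchestranetworks.service.ExportImportCSVSpec;
import com.orchestranetworks.service.ExportSpec;

/** Sketch: prepares a CSV export specification for a table. */
public class CsvExportSketch {

    public static ExportSpec prepare(AdaptationTable table, File destination) {
        // CSV-specific options: encoding and field separator (accessor names are assumptions).
        ExportImportCSVSpec csvSpec = new ExportImportCSVSpec();
        csvSpec.setEncoding("UTF-8");   // default encoding, as documented above
        csvSpec.setFieldSeparator(','); // default field separator is the comma

        ExportSpec spec = new ExportSpec();
        spec.setSourceAdaptationTable(table); // accessor name is an assumption
        spec.setDestinationFile(destination); // accessor name is an assumption
        spec.setCSVSpec(csvSpec);             // accessor name is an assumption
        return spec;
    }
}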
39.3 Imports
Download file name: Specifies the name of the CSV file to be imported.
Import mode: When importing a CSV file, you must specify one of the following import modes, which will control the integrity of operations between the source and the target table.
• Insert mode: Only record creation is allowed. If a record exists in the target table with the same primary key as the source record, an error is returned and the whole import operation is cancelled.
• Update mode: Only modifications of existing records are allowed. If no record exists in the target table with the same primary key as the source record, an error is returned and the whole import operation is cancelled.
• Update or insert mode: If a record with the same primary key as the source record already exists in the target table, that record is updated. Otherwise, a new record is created.
• Replace (synchronization) mode: If a record with the same primary key as the source record already exists in the target table, that record is updated. Otherwise, a new record is created. If a record exists in the target table but is not present in the source CSV file, that record is deleted from the table.
File encoding: Specifies the character encoding to use for the imported file. The default is UTF-8.
Field separator: Specifies the field separator to use for imports. The default separator is the comma; it can be modified under Administration > User interface.
List separator: Specifies the separator to use for lists of values. The default separator is the line return; it can be modified under Administration > User interface.
Programmatic CSV imports are performed using the classes ImportSpec and ExportImportCSVSpec of the Java API.
Hidden fields
Hidden fields are exported as ebx-csv:hidden strings. An imported hidden string will not modify a
field's content.
Terminal groups
In a CSV file, it is impossible to differentiate a created terminal group that contains only empty fields
from a non-created one.
As a consequence, some differences may appear during comparison after performing an export
followed by an import. To ensure the symmetry of import and export, use XML import and export
instead. See XML import and export [p 213].
Association fields
The CSV import and export services do not support association values, i.e. the associated records.
Exporting such fields will not cause any error; however, no value will be exported.
Importing such fields will cause an error, and the import procedure will be aborted.
Selection nodes
The CSV import and export services do not support selection values, i.e. the selected records.
Exporting such fields will not cause any error; however, no value will be exported.
Importing such fields will cause an error, and the import procedure will be aborted.
CHAPTER 40
Supported XPath syntax
This chapter contains the following topics:
1. Overview
2. Example expressions
3. Syntax specifications for XPath expressions
4. Java API
40.1 Overview
The XPath notation used in EBX must conform to the abbreviated syntax of the XML Path Language
(XPath) Version 1.0 standard, with certain restrictions. This document details the abbreviated syntax
that is supported.
Absolute path
/library/books/
Relative paths
./Author
../Title
Complex predicates
starts-with(col3,'xxx') and ends-with(col3,'yyy') and osd:is-not-null(./col3)
col1 < $param1 and col4 = $param2, where the parameters $param1 and $param2 refer respectively to 100 and 'true'
Note
The use of this notation is restricted to the Java API, since the parameter values can only be set by the method Request.setXPathParameter of the Java API.
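To make this concrete, here is a minimal sketch of a parameterized request. Request.setXPathParameter is named above; the surrounding calls follow the usual request pattern of the Java API, and the table parameter as well as the String value representations are assumptions.
import com.onwbp.adaptation.Adaptation;
import com.onwbp.adaptation.AdaptationTable;
import com.onwbp.adaptation.Request;
import com.onwbp.adaptation.RequestResult;

/** Sketch: filters a table with a parameterized XPath predicate. */
public class XPathParameterSketch {

    public static void run(AdaptationTable table) {
        Request request = table.createRequest();
        request.setXPathFilter("col1 < $param1 and col4 = $param2");
        // Parameter values can only be set through the Java API:
        request.setXPathParameter("param1", "100");  // value representation is an assumption
        request.setXPathParameter("param2", "true"); // value representation is an assumption
        RequestResult result = request.execute();
        try {
            for (Adaptation record; (record = result.nextAdaptation()) != null; ) {
                // process each matching record here
            }
        } finally {
            result.close(); // a RequestResult must always be released
        }
    }
}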
Predicates on label
osd:label(./delivery_date)='12/30/2014' and ends-with(osd:label(../adress),'Beijing -
China')
Note
• XPath functions for validation search cannot be used on XPath predicates defined on associations
and foreign key filters.
• The predicates osd:label, osd:contains-record-label and osd:contains-validation-message are localized. The locale can be set by the Java API methods Request.setLocale or Request.setSession.
Attention
To ensure that the search is performed on an up-to-date validation report, it is necessary to perform
an explicit validation of the table just before using these predicates.
Predicate specification
booleanValue = true
<path>: <relative path> or osd:label(<relative path>), relative to the table that contains it (for example: ../authorstitle)
<boolean comparator>: = or !=
<string comparator>: =
Due to the strong dependence of predicates on the data model node and the node type of the criterion,
the path portion of the atomic predicate expression (left-hand side) must be a node path and cannot
be an XPath formula. For example, the expression /table[floor(./a) > ceiling(./d)] is not valid.
Predicate on label
The osd:label() function can be applied to the path portion of the atomic predicate, in order to resolve the predicate on the label instead of the value. In this case, only string operators and string criteria can be used, for example ends-with(osd:label(./price),'99').
A predicate on label is localized, so the criterion must be expressed in the same
locale as the predicate-filtered request. For example: request.setLocale(Locale.FRENCH);
request.setXPathFilter("osd:label(./delivery_date)='30/12/2014'");
Note
It is forbidden to use the osd:label function if the right part of the predicate is a
contextual value.
Note
If the osd:label function is used in a data model, for example on a selection or in the filter
predicate of a table reference node, the default locale of the data model (as defined in its
module declaration) must be used for the criterion format (even though this is generally
not recommended).
Contextual values
For predicates that are relative to a selected node, the criterion value (that is, the right-hand side of
the predicate) can be replaced with a contextual path using the syntax ${<relative-path>} where
<relative-path> is the location of the element relative to the selected node.
Note
When calling a method, the criterion is the second parameter, and the first parameter
cannot be a relative value.
Aggregated lists
For predicates on aggregated lists, the predicate returns true regardless of the comparator if one of
the list elements verifies the predicate.
Note
Special attention must be paid to the comparator !=. For example, for an aggregated
list, ./list != 'a' is not the same as not(./list = 'a'). Where the list contains the
elements (e1,e2,..), the first predicate is equivalent to e1 != 'a' or e2 != 'a' ...,
while the second is equivalent to e1 != 'a' and e2 != 'a' ....
'Null' values
Null values must be explicitly treated in a predicate using the operators osd:is-null and osd:is-
not-null.
As string literals, 'Coeur' represents the value Coeur, and "Coeur d'Alene" represents the value Coeur d'Alene.
The standard XPath syntax has been extended so as to extract the value of any targeted primary key
field.
Example
If the table /root/tableA has an osd:tableRef field named 'fkB' whose target is /root/tableB and the
primary key of tableB has two fields, id of type xs:int and date of type xs:date, then the following
expressions would be valid:
• /root/tableA[ fkB = '123|2008-01-21' ], where the string "123|2008-01-21" is a representation
of the entire primary key value.
See PrimaryKey.syntax in the Java API for more information on the syntax of the internal String representation of primary keys.
• /root/tableA[ fkB/id = 123 and date-equal(fkB/date, '2008-01-21') ], where this predicate
is a more efficient equivalent to the one in the previous example.
• /root/tableA[ fkB/id >= 123 ], where any number operator could be used, as the targeted
primary key field is of type xs:int.
• /root/tableA[ date-greater-than( ./fkB/date, '2007-01-01' ) ], where any date operator could be used, as the targeted primary key field is of type xs:date.
• /root/tableA[ fkB = "" ] is not valid as the targeted primary key has two columns.
• /root/tableA[ osd:is-null(fkB) ] checks if a foreign key is null (not defined).
Localization
CHAPTER 41
Labeling and localization
This chapter contains the following topics:
1. Overview
2. Value formatting policies
3. Syntax for locales
41.1 Overview
EBX offers the ability to handle the labeling and the internationalization of data models.
Textual information
In EBX, most master data entities can have a label and a description, or can correspond to a user
message. For example:
• Dataspaces, snapshots and datasets can have their own label and description. The label is
independent of the unique name, so that it remains localizable and modifiable;
• Any node in the data model can have a static label and description;
• Values can have a static label when they are enumerated;
• Validation messages can be customized, and permission restrictions can provide text explaining
the reason;
• Each record is dynamically displayed according to its content, as well as the context in which it
is being displayed (in a hierarchy, as a foreign key, etc.);
All this textual information can be localized into the locales that are declared by the module.
See also
Labels and messages [p 497]
Tables declaration [p 459]
Foreign keys declaration [p 464]
If the corresponding file does not exist in the module, the formatting policy is looked up in the class-
path of EBX. If the locale-specific formatting policy is not found, the formatting policy of en_US is
applied.
The content of the file frontEndFormattingPolicy.xml is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<formattingPolicy xmlns="urn:ebx-schemas:formattingPolicy_1.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:ebx-schemas:formattingPolicy_1.0 ../schema/ebx-reserved/formattingPolicy_1.0.xsd">
<date pattern="dd/MM" />
<time pattern="HH:mm:ss" />
<dateTime pattern="dd/MM/yyyy HH:mm" />
<decimal pattern="00,00,00.000" groupingSeparator="|" decimalSeparator="^"/>
<int pattern="000,000" groupingSeparator=" "/>
</formattingPolicy>
CHAPTER 42
Extending EBX internationalization
This chapter contains the following topics:
1. Overview
2. Extending EBX user interface localization
3. Localized resources resolution
4. Known limitations
42.1 Overview
EBX offers the following localization capabilities:
1. The ability to internationalize labels and specific data models, as described in section Labeling
and localization [p 234],
2. The extensibility of EBX user interface localization
Note
Currently, Latin & Cyrillic characters are supported. Locales that use other character sets
may be usable, but are not supported.
• Declare the new locale in the EBX main configuration file. For example:
ebx.locales.available=en-US, fr-FR, xx
• The first locale is always considered the default.
• The built-in locales, en-US and fr-FR, can be removed if required.
See Configuring EBX localization [p 329].
• Deploy the following files in the EBX class-path:
• A formatting policy file, named
com.orchestranetworks.i18n.frontEndFormattingPolicy_xx.xml,
Note
The files must end with ".mxml".
Note
The resolution is done at the localized message level. It is therefore possible to define
one or more files for a locale that only includes messages for which specific localization
is required.
Persistence
CHAPTER 43
Overview of persistence
This chapter describes how master data, history, and replicated tables are persisted. A given table can
employ any combination of master data persistence mode, historization, and replication.
While all persisted information in EBX is ultimately stored as relational tables in the underlying database, whether it is in a form that is accessible outside of EBX depends on whether it is in mapped mode.
Note
The term mapped mode [p 241] refers to any tables that are stored as-is, and thus whose
contents can be accessed directly in the database.
43.2 Historization
Master data tables can activate historization in order to track modifications to their data, regardless of
whether they are persisted in semantic or relational mode, and whether they are replicated.
The history itself is in mapped mode, meaning that it can potentially be consulted directly in the
underlying database.
43.3 Replication
Replication enables direct SQL access to tables of master data, by making a copy of data in the
repository to relational table replicas in the database. Replication can be enabled on any table
regardless of whether it is persisted in semantic or relational mode, and whether it has history activated.
The replica tables are persisted in mapped mode, as their primary purpose is to make master data
accessible to direct queries outside of EBX.
See also
Database mapping administration [p 379]
Data model evolutions [p 263]
including fewer columns in the index. The reasoning is that an index leading to this situation
would have headers so large that it could not be efficient anyway.
• Fields of type type="osd:password" are ignored.
• Terminal complex types are supported; however, they cannot be globally set to null at record-
level.
More generally, tables in mapped mode are subject to any limitations of the underlying RDBMS.
For example, the maximum number of columns in a table applies (1000 for Oracle 11g R2, 1600 for
PostgreSQL). Note that a history table contains twice as many fields as declared in the schema (one
functional field, plus one generated field for the operation code).
Data model evolutions may also be constrained by the underlying RDBMS, depending on the existing
data model.
CHAPTER 44
Relational mode
This chapter contains the following topics:
1. Overview of modes
2. Enabling relational mode for a data model
3. Validation
4. SQL access to data in relational mode
5. Limitations of relational mode
See also
dataspaces [p 88]
dataset inheritance [p 269]
Dataspaces: supported in semantic mode; not supported in relational mode.
Data model: in semantic mode, all features are supported; in relational mode, there are some restrictions, see Data model restrictions for tables in relational mode [p 247].
Data validation: in semantic mode, a tolerant mode is enabled; in relational mode, some constraints become blocking, see Validation [p 245].
Transactions: in both modes, see Concurrency and isolation levels [p 432].
Data model evolutions: in both modes, see Data model evolutions [p 263] in the Reference Manual.
44.3 Validation
This section details the impact of relational mode on data validation.
Structural constraints
Some EBX data model constraints will generate a "structural constraint" on the underlying RDBMS
schema for relational mode and also if table history is activated [p 249]. This concerns the following
facets:
• facets xs:maxLength and xs:length on string elements;
• facets xs:totalDigits and xs:fractionDigits on xs:decimal elements.
Databases do not support as tolerant a validation mode as EBX. Hence, the above constraints become
blocking constraints. A blocking constraint means that updates are rejected if they do not comply.
Additionally, such constraints are no longer checked during the validation process, except for foreign key constraints under some circumstances (see Foreign key blocking mode [p 245]). When a transaction does not comply with a blocking constraint, it is cancelled and a ConstraintViolationException is thrown.
If the table defines millions of records, this becomes a performance issue. It is then recommended to define a table-level constraint using ConstraintOnTable of the Java API.
In the case where it is not possible to define such a table-level constraint, it is recommended to at least define a local or explicit dependency (see DependenciesDefinitionContext.dependencies in the Java API).
SQL reads
Direct SQL reads are possible in well-managed, preferably short-lived transactions. However, for such accesses, EBX permissions are not taken into account. As a result, applications allowed to perform reads must be trusted through other authentication processes and permissions.
SQL writes
Direct SQL writes bypass the governance framework of EBX. Therefore, they must be used with
extreme caution. They could cause the following situations:
• failure to historize EBX tables;
• failure to execute EBX triggers;
• failure to verify EBX permissions and constraints;
• modifications missed by the incremental validation process;
• losing visibility on EBX semantic tables, which might be referenced by foreign keys.
Consequently, direct SQL writes are to be performed if, and only if, all the following conditions are
verified:
• The written tables are not historized and have no EBX triggers.
• The application performing the writes can be fully trusted with the associated permissions, to
ensure the integrity of data. Specifically, the integrity of foreign keys (osd:tableRef) must be
preserved at all times. See Foreign key blocking mode [p 245] for more information.
• The application server running EBX is shut down whenever writes are performed. This is to
ensure that incremental validation does not become out-of-date, which would typically occur in
a batch context.
Schema evolutions may also be constrained by the underlying RDBMS, depending on the data already
contained in the concerned tables.
CHAPTER 45
History
This chapter contains the following topics:
1. Overview
2. Configuring history
3. History views and permissions
4. SQL access to history
5. Impacts and limitations of historized mode
45.1 Overview
History is a feature that tracks all data modifications on a table (record creation, update and deletion).
It is an improvement on the XML audit trail [p 393]. XML audit trail is still activated by default; it
can be safely deactivated if history is enabled for the relevant tables.
See also
History [p 26]
Relational mode [p 243]
Replication [p 257]
Data model evolutions [p 263]
• A list of dataspaces (branches) for which history is activated. It is possible to specify whether
direct children and/or all descendants should also be concerned.
Some profiles are already created when installing the repository. These profiles can neither be deleted
nor modified.
The data model assistant allows you to view the historization profiles defined in the repository.
Historization must be activated for each table separately. See model design [p 442] documentation
for more details.
To disable the history of a field or group through the data model assistant, use the History property
in the Advanced properties of the element.
When this property is defined on a group, history is disabled recursively for all its descendants. Once
a group disables history, it is not possible to specifically re-enable history on a descendant.
Note
If the table containing the field or group is not historized, this property will not have any effect.
It is not possible to disable history for primary key fields.
Integrity
If problems are detected at data model compilation, warning messages or error messages will be added
to the validation report associated with this data model. Furthermore, if any error is detected, each
associated instance (dataset) will be inaccessible. The most common error cases are the following:
• A table references a profile that is not defined in the repository.
• A history profile that is referenced in the data model mentions a non-defined or closed dataspace
in the current repository.
Note
Deploying a data model on a repository that does not have the expected profiles requires
the administrator to add them.
The method AdaptationTable.getHistory of the Java API can then be used in a programmatic rule.
To see the 'Transaction history' table, navigate to the Administration area and select 'History and logs'
using the down arrow menu in the navigation pane. Transaction history can also be accessed from the
Dataspaces area by selecting a historized dataspace and using the Actions menu in the workspace.
For more information, see transaction history view [p 27].
Access restrictions
The database tables must be accessed only in read-only mode. It is up to the database administrator
to forbid write access except for the database user used by EBX, as specified in the section Rules for
the database access and user privileges [p 347].
Common and generic tables: The main table is HV_TX; each record of this table represents a transaction. Only transactions that involve at least one historized table are recorded. These common tables are all prefixed with "HV".
Specific generated tables: For each historized table, a specific history table is generated. This table contains the history of the data modifications on the table. In the EBX user interface, the name of this table in the database can be obtained by clicking on the table documentation pane (advanced mode). All the specific history tables are prefixed with "HG".
Activating history on this table generates the HG_product table shown in the history schema structure
above. Here is the description of its different fields:
• tx_id: transaction ID.
• instance: instance ID.
• op: operation type - C (create), U (update) or D (delete).
• productId: productId field value.
• OproductId: operation field for productId, see next section.
• price: price field value.
• Oprice: operation field for price, see next section.
• beginDate: beginDate field value.
• ObeginDate: operation field for beginDate, see next section.
Validation
Some EBX data model constraints become blocking constraints when table history is activated. For
more information, see the section Structural constraints [p 245].
• Programmatic merge: when performing a programmatic merge on a dataspace, the time and
user of the last operation performed in the child dataspace are preserved, while the user
recorded in history is the user who performs the merge.
• D3: for the distributed data delivery feature, when a broadcast is performed, the data from the primary node is reported on the replica node; the time and user of the last operation performed in the child dataspace are preserved, while the user recorded in history is 'ebx-systemUser', which performs the report on the replica node upon the broadcast.
CHAPTER 46
Replication
This chapter contains the following topics:
1. Overview
2. Configuring replication
3. Accessing a replica table using SQL
4. Requesting an 'onDemand' replication refresh
5. Impact and limitations of replication
46.1 Overview
Data stored in the EBX repository can be mirrored to dedicated relational tables to enable direct access
to the data by SQL requests and views.
Like history and relational mode, this data replication is transparent to end-users and client
applications. Certain actions trigger automatic changes to the replica in the database:
• Activating replication at the model-level updates the database schema by automatically executing
the necessary DDL statements.
• Data model evolutions that impact replicated tables, such as creating a new column, also
automatically update the database schema using DDL statements.
• When using the 'onCommit' refresh mode: updating data in the EBX repository triggers the
associated inserts, updates, and deletions on the replica database tables.
See also
Relational mode [p 243]
History [p 249]
Data model evolutions [p 263]
Repository administration [p 346]
Note
replicated table: refers to a primary data table that has been replicated
replica table (or replica): refers to a database table that is the target of the replication
name: Name of the replication unit. This name identifies a replication unit in the current data model. It must be unique. Required: Yes.
refresh: Specifies the data synchronization policy. Required: Yes. The possible policies are:
• onCommit: The replica table content in the database is always up to date with respect to its source table. Every transaction that updates the EBX source table triggers the corresponding insert, update, and delete statements on the replica table.
• onDemand: The replication of specified tables is only done when an explicit refresh operation is performed. See Requesting an 'onDemand' replication refresh [p 260].
table/path: Specifies the path of the table in the current data model that is to be replicated to the database. Required: Yes.
table/nameInDatabase: Specifies the name of the table in the database to which the data will be replicated. This name must be unique amongst all replication units. Required: Yes.
table/element/path: Specifies the path of the aggregated list in the table that is to be replicated to the database. Required: Yes.
table/element/nameInDatabase: Specifies the name of the table in the database to which the data of the aggregated list will be replicated. This name must be unique amongst all replication units. Required: Yes.
For example:
<xs:schema>
<xs:annotation>
<xs:appinfo>
<osd:replication>
<name>ProductRef</name>
<dataSpace>ProductReference</dataSpace>
<dataSet>productCatalog</dataSet>
<refresh>onCommit</refresh>
<table>
<path>/root/domain1/tableA</path>
<nameInDatabase>PRODUCT_REF_A</nameInDatabase>
</table>
<table>
<path>/root/domain1/tableB</path>
<nameInDatabase>PRODUCT_REF_B</nameInDatabase>
<element>
<path>/retailers</path>
<nameInDatabase>PRODUCT_REF_B_RETAILERS</nameInDatabase>
</element>
</table>
</osd:replication>
</xs:appinfo>
</xs:annotation>
...
</xs:schema>
Notes:
• See Data model restrictions for replicated tables [p 260]
• If, at data model compilation, the specified dataset and/or dataspace does not exist in the current
repository, a warning is reported, but the replica table is created in the database. Once the specified
dataspace and dataset are created, the replication becomes active.
• At data model compilation, if a table replication is removed, or if any of the above properties have changed, the replica table is dropped from the database, and then recreated with the new definition if needed.
To disable the replication of a field or group through the data model assistant, use the Replication
property in the Advanced properties of the element.
When this property is defined on a group, replication is disabled recursively for all its descendants.
Once a group disables replication, it is not possible to specifically re-enable replication on a
descendant.
Note
If the table containing the field or group is not replicated, this property will not have any effect.
It is not possible to disable replication for primary key fields.
Access restrictions
The replica database tables must only be directly accessed in read-only mode. It is the responsibility
of the database administrator to block write-access to all database users except the one that EBX uses.
See also Rules for the database access and user privileges [p 347]
SQL reads
Direct SQL reads are possible in well-managed, preferably short-lived transactions. However, for such
accesses, EBX permissions are not taken into account. As a result, applications given the privilege to
perform reads must be trusted through other authentication processes and permissions.
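For example, here is a short-lived JDBC read of the replica table PRODUCT_REF_A declared in the replication unit above; the JDBC URL and credentials are hypothetical, and the same pattern applies to the "HG"-prefixed history tables.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/** Sketch: reads a replica table directly through JDBC, in read-only mode. */
public class ReplicaReadSketch {

    public static void main(String[] args) throws SQLException {
        // Hypothetical connection settings; use a dedicated read-only database user.
        String jdbcUrl = "jdbc:postgresql://localhost:5432/ebx";
        try (Connection connection = DriverManager.getConnection(jdbcUrl, "ebx_read", "secret")) {
            connection.setReadOnly(true); // replica tables must only be read
            String sql = "SELECT * FROM PRODUCT_REF_A";
            try (PreparedStatement statement = connection.prepareStatement(sql);
                 ResultSet resultSet = statement.executeQuery()) {
                while (resultSet.next()) {
                    // process each replicated row; EBX permissions are not applied here
                }
            }
        }
    }
}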
Validation
Some EBX data model constraints become blocking constraints when replication is enabled. For more
information, see Structural constraints [p 245].
• Field inheritance is also only supported for the 'onDemand' refresh policy. This means that, at
data model compilation, an error is reported if the refresh mode is 'onCommit' and the table to be
replicated has an inherited field. See inherited fields [p 270] for more information.
• Computed values are ignored.
• Limitations exist for two types of aggregated lists: aggregated lists under another aggregated list,
and aggregated lists under a terminal group. Data models that contain such aggregated lists can
be used, however these lists will be ignored (not replicated).
• User-defined attributes are not supported. A compilation error is raised if they are included in a
replication unit.
Data model evolutions may also be constrained by the underlying RDBMS, depending on the data
already contained in the concerned tables.
Database configuration
The refresh operation is optimized to transmit only the rows of the source table that have been modified
(with respect to creation and deletion) since the last refresh. However, depending on the volume of
data exchanged, this can be an intensive operation, requiring large transactions. In particular, the first
refresh operation can concern a large number of rows. It is necessary for the database to be configured
properly to allow such transactions to run under optimal conditions.
For instance, with Oracle:
• It is mandatory for the bulk of all replica tables in a replication unit to fit into the 'UNDO'
tablespace.
• It is recommended to provide enough space in the buffer cache to allow those transactions to run
with minimal disk access.
• It is recommended to provision 'REDO' log groups big enough to prevent those transactions from waiting for the 'db_writer' process.
• For inheritance, a replica record field cannot hold the "inherit value" flag
(AdaptationValue.INHERIT_VALUE). It only holds the inherited value in such cases. More
generally, it is not possible to distinguish inheriting state from overwriting state.
CHAPTER 47
Data model evolutions
This chapter describes the modifications that are possible on data models, as well as potential
limitations. The restrictions and/or potential impacts of data model evolutions depend on the
persistence mode. The principles for each mode are the following:
• Semantic mode: flexible and non-blocking. Can lead to a loss of data; for instance, a primary key
definition can freely evolve, but all existing records in any dataspace and snapshot that violate
the primary key constraint will no longer be loaded.
• Any mapped mode: restrictive and thus blocking if data exists and if the evolution would violate its integrity according to the new data model.
Attention
Whenever the data modeler performs an evolution on the data model, it is important to anticipate
the fact that it could lead to a loss of data. In such cases, if existing data must be preserved in some
ways, a data migration plan must be set up and operated before the new data model is published or
deployed. It can also be noted that data is not destroyed immediately after the data model evolution;
in semantic mode, as long as no update is performed on a table whose definition has evolved, if the
data model is rolled back to its previous state, then the previous data is retrieved.
Note
Certain types of data model evolutions cannot be performed directly in the user interface,
and thus the data model must be exported, modified in XSD format, then re-imported.
For changes to a data model that impact its configuration, not just its structure, the XSD
must be imported into EBX from a module. Otherwise, the configuration modifications
are not taken into account.
Model-level evolutions
The following modifications can be made to existing data models:
• A data model in semantic mode can be declared to be in relational mode. Data should be manually
migrated, by exporting then re-importing an XML or archive file.
• Relational mode can be disabled on the data model. Data should be manually migrated, by
exporting then re-importing an XML or archive file.
• Replication units can be added to the data model. If their refresh policy is 'onCommit', the
corresponding replica tables will be created and refreshed on next schema compilation.
• Replication units can be removed from the data model. The corresponding replica tables will be
dropped immediately.
• The data model can be deleted. If it declares replication units, the corresponding replica tables
will be dropped immediately. If it is relational or contains historized tables, this change marks
the associated mapped tables as disabled. See Database mapping [p 379] for the actual removal
of associated database objects.
Table-level evolutions
The following modifications can be made to a data model at the table-level:
• A new table can be added. Upon creation, the table can also declare one or more mapped modes.
• An existing table can be deleted. If it declares replication units, the corresponding replica tables
will be dropped immediately. If it is historized or relational, this change marks the mapped table as
disabled. See Database mapping [p 379] for the actual removal of associated database objects.
• An existing table in semantic mode can be declared to be in relational mode. Data should be
manually migrated, by exporting then re-importing an XML or archive file.
• History can be enabled or disabled on a table. History will not take into account the operations
performed while it is disabled.
• A table can be renamed. Data should be manually migrated, by exporting then re-importing an
XML or archive file, because this change is considered to be a combination of deletion and
creation.
Field-level evolutions
The following modifications can be made to a data model at the field-level:
• A new field can be added.
• An existing field can be deleted. In semantic mode, the data of the deleted field will be
removed from each record upon its next update. For a replica table, the corresponding column is
automatically removed. In history or relational mode, the field is marked as disabled.
• A field can be specifically disabled from the history or replication which applies to its containing
table, by using the attribute disable="true". For a replica table, the corresponding column is
automatically removed. For a history table, the column remains but is marked as disabled. See
Disabling history on a specific field or group [p 250] and Disabling replication on a specific field
or group [p 259].
• The facets of a field can be modified, except for the facets listed under Limitations/restrictions
[p 265].
• A field can be renamed.
• The type of a field can be changed.
These last two changes are accepted, but they can lead to a loss of data. Data should be migrated
manually, by exporting then re-importing an XML or archive file, since these changes are considered
to be a combination of deletion and creation.
Index-level evolutions
• An index can be added or renamed.
• An index can be modified, by changing or reordering its fields. In mapped mode, the existing
index is deleted and a new one is created.
• An index can be deleted. In mapped mode, a deleted index is also deleted from the database.
47.2 Limitations/restrictions
Note
All limitations listed in this section that affect mapped mode can be worked around by
purging the mapped table database resources. For the procedure to purge mapped table
database resources, see Database mapping [p 379].
• In semantic mode, the existing values of a foreign key field are only loaded into the cache if they
comply with the new structure of the target primary key.
• In mapped mode, the structure of a foreign key field is set to match that of the target primary
key. A single field declaring an osd:tableRef constraint may then be split into a number of
columns, whose number and types correspond to that of the target primary key. Hence, the
following cases of evolutions will have an impact on the structure of the mapped table:
• declaring a new osd:tableRef constraint on a table field;
• removing an existing osd:tableRef constraint on a table field;
• adding (resp. removing) a column to (resp. from) a primary key referenced by an existing
osd:tableRef constraint;
• modifying the type or path for any column of a primary key referenced by an existing
osd:tableRef constraint.
These cases of evolution will translate to a combination of field deletions and/or creations.
Consequently, the existing data should be migrated manually.
CHAPTER 48
Inheritance and value resolution
This chapter contains the following topics:
1. Overview
2. Dataset inheritance
3. Inherited fields
4. Optimize & Refactor service
48.1 Overview
The principle of inheritance is to mutualize resources that are shared by multiple contexts or entities.
EBX offers mechanisms for defining, factorizing and resolving data values: dataset inheritance and
inherited fields.
Furthermore, functions can be defined to compute values.
Note
Inheritance mechanisms described in this chapter should not be confused with "structural
inheritance", which usually applies to models and is proposed in UML class diagrams
for example.
Dataset inheritance
Dataset inheritance is particularly useful when data applies to global enterprise contexts, such as
subsidiaries or business partners.
Based on a hierarchy of datasets, it is possible to factorize common data into the root or intermediate
datasets and define specialized data in specific contexts.
The dataset inheritance mechanisms are detailed below in Dataset inheritance [p 269].
Inherited fields
Contrary to dataset inheritance, which exploits global built-in relationships between datasets, inherited
fields exploit finer-grained dependencies that are specific to the data structure. They allow factorizing
and specializing data at the level of business entities.
For example, if the model specifies that a 'Product' is associated with a 'FamilyOfProducts', it is
possible that some attributes of 'Product' inherit their values from the attributes defined in its associated
'FamilyOfProducts'.
Note
When both types of inheritance are used in the same dataset, field inheritance takes
priority over dataset inheritance.
The element osd:inheritance defines the property dataSetInheritance to specify the use of
inheritance on datasets based on this data model. The following values can be specified:
• all, indicates that inheritance is enabled for all datasets based on the data model.
• none, indicates that inheritance is disabled for all datasets based on the data model.
If not specified, the inheritance mechanism is disabled.
Inheritance does not apply to the following nodes:
• Auto-incremented nodes
• Nodes defining a computed value
When dataset inheritance is enabled, each record of a table is in one of the following states:
• root record: Locally defined in the table and has no parent. This means that no record with the
same primary key exists in the parent table, or that this parent is an occulting record.
• overwriting record: Locally defined in the table and has a parent record. This means that a record
with the same primary key exists in the parent table, and that this parent is not an occulting record.
The overwriting record inherits its values from its parent, except for the values that it explicitly
redefines.
• inherited record: Not locally defined in the current table and has a parent record. All values are
inherited. Functions are always resolved in the current record context and are not inherited.
• occulting record: Specifies that, if a parent with the same primary key is defined, this parent will
not be visible in the table's descendants.
For example, a declaration of this kind (the enclosing element and the 'family' foreign key are
illustrative) makes the field 'color' inherit its value from the 'color' field of the record referenced
by the 'family' foreign key:
<xs:element name="color" type="xs:string" minOccurs="0">
  <xs:annotation>
    <xs:appinfo>
      <osd:inheritance>
        <sourceRecord>family</sourceRecord>
        <sourceNode>color</sourceNode>
      </osd:inheritance>
    </xs:appinfo>
  </xs:annotation>
</xs:element>
The element sourceRecord is an expression that describes how to look up the record from which the
value is inherited. It is a foreign key, or a sequence of foreign keys, from the current element to the
source table.
If sourceRecord is not defined in the data model, the inherited fields are fetched from the current
record.
The element sourceNode is the path of the node from which to inherit in the source record.
The following conditions must be satisfied when declaring an inherited field:
• The element sourceNode is mandatory.
• The expression for the path to the source record must be a consistent path of foreign keys, from
the current element to the source record. This expression must involve only one-to-one and zero-
to-one relationships.
• The sourceRecord cannot contain any aggregated list elements.
• Each element of the sourceRecord must be a foreign key.
• If the inherited field is also a foreign key, the sourceRecord cannot refer to itself to get the path
to the source record of the inherited value.
• Every element of the sourceRecord must exist.
• The source node must belong to the table containing the source record.
• The source node must be terminal.
• The source node must be writeable.
• The source node type must be compatible with the current node type.
• The source node cardinalities must be compatible with those of the current node.
• The source node cannot be the same as the inherited field if the fields to inherit from are fetched
into the same record.
Optimize & Refactor service
The 'Optimize & Refactor' service optimizes a hierarchy of datasets as follows:
• Handles duplicated values: Detects and removes all parameter values that are duplicates of the
inherited value.
• Mutualizes common values: Detects and mutualizes the common values among the descendants
of a common ancestor.
Procedure details
Datasets are processed from the bottom up: if the service is run on the dataset at level N, with N+1
being the level of its children and N+2 the level of its children's children, the service first processes
the datasets at level N+2 to determine whether they can be optimized with respect to the datasets at
level N+1. It then proceeds with an optimization of level N+1 against level N.
Note
• These optimization and refactoring functions do not handle default values that are
declared in the data model.
• The highest level considered during the optimization procedure is always the dataset on
which the service is run. This means that optimization and refactoring are not performed
between the target dataset and its own ancestors.
• Table optimization is performed on records with the same primary key.
• Inherited fields are not optimized.
• The optimization and refactoring functions do not modify the resolved view of a dataset,
if it is activated.
Service availability
The 'Optimize & Refactor' service is available on datasets that have child datasets and have the
'Activated' property set to 'No' in their dataset information.
The service is available to any profile with write access on current dataset values. It can be disabled
by setting restrictive access rights on a profile.
Note
For performance reasons, access rights are not verified on every node and table record.
CHAPTER 49
Permissions
Permissions dictate the access each user has to data and actions.
This chapter contains the following topics:
1. Overview
2. Defining user-defined rules
3. Defining programmatic rules
4. Resolving permissions on data
5. Resolving permissions on services
6. Resolving permissions on actions
49.1 Overview
Permissions are related to whether actions are authorized or not. They are also related to access rights,
that is, whether an entity is hidden, read, or read-write. The main entities controlled by permissions are:
• Dataspace
• Dataset
• Table
• Group
• Field
• An owner of a dataspace is a user matching the 'owner' attribute specified for the dataspace. In this
case, the built-in role 'OWNER' is activated when permissions are resolved in the context of the
dataspace.
Permission rules
A permission rule defines the authorization granted to a profile for a particular entity.
User-defined permission rules are created through the user interface. See the section Defining user-
defined rules [p 276].
Programmatic permission rules can be created by developers. See the section Defining programmatic
rules [p 280].
Resolution of permissions
Permissions are always resolved in the context of an authenticated user session, thus permissions are
mainly based on the user profiles.
In general, resolution of permissions is performed restrictively between a given level and its parent
level. Thus, at any given level, a user cannot have a higher permission than the one resolved at a
parent level.
Programmatic permissions are always considered to be restrictive.
Note
In the Java API, the class SessionPermissions provides access to the resolved
permissions.
See also
Resolving permissions on data [p 281]
Resolving permissions on services [p 285]
Resolving permissions on actions [p 287]
On a dataset
An administrator or owner of a dataset can perform the following actions:
• Manage its permissions
• Change its owner, if the dataset is a root dataset
• Change its general information (localized labels and descriptions)
Attention
While the definition of permissions can restrict an administrator or dataset owner's right to view
data or perform certain actions, it remains possible for them to modify their own access, as they will
always have access to permissions management.
On a dataspace
To be a super owner of a dataspace, a user must either:
Attention
While the definition of permissions can restrict an administrator or dataspace owner's right to view
data or perform certain actions, it remains possible for them to modify their own access, as they will
always have access to permissions management.
To avoid this, permissions on this node must be checked explicitly before applying the filter or
the sort criteria.
• When defining a custom view on this table in the UI. To avoid this, view definition
permissions should be restricted for such users.
• Resolution of custom display labels for tables ('defaultLabel' property) and relationships ('display'
property) ignores permissions, and fields usually hidden due to access rights restrictions will
be displayed in such labels. As a result, these labels should not contain any confidential field.
Otherwise, a permission strategy should also be defined to restrict the display of the whole label.
• When a procedure disables all permission checks by using ProcedureContext.setAllPrivileges,
the client code must check that the current user session is allowed to run the procedure.
• When performing actions on a table (create, delete, overwrite or occult) in a procedure, the
current user session's access rights on the table node are ignored during permission resolution.
Should this check be required, the client code must explicitly call SessionPermissions.
getNodeAccessPermission beforehand in the procedure (see the sketch after this list).
• To optimize the resolution of permissions for both data and user services, a dedicated cache
is implemented at the session level; it only takes user-defined permissions into account, not
programmatic rules (which are not cached, since they are contextual and dynamic). The session
cache life cycle depends on the context, as described hereafter:
• In the UI, the cache is cleared for every non-Ajax event (i.e. on page display, pop-up opening,
etc.).
• In programmatic procedures, the cache lasts until the end of the procedure, unless explicitly
cleared (see below).
Attention
When modifying permissions in a procedure context (by importing an EBX archive or
merging a dataspace programmatically), the session cache must be cleared via a call to
Session.clearCache. Otherwise, these modifications will not be reflected until the end
of the procedure.
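The following sketch shows such an explicit permission check inside a procedure. The dataset, the
table path and the modification logic are illustrative assumptions, not part of the product reference.

import com.onwbp.adaptation.*;
import com.orchestranetworks.schema.*;
import com.orchestranetworks.service.*;

// Sketch: check the session's access rights on a table node before
// modifying the table in a procedure.
public class CheckedUpdateProcedure implements Procedure
{
    private final Adaptation dataset;  // dataset containing the table (illustrative)
    private final Path tablePath;      // e.g. Path.parse("/root/products") (illustrative)

    public CheckedUpdateProcedure(Adaptation dataset, Path tablePath)
    {
        this.dataset = dataset;
        this.tablePath = tablePath;
    }

    @Override
    public void execute(ProcedureContext context) throws Exception
    {
        SessionPermissions permissions = context.getSession().getPermissions();
        SchemaNode tableNode = dataset.getSchemaNode().getNode(tablePath);
        AccessPermission access = permissions.getNodeAccessPermission(tableNode, dataset);
        if (!AccessPermission.getReadWrite().equals(access))
            throw OperationException.createError("Table modification not allowed for this user.");
        // ... perform the creations/deletions on the table here ...
        // If this procedure also modified permissions (archive import, merge),
        // clear the session cache so the changes are visible immediately:
        // context.getSession().clearCache();
    }
}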
Actions on dataspaces
Create a child dataspace: Indicates whether the profile can create child dataspaces from the current
dataspace.
Create a child snapshot: Indicates whether the profile can create snapshots of the current dataspace.
Initiate merge: Indicates whether the profile can merge the current dataspace with its parent dataspace.
Export archive: Indicates whether the profile can export the current dataspace as an archive.
Import archive: Indicates whether the profile can import an archive into the current dataspace.
Close a dataspace: Indicates whether the profile can close the current dataspace.
Close a snapshot: Indicates whether the profile can close a snapshot of the current dataspace.
Rights on services: Indicates whether a profile has the right to execute services on the dataspace.
By default, all dataspace services are allowed.
Permissions of child dataspace when created: When a user creates a child dataspace, the permissions
of this new dataspace are automatically assigned to its owner's profile, based on the permissions
defined under 'Permissions of child dataspace when created' in the parent dataspace. If multiple
permissions are defined for the owner through different roles, the owner's profile behaves like any
other profile and permissions are resolved [p 274] as usual.
Actions on datasets
Create a child dataset: Indicates whether the profile has the right to create a child dataset of the
current dataset.
Duplicate dataset: Indicates whether the profile has the right to duplicate the current dataset.
Change the dataset parent: Indicates whether the profile has the right to change the parent dataset
of a given child dataset.
Actions on tables
The action rights on default tables are defined at the dataset level. It is then possible to override these
default rights for one or more tables. The allowable permissions for each profile are as follows:
Create a new record: Indicates whether the profile has the right to create records in the table.
Overwrite inherited record: Indicates whether the profile has the right to overwrite inherited records
in the table.
Occult inherited record: Indicates whether the profile has the right to occult inherited records in
the table.
Delete a record: Indicates whether the profile has the right to delete records in the table.
Permissions on services
An administrator or an owner of the current dataspace can modify the service default permission to
either restrict or grant access to certain profiles.
Programmatic rules can also control access to services:
• the ServiceActivationRule [p 280], described in the section below Defining activation rules on
service [p 280];
• the ServicePermissionRule, described in the section below Defining permission rules on
service [p 281].
According to the rule target (model node(s) or records) and type (AccessRule
or AccessRuleForCreate), several methods, such as SchemaExtensionsContext.
setAccessRuleForCreateOnNode or SchemaExtensionsContext.setAccessRuleOnOccurrence,
can be used.
The rule thus assigned is said to be "local" and is only executed when the target entity is requested.
See Resolving permissions on data [p 281] for more information.
Attention
Only one AccessRule can be defined for each node, dataspace or record. Only one
AccessRuleForCreate can be defined for each table child node. The definition of a new
programmatic rule of one type will lead to the replacement of the existing one.
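As a minimal sketch, a programmatic access rule could look as follows; the 'publisher' role name
is an illustrative assumption, not a built-in role.

import com.onwbp.adaptation.Adaptation;
import com.orchestranetworks.schema.SchemaNode;
import com.orchestranetworks.service.*;

// Sketch of an AccessRule: records are read-write for members of an
// (illustrative) 'publisher' role, and hidden for everyone else.
public class PublisherOnlyAccessRule implements AccessRule
{
    @Override
    public AccessPermission getPermission(Adaptation adaptation, Session session, SchemaNode node)
    {
        if (session.isUserInRole(Role.forSpecificRole("publisher")))
            return AccessPermission.getReadWrite();
        return AccessPermission.getHidden();
    }
}

Such a rule could then be registered on the record occurrences of a table from a schema extension,
for instance with context.setAccessRuleOnOccurrence(Path.parse("/root/products"), new
PublisherOnlyAccessRule()); the path is illustrative.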
• Either at service declaration time, through the ActivationContextWithDatasetSet.setActivationRule
methods. The resulting rule will be evaluated during the service activation evaluation. See
Resolving permissions on services [p 285] for more information.
The rule thus assigned is said to be "global" and is only executed when the service is activated
for the current context. See Resolving permissions on services [p 285] for more information.
• Or, for existing services, in the schema extension SchemaExtensions, using methods such as
SchemaExtensionsContext.setServicePermissionRuleOnNodeAndAllDescendants.
It is thus possible to assign a rule to any service, including standard services
provided by EBX, on one or more data model nodes: a table node, an association node, etc.
The rule thus assigned is said to be "local" and is only executed in the extended schema
context and when the node corresponds to the one specified. See Resolving permissions on
services [p 285] for more information.
Attention
Only one ServicePermissionRule can be defined for each model node. Thus, the definition
of a new programmatic rule will replace the existing one.
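As a sketch, and assuming a service selected on a table view, a permission rule could disable a
service outside an illustrative 'Reference' dataspace:

import com.onwbp.adaptation.AdaptationHome;
import com.orchestranetworks.service.UserMessage;
import com.orchestranetworks.ui.selection.TableViewEntitySelection;
import com.orchestranetworks.userservice.permission.*;

// Sketch of a ServicePermissionRule: the service is enabled only in the
// (illustrative) 'Reference' dataspace.
public class ReferenceDataspaceOnlyRule
    implements ServicePermissionRule<TableViewEntitySelection>
{
    @Override
    public UserServicePermission getPermission(
        ServicePermissionRuleContext<TableViewEntitySelection> context)
    {
        AdaptationHome dataspace = context.getEntitySelection().getDataspace();
        if ("Reference".equals(dataspace.getKey().getName()))
            return UserServicePermission.getEnabled();
        return UserServicePermission.getDisabled(
            UserMessage.createError("This service is only available in the 'Reference' dataspace."));
    }
}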
• If no rules having restrictions are defined, the maximum permissions of all matching rules are
applied.
Examples:
Given two profiles P1 and P2 concerning the same user, the following table lists the possibilities when
resolving that user's permission to a service.
The same restriction policy is applied for data access rights resolution.
In another example, a dataspace can be hidden from all users by defining a restrictive association
between the built-in profile "Profile.EVERYONE" and the access right "hidden".
At any given level, the most restrictive access rights between those resolved at this level and higher
levels are applied. For instance, if a user's dataset access permissions resolve to read-write access,
but the container dataspace only allows read access, the user will only have read-only access to this
dataset.
Note
The dataset inheritance mechanism applies to both values and access rights. That is,
access rights defined on a dataset will be applied to its child datasets. It is possible to
override these rights in the child dataset.
User 1: user1, role A, role B
User 2: user2, role A, role B, role C
User 3: user3, role A, role C
Among the access rights defined for these profiles and roles (none marked as a restriction):
user3: Read
Role A: Read/Write
Role C: Hidden
After resolution based on the role and profile access rights above, the rights that are applied to each
user are as follows:
User 1: Hidden
User 2: Read
User 3: Read/Write
Note
The resolution procedure is slightly different for table and table child nodes.
Attention
The resolved access rights on a dataset or dataset node are the minimum between the resolved access
rights defined in the user interface and the resolved programmatic rules, if any.
The permissions of a service are resolved as the service is called from the user interface, namely:
• During the execution, just before the service is displayed.
If the permission resolved in the user context is not enabled, a restriction message is displayed
in place of the service.
• During the display of menus if the service is defined as displayable in menus.
If the permission resolved in the context for the user is not enabled, the service will not be
displayed in the menu.
Thus, upon every request, the permissions of a service are resolved in the following order, as long
as each condition is met:
1. The service activation has to correspond to the current context. This activation considers:
• the selected entity type (dataset, table, record, etc.);
• static activation rules defined within the UserServiceDeclaration.defineActivation
method;
• the potential dynamic activation rule (ServiceActivationRule [p 280]), also defined within the
UserServiceDeclaration.defineActivation method.
2. When the service is activated for the current context, permissions for the user session will be
evaluated:
• If permissions have been defined via the user interface for the current user (or for their roles),
their resolution must return enabled.
For more information, please refer to the Resolving user-defined rules [p 286] section.
• If a global permission rule [p 281] is defined for the service, it must return enabled for the
context provided (see ServicePermissionRuleContext).
• If a local permission rule [p 281] is defined for the selected node, it must return enabled for
the context provided (see ServicePermissionRuleContext).
Example
In this example, there are two users belonging to different roles and profiles:
User 1: user1, role A, role B
User 2: role C, role D
The permissions associated with the roles and profiles defined at the dataset level determine which
of the following services are available to each user after permission resolution: create a snapshot,
launch a merge, export an archive, import an archive, create a dataset, create a view, override records,
occult records, and delete records.
For the resolution of permissions on actions, only the permissions defined via the user interface for
the current user (or their roles) will be taken into account, the restriction policy being applied as for
any other permission defined via the user interface.
For more information, please refer to the Resolving user-defined rules [p 289] section.
Example
In this example, we have two users belonging to different roles and profiles:
User 1: user1, role A, role B
User 2: role C, role D
Rights associated with roles and profiles on the actions of a given table are as follows:
Role C: Yes, No, No, No, No
Role D: No, No, Yes, No, No
After resolving these rights, the actions available to each user include, for example, occulting a
record.
CHAPTER 50
Criteria editor
This chapter contains the following topics:
1. Overview
2. Conditional blocks
3. Atomic criteria
50.1 Overview
The criteria editor is included in several different areas of the user interface. It allows defining table
filters, as well as validation and computation rules on data. This editor is based on the XPath 1.0 W3C
Recommendation.
Two types of criteria exist: atomic criteria and conditional blocks.
Field: Specifies the field of the table to which the criterion applies.
Code only: If checked, specifies searching the underlying values for the field instead of labels, which
are searched by default.
Expression
The expression can either be a fixed value or a formula. When creating a filter, only fixed values are
authorized. During creation of a validation or computation rule, a formula can be created using the
wizard.
Known limitation: The formula field does not validate input values, only the syntax and path are
checked.
CHAPTER 51
Performance guidelines
This chapter contains the following topics:
1. Basic performance checklist
2. Checklist for dataspace usage
3. Memory management
4. Validation
5. Mass updates
6. Accessing tables
• UI components (UIBeanEditor)
For large volumes of data, cumbersome algorithms have a serious impact on performance. For
example, suppose a constraint algorithm's complexity is O(n^2). If the data size is 100, the resulting
cost is proportional to 10 000 (this generally produces an immediate result). However, if the data size
is 10 000, the resulting cost will be proportional to 100 000 000.
Another reason for slow performance is calling external resources. Local caching usually solves this
type of problem.
If one of the use cases above displays poor performance, it is recommended to track the problem either
through code analysis or using a Java profiling tool.
Directory integration
Authentication and permissions management involve the user and roles directory [p 373].
If a specific directory implementation is deployed and accesses an external directory, it can be useful
to ensure that local caching is performed. In particular, one of the most frequently called methods is
Directory.isUserInRole.
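The sketch below shows one way to cache user-role checks; the helper class and the external lookup
hook are assumptions for illustration, not part of the EBX API.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.BiPredicate;
import com.orchestranetworks.service.Role;
import com.orchestranetworks.service.UserReference;

// Sketch: a small cache for user-role checks, to be called from a custom
// Directory.isUserInRole implementation. Eviction/expiry is omitted.
public final class UserRoleCache
{
    private final ConcurrentMap<String, Boolean> cache = new ConcurrentHashMap<>();
    private final BiPredicate<UserReference, Role> externalLookup; // external directory call

    public UserRoleCache(BiPredicate<UserReference, Role> externalLookup)
    {
        this.externalLookup = externalLookup;
    }

    public boolean isUserInRole(UserReference user, Role role)
    {
        // Profile.format() yields a stable string identifier for users and roles.
        String key = user.format() + "|" + role.format();
        return cache.computeIfAbsent(key, k -> externalLookup.test(user, role));
    }
}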
Aggregated lists
In a data model, when an element's cardinality constraint maxOccurs is greater than 1 and no osd:table
is declared on this element, it is implemented as a Java List. This type of element is called an
aggregated list [p 456], as opposed to a table.
It is important to consider that there is no specific optimization when accessing aggregated lists in
terms of iterations, user interface display, etc. Besides performance concerns, aggregated lists are
limited with regard to many functionalities that are supported by tables. See tables introduction [p
459] for a list of these features.
Attention
For the reasons stated above, aggregated lists should be used only for small volumes of simple
data (one or two dozen records), with no advanced requirements for their identification, lookups,
permissions, etc. For larger volumes of data (or more advanced functionalities), it is recommended
to use osd:table declarations.
This section reviews the most common performance issues that can appear in case of an intensive use
of many dataspaces containing large tables, and how to avoid them.
Note
Sometimes, the use of dataspaces is not strictly needed. As an extreme example, consider the
case where every transaction triggers the following actions:
1. A dataspace is created.
2. The transaction modifies some data.
3. The dataspace is merged, closed, then deleted.
In this case, no future references to the dataspace are needed, so using it to make isolated data
modifications is unnecessary: a Procedure already provides sufficient isolation to avoid conflicts
from concurrent operations. It would then be more efficient to directly perform the modifications
in the target dataspace, and get rid of the steps which concern branching and merging.
For a developer-friendly analogy, referring to a source-code management tool (CVS, SVN,
etc.): when you need to perform a simple modification impacting only a few files, it is probably
sufficient to do so directly on the main branch. In fact, it would be neither practical nor
sustainable, with regard to file tagging/copying, if every file modification involved branching
the whole project, modifying the files, then merging the dedicated branch.
Insufficient memory
When a table is in semantic mode (default), the EBX Java memory cache is used. It ensures a much
more efficient access to data when this data is already loaded in the cache. However, if there is not
enough space for working data, swaps between the Java heap space and the underlying database can
heavily degrade overall performance.
This memory swap overhead can only occur for tables in a dataspace with an on-demand loading
strategy [p 297].
Such an issue can be detected by looking at the monitoring log file [p 297]. If it occurs, various actions
can be considered:
• reducing the number of child dataspaces that contain large tables;
• reducing the number of indexes specifically defined for large tables;
• using relational mode instead of semantic mode;
• or (obviously) allocating more memory, or optimizing the memory used by applications for non-
EBX objects.
See also
Memory management [p 296]
Relational mode [p 243]
Transaction cancels
In semantic mode, when a transaction has performed some updates in the current dataspace and then
aborts, loaded indexes of the modified tables are reset. If updates on a large table are often cancelled
and, at the same time, this table is intensively accessed, then the work related to index rebuild will
slow down the access to the table; moreover, the induced memory allocation and garbage collection
can reduce the overall performance.
See also
Functional guard and exceptions (TableTrigger)
Procedure
The following table details the loading modes which are available in semantic mode. Note that the
application server must be restarted so as to take into account any loading strategy change.
On-demand loading and unloading: In this default mode, each resource in a dataspace is loaded
or built only when it is needed. The resources of the dataspace are "soft"-referenced using the
standard Java SoftReference class. This implies that each resource can be unloaded "at the discretion
of the garbage collector in response to memory demand". The main advantage of this mode is the
ability to free memory when needed. As a counterpart, this implies a load/build cost when an accessed
resource has not yet been loaded since the server started up, or if it has been unloaded since.
Forced loading and prevalidation: This strategy is similar to the forced loading strategy, except
that the content of the loaded dataspace or snapshot will also be validated upon server startup.
Monitoring
Indications of EBX load activity are provided by monitoring the underlying database, and also by the
'monitoring' logging category [p 330].
If the numbers for cleared and built objects remain high for a long time, this is an indication that
EBX is swapping.
Tuning memory
The maximum size of the memory allocation pool is usually specified using the Java command-line
option -Xmx. As is the case for any intensive process, it is important that the size specified by this
option does not exceed the available physical RAM, so that the Java process does not swap to disk
at the operating-system level.
Tuning the garbage collector can also benefit overall performance. This tuning should be adapted to
the use case and specific Java Runtime Environment used.
51.4 Validation
The internal incremental validation framework will optimize the work required when updates occur.
The incremental validation process behaves as follows:
• The first call to a dataset validation report performs a full validation of the dataset. The loading
strategy [p 296] can also specify a dataspace to be prevalidated at server startup.
• Data updates will transparently and asynchronously maintain the validation report, insofar as the
updated nodes specify explicit dependencies. For example, standard and static facets, foreign key
constraints, dynamics facets, selection nodes specify explicit dependencies.
• If a mass update is executed or if there are too many validation messages, the incremental
validation process is stopped. The next call to the validation report will then trigger a full
validation.
• If a transaction is cancelled, the validation state of the updated dataset is reset. The next call to
the validation report will trigger a full validation as well.
Certain nodes are systematically revalidated, however, even if no updates have occurred since the last
validation. These are the nodes with unknown dependencies. A node has unknown dependencies if:
• It specifies a programmatic constraint (Constraint) in the default unknown dependencies mode, or
• It is an inherited field [p 270], or it declares a dynamic facet that depends on a node that is itself
an inherited field [p 270].
Consequently, on large tables (beyond the order of 10^5 records), it is recommended to avoid nodes
with unknown dependencies (or at least to minimize the number of such nodes). For programmatic
constraints, the developer can specify two alternative modes that drastically reduce the incremental
validation cost: local dependency mode and explicit dependencies. For more information, see
Dependencies and validation (DependenciesDefinitionContext.dependencies).
Note
It is possible for an administrator user to manually reset the validation report of a dataset.
This option is available from the validation report section in EBX.
51.5 Mass updates
Batch mode
For relational tables, the implementation of insertions, updates and deletions relies on the JDBC batch
feature. On large procedures, this can dramatically improve performance by reducing the number of
round-trips between the application server and the database engine.
In order to fully exploit this feature, the batch mode can be activated on large procedures; see
ProcedureContext.setBatch. This disables the explicit check for existence before record insertions,
thus reducing the number of queries to the database, and making the batch processing even more
efficient.
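A minimal sketch of a mass-insertion procedure follows; the table, the './name' field and the
assumption that setBatch takes a boolean flag are illustrative.

import java.util.List;
import com.onwbp.adaptation.AdaptationTable;
import com.orchestranetworks.schema.Path;
import com.orchestranetworks.service.*;

// Sketch: mass insertion with JDBC batch mode activated.
public class MassInsertProcedure implements Procedure
{
    private final AdaptationTable table; // a table in relational mode (illustrative)
    private final List<String> names;

    public MassInsertProcedure(AdaptationTable table, List<String> names)
    {
        this.table = table;
        this.names = names;
    }

    @Override
    public void execute(ProcedureContext context) throws Exception
    {
        context.setBatch(true); // boolean flag assumed; see ProcedureContext.setBatch
        for (String name : names)
        {
            ValueContextForUpdate vcu = context.getContextForNewOccurrence(table);
            vcu.setValue(name, Path.parse("./name")); // illustrative field path
            context.doCreateOccurrence(vcu, table);
        }
    }
}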
Transaction boundaries
It is generally not advised to use a single transaction when the number of atomic updates in the
transaction is beyond the order of 10^4. Large transactions require a lot of resources, in particular
memory, from EBX and from the underlying database.
To reduce transaction size, it is possible to:
• Specify the property ebx.manager.import.commit.threshold [p 336]. However, this property is
only used for interactive archive imports performed from the EBX user interface.
• Explicitly specify a commit threshold inside the batch procedure, using
ProcedureContext.setCommitThreshold.
• Structurally limit the transaction scope by implementing a Procedure for a part of the task and
executing it as many times as necessary, as sketched below.
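A sketch of this chunking approach; the chunk size and the update logic are application-specific
assumptions.

import java.util.List;
import com.onwbp.adaptation.AdaptationHome;
import com.orchestranetworks.service.*;

// Sketch: split a mass update into several smaller transactions by
// executing one Procedure per chunk of work.
public final class ChunkedUpdate
{
    public static void run(Session session, AdaptationHome dataspace, List<String> allKeys)
    {
        final int chunkSize = 1000; // illustrative transaction size
        ProgrammaticService svc = ProgrammaticService.createForSession(session, dataspace);
        for (int from = 0; from < allKeys.size(); from += chunkSize)
        {
            final List<String> chunk =
                allKeys.subList(from, Math.min(from + chunkSize, allKeys.size()));
            ProcedureResult result = svc.execute(new Procedure()
            {
                @Override
                public void execute(ProcedureContext context) throws Exception
                {
                    // ... update the records identified by 'chunk' ...
                }
            });
            if (result.hasFailed())
                throw new RuntimeException(result.getException());
        }
    }
}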
On the other hand, specifying a very small transaction size can also hinder performance, due to the
persistent tasks that need to be done for each commit.
Note
If intermediate commits are a problem because transactional atomicity is no longer
guaranteed, it is recommended to execute the mass update inside a dedicated dataspace.
This dataspace will be created just before the mass update. If the update does not
complete successfully, the dataspace must be closed, and the update reattempted after
correcting the reason for the initial failure. If it succeeds, the dataspace can be safely
merged into the original dataspace.
Triggers
If required, triggers can be deactivated using the method ProcedureContext.setTriggerActivation.
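For example, a minimal sketch (assuming the method takes a boolean flag):

import com.orchestranetworks.service.*;

// Sketch: a procedure that deactivates table triggers before performing
// mass updates.
public class NoTriggersProcedure implements Procedure
{
    @Override
    public void execute(ProcedureContext context) throws Exception
    {
        context.setTriggerActivation(false); // boolean flag assumed
        // ... mass updates for which triggers must not fire ...
    }
}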
51.6 Accessing tables
Accessing tables involves a unique set of functions, including a dynamic resolution process. This
process behaves as follows:
• Inheritance: Inheritance in the dataset tree takes into account records and values that are defined
in the parent dataset, using a recursive process. Also, in a root dataset, a record can inherit some
of its values from the data model default values, defined by the xs:default attribute.
• Value computation: A node declared as an osd:function is always computed on the fly when
the value is accessed. See ValueFunction.getValue.
• Filter: A filter can be applied to the selection of records.
• Sort: A sort of the resulting records can be performed.
Attention
Faster access to tables is ensured if indexes are ready and maintained in memory cache. As mentioned
above, it is important for the Java Virtual Machine to have enough space allocated, so that it does
not release indexes too quickly.
Performance considerations
The request optimizer favors the use of indexes when computing a request result.
Attention
• Only XPath filters are taken into account for index optimization.
• Non-primary-key indexes are not taken into account for child datasets.
Assuming the indexes are already built, the impacts on performance are as follows:
1. If the request does not involve filtering, programmatic rules, or sorting, accessing its first few
rows (those fetched by a paged view) is almost instantaneous.
2. If the request can be resolved without an extra sort step (this is the case if it has no sort criteria, or
if its sort criteria relate to those of the index used for computing the request), accessing the first
few rows of a table should be fast. More precisely, it depends on the cost of the specific filtering
algorithm that is executed when fetching at least 2000 records.
3. Both cases above guarantee an access time that is independent of the size of the table, and provide
a view sorted by the index used. If an extra sort is required, the time taken by the first access
depends on the table size according to an N·log(N) function, where N is the number of records in
the resolved view.
Note
The paginated requests automatically add the primary key to the end of the specified
criterion, in order to ensure consistent ordering. Thus, the primary key fields should also
be added to the end of any index intended to improve the performance of paginated
requests. These include tabular and hierarchical views, and drop-down menus for table
references.
If indexes are not yet built, or have been unloaded, additional time is required. The build time is
O(N·log(N)).
Accessing the table data blocks is required when the request cannot be computed against a single
index (whether for resolving a rule, filter or sort), as well as for building the index. If the table blocks
are not present in memory, additional time is needed to fetch them from the database.
It is possible to get information through the monitoring [p 297] and request logging categories.
Attention
Only XPath filters are taken into account for index optimization. If the request includes non-
optimizable filters, table rows will be fetched from the database, then filtered in Java memory by
EBX, until the requested page size is reached. This is not as efficient as filtering on the database
side (especially regarding I/O).
Information on the transmitted SQL request is logged to the category persistence. See Configuring
the EBX logs [p 330].
Indexing
In order to improve the speed of operations on tables, indexes may be declared on a table at the data
model level. This will trigger the creation of an index of the corresponding table in the database.
When designing an index aimed at improving the performance of a given request, the same rules apply
as for traditional database index design.
Attention
On PostgreSQL, the default value of 0 instructs the JDBC driver to fetch the whole result set
at once, which could lead to an OutOfMemoryError when retrieving large amounts of data. On
the other hand, using a fetch size on PostgreSQL will invalidate server-side cursors at the end of
the transaction. If, in the same thread, you first fetch a result set with a fetch size, then execute a
procedure that commits the transaction, then accessing the next result will raise an exception.
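A sketch of streaming a large request result with an explicit fetch size; the table and the filter are
illustrative.

import com.onwbp.adaptation.*;

// Sketch: iterate over a large table with a bounded fetch size.
public class RequestExample
{
    public static void browse(AdaptationTable table)
    {
        Request request = table.createRequest();
        request.setXPathFilter("starts-with(./name, 'A')"); // illustrative filter
        request.setFetchSize(1000); // rows fetched per database round-trip
        RequestResult result = request.execute();
        try
        {
            for (Adaptation record; (record = result.nextAdaptation()) != null;)
            {
                // ... process the record ...
            }
        }
        finally
        {
            result.close(); // always release the underlying resources
        }
    }
}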
See also
Request.setFetchSize
RequestResult
CHAPTER 52
Administration overview
The Administration section in EBX is the main point of entry for all administration tasks. In this
overview are listed all the topics that an administrator needs to master. Click on your topic of interest
in order to access the corresponding chapter or paragraph in the documentation.
This chapter contains the following topics:
1. Repository management
2. Disk space management
3. Data model
4. Perspectives
5. Administrative delegation
Object cache
EBX maintains an object cache in memory. The object cache size should be managed on a case by case
basis according to specific needs and requirements (pre-load option and pre-validate on the reference
dataspaces, points of reference, and monitoring), while continuously monitoring the repository health
reports (./ebxLog/monitoring.log).
See Memory management [p 296].
Obsolete contents
Keeping obsolete contents in the repository can lead to a slow server startup and slow responsiveness
of the interface. It is strongly recommended to delete obsolete content.
For example: datasets referring to deleted data models or undeployed add-on modules. See Deploying
and registering EBX add-ons [p 343].
Workflow
Cleanup
The workflow history and associated execution data have to be cleaned up on a regular basis.
The workflow history stores information on completed workflows, their respective steps and contexts.
This leads to an ever-growing database containing obsolete history and can thus lead to poor
performance of the database if not purged periodically. See Workflow history [p 386] for more
information.
Email configuration
It is required to configure workflow emails beforehand in order to be able to implement workflow
email notifications. See Configuration [p 384] for more information.
################################################################
# Daily rollover threshold of log files 'ebxFile:'
# Specifies the maximum number of backup files for daily rollover of 'ebxFile:' appenders.
# When set to a negative value, backup log files are never purged.
# Default value is -1.
################################################################
ebx.log4j.appender.ebxFile.backup.Threshold=-1
Audit trail
EBX is provided with a default audit trail manager. Any customized management (including purge,
backups, etc.) is the user's responsibility.
If the audit trail is unwanted, it is possible to fully deactivate it. See Activating the XML audit trail
[p 329] and Audit trail [p 393] for more information.
52.4 Perspectives
EBX offers extensive UI customization options. Simplified interfaces (Recommended perspectives)
[p 369] dedicated to each profile accessing the system can be parameterized by the administrator.
According to the profile of the user logging in, the interface will offer more or less options and menus.
This allows for a streamlined work environment.
See Advanced perspective [p 358] for more information.
Installation &
configuration
CHAPTER 53
Supported environments
This chapter contains the following topics:
1. Browsing environment
2. Supported application servers
3. Supported databases
Screen resolution
The minimum screen resolution for EBX is 1024x768.
Refreshing pages
Browser page refresh is not supported by EBX. When a page refresh is performed, the last user action
is re-executed, and therefore could result in potential issues. It is thus imperative to use the action
buttons and links offered by EBX instead of refreshing the page.
Browser configuration
The following features must be activated in the browser configuration for the user interface to work
properly:
• JavaScript
• Ajax
• Pop-ups
Attention
Avoid using any browser extensions or plug-ins, as they could interfere with the proper functioning
of EBX.
On Tomcat, you can set the server to use the encoding of the request body by setting the parameter
useBodyEncodingForURI to 'true' in server.xml.
Attention
• Limitations apply regarding clustering and hot deployment/undeployment:
Clustering: EBX does not include a cache synchronization mechanism, thus it cannot be
deployed into a cluster of active instances. See Technical architecture [p 346] for more
information.
Hot deployment/undeployment: EBX does not support hot deployment/undeployment of web
applications registered as EBX modules, or of EBX built-in web applications.
Oracle Database 11g or higher: The distinction of null values encounters certain limitations. On
simple xs:string elements, Oracle does not support the distinction between empty strings and null
values. See Empty string management [p 493] for more information.
The user with which EBX connects to the database requires the following privileges:
• CREATE SESSION,
• CREATE TABLE,
• ALTER SESSION,
• CREATE SEQUENCE,
• A non-null quota on its default tablespace.
SAP HANA Database 2.0 or higher: When using SAP HANA Database as the underlying database,
certain schema evolutions are not supported. It is, for example, impossible to reduce the length of
a column; this is a limitation of HANA, as mentioned in the SQL reference guide: "For row table,
only increasing the size of VARCHAR and NVARCHAR type column is allowed."
The SAP HANA JDBC driver uses the local timezone of the JVM to handle timestamp SQL columns.
Hence, for the specific use cases described in the section SQL access to data in relational mode
[p 246], the JVM powering the HANA JDBC driver - that is, the JVM powering EBX - should be
started with the property user.timezone set to UTC. This configuration is not free of side effects:
for example, the timestamps shown in the EBX logs will be in UTC instead of the local timezone.
Microsoft SQL Server 2008R2 or higher: When used with Microsoft SQL Server, EBX uses the
default database collation to compare and sort strings stored in the database. This applies to strings
used in the data model definition, as well as data stored in relational and history tables. The default
database collation can be specified when the database is created. Otherwise, the database engine
server collation is used. To avoid naming conflicts or unexpected behaviors, a case- and accent-
sensitive collation must be used as the default database collation (the collation name is suffixed by
"CS_AS", or the collation is binary).
The default setting to enforce transaction isolation on SQL Server follows a pessimistic model.
Rows are locked to prevent any read/write concurrent accesses. This may cause liveliness issues
for mapped tables (history or relational). To avoid such issues, it is recommended to activate snapshot
isolation on your SQL Server database.
The user with which EBX connects to the database requires the following privileges:
• CONNECT, SELECT and CREATE TABLE on the database hosting the EBX repository,
• ALTER, CONTROL, UPDATE, INSERT, DELETE on its default schema.
Microsoft Azure SQL Database: EBX has been qualified on Microsoft Azure SQL Database v12
(12.00.700), and is regularly tested to verify compatibility with the current version of the Azure
database service.
When used with Microsoft Azure SQL, EBX uses the default database collation to compare and
sort strings stored in the database. This applies to strings used in the data model definition, as well
as data stored in relational and history tables. The default database collation can be specified when
the database is created. Otherwise, the database engine server collation is used. To avoid naming
conflicts or unexpected behaviors, a case- and accent-sensitive collation must be used as the default
database collation (the collation name is suffixed by "CS_AS", or the collation is binary).
The user with which EBX connects to the database requires the following privileges:
• CONNECT, SELECT and CREATE TABLE on the database hosting the EBX repository,
• ALTER, CONTROL, UPDATE, INSERT, DELETE on its default schema.
Attention
In order to guarantee the integrity of the EBX repository, it is strictly forbidden to perform direct
modifications to the database (for example, using direct SQL writes), except in the specific use cases
described in the section SQL access to data in relational mode [p 246].
See also
Repository administration [p 346]
Data source of the EBX repository [p 320]
Configuring the EBX repository [p 327]
CHAPTER 54
Java EE deployment
This chapter contains the following topics:
1. Introduction
2. Software components
3. Third-party libraries
4. Web applications
5. Deployment details
6. Java EE deployment examples
54.1 Introduction
This chapter details deployment specifications for EBX on a Java application server. For specific
information regarding supported application servers and inherent limitations, see Supported
environments. [p 308]
Database drivers
The EBX repository requires a database. Generally, the required driver is configured along with a data
source, if one is used. Depending on the database defined in the main configuration file, one of the
following drivers is required. Keep in mind that, whichever database you use, the version of the JDBC
client driver must be equal to or higher than the version of the database server.
Oracle JDBC: Oracle database 11gR2 and 12cR1 are validated on their latest patch set update.
Determine the driver that should be used according to the database server version and the Java
runtime environment version. You can include ojdbc6.jar, ojdbc7.jar or ojdbc8.jar depending on
the Java runtime environment version you use; it does not make any difference, as EBX does not
make use of any JDBC 4.1-specific features. Oracle database JDBC drivers download.
SQL Server JDBC: SQL Server 2008, 2008R2 and 2012, with all corrective and maintenance patches
applied, are validated. Remember to use an up-to-date JDBC driver, as some difficulties have been
encountered with older versions. JAR file to include: mssql-jdbc-6.4.0.jre8.jar.
See also
Data source of the EBX repository [p 320]
Configuring the EBX repository [p 327]
Note
In EBX, the supported JMS model is exclusively Point-to-Point (PTP). PTP systems
allow working with queues of messages.
ebx-root-1.0: EBX root web application. Any application that uses EBX requires the root web
application to be deployed.
ebx-dma: EBX data model assistant, which helps with the creation of data models through the user
interface. Note: The data model assistant requires the ebx-manager user interface web application
to be deployed.
See also
Packaging EBX modules [p 427]
Declaring modules as undeployed [p 337]
Attention
For JBoss application servers, any unused resources must be removed from the WEB-INF/web.xml
deployment descriptor.
See also
EBX main configuration file [p 325]
Supported application servers [p 309]
• When several EBX Web Components are to be displayed on the same HTML page, for instance
using iFrames, it may be required to disable the management of cookies due to limitations present
in some Internet browsers.
For example, on Tomcat, this configuration is provided by the attribute cookies in the
configuration file server.xml, as follows:
<Context path="/ebx" docBase="(...)" cookies="false"/>
The JDBC datasource for EBX is specified in the deployment descriptor WEB-INF/web.xml of the 'ebx'
web application as follows:
See also
Configuring the EBX repository [p 327]
Rules for the database access and user privileges [p 347]
Mail sessions
Note
If the EBX main configuration does not set ebx.mail.activate to 'true', or if it specifies
the property ebx.mail.smtp.host, then the environment entry below will be ignored by
EBX runtime. See SMTP [p 332] in the EBX main configuration properties for more
information on these properties.
SMTP and email is declared in the deployment descriptor WEB-INF/web.xml of the 'ebx' web
application as follows:
mail/EBX_MAIL_SESSION: Java Mail session used to send emails from EBX. Java type:
javax.mail.Session. JNDI name on WebLogic: EBX_MAIL_SESSION; on JBoss: java:/
EBX_MAIL_SESSION.
The JMS connection factory is declared in the deployment descriptor WEB-INF/web.xml of the 'ebx'
web application as follows:
Note
For deployment on WildFly, JBoss and WebLogic application servers with JNDI
capabilities, you must update EBX.ear or EBXForWebLogic.ear for additional mappings
of all required resource names to JNDI names.
See JMS [p 333] for more information on the associated EBX main configuration properties.
Note
If the EBX main configuration does not activate JMS through the property
ebx.jms.activate, then the environment entries below will be ignored by EBX runtime.
See JMS [p 333] in the EBX main configuration properties for more information on this
property.
Attention
• The EBX installation notes on Java EE application servers do not replace the native
documentation for each application server.
• These are not general installation recommendations, as the installation process is determined by
architectural decisions, such as the technical environment, application mutualization, delivery
process, and organizational decisions.
• In these examples, no additional EBX modules are deployed. To deploy additional modules, the
best practice is to rebuild an EAR with the module as a web application at the same level as the
other EBX modules. The web application must declare its class path dependency as specified
by the Java™ 2 Platform Enterprise Edition Specification, v1.4:
J2EE.8.2 Optional Package Support
(...)
A JAR format file (such as a JAR file, WAR file, or RAR file) can reference a JAR file by naming the
referenced JAR file in a Class-Path header in the Manifest file of the referencing JAR file. The
referenced JAR file is named using a URL relative to the URL of the referencing JAR file. The Manifest
file is named META-INF/MANIFEST.MF in the JAR file. The Class-Path entry in the Manifest file is of the
form:
Class-Path: list-of-jar-files-separated-by-spaces
In an "industrialized" process, it is strongly recommended to develop a script that automatically
builds the EAR, with the custom EBX modules, the EBX web applications, as well as all the
required shared libraries.
• In order to avoid unpredictable behavior, the guideline to follow is to avoid any duplicates of
ebx.jar or other libraries in the class-loading system.
CHAPTER 55
EBX main configuration file
This chapter contains the following topics:
1. Overview
2. Setting an EBX license key
3. Setting the EBX root directory
4. Configuring the EBX repository
5. Configuring the user and roles directory
6. Configuring EBX localization
7. Setting temporary files directories
8. Activating the XML audit trail
9. Configuring the EBX logs
10. Activating and configuring SMTP and emails
11. Configuring data services
12. Activating and configuring JMS
13. Configuring distributed data delivery (D3)
14. Configuring REST toolkit services
15. Configuring web access from end-user browsers
16. Configuring failover
17. Tuning the EBX repository
18. Miscellaneous
55.1 Overview
The EBX main configuration file, by default named ebx.properties, contains most of the basic
parameters for running EBX. It is a Java properties file that uses the standard simple line-oriented
format.
The main configuration file complements the Java EE deployment descriptor [p 319]. Administrators
can also perform further configuration through the user interface, which is then stored in the EBX
repository.
See also
Deployment details [p 319]
UI administration [p 357]
Note
In addition to specifying properties in the main configuration file, it is also possible to
set the values of properties directly in the system properties. For example, using the -D
argument of the java command-line command.
#################################################
## EBX License number
## (as specified by your license agreement)
#################################################
ebx.license=paste_here_your_license_key
ebx.repository.directory=${ebx.home}/ebxRepository
See also
Repository administration [p 346]
Rules for the database access and user privileges [p 347]
Supported databases [p 311]
Data source of the EBX repository [p 320]
Database drivers [p 316]
################################################################
## The maximum time to set up the database connection,
## in milliseconds.
################################################################
ebx.persistence.timeout=10000
################################################################
## The prefix to add to all table names of persistence system.
## This may be useful for supporting multiple repositories in the relational database.
## Default value is 'EBX_'.
################################################################
ebx.persistence.table.prefix=
################################################################
## Case EBX persistence system is H2 'standalone'.
################################################################
ebx.persistence.factory=h2.standalone
ebx.persistence.user=sa
ebx.persistence.password=
################################################################
## Case EBX persistence system is H2 'server mode'.
################################################################
#ebx.persistence.factory=h2.server
## Specific properties to be set only if you want to ignore the standard
## deployment process of 'ebx' web application in the target operational environment
## (see the deployment descriptor 'web.xml' of 'ebx' web application).
#ebx.persistence.url=jdbc:h2:tcp://127.0.0.1/ebxdb
#ebx.persistence.user=xxxxxxxxx
#ebx.persistence.password=yyyyyyyy
################################################################
## Case EBX persistence system is Oracle database.
################################################################
#ebx.persistence.factory=oracle
## Specific properties to be set only if you want to ignore the standard
## deployment process of 'ebx' web application in the target operational environment
## (see the deployment descriptor 'web.xml' of 'ebx' web application).
#ebx.persistence.url=jdbc:oracle:thin:@127.0.0.1:1521:ebxDatabase
#ebx.persistence.driver=oracle.jdbc.OracleDriver
#ebx.persistence.user=xxxxxxxxx
#ebx.persistence.password=yyyyyyyy
## Activate to use VARCHAR2 instead of NVARCHAR2 on Oracle; never modify on an existing repository.
#ebx.persistence.oracle.useVARCHAR2=false
################################################################
## Case EBX persistence system is SAP Hana
################################################################
#ebx.persistence.factory=hana
## Specific properties to be set only if you want to ignore the standard
################################################################
## Case EBX persistence system is Microsoft SQL Server.
################################################################
#ebx.persistence.factory=sqlserver
## Specific properties to be set only if you want to ignore the standard
## deployment process of 'ebx' web application in the target operational environment
## (see the deployment descriptor 'web.xml' of 'ebx' web application).
#ebx.persistence.url= \
#jdbc:sqlserver://127.0.0.1:1036;databasename=ebxDatabase
#ebx.persistence.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
#ebx.persistence.user=xxxxxxxxx
#ebx.persistence.password=yyyyyyyy
################################################################
## Case EBX persistence system is Microsoft Azure SQL database.
################################################################
#ebx.persistence.factory=azure.sql
## Specific properties to be set only if you want to ignore the standard
## deployment process of 'ebx' web application in the target operational environment
## (see the deployment descriptor 'web.xml' of 'ebx' web application).
#ebx.persistence.url= \
#jdbc:sqlserver://myhost.database.windows.net:1433;database=ebxDatabase;encrypt=true;\
#trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;
#ebx.persistence.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
#ebx.persistence.user=xxxxxxxxx
#ebx.persistence.password=yyyyyyyy
################################################################
## Case EBX persistence system is PostgreSQL.
################################################################
#ebx.persistence.factory=postgresql
## Specific properties to be set only if you want to ignore the standard
## deployment process of 'ebx' web application in the target operational environment
## (see the deployment descriptor 'web.xml' of 'ebx' web application).
#ebx.persistence.url=jdbc:postgresql://127.0.0.1:5432/ebxDatabase
#ebx.persistence.driver=org.postgresql.Driver
#ebx.persistence.user=xxxxxxxxx
#ebx.persistence.password=yyyyyyyy
See also
Users and roles directory [p 373]
DirectoryFactory API
#################################################
## Specifies the Java directory factory class name.
## Value must be the fully qualified name of the Java class.
## The class must extend com.orchestranetworks.service.directory.DirectoryFactory.
#################################################
#ebx.directory.factory=xxx.yyy.DirectoryFactoryImpl
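As an illustration, a minimal factory skeleton could look like the following. The createDirectory method and the import packages are assumptions based on the EBX 5 API and should be checked against the Javadoc; the class and package names simply match the property example above.

package xxx.yyy;

import com.orchestranetworks.instance.Repository;
import com.orchestranetworks.service.directory.Directory;
import com.orchestranetworks.service.directory.DirectoryFactory;

public class DirectoryFactoryImpl extends DirectoryFactory
{
    // Assumed abstract factory method; check the Javadoc for the exact signature.
    @Override
    public Directory createDirectory(Repository aRepository) throws Exception
    {
        // Build and return a custom Directory implementation here,
        // for example one backed by an enterprise LDAP server.
        throw new UnsupportedOperationException("Not implemented in this sketch");
    }
}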
# The property ebx.temp.cache.directory specifies the directory containing temporary files for the cache.
# Default value is ${ebx.temp.directory}/ebx.platform.
#ebx.temp.cache.directory = ${ebx.temp.directory}/ebx.platform
# The property ebx.temp.import.directory specifies the directory containing temporary files for import.
# Default value is ${ebx.temp.directory}/ebx.platform.
#ebx.temp.import.directory = ${ebx.temp.directory}/ebx.platform
Some of these categories can also be written to through custom code, using the LoggingCategory API interface.
#################################################
## Log4J properties:
##
## We have some specific syntax extensions:
## - Appender ebxFile:<aFileName>
## Defines a file appender with default settings (threshold=DEBUG)
##
## - property log.defaultConversionPattern is set by Java
#################################################
#ebx.log4j.debug=true
#ebx.log4j.disableOverride=
#ebx.log4j.disable=
ebx.log4j.rootCategory= INFO
ebx.log4j.category.log.kernel= INFO, Console, ebxFile:kernel, kernelMail
ebx.log4j.category.log.workflow= INFO, ebxFile:workflow
ebx.log4j.category.log.persistence= INFO, ebxFile:persistence
ebx.log4j.category.log.setup= INFO, Console, ebxFile:kernel
ebx.log4j.category.log.mail= INFO, Console, ebxFile:mail
ebx.log4j.category.log.frontEnd= INFO, Console, ebxFile:kernel
ebx.log4j.category.log.frontEnd.incomingRequest= INFO
ebx.log4j.category.log.frontEnd.requestHistory= INFO
ebx.log4j.category.log.frontEnd.UIComponentInput= INFO
ebx.log4j.category.log.fsm= INFO, Console, ebxFile:fsm
ebx.log4j.category.log.fsm.dispatch= INFO
ebx.log4j.category.log.fsm.pageHistory= INFO
ebx.log4j.category.log.wbp= FATAL, Console
#--------------------------------------------------
ebx.log4j.appender.Console.Threshold = INFO
ebx.log4j.appender.Console=com.onwbp.org.apache.log4j.ConsoleAppender
ebx.log4j.appender.Console.layout=com.onwbp.org.apache.log4j.PatternLayout
ebx.log4j.appender.Console.layout.ConversionPattern=${log.defaultConversionPattern}
#--------------------------------------------------
ebx.log4j.appender.kernelMail.Threshold = ERROR
ebx.log4j.appender.kernelMail = com.onwbp.org.apache.log4j.net.SMTPAppender
ebx.log4j.appender.kernelMail.To = [email protected]
ebx.log4j.appender.kernelMail.From = admin@${ebx.site.name}
ebx.log4j.appender.kernelMail.Subject = EBX Error on Site ${ebx.site.name} (VM ${ebx.vm.id})
ebx.log4j.appender.kernelMail.layout.ConversionPattern=**Site ${ebx.site.name} (VM ${ebx.vm.id})**%n\
${log.defaultConversionPattern}
ebx.log4j.appender.kernelMail.layout = com.onwbp.org.apache.log4j.PatternLayout
#--------------------------------------------------
ebx.log4j.category.log.monitoring= INFO, ebxFile:monitoring
ebx.log4j.category.log.dataServices= INFO, ebxFile:dataServices
ebx.log4j.category.log.d3= INFO, ebxFile:d3
ebx.log4j.category.log.request= INFO, ebxFile:request
ebx.log4j.category.log.restServices= INFO, ebxFile:dataServices
################################################################
# Daily rollover threshold of log files 'ebxFile:'
# Specifies the maximum number of backup files for daily rollover of 'ebxFile:' appenders.
# When set to a negative value, backup log files are never purged.
# Default value is -1.
################################################################
ebx.log4j.appender.ebxFile.backup.Threshold=-1
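For example, to keep at most 30 daily backup files instead of keeping them indefinitely:

ebx.log4j.appender.ebxFile.backup.Threshold=30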
#################################################
## SMTP and emails
#################################################
## Specific properties to be set only if you want to ignore the standard
## deployment process of 'ebx' web application in the target operational environment
## (see the deployment descriptor 'web.xml' of 'ebx' web application).
#ebx.mail.smtp.host = smtp.domain.com
## SMTP port default is 25.
#ebx.mail.smtp.port= 25
#ebx.mail.smtp.login=
#ebx.mail.smtp.password=
## Activate SSL (true or false, default is false).
## If SSL is activated, an SSL factory and an SSL provider are required.
#ebx.mail.smtp.ssl.activate=true
#ebx.mail.smtp.ssl.provider=com.sun.net.ssl.internal.ssl.Provider
#ebx.mail.smtp.ssl.factory=javax.net.ssl.SSLSocketFactory
# Specifies the default value for deletion at the end of close and
# merge operations.
# If the parameter is set in the request operation, it overrides
# this default setting.
# If unspecified, default is false.
#ebx.dataservices.dataDeletionOnCloseOrMerge.default=false
#ebx.dataservices.historyDeletionOnCloseOrMerge.default=false
##################################################################
## REST configuration
##################################################################
##################################################################
## JMS configuration for Data Services
##################################################################
See also
JMS for distributed data delivery (D3) [p 407]
Introduction to D3 [p 398]
#ebx.servlet.useLocalUrl=true
#ebx.servlet.http.host=
#ebx.servlet.http.port=
ebx.servlet.http.path=ebx/
#ebx.servlet.https.host=
#ebx.servlet.https.port=
#ebx.servlet.https.path=
##################################################################
## External resources: default properties for computing external resources address
##
## The same rules apply as EBX FrontServlet properties (see comments).
##
## Each property may be inherited from EBX FrontServlet.
##################################################################
#ebx.externalResources.useLocalUrl=true
#ebx.externalResources.http.host=
#ebx.externalResources.http.port=
#ebx.externalResources.http.path=
#ebx.externalResources.https.host=
#ebx.externalResources.https.port=
#ebx.externalResources.https.path=
Proxy mode
Proxy mode allows using a front-end HTTP server to provide static resources (images, CSS,
JavaScript, etc.). This architecture reduces the load on the application server for static HTTP requests.
This configuration also allows using SSL security on the front-end server.
The web server sends requests to the application server according to a path in the URL. This
servletAlias path is specified in the main configuration file.
The web server provides all external resources. These resources are stored in a dedicated directory,
accessible using the resourcesAlias path.
EBX must also be able to access external resources from the file system. To do so, the property
ebx.webapps.directory.externalResources must be specified.
#################################################
#ebx.servlet.useLocalUrl=true
#ebx.servlet.http.host=
#ebx.servlet.http.port=
ebx.servlet.http.path= servletAlias
#ebx.servlet.https.host=
#ebx.servlet.https.port=
ebx.servlet.https.path= servletAlias
#################################################
#ebx.externalResources.useLocalUrl=true
#ebx.externalResources.http.host=
#ebx.externalResources.http.port=
ebx.externalResources.http.path= resourcesAlias
#ebx.externalResources.https.host=
#ebx.externalResources.https.port=
ebx.externalResources.https.path= resourcesAlias
#################################################
## Mode used to qualify the way in which a server accesses the repository.
## Possible values are: unique, failovermain, failoverstandby.
## Default value is: unique.
#################################################
#ebx.repository.ownership.mode=unique
## Activation key used in case of failover. The backup server must include this
## key in the HTTP request used to transfer exclusive ownership of the repository.
## The activation key must be an alphanumeric ASCII string longer than 8 characters.
#ebx.repository.ownership.activationkey=
55.18 Miscellaneous
Activating data workflows
This parameter specifies whether data workflows are activated. This parameter is not taken into
account on the fly. The server must be restarted whenever the value changes.
#################################################
## Workflow activation.
## Default is false.
#################################################
ebx.workflow.activation = true
In development mode, this parameter can be set to as low as one second. On production systems,
where changes are expected to be less frequent, the value can be greater, or set to '0' to disable hot
reloading entirely.
This property is not always supported when the module is deployed as a WAR, since support then
depends on the application server.
When initializing the EBX repository, it is necessary to compile all the data models used by at least
one dataset; hence, EBX will wait indefinitely for the referenced modules to be registered.
If a module is referenced by a data model but is not deployed (or no longer deployed), it is necessary
to declare this module as undeployed to unlock the wait and continue the startup process.
Note
The kernel logging category indicates which modules are awaited.
Note
A module declared as undeployed cannot be registered into EBX until it is removed from
the property ebx.module.undeployedModules.
Note
Any data model based on an unregistered module will have an "undeployed module"
compilation error.
See also
Module registration [p 428]
Dynamically reloading the main configuration [p 337]
#################################################
## Comma-separated list of EBX modules declared
## as undeployed.
## If a module is expected by the EBX repository but is
## not deployed, it must be declared in this property.
## Caution:
## if the "thisfile.checks.intervalInSeconds" property is deactivated,
## a restart is mandatory, otherwise it will be hot-reloaded.
#################################################
ebx.module.undeployedModules=
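For example, to declare two modules that are referenced by data models but no longer deployed (the
module names are illustrative):

ebx.module.undeployedModules=moduleA,moduleB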
When running in development mode, the development tools [p 437] are activated in EBX; some
features thus become fully accessible, and more technical information is displayed.
Note
The administrator can always access this information regardless of the mode used.
The additional features accessible when running in development mode include the following (non-
exhaustive list):
• Documentation pane: in the case of a computed value, the Java class name is displayed. A button is displayed giving access to the path to a node.
• Web component link generator: available on datasets and dataspaces.
• Data model assistant: data model configuration and additional options, such as Services, Business Objects and Rules, Java Bindings, Toolbars and some advanced properties.
• Product documentation: the product documentation is always the most complete one (i.e. "advanced"), including the administration and development chapters.
#################################################
## Server Mode
## Value must be one of: development, integration, production
## Default is production.
#################################################
backend.mode=integration
Note
There is no difference between the integration and production modes.
Resource filtering
This property allows filtering certain files and directories out of the resource directory contents
(resource type node, with an associated facet indicating the directory that contains usable
resources).
#################################################
## Comma-separated list of regular expressions excluding resources.
## Each regexp must be of the form "m:[pattern]:[options]".
## The list can be empty.
#################################################
ebx.resource.exclude=m:CVS/*:
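For example, to also exclude '.svn' directories in addition to 'CVS' ones (the second pattern is
illustrative):

ebx.resource.exclude=m:CVS/*:,m:\.svn/*: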
CHAPTER 56
Initialization and first-launch
assistant
The EBX Configuration Assistant helps with the initial configuration of the EBX repository. If EBX
does not have a repository installed at startup, the configuration assistant launches automatically.
Before starting the configuration of the repository, ensure that EBX is correctly deployed on the
application server. See Java EE deployment [p 315].
Note
The EBX main configuration file must also be properly configured. See EBX main
configuration file [p 325].
• Id of the repository (repositoryId): must uniquely identify the repository (in the scope of the enterprise). The identifier is 48 bits (6 bytes) long and is usually represented as 12 hexadecimal digits. This information is used for generating the Universally Unique Identifiers (UUIDs) of entities created in the repository, and also of transactions logged in the history. This identifier acts as the "UUID node", as specified by RFC 4122.
• Repository label: defines a user-friendly label that indicates the purpose and context of the repository.
CHAPTER 57
Deploying and registering EBX add-ons
Note
Refer to the documentation of each add-on for additional installation and configuration
information in conjunction with this documentation.
The web application deployment descriptor for the add-on module must specify that class definitions
and resources from the web application are to be loaded in preference to classes from the parent and
server classloaders.
For example, on WebSphere Application Server, this can be done by setting
<context-priority-classloader>true</context-priority-classloader> in the web-app element of the
deployment descriptor.
On WebLogic, include <prefer-web-inf-classes>True</prefer-web-inf-classes> in
weblogic.xml.
See the documentation on class loading of your application server for more information.
The EBX add-on common JAR file, named lib/ebx-addons.jar, must be copied into the library
directory shared by all web applications.
Note
The add-on log level can be managed in the main configuration file [p 332].
Note
If the EBX repository is under a trial license, no license key is required for the add-
on. The add-on will be subject to the same trial period as the EBX repository itself.
5. Click on Save.
Technical
administration
CHAPTER 58
Repository administration
This chapter contains the following topics:
1. Technical architecture
2. Auto-increments
3. Repository management
4. Monitoring management
5. Dataspaces
See also
Configuring the EBX repository [p 327]
Supported databases [p 311]
Data source of the EBX repository [p 320]
Attention
In order to guarantee the integrity of persisted master data, it is strictly forbidden to perform direct
SQL writes to the database, except for specific use cases described in the section SQL access to
data in relational mode [p 246].
The database user specified by the configured data source [p 327] must have 'create/alter'
privileges on tables, indexes and sequences. This allows for automatic repository installation
and upgrades [p 349].
See also
SQL access to history [p 252]
Accessing a replica table using SQL [p 259]
SQL access to data in relational mode [p 246]
Data source of the EBX repository [p 320]
Attention
To avoid an additional wait period at the next start up, it is recommended to always properly shut
down the application server.
In order to activate the backup server and transfer exclusive ownership of the repository to it, a specific
request must be issued, either through HTTP or using the Java API:
• Using HTTP, the request must include the parameter activationKeyFromStandbyMode,
and the value of this parameter must be equal to the value declared for the entry
ebx.repository.ownership.activationkey in the EBX main configuration file. See Configuring
failover [p 335].
The format of the request URL must be:
http[s]://<host>[:<port>]/ebx?activationKeyFromStandbyMode={value}
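For example, with an illustrative host, port and activation key:

https://ebx-standby.example.com:8443/ebx?activationKeyFromStandbyMode=myActivationKey01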
If the main server is still up and accessing the database, the following applies: the backup server marks
the ownership table in the database, requesting a clean shutdown for the main server (yet allowing
any running transactions to finish). Only after the main server has returned ownership can the backup
server start using the repository.
It is also possible to get the repository status information using an HTTP request that includes the
parameter repositoryInformationRequest with one of following values:
• heart_beat_count: the number of times that the repository has made contact since associating with the database.
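For example, following the same URL format as the activation request above (host and port are
illustrative):

http://ebx.example.com:8080/ebx?repositoryInformationRequest=heart_beat_count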
58.2 Auto-increments
Several technical tables can be accessed in the 'Administration' area of the EBX user interface. These
tables are for internal use only and their content should not be edited manually, except to remove
obsolete or erroneous data. Among these technical tables are:
Inter-database migration
EBX provides a way to export the full content of a repository to another database. The export includes
all dataspaces, configuration datasets, and mapped tables. To operate this migration, the following
guidelines must be respected:
• The source repository must be shut down: no EBX server process may be accessing it; failing to
strictly comply with this requirement can lead to a corrupted target repository;
• A new EBX server process must be launched on the target repository, which must be empty. In
addition to the classic Java system property -Debx.properties, this process must also specify
ebx.migration.source.properties: the location of an EBX properties file specifying the source
repository. (It is allowed to provide distinct table prefixes between target and source; see the
example command after this list.)
• The migration process will then take place automatically. Please note, however, that this process
is not transactional: should it fail halfway, it will be necessary to delete the created objects in the
target database, before starting over.
• After the migration is complete, an exception will be thrown, to force restarting the EBX server
process accessing the target repository.
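As an illustration of the launch described in the list above, the target server JVM is started with
both properties set (the paths are illustrative, and the rest of the application server command line
is omitted):

java -Debx.properties=/opt/ebx/target/ebx.properties \
     -Debx.migration.source.properties=/opt/ebx/source/ebx.properties ...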
Limitations:
• For technical reasons, migration to an Oracle database is not supported.
• The names of the database objects representing the mapped tables (history, replication, relational)
may have to be altered when migrated to the target database, to comply with the limitations of
its database engine (maximum length, reserved words, ...). Such alterations will be logged during
the migration process.
• As a consequence, the names specified for replicated tables in the data model will not be consistent
with the adapted names in the database. The first recompilation of this data model will force this
inconsistency to be corrected.
• Due to different representations of numeric types, values of xs:decimal type might get rounded
if the target database engine offers less precision than the source.
Repository backup
A global backup of the EBX repository must be delegated to the underlying RDBMS. The database
administrator must use the standard backup procedures of the underlying database.
Archives directory
Archives are stored in a sub-directory called archives within the ebx.repository.directory (see
configuration [p 325]). This directory is automatically created during the first export from EBX.
Attention
If manually creating this directory, make sure that the EBX process has read-write access to it.
Furthermore, the administrator is responsible for cleaning this directory, as EBX does not maintain it.
Note
The transfer of files between two EBX environments must be performed using tools such
as FTP or simple file copies by network sharing.
Repository attributes
A repository has the following attributes:
Attention
It is the administrator's responsibility to monitor and clean up these entities.
Database statistics
The performance of requests executed by EBX requires that the database has computed up-to-date
statistics on its tables. Since database engines regularly schedule statistics updates, this is usually not
an issue. However, it may be necessary to explicitly update the statistics in cases where tables are
heavily modified over a short period of time (e.g. by an import creating many records).
Impact on UI
Some UI components use statistics to adapt their behavior in order to prevent users from executing
costly requests unwillingly.
For example, the combo box will not automatically search on user input if the table contains a large
volume of records. This behavior may also occur if the database's statistics are not up to date, because
a table may be considered as containing a large volume of records even if it is not actually the case.
• Using the data service "close dataspace" and "close snapshot" operations. See Closing a dataspace
or snapshot [p 607] for more information.
Once the dataspaces and snapshots have been closed, the data can be safely removed from the
repository.
Note
Closed dataspaces and snapshots can be reopened in the 'Administration' area, under
'Dataspaces'.
Note
The deletion of a dataspace, a snapshot, or of the history associated with them is
recursive. The deletion operation will be performed on every descendant of the selected
dataspace.
After the deletion of a dataspace or snapshot, some entities will remain until a repository-wide purge
of obsolete data is performed. In particular, the complete history of a dataspace remains visible until
a repository-wide purge is performed. Both steps, the deletion and the repository-wide purge, must be
completed in order to totally remove the data and history. The process has been divided into two steps
for performance reasons: as the total clean-up of the repository can be time-intensive, this allows the
purge execution to be initiated during off-peak periods on the server.
The process of deleting the history of a dataspace takes into account all history transactions recorded
up until the deletion is submitted or until a date specified by the user. Any subsequent historized
operations will not be included when the purge operation is executed. To delete new transactions, the
history of the dataspace must be deleted again.
Note
It is not possible to set a deletion date in the future. The specified date will thus be ignored
and the current date will be used instead.
The deletion of dataspaces, snapshots, and history can be performed in a number of different ways:
• From the 'Dataspaces/Snapshots' table under 'Dataspaces' in the 'Administration' area, using the
Actions menu button in the workspace. The action can be used on a filtered view of the table.
• Using the Java API, more specifically the methods Repository.deleteHome and
RepositoryPurge.markHomeForHistoryPurge.
• At the end of the data service "close dataspace" operation, using the parameters
deleteDataOnClose and deleteHistoryOnClose, or at the end of a "merge dataspace" operation,
using the parameters deleteDataOnMerge and deleteHistoryOnMerge.
• Using the task scheduler. See Task scheduler [p 387] for more information.
The purge process is logged in the directory ${ebx.repository.directory}/db.purge/.
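The Java API option above can be sketched as follows. This is a minimal sketch only: the method names come from this documentation, but the exact signatures and the package of RepositoryPurge are assumptions to be verified against the EBX Javadoc, and the helper class and dataspace name are hypothetical.

import com.onwbp.adaptation.AdaptationHome;
import com.orchestranetworks.instance.HomeKey;
import com.orchestranetworks.instance.Repository;
import com.orchestranetworks.service.Session;
// Package of RepositoryPurge assumed; check the EBX Javadoc.
import com.orchestranetworks.repository.RepositoryPurge;

public class DataspaceCleanup
{
    // Deletes a dataspace and marks its history for the next repository-wide purge.
    public static void deleteDataspaceAndHistory(Session session, String dataspaceName)
        throws Exception
    {
        Repository repository = Repository.getDefault();
        AdaptationHome dataspace = repository.lookupHome(HomeKey.forBranchName(dataspaceName));

        // Deletes the dataspace; the operation is recursive on its descendants.
        repository.deleteHome(dataspace, session);

        // Marks the associated history for removal at the next repository-wide purge
        // (signature assumed).
        RepositoryPurge.markHomeForHistoryPurge(dataspace, session);
    }
}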
Purge
A purge can be executed to clean up the remaining data from all deletions, that is, deleted dataspaces,
snapshots and history performed up until that point. A purge can be initiated by selecting in the
'Administration' area Actions > Execute purge in the navigation pane.
User interactions
User interactions are used by the EBX component as a reliable means for an application to initiate and
get the result of a service execution. They are persisted in the ebx-interactions administration section.
It is recommended to regularly monitor the user interactions table, as well as to clean it, if needed.
Workflow history
The workflow events are persisted in the workflow history table, in the 'Workflow' section of the
'Administration' area. Data workflows constantly add to this table as they are executed. Even when
an execution terminates normally, the records are not automatically deleted. It is thus recommended
to delete old records regularly.
The steps to clean the history are the following:
• Make sure the process executions are removed (in the 'Administration' area, under Workflows,
select Actions > Terminate and clean this workflow or Actions > Clean from a date in the
navigation pane).
• Clean the main processes in the history (in the 'Administration' area, under Workflows history,
select Actions > Clear from a date or Actions > Clean from selected workflows in the
navigation pane).
• Purge the remaining entities in the workflow history using the standard EBX purge.
Attention
In order to guarantee the correct operation of EBX, the disk usage and disk availability of the
following directories must be supervised by the administrator, as EBX does not perform any clean up:
Attention
For XML audit trail, if large transactions are executed with full update details activated (default
setting), the required disk space can increase.
Attention
For pagination in the data services getChanges operation, a persistent store is used in the Temporary
directory. Large changes may require a large amount of disk space.
See also
XML audit trail [p 393]
58.5 Dataspaces
Some dataspace administrative tasks can be performed from the 'Administration' area of EBX by
selecting 'Dataspaces'.
Dataspaces/snapshots
This table lists all the existing dataspaces and snapshots in the repository, whether open or closed.
You can view and modify the information of dataspaces included in this table.
From this section, it is also possible to close open dataspaces, reopen previously closed dataspaces,
as well as delete and purge open or closed dataspaces, associated history, and snapshots.
Dataspace permissions
This table lists all the existing permission rules defined on all the dataspaces in the repository. You
can view the permission rules and modify their information.
Repository history
The table 'Deleted dataspaces/snapshots' lists all the dataspaces that have already been purged from
the repository.
From this section, it is also possible to delete the history of purged dataspaces.
CHAPTER 59
UI administration
EBX comes with a full user interface called Advanced perspective [p 358] that includes all available
features. The interface is fully customizable [p 361] (custom logo, colors, field size, default values,
etc.) and available to built-in administrators.
Access to the advanced perspective can be restricted in order to simplify the end-user experience,
through global permissions [p 357], which make it possible to grant or restrict access to functional
categories. Administrators can create simplified perspectives, called recommended perspectives [p
369], for end-users, containing only the features and menus they need for their daily tasks.
The 'Display area' property allows restricting access to areas of the user interface. To define the access
rules, select 'Global permissions' in the 'Administration' area.
• Restriction policy: indicates whether the permissions defined here restrict the ones defined for other profiles. See the Restriction policy concept [p 281] for more information.
Note
Permissions can be defined by administrators and by the dataspace or dataset owner.
The advanced perspective is available by default to all end-users but access can be restricted.
Note: Administrators can always access the advanced perspective even when it is deactivated.
It is possible to configure which perspective is applied by default when users log in. This 'default
perspective' is based on two criteria: 'recommended perspectives', defined by administrators and
'favorite perspectives', defined by users.
See also
Recommended perspectives [p 369]
Favorite perspectives [p 17]
Perspective creation
To create a perspective, open the 'Select an administration feature' drop-down menu and click on the
+ sign to create a child dataset.
User interface
Options are available in the Administration area for configuring the web interface, in the 'User
interface' section.
Attention
Be careful when configuring the 'URL Policy'. An invalid web interface configuration can render
EBX unusable. If this occurs, use the "rescue mode" by setting
frontEnd.rescueMode.enable=true in the EBX main configuration file [p 325], and accessing the
following URL in your browser as a built-in administrator user:
http://.../ebx/?onwbpID=iebx-manager-rescue.
Session configuration
These parameters configure the user session options:
• Session time-out (in seconds): maximum duration of user inactivity before the session is considered inactive and is terminated. A negative value indicates that the session should never time out.
Interface configuration
Entry policy
Describes the URL to access the application.
The entry policy defines an EBX login page, replacing the default one.
If defined, it replaces an authentication URL that may have been defined using a specific user Directory API.
URL policy
Describes the URL and proxy policy. Both dynamic (servlet) and static (resources) URLs can be
configured.
• HTTP external resources policy: header content of the external resources URL in HTTP. If a field is not set, the default value in the environment configuration is used; if a default value is not set, the value in the initial request is used.
• HTTPS external resources policy: header content of the external resources URL in HTTPS. If a field is not set, the default value in the environment configuration is used; if a default value is not set, the value in the initial request is used.
Exit policy
Describes how the application is exited.
• Normal redirection: specifies the redirection URL used when exiting the session normally.
• Error redirection: specifies the redirection URL used when exiting the session due to an error. This feature is now deprecated and may be ignored by EBX.
• Redirection restrictions: specifies the list of authorized domains and whether HTTPS is mandatory for each domain.
• Allowed profiles: the list of authorized user profiles for the perspective.
Application locking
EBX availability status:
Security policy
EBX access security policy. These parameters only apply to new HTTP sessions.
• Unique session control: specifies whether EBX should control the uniqueness of user sessions. When set to 'Yes', if a user does not log out before closing the browser, it will not be possible for that user to log in again until the previous session has timed out.
• Max table columns to display: according to network and browser performance, adjusts the maximum number of columns to display in a table. This property is not used when a view is applied on a table.
• Maximum auto-width for table columns: defines the maximum width to which a table column can auto-size during table initialization. This is to prevent columns from being too wide, which could occur for very long values, such as URLs. Users will still be able to manually resize columns beyond this value.
• Max expanded elements for a hierarchy: defines the maximum number of elements that can be expanded in a hierarchy when using the action "Expand all". A value less than or equal to '0' disables this parameter.
• Default table filter: defines the default table filter to display in the filters list in tabular views. If modified, users must log out and log in for the new value to take effect.
• Display the message box automatically: defines the message severity threshold for displaying the messages pop-up.
• Forms: height of text areas: the height of text entry fields in forms.
• Searchable list selection page size: maximum number of rows downloaded at each request of the searchable list selection (used for selecting foreign keys, enumerations, etc.).
• Record form: rendering mode for nodes: specifies how to display non-terminal nodes in record forms. This should be chosen according to network and browser performance. Regarding the impact on page loading, link mode is light; expanded and collapsed modes are heavier. If this property is modified, users are required to log out and log in for the new value to take effect.
• Record form: display of selection and association nodes in creation mode: if enabled, the selection and association nodes will be displayed in record creation forms.
• Avatar displayed in the header: defines the display mode of avatars in the header. For example, it is possible to enable or disable the use of avatars in the header by updating this property. If no value is defined, the default value is 'Avatar only'. If it is a relative path, prefix it with "../" to get back to the application root URL.
• Avatar displayed in the history: defines the display mode of avatars in the history. For example, it is possible to enable or disable the use of avatars in the history by updating this property. If no value is defined, the default value is 'Avatar only'. If it is a relative path, prefix it with "../" to get back to the application root URL.
• Avatar displayed in the workflow: defines the display mode of avatars in the workflow. For example, it is possible to enable or disable the use of avatars in the workflow by updating this property. If no value is defined, the default value is 'Avatar only'. If it is a relative path, prefix it with "../" to get back to the application root URL.
Import/Export
• CSV file encoding: specifies the default character encoding to use for CSV file import and export.
• CSV: field separator: specifies the default separator character to use for CSV file import and export.
• CSV: list separator: specifies the default list separator character to use for CSV file import and export.
• Missing XML values as 'null': if 'Yes', when updating existing records, if a node is missing or empty in the imported file, the value is considered 'null'. If 'No', the value is not modified.
• Web site icon URL (favicon): sets a custom favicon. The recommended format is ICO, which is compatible with Internet Explorer.
• Logo URL (SVG): specifies the SVG image used for compatible browsers. Leave this field blank to use the PNG image, if specified. The user interface will attempt to use the specified PNG image if the browser is not SVG-compatible. If no PNG image is specified, the GIF/JPG image will be used. The logo must have a maximum height of 40px. If the height exceeds 40px, it will be cropped, not scaled, to a height of 40px. The width of the logo will determine the position of the buttons in the header. If it is a relative path, prefix it with "../" to get back to the application root URL.
• Logo URL (PNG): specifies the PNG image used for compatible browsers. Leave both this field and the SVG image URL field blank to use the GIF/JPG image. The user interface will use the GIF/JPG image if the browser is not PNG-compatible. The logo must have a maximum height of 40px. If the height exceeds 40px, it will be cropped, not scaled, to a height of 40px. The width of the logo will determine the position of the buttons in the header. If it is a relative path, prefix it with "../" to get back to the application root URL.
• Logo URL (GIF/JPG): specifies the GIF/JPG image to use when neither the PNG nor SVG are defined. The recommended formats are GIF and JPG. The logo must have a maximum height of 40px. If the height exceeds 40px, it will be cropped, not scaled, to a height of 40px. The width of the logo will determine the position of the buttons in the header. If it is a relative path, prefix it with "../" to get back to the application root URL.
• Main: main user interface theme color, used for selections and highlights.
• Text of link style buttons: text color of some buttons having a link style (the text is not dark or light, but colored). By default, set to the same value as the Main color.
• Selected tab border: border color of the selected tab. By default, set to the same value as the Main color.
• Table history view: technical data: background color of technical data cells in the table history view.
• Table history view: creation: background color of cells with the state 'creation' in the table history view.
• Table history view: deletion: background color of cells with the state 'deletion' in the table history view.
• Table history view: update: background color of cells with the state 'update' in the table history view.
Note
Any specific parameter set for this perspective will override the default parameters that
have been set in the 'Advanced perspective' configuration.
Perspective Menu
This view displays the perspective menu. It is a hierarchical table view.
From this view, a user can create, delete or reorder menu item records.
• Section Menu Item: this is a top-level menu item. It contains other menu items.
• Action Menu Item: this menu item displays a user service in the workspace area.
• Top separator: indicates that the menu item section has a top separator. This property is only available for section menu items.
• Action: the user service to execute when the user clicks on the menu item.
• Selection on close: the menu item that will be selected when the service terminates. Built-in services use this property when the user clicks on the 'Close' button.
Note
Any specific parameter set for this perspective will override the default parameters that
have been set in the 'Advanced perspective' configuration.
Resolution
When a user logs in, the following algorithm determines which perspective is selected by default:
// 1) favorite perspective
IF the user has a favorite perspective
AND this perspective is active
AND the user is authorized for this perspective
  SELECT this perspective
  DONE
// 2) recommended perspective
FOR EACH association in the recommended perspectives list, in the declared order
  IF the user is in the declared profile
  AND the associated perspective is active
  AND the user is authorized for this perspective
    SELECT this perspective
    DONE
// 3) advanced perspective
IF the advanced perspective is active
AND the user is authorized for this perspective
  SELECT this perspective
  DONE
// 4) any perspective
SELECT any active perspective for which the user is authorized
DONE
Views
This table contains all custom views defined on any table. Only a subset of fields is editable:
• View group: indicates the menu group in which this view is displayed in the 'View' menu.
• Share with: defines the users allowed to select this view from their 'View' menu.
Views permissions
This table allows managing the permissions relative to custom views, by data model and profile. The
following permissions can be configured (the default value is applied when no permission is set for
a given user):
• Recommend views: allows the user to manage recommended views. Default value: 'Yes' if the user is the dataset owner, otherwise 'No'.
• Manage views: defines the views the user can modify and delete. Default value: 'Owned + shared' if the user is a built-in administrator, otherwise 'Owned'.
• Share views: defines the views for which the user can edit the 'Share with' field. Default value: 'Owned + shared' if the user is a built-in administrator; 'Owned' if the user is the dataset owner; otherwise 'None'.
• Publish views: allows the user to publish views to make them available to all users using Web components, workflow user tasks, or data services. Default value: 'Yes' if the user is a built-in administrator, otherwise 'No'.
CHAPTER 60
Users and roles directory
This chapter contains the following topics:
1. Overview
2. Concepts
3. Default directory
4. Custom directory
60.1 Overview
EBX uses a directory for user authentication and user role definition.
A default directory is provided and integrated into the EBX repository; the 'Directory' administration
section allows defining which users can connect and what their roles are.
It is also possible to integrate another type of enterprise directory.
See also
Configuring the user and roles directory [p 328]
Custom directory [p 376]
60.2 Concepts
In EBX, a user can be a member of several roles, and a role can be shared by several users. Moreover,
a role can be included into another role. The generic term profile is used to describe either a user or
a role.
In addition to the directory-defined roles, EBX provides the following built-in roles:
Role Definition
Attention
Associations between users and the built-in roles OWNER and EVERYONE are managed
automatically by EBX, and thus must not be modified through the directory.
User permissions are managed separately from the directory. See Permissions [p 273].
See also
profile [p 21]
role [p 22]
user [p 21]
administrator [p 22]
user and roles directory [p 22]
Policy
These properties configure the policies of the user and roles directory, for example, whether or not
users can edit their own profiles.
Users
This table lists all the users defined in the internal directory. New users can be added from there.
Roles
This table lists all the roles defined in the internal directory. New roles can be created in this table.
Note
If a role inclusion cycle is detected, the role inclusion is ignored at the permission
resolution. Refresh and check the directory validation report for cycle detection.
Note
Users' roles, roles' inclusions and salutations tables are hidden by default [p 510].
Depending on the policies defined, users can modify information related to their own accounts,
regardless of the permissions defined on the directory dataset.
Note
It is not possible to delete or duplicate the default directory.
Note
For security reasons, the password recovery procedure is not available for administrator
profiles. If required, use the administrator recovery procedure instead.
Note
While the 'ebx.directory.factory' property is set for the recovery procedure,
authentication of users will be denied.
CHAPTER 61
Data model administration
This chapter contains the following topics:
1. Administrating publications and versions
2. Migration of previous data models in the repository
3. Schema evolutions
CHAPTER 62
Database mapping administration
This chapter contains the following topics:
1. Overview
2. Renaming columns in the database
3. Purging columns in the database
4. Renaming master tables in the database
5. Renaming auxiliary tables in the database
6. Purging master tables in the database
62.1 Overview
Information and services relative to database mapping can be found in the Administration area.
See also
Mapped modes [p 241]
DatabaseMapping API
Note
It is required that the new identifier begins with a letter.
In addition, the new name must be a valid column identifier, which depends on the naming rules
of the underlying RDBMS.
See also
Disabling history on a specific field or group [p 250]
Disabling replication on a specific field or group [p 259]
Note that this behavior differs for aggregated lists:
• when deactivating a complex aggregated list, its inner fields will still be in the LIVING state,
whereas the list node is disabled. As lists are considered auxiliary tables in the mapping system,
this information can be checked in the 'Tables' table;
• on the other hand, when the deactivation only concerns inner nodes of the list, the list will remain
LIVING, while its children will be DISABLED IN MODEL.
A column can be purged only if its own state is DISABLED IN MODEL, or if it is an inner field of a
DISABLED IN MODEL list.
Once the service is selected on a record, a summary screen displays information regarding the selected
master table and the administrator is prompted to enter a new name for the master table in the database.
Note
It is required that the new identifier begins with a letter and with the repository prefix.
For history tables, it is also required that the repository prefix is followed by the history tables
prefix.
For relational tables, it is also required that the repository prefix is followed by the relational
tables prefix.
In addition, the new name must be a valid table identifier, which depends on the naming rules
of the underlying RDBMS.
Note
It is required that the new identifier begins with a letter.
It is required that the new identifier begins with the repository prefix.
It is also required that the repository prefix is followed by the history tables prefix.
In addition, the new name must be a valid table identifier, which depends on the naming rules
of the underlying RDBMS.
CHAPTER 63
Workflow management
This chapter contains the following topics:
1. Workflows
2. Interactions
3. Workflow history
63.1 Workflows
To define general parameters for the execution of data workflows, the management of workflow
publications, or to oversee data workflows in progress, navigate to the 'Administration' area. Click on
the down arrow in the navigation pane and select Workflow management > Workflows.
Note
In cases where unexpected inconsistencies arise in the workflow execution technical tables,
data workflows may encounter errors. It may then be necessary to run the operation 'Clean up
inconsistencies in workflow execution tables' from the 'Actions' menu in the navigation pane
under Administration > Workflow Management > Workflows.
Execution of workflows
Various tables can be used to manage the data workflows that are currently in progress. These tables
are accessible in Workflow management > Workflows in the navigation pane.
Workflows table
The 'Workflows' table contains instances of all data workflows in the repository, including those
invoked as sub-workflows. A data workflow is a particular execution instance of a workflow model
publication. This table provides access to the data context variables for all data workflows. It can be
used to access the status of advancement of the data workflow in terms of current variable values, and
in case of a data workflow suspension, to modify the variable values.
From the 'Actions' menu of the 'Workflows' table, it is possible to clear the completed data workflows
that are older than a given date, by selecting the 'Clean from a date' service. This service automatically
ignores the active data workflows.
Tokens table
The 'Tokens' table allows managing the progress of data workflows. Each token marks the current
step being executed in a running data workflow, as well as the current state of the data workflow.
Comments table
The 'Comments' table contains the user's comments for main workflows and their sub-workflows.
Workflow publications
The 'Workflow publications' table is a technical table that contains all the workflow model publications
of the repository. This table associates published workflow models with their snapshots. It is not
recommended to directly modify this table, but rather to use the actions available in the workflow
modeling area to make changes to publications.
Configuration
Email configuration
In order for email notifications to be sent during the data workflow execution, the following settings
must be configured under 'Email configuration':
• The URL definition field is used to build links and to populate mail variables in the workflow.
• The 'From email' field must be completed with the email address that will be used to send email
notifications.
Interface customization
Modeling default values
The default value for some properties can be customized in this section.
The administrator can define the default values to be used when a new workflow model or workflow
step is created in the 'Workflow Modeling' section.
Priorities configuration
The property 'Default priority' defines how data workflows and their work items across the repository
display if they have no priority level. For example, if this property is set to the value 'Normal', any
workflow and work item with no priority will appear to have the 'Normal' priority.
The 'priorities' table defines all priority levels available to data workflows in the repository. As many
integer priority levels as needed can be added, along with their labels, which will appear when users
hover over the priority icon in the work item tables. The icons that correspond to each priority level can
also be selected, either from the set provided by EBX, or by specifying a URL to an icon image file.
Temporal tasks
Under 'Temporal tasks', the polling interval for time-dependent tasks in the workflow can be set, such
as deadlines and reminders. If no interval value is set, the 'in progress' steps are checked every hour.
• Cache expiry (seconds): expiration time (in seconds) before a new update of the inbox cache. Please note that this parameter can impact the CPU load and performance, since the computation time can be costly for a repository with many work items. If no value is defined, the default value is 600.
• User interface refresh periodicity (seconds): refresh time (in seconds) between two updates of the inbox counter in the user interface. Please note that this refresh concerns all inbox counters in the user interface: inbox counters of the custom perspective, the header inbox counter and the Data Workflows inbox counter for the advanced perspective. If no value is defined, the default value is 5. If the value is zero (or negative), the refresh is disabled. Also, the modification will only be effective after a logout/login from the user.
Also, please note that some actions can force the inbox counter to refresh:
• accessing the Data Workflows section
• accessing any subdivision of the Data Workflows section
• accepting or rejecting a work item
• launching a workflow
These parameters are accessible in Workflow management > Workflows > Configuration > Temporal
tasks in the navigation pane.
63.2 Interactions
To manage workflow interactions, navigate to the Administration area. Click the down arrow in the
navigation pane and select the entry Workflow management > Interactions.
An interaction is generated automatically for every work item that is created. It is a local data context
of a work item and is accessible from an EBX session. When a work item is executed, the user performs
the assigned actions based upon its interaction, independently of the workflow engine. User tasks
define mappings for their input and output parameters to link interactions with the overall data contexts
of data workflows.
Interactions can be useful for monitoring the current parameters of work items. For example, an
interaction can be updated manually by a trigger or a user service.
Clean history
From the 'Actions' menu of the 'Workflows' table, the history of completed data workflows older than
a given date can be cleared by selecting the 'Clear from a date' service.
Only the history of workflows that have been previously cleaned (i.e. their execution data deleted)
is cleared. This service automatically ignores the history associated with existing workflows. It is
necessary to clear data workflows before clearing the associated history, by using the dedicated service
'Clear from a date' from the 'Workflows' table. Also, a scheduled 'Clear from a date' can be used with
the built-in scheduled task SchedulerPurgeWorkflowMainHistory.
Please note that only main processes are cleaned. In order to remove sub-processes and all related
data, it will be necessary to run a 'standard EBX purge'.
Note
An API is available to fetch the history of a workflow. Direct access to the
underlying workflow history SQL tables is not supported. See
WorkflowEngine.getProcessInstanceHistory API.
CHAPTER 64
Task scheduler
This chapter contains the following topics:
1. Overview
2. Configuration from EBX
3. Cron expression
4. Task definition
5. Task configuration
64.1 Overview
EBX offers the ability to schedule programmatic tasks.
Note
In order to avoid conflicts and deadlocks, tasks are scheduled in a single queue.
Format
A cron expression is a string composed of 6 or 7 fields separated by white space. Fields can contain
any of the allowed values, along with various combinations of the allowed special characters for that
field. The fields are as follows:
Note
The legal characters and the names of months and days of the week are not case sensitive.
MON is the same as mon.
Special characters
The allowed special characters are as follows:
• * ("all values") - used to select all values within a field. For example, "*" in the Minutes field
means "every minute".
• ? ("no specific value") - useful when you need to specify something in one of the two fields in
which the character is allowed, but not the other. For example, if I want my trigger to fire on a
particular day of the month (say, the 10th), but don't care what day of the week that happens to be,
I would put "10" in the day-of-month field, and "?" in the day-of-week field. See the examples
below for clarification.
• - - used to specify ranges. For example, "10-12" in the hour field means "the hours 10, 11 and 12".
• , - used to specify additional values. For example, "MON,WED,FRI" in the day-of-week field
means "the days Monday, Wednesday, and Friday".
• / - used to specify increments. For example, "0/15" in the seconds field means "the seconds 0, 15,
30, and 45", and "5/15" in the seconds field means "the seconds 5, 20, 35, and 50". You can also
specify '/' after the '*' character - in this case '*' is equivalent to having '0' before the '/'. "1/3" in the
day-of-month field means "fire every 3 days starting on the first day of the month".
• L ("last") - has different meaning in each of the two fields in which it is allowed. For example,
the value "L" in the day-of-month field means "the last day of the month" - day 31 for January,
day 28 for February on non-leap years. If used in the day-of-week field by itself, it simply means
"7" or "SAT". But if used in the day-of-week field after another value, it means "the last xxx day
of the month" - for example "6L" means "the last friday of the month". When using the 'L' option,
it is important not to specify lists, or ranges of values, as you'll get confusing results.
• W ("weekday") - used to specify the weekday (Monday-Friday) nearest the given day. As an
example, if you were to specify "15W" as the value for the day-of-month field, the meaning is:
"the nearest weekday to the 15th of the month". So if the 15th is a Saturday, the trigger will fire on
Friday the 14th. If the 15th is a Sunday, the trigger will fire on Monday the 16th. If the 15th is a
Tuesday, then it will fire on Tuesday the 15th. However if you specify "1W" as the value for day-
of-month, and the 1st is a Saturday, the trigger will fire on Monday the 3rd, as it will not 'jump'
over the boundary of a month's days. The 'W' character can only be specified when the day-of-
month is a single day, not a range or list of days.
Note
The 'L' and 'W' characters can also be combined in the day-of-month field to yield
'LW', which translates to "last weekday of the month".
• # - used to specify "the nth" day-of-week day of the month. For example, the value of "6#3" in
the day-of-week field means "the third Friday of the month" (day 6 = Friday and "#3" = the 3rd
one in the month). Other examples: "2#1" = the first Monday of the month and "4#5" = the fifth
Wednesday of the month. Note that if you specify "#5" and there are not 5 occurrences of the given
day-of-week in the month, then no firing will occur that month.
Examples
• "0 0/5 14,18 * * ?": fire every 5 minutes starting at 2pm and ending at 2:55pm, AND fire every 5 minutes starting at 6pm and ending at 6:55pm, every day.
• "0 0-5 14 * * ?": fire every minute starting at 2pm and ending at 2:05pm, every day.
• "0 10,44 14 ? 3 WED": fire at 2:10pm and at 2:44pm every Wednesday in the month of March.
• "0 15 10 ? * 6L 2002-2005": fire at 10:15am on every last Friday of every month during the years 2002, 2003, 2004 and 2005.
• "0 0 12 1/5 * ?": fire at 12pm (noon) every 5 days every month, starting on the first day of the month.
Note
Pay attention to the effects of '?' and '*' in the day-of-week and day-of-month fields.
Support for specifying both a day-of-week and a day-of-month value is not complete (you
must currently use the '?' character in one of these fields).
Be careful when setting fire times in the morning hours when "daylight saving" changes occur
in your locale (for US locales, this is typically the hour before and after 2:00 AM), because the
time shift can cause a skip or a repeat depending on whether the time moves back or jumps
forward.
Note
The user will not be authenticated, and no password is required. As a consequence, a
user with no password set in the directory can only be used to run scheduled tasks.
A custom task can be parameterized by means of a JavaBean specification (getter and setter).
Supported parameter types are:
• boolean (primitive)
• int (primitive)
• java.lang.Boolean
• java.lang.Integer
• java.math.BigDecimal
• java.lang.String
• java.util.Date
• java.net.URI
• java.net.URL
Parameter values are set in XML format.
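For illustration, here is a minimal sketch of such a parameterized task, assuming the scheduler API classes ScheduledTask, ScheduledExecutionContext and ScheduledTaskInterruption (check the exact signatures in the API reference of your version); the class and parameter names are hypothetical:
package com.foo;

import com.orchestranetworks.scheduler.ScheduledExecutionContext;
import com.orchestranetworks.scheduler.ScheduledTask;
import com.orchestranetworks.scheduler.ScheduledTaskInterruption;
import com.orchestranetworks.service.OperationException;

public class ArchivePurgeTask extends ScheduledTask
{
 // JavaBean parameter: its value is set from the XML parameters
 // of the scheduled task definition.
 private Integer retentionDays;

 public Integer getRetentionDays()
 {
  return this.retentionDays;
 }

 public void setRetentionDays(Integer retentionDays)
 {
  this.retentionDays = retentionDays;
 }

 public void execute(ScheduledExecutionContext aContext)
  throws OperationException, ScheduledTaskInterruption
 {
  // ... perform the task using this.retentionDays ...
 }
}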
CHAPTER 65
Audit trail
This chapter contains the following topics:
1. Overview
2. Update details and disk management
3. File organization
65.1 Overview
XML audit trail is a feature that allows logging updates to XML files. An alternative history feature
is also available to record table updates in the relational database; see History [p 249].
Any persistent updates performed in the EBX repository are logged to an audit trail XML file.
Procedure executions are also logged, even if they do not perform any updates, as procedures are
always considered to be transactions. The following information is logged:
• Transaction type, such as dataset creation, record modification, record deletion, specific
procedure, etc.
• Dataspace or snapshot on which the transaction is executed.
• Transaction source. If the action was initiated by EBX, this source is described by the user identity,
HTTP session identifier and client IP address. If the action was initiated programmatically, only
the user's identity is logged.
• Optional "trackingInfo" value regarding the session
• Transaction date and time (in milliseconds);
• Transaction UUID (conform to the Leach-Salz variant, version 1);
• Error information; if the transaction has failed.
• Details of the updates performed. If there are updates and if history detail is activated, see next
section.
1. If an archive import is executed in non-interactive mode (without a change set), the audit trail
does not detail the updates; it only specifies the archive that has been imported. In this case, if it
is important to keep a fine trace of the import-replace, the archive itself must be preserved.
2. If an archive import is executed in interactive mode (with a change set), or if a dataspace is merged
to its parent, the resulting log size will be nearly triple the unzipped size of the archive. Furthermore,
for consistency concerns, each transaction is logged to a temporary file (in the audit trail directory)
before being moved to the main file. Therefore, EBX requires at least six times the unzipped size
of the largest archive that may be imported.
3. In the context of a custom procedure that performs many updates not requiring auditing, the
developer can disable the detailed history using the method ProcedureContext.setHistoryActivation API.
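As an illustration, a minimal sketch of such a procedure (assuming the standard Procedure and ProcedureContext API; the class name is hypothetical):
import com.orchestranetworks.service.Procedure;
import com.orchestranetworks.service.ProcedureContext;

public class BulkUpdateProcedure implements Procedure
{
 public void execute(ProcedureContext aContext) throws Exception
 {
  // Updates performed after this call are not detailed in the audit trail.
  aContext.setHistoryActivation(false);
  // ... perform the bulk updates ...
 }
}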
where <yyyy-mm-dd> is the file date and <nn> is the file index for the current day.
The standard XML format is still available in an XML file that references the text file. This file is
named:
<yyyy-mm-dd>-part<nn>Ref.xml
These two files are then re-aggregated in a "closed" XML file when the repository has been cleanly
shut down, or if EBX is restarted.
2004-04-05-part00.xml
2004-04-05-part01.xml
2004-04-06-part00.xml
2004-04-06-part01.xml
2004-04-06-part02.xml
2004-04-06-part03.xml
2004-04-07-part00.xml
2004-04-10-part00.xml
2004-04-11-part00Content.txt
2004-04-11-part00Ref.xml
CHAPTER 66
Other
This chapter contains the following topics:
1. Lineage
2. Event broker
66.1 Lineage
To administer lineage, three tables are accessible:
• Authorized profiles: Profiles must be added to this table to be used for data lineage WSDL
generation.
• History: Lists the general data lineage WSDLs and their configuration.
• JMS location: Lists the JMS URL locations.
Terminology
Topics
Administration
The management console is located under 'Event broker' in the 'Administration' area. It contains three
tables: 'Topics', 'Subscribers' and 'Subscriptions'.
All content is read-only, except for the following operations:
• Topics and subscribers can be manually activated or deactivated using dedicated services.
• Subscribers that are no longer registered to the broker can be deleted.
Distributed Data
Delivery (D3)
CHAPTER 67
Introduction to D3
This chapter contains the following topics:
1. Overview
2. D3 terminology
3. Known limitations
67.1 Overview
EBX offers the ability to send data from one EBX instance to others. Based on a broadcast action,
it also provides an additional layer of security and control on top of the other features of EBX. It is
particularly suitable for situations where data governance requires the highest levels of data
consistency, approvals, and the ability to roll back.
D3 architecture
A typical D3 installation consists of one primary node and multiple replica nodes. In the primary node,
a Data Steward declares which dataspaces must be broadcast, as well as which user profile is allowed
to broadcast them to the replica nodes. The Data Steward also defines delivery profiles, which are
groups of one or more dataspaces.
Each replica node must define from which delivery profile it receives broadcasts.
Protocols
If JMS is activated, the conversation between a primary node and a replica node is based on SOAP
over JMS, while archive transfer is based on JMS binary messages.
If JMS is not activated, conversation between a primary node and a replica node is based on SOAP
over HTTP(S), while binary archive transfer is based on TCP sockets. If HTTPS is used, make sure
that the target node connector is correctly configured by enabling SSL with a trusted certificate.
67.2 D3 terminology
delivery profile: A logical name that groups one or more delivery dataspaces. Replica nodes subscribe to one or more delivery profiles.
Primary node: An instance of EBX that can define one or more delivery dataspaces, and to which replica nodes can subscribe. A primary node can also act as a regular EBX server.
Administration limitations
Technical dataspaces cannot be broadcast, thus the EBX default user directory cannot be synchronized
using D3.
CHAPTER 68
D3 broadcasts and delivery
dataspaces
This chapter contains the following topics:
1. Broadcast
2. Replica node registration
3. Accessing delivery dataspaces
68.1 Broadcast
Scope and contents of a broadcast
A D3 broadcast occurs at the dataspace or snapshot level. For dataspace broadcasts, D3 first creates
a snapshot to capture the current state, then broadcasts this newly created snapshot.
A broadcast performs one of the following procedures depending on the situation:
• An update of the differences computed between the new broadcast snapshot and the current
'commit' one on the replica node.
• A full synchronization containing all datasets, tables, records, and permissions. This is done on
the first broadcast to a given replica node, if the previous replica node commit is not known to
the primary node, or on demand using the user service in '[D3] Primary node configuration'.
Performing a broadcast
The broadcast can be performed:
• By the end-user, using the Broadcast action available in the dataspace or snapshot (this action is
available only if the dataspace is registered as a delivery dataspace)
• Using custom Java code that uses D3NodeAsMaster API.
Conditions
In order to be able to broadcast, the following conditions must be fulfilled:
• The authenticated user profile has permission to broadcast.
Persistence
When a primary node shuts down, all waiting or in-progress broadcast requests are aborted and
persisted to a temporary file. On startup, all aborted broadcasts are restarted.
Note
Broadcasts are performed asynchronously. Therefore, no information is displayed in the user
interface about the success or failure of a broadcast. Nevertheless, it is possible to monitor the
broadcast operations inside '[D3] Primary node configuration'. See Supervision [p 421].
Note
If the registered replica node repository ID or communication layer already exists, the replica
node entry in the 'Registered replica nodes' technical table is updated, otherwise a new entry
is created.
Performing an initialization
The initialization can be done:
• Automatically at replica node server startup.
• Manually when calling the replica node service 'Register replica node'.
Conditions
To be able to register, the following conditions must be fulfilled:
• The D3 mode must be 'hub' or 'slave'.
• The primary and replica node authentication parameters must correspond to the primary node
administrator and replica node administrator defined in their respective directories.
• The delivery profiles defined on the replica node must exist in the primary node configuration.
• All data models contained in the registered dataspaces must exist in the replica node. If embedded,
the data model names must be the same. If packaged, they must be located in a module of the same
name, and the schema path within the module must be the same on both the primary and replica nodes.
• The D3 primary node configuration must have no validation errors in the following scope: the
technical record of the registered replica node and all its dependencies (dependent delivery profiles,
delivery mappings and delivery dataspaces).
Note
To set the parameters, see the replica or hub EBX properties in Configuring primary, hub and
replica nodes [p 417].
Access restrictions
On the primary node, a delivery dataspace can neither be merged nor closed. Other operations are
available depending on permissions. For example, modifying a delivery dataspace directly, creating
a snapshot independent from a broadcast, or creating and merging a child dataspace.
On the replica node, aside from the broadcast process, no modifications of any kind can be made to
a delivery dataspace, whether by the end-user, data services, or a Java program. Furthermore, any
dataspace-related operations, such as merge, close, etc., are forbidden on the replica node.
CHAPTER 69
D3 JMS Configuration
This chapter contains the following topics:
1. JMS for distributed data delivery (D3)
Note
If the EBX main configuration does not activate JMS and D3 ('slave', 'hub'
or 'master' node) through the properties ebx.d3.mode, ebx.jms.activate and
ebx.jms.d3.activate, then the environment entries below will be ignored by EBX
runtime. See JMS [p 333] and Distributed data delivery (D3) [p 333] in the EBX main
configuration properties for more information on these properties.
Note
These JNDI names are set by default, but can be modified inside the web application
archive ebx.war, included in EBXForWebLogic.ear (if using WebLogic) or in EBX.ear (if
using JBoss, WebSphere or other application servers).
The deployment descriptor of the primary node must be manually modified by declaring specific
communication and archive queues for each replica node. This consists of adding resource names in
'web.xml' inside 'ebx.war'. The replica-specific queues can be used by one or more replica nodes.
Resources can be freely named, but the physical names of their associated queue must
correspond to the definition of replica nodes for resources jms/EBX_D3ArchiveQueue and jms/
EBX_D3CommunicationQueue.
Note
Physical queue names matching: on registration, the replica node sends the
communication and archive physical queue names. These queues are matched by
physical queue name among all resources declared on the primary node. If unmatched,
the registration fails.
Primary-Replica nodes architecture: between a primary node and two replica nodes with shared queues [p 410]; between a primary node and a replica node with replica-specific queues [p 411].
Hub-Hub architecture: between two hub nodes with shared queues [p 412]; between two hub nodes with replica-specific queues [p 413].
Between a primary node and two replica nodes with shared queues
CHAPTER 70
D3 administration
This chapter contains the following topics:
1. Quick start
2. Configuring D3 nodes
3. Supervision
Note
Deploy EBX on two different web application containers. If both instances are running on the
same host, ensure that all communication TCP ports are distinct.
Note
The primary node can be started after the configuration.
5. Map the delivery dataspace with the delivery profile into the 'Delivery mapping' table.
Note
The primary node is now ready for the replica node(s) registration on the delivery profile.
Check that the D3 broadcast menu appears in the 'Actions' menu of the dataspace or one of
its snapshots.
Note
The replica node can be started after the configuration.
Note
Check that the data model is available before broadcasting (if created with the data model assistant,
it must be published).
The replica node is then ready for broadcast.
Delivery profiles: Profiles to which replica nodes can subscribe. The delivery mode must be defined for each delivery profile.
Note
The tables above are read-only while some broadcasts are pending or in progress.
Primary node
In order to act as a primary node, an instance of EBX must declare the following property in its main
configuration file.
Sample configuration for ebx.d3.mode=master node:
##################################################################
## D3 configuration
##################################################################
##################################################################
# Configuration for master, hub and slave
##################################################################
# Optional property.
# Possible values are single, master, hub, slave
# Default is single, meaning the server will be a standalone instance.
ebx.d3.mode=master
Hub node
In order to act as a hub node (combination of primary and replica node configurations), an instance
of EBX must declare the following property in its main configuration file.
Sample configuration for ebx.d3.mode=hub node:
##################################################################
## D3 configuration
##################################################################
##################################################################
# Configuration for master, hub and slave
##################################################################
# Optional property.
# Possible values are single, master, hub, slave
# Default is single, meaning the server will be a standalone instance.
ebx.d3.mode=hub
##################################################################
# Configuration dedicated to hub or slave
##################################################################
# Profiles to subscribe to
# Mandatory property if ebx.d3.mode=hub or ebx.d3.mode=slave
ebx.d3.delivery.profiles=
# User and password to be used by the master to communicate with the hub or slave.
# Mandatory property if ebx.d3.mode=hub or ebx.d3.mode=slave
ebx.d3.slave.username=
ebx.d3.slave.password=
Replica node
In order to act as a replica node, an instance of EBX must declare the following property in its main
configuration file.
Sample configuration for ebx.d3.mode=slave node:
##################################################################
## D3 configuration
##################################################################
##################################################################
# Configuration for master, hub and slave
##################################################################
# Optional property.
# Possible values are single, master, hub, slave
# Default is single, meaning the server will be a standalone instance.
ebx.d3.mode=slave
##################################################################
# Configuration dedicated to hub or slave
##################################################################
# Profiles to subscribe to
# Mandatory property if ebx.d3.mode=hub or ebx.d3.mode=slave
ebx.d3.delivery.profiles=
# User and password to be used by the master to communicate with the hub or slave.
# Mandatory property if ebx.d3.mode=hub or ebx.d3.mode=slave
ebx.d3.slave.username=
ebx.d3.slave.password=
JMS protocol
If JMS is activated, the following properties can be defined in order to enable JMS functionalities
for a D3 node.
Sample configuration for all D3 nodes with JMS network protocol:
##################################################################
## JMS configuration for D3
##################################################################
# Taken into account only if Data Services JMS is configured properly
##################################################################
# Configuration for master, hub and slave
##################################################################
# Default is false, activate JMS for D3
## If activated, the deployer must ensure that the entries
## 'jms/EBX_D3ReplyQueue', 'jms/EBX_D3ArchiveQueue' and 'jms/EBX_D3CommunicationQueue'
## are bound in the operational environment of the application server.
## On slave or hub mode, the entry 'jms/EBX_D3MasterQueue' must also be bound.
ebx.jms.d3.activate=false
# Maximum archive size in KB for the JMS message body. If exceeded, the message
# is transferred as several sequenced messages in the same group, where each one does
# not exceed the defined maximum size.
# Must be a positive integer equal to 0 or above 100.
# Default is 0, which corresponds to unbounded.
#ebx.jms.d3.archiveMaxSizeInKB=
##################################################################
# Configuration dedicated to hub or slave
##################################################################
# Master repository ID, used to set a message filter for the concerned master when sending JMS messages
# Mandatory property if ebx.jms.d3.activate=true and if ebx.d3.mode=hub or ebx.d3.mode=slave
#ebx.jms.d3.master.repositoryId=
Delete replica node delivery dataspace: Deletes the delivery dataspace on the chosen replica nodes and/or unregisters it from the configuration of the D3 primary node. To access the service, select a delivery dataspace from the 'Delivery dataspaces' table on the primary node, then launch the wizard.
Fully resynchronize: Broadcasts the full content of the last broadcast snapshot to the registered replica nodes.
Deactivate replica nodes: Removes the selected replica nodes from the broadcast scope and switches their states to 'Unavailable'.
Note
The "in progress" broadcast contexts are rolled back.
Unregister replica nodes: Disconnects the selected replica nodes from the primary node.
Note
The "in progress" broadcast contexts are rolled back.
Note
The primary node services above are hidden while some broadcasts are pending or in progress.
Register replica node: Re-subscribes the replica node to the primary node if it has been unregistered.
Unregister replica node: Disconnects the replica node from the primary node.
Note
The "in progress" broadcast contexts are rolled back.
70.3 Supervision
The last broadcast snapshot is highlighted in the snapshot table of the dataspace; it is represented by
an icon displayed in the first column.
Registered replica nodes: Replica nodes registered with the primary node. From this table, several services are available on each record.
Replica node registration log: History of initialization operations that have taken place.
Detailed history: History of archive deliveries that have taken place. The list of associated delivery archives can be accessed from the tables 'Broadcast history' and 'Initialization history' using selection nodes.
Check replica node information: Lists the replica nodes and related information, such as the replica node's state, associated delivery profiles, and delivered snapshots.
Clear history content: Deletes all records in all history tables, such as 'Broadcast history', 'Replica node registration log' and 'Detailed history'.
Log supervision
The technical supervision can be done through the log category 'ebx.d3', declared in the EBX main
configuration file. For example:
ebx.log4j.category.log.d3= INFO, Console, ebxFile:d3
Temporary files
Some temporary files, such as exchanged archives, SOAP messages and broadcast queues, are created
and written to the EBX temporary directory. This location is defined in the EBX main configuration
file:
#################################################
# When set, allows specifying the directory containing temporary files for import.
# If unset, the used directory is ${ebx.temp.directory}/ebx.platform.
#ebx.temp.import.directory = ${ebx.temp.directory}/ebx.platform
Introduction
CHAPTER 71
Packaging EBX modules
An EBX module is a standard Java EE web application, packaging various resources such as XML
Schema documents, Java classes and static resources.
Since EBX modules are web applications, they benefit from features such as class-loading isolation,
WAR or EAR packaging, and web resource exposure.
This chapter contains the following topics:
1. Module structure
2. Module declaration
3. Module registration
4. Packaged resources
See the associated schema for documentation about each property. The main properties are as follows:
name (required): Defines the unique identifier of the module in the server instance. The module name usually corresponds to the name of the web application (the name of its directory).
publicPath (optional): Defines a path other than the module's name identifying the web application in public URLs. This path is added to the URL of external resources of the module when computing absolute URLs. If this field is not defined, the public path is the module's name, defined above.
services (optional): Declares user services using the legacy API. See the [HTML documentation] of legacy user services. From version 5.8.0, it is strongly advised to use the new user services [p 529].
beans (optional): Declares reusable Java bean components. See the workflow package [p 515] API.
or:
• contain a Servlet extending the class ModuleRegistrationServlet API;
logger associated with the module and defining additional behavior such as common JavaScript
and CSS resources.
• The specific class extending ModuleRegistrationServlet must be located in the web application
(under /WEB-INF/classes or /WEB-INF/lib), because this class is internally used as a hook to the
application's class loader, in order to load the Java classes used by the data models associated
with the module.
• The application server startup process is asynchronous, and web applications / EBX modules are
discovered dynamically. The EBX repository initialization depends on this process and will wait
indefinitely for the registration of all used modules. Consequently, if a used module is not deployed
for any reason, it must be declared in the EBX main configuration file. For more information, see
the property Declaring modules as undeployed [p 337].
• All module registrations and unregistrations are logged in the log.kernel category.
• If an exception occurs while loading a module, the cause is written in the application server log.
• Once the servlet is out of service, the module is unregistered and the data models and associated
datasets become unavailable. Note that hot deployment/undeployment is not supported [p 309].
Registration example
Here is an implementation example of the ModuleRegistrationServlet (the callback method name below is indicative; check the ModuleRegistrationServlet Javadoc of your EBX version for the exact registration hooks):
package com.foo;

import javax.servlet.*;
import javax.servlet.http.*;

import com.orchestranetworks.module.*;

/**
 * Registers the module and declares its packaged web resources.
 */
public class RegisterServlet extends ModuleRegistrationServlet
{
 public void handleServiceRegistration(ModuleServiceRegistrationContext aContext)
 {
  aContext.addPackagedStyleSheetResource("myModule.css");
  aContext.addPackagedJavaScriptResource("myModule.js");
  ...
 }
}
See also
ResourceType API
Directory structure
The packaged resources must be located under the following directory structure:
1. On the first level, the directory /www/ must be located at the root of the module (web application).
2. On the second level, the directory must specify the localization. It can be:
• common/ should contain all the resources to be used by default, either because they are locale-
independent or as the default localization (in EBX, the default localization is en, namely
English);
• {lang}/ when localization is required for the resources located underneath, with {lang} to
be replaced by the actual locale code; it should correspond to the locales supported by EBX;
for more information, see Configuring EBX localization [p 329].
3. On the third level, the directory must specify the resource type. It can be:
• jscripts/ for JavaScript resources;
• stylesheets/ for Cascading Style Sheet (CSS) resources;
• html/ for HTML resources;
• icons/ for icon typed resources;
• images/ for image typed resources.
Example
In this example, the image logoWithText.jpg is the only resource that is localized:
/www
├── common
│ ├── images
│ │ ├── myCompanyLogo.jpg
│ │ └── logoWithText.jpg
│ ├── jscripts
│ │ └── myCompanyCommon.js
│ └── stylesheets
│ └── myCompanyCommon.css
├── de
│ └── images
│ └── logoWithText.jpg
└── fr
└── images
└── logoWithText.jpg
CHAPTER 72
Mapping to Java
This chapter contains the following topics:
1. How to access data from Java?
2. Concurrency and isolation levels
3. Mapping of data types
4. Java bindings
getter methods for these classes return objects that are typed according to the mapping rules described
in the section Mapping of data types [p 432].
Write access
Data updates must be performed in a well-managed context:
• In the context of a procedure execution, by calling the setValue... methods of the interface
ValueContextForUpdate API, or
• During the user input validation, by calling the method setNewValue of the class
ValueContextForInputValidation API.
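As an illustration, a minimal sketch of an update performed within a procedure, assuming an existing record (Adaptation); the field path and class name are hypothetical:
import com.onwbp.adaptation.Adaptation;
import com.orchestranetworks.schema.Path;
import com.orchestranetworks.service.Procedure;
import com.orchestranetworks.service.ProcedureContext;
import com.orchestranetworks.service.ValueContextForUpdate;

public class RenameProcedure implements Procedure
{
 private final Adaptation record; // the record to update

 public RenameProcedure(Adaptation aRecord)
 {
  this.record = aRecord;
 }

 public void execute(ProcedureContext aContext) throws Exception
 {
  final ValueContextForUpdate vcu = aContext.getContext(this.record.getAdaptationName());
  vcu.setValue("New label", Path.parse("./productLabel"));
  aContext.doModifyContent(this.record, vcu);
 }
}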
Note
For custom read-only transactions that run on a dataspace, it is recommended to use
ReadOnlyProcedure API.
• In relational mode, the default isolation level is the database default isolation level.
See also
AdaptationHome.findAdaptationOrNull API
AdaptationTable.lookupAdaptationByPrimaryKey API
Adaptation.getUpToDateInstance API
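For illustration, a sketch of read access combining the entries above (the dataset name, table path and primary key value are hypothetical):
import com.onwbp.adaptation.Adaptation;
import com.onwbp.adaptation.AdaptationHome;
import com.onwbp.adaptation.AdaptationName;
import com.onwbp.adaptation.AdaptationTable;
import com.onwbp.adaptation.PrimaryKey;
import com.orchestranetworks.schema.Path;

public final class CatalogReader
{
 public static Adaptation findProduct(AdaptationHome aDataspace)
 {
  final Adaptation dataset = aDataspace.findAdaptationOrNull(AdaptationName.forName("catalog"));
  final AdaptationTable table = dataset.getTable(Path.parse("/root/Products"));
  // The primary key value concatenates the key fields, e.g. productRange and productCode.
  return table.lookupAdaptationByPrimaryKey(PrimaryKey.parseString("rangeA|P001"));
 }
}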
Example
In this example, the Java class com.carRental.Customer must define the methods getFirstName()
and setFirstName(String).
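A minimal sketch of this JavaBean:
package com.carRental;

public class Customer
{
 private String firstName;

 public String getFirstName()
 {
  return this.firstName;
 }

 public void setFirstName(String firstName)
 {
  this.firstName = firstName;
 }
}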
A JavaBean can have a custom user interface within EBX, by using a UIBeanEditor API.
Benefits
Ensuring the link between XML Schema structure and Java code provides a number of benefits:
• Development assistance: Auto-completion when typing an access path to parameters, if
supported by your IDE.
• Access code verification: All accesses to parameters are verified at code compilation.
• Impact verification: Each modification of the data model impacts the code compilation state.
• Cross-referencing: By using the reference tools of your IDE, it is easy to verify where a
parameter is used.
Consequently, it is strongly recommended to use Java bindings.
XML declaration
The specification of the Java types to be generated from the data model is included in the main schema.
Each binding element defines a generation target. In XPath notation, it must be located at xs:schema/
xs:annotation/xs:appinfo/ebxbnd:binding, where the prefix ebxbnd refers to the namespace
identified by the URI urn:ebx-schemas:binding_1.0. Several binding elements can be defined if you
have different generation targets.
The attribute targetDirectory of the element ebxbnd:binding defines the root directory used for Java
type generation. Generally, it is the directory containing the project source code, src. A relative path
is interpreted based on the current runtime directory of the VM, not on the location of the XML Schema.
See bindings XML Schema.
Java constants can be defined for XML schema paths. To do so, generate one or more interfaces from
a schema node, including the root node /. For example, two Java path constant interfaces can be
generated, one from the node /rules and the other from the node /stylesheet in the schema. Interface
names are described by the element javaPathConstants with the attribute typeName. The associated
node is described by the element nodes with the attribute root.
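For illustration, a generated interface has roughly the following shape (the interface and constant names are hypothetical and depend on the binding configuration):
import com.orchestranetworks.schema.Path;

public interface StylesheetPaths
{
 // Constants generated for the nodes under /stylesheet.
 Path _Stylesheet = Path.parse("/stylesheet");
 Path _Stylesheet_Color = Path.parse("/stylesheet/color");
}

Such constants can then be passed to the data access methods, giving auto-completion and compile-time checking of paths.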
CHAPTER 73
Tools for Java developers
EBX provides Java developers with tools to facilitate use of the EBX API, as well as integration with
development environments.
This chapter contains the following topics:
1. Activating the development tools
2. Data model refresh tool
3. Generating Java bindings
4. Path to a node
5. Web component link generator
Attention
Since the operation is critical regarding data consistency, refreshing the data models acquires a global
exclusive lock on the repository. This means that most other operations (data access and update,
validation, etc.) will wait until the completion of the data model refresh.
Note
This field is always available to administrators.
CHAPTER 74
Terminology changes
A new EBX release can introduce new vocabulary for users. To preserve backward compatibility,
these terminology changes usually do not impact the API. Consequently, Java class names, method
names, data services operation names, etc. still use the older terminology. The purpose of this chapter
is to map the old terms used in the API to the new terms.
New term      Former term (API)
Dataspace     Branch
Snapshot      Version
Field         Attribute
Record        Record/occurrence
Data model
CHAPTER 75
Introduction
A data model is a structural definition of the data to be managed in the EBX repository. Data models
contribute to EBX's ability to guarantee the highest level of data consistency and to facilitate data
management.
Specifically, the data model is a document that conforms to the XML Schema standard (W3C
recommendation). Its main features are as follows:
• A rich library of well-defined simple data types [p 445], such as integer, boolean, decimal, date,
time;
• The ability to define additional simple types [p 447] and complex types [p 447];
• The ability to define simple lists of items, called aggregated lists [p 456];
• Validation constraints [p 479] (facets), for example: enumerations, uniqueness constraints,
minimum/maximum boundaries.
EBX also uses the extensibility features of XML Schema for other useful information, such as:
• Predefined types [p 448], for example: locale, resource, html;
• Definition of tables [p 459] and foreign key constraints [p 464];
• Mapping data in EBX to Java beans;
• Advanced validation constraints [p 479] (extended facets), such as dynamic enumerations;
• Extensive presentation information [p 497], such as labels, descriptions, and error messages.
Note
EBX supports a subset of the W3C recommendations, as some features are not relevant
to Master Data Management.
75.2 References
For an introduction to XML Schema, see the W3Schools XML Schema Tutorial.
See also
XML Schema Part 0: Primer
XML Schema Part 1: Structures
XML Schema Part 2: Datatypes
75.5 Conventions
By convention, namespaces are always defined as follows:
Prefix Namespace
xs: http://www.w3.org/2001/XMLSchema
osd: urn:ebx-schemas:common_1.0
fmt: urn:ebx-schemas:format_1.0
usd: urn:ebx-schemas:userServices_1.0
emd: urn:ebx-schemas:entityMappings_1.0
CHAPTER 76
Data types
This chapter details the data types supported by EBX.
XML Schema type    Java type
xs:string          java.lang.String
xs:boolean         java.lang.Boolean
xs:dateTime        java.util.Date
xs:anyURI          java.net.URI
The mapping between XML Schema types and Java types is detailed in the section Mapping of data
types [p 432].
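For illustration, a sketch of typed read access following this mapping (the field paths are borrowed from the product catalog example used elsewhere in this documentation):
import java.util.Date;
import com.onwbp.adaptation.Adaptation;
import com.orchestranetworks.schema.Path;

public final class TypedAccess
{
 public static void read(Adaptation aRecord)
 {
  final String label = aRecord.getString(Path.parse("./productLabel"));       // xs:string
  final Date created = aRecord.getDate(Path.parse("./productCreationDate"));  // xs:date
 }
}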
• In the data model, only the element restriction is allowed in a named simple type, and even
then, only derivation by restriction is supported. Notably, the elements list and union are not
supported.
• Facet definition is not cumulative. That is, if an element and its named type both define the same
kind of facet, then the facet defined in the type is overridden by the local facet definition. However,
this restriction does not apply to programmatic facets defined by the element osd:constraint:
if an element and its named type both define a programmatic facet with different Java classes,
the definitions of these facets are cumulative. Contrary to the XML Schema specification, EBX
is not strict regarding the definition of a facet of the same kind in an element and its named type.
That is, the value of a facet defined in an element is not checked against the one defined in the
named type. However, in the case of static enumerations defined both in an element and its type,
the local enumeration is replaced by the intersection of these enumerations.
• It is not possible to define different types of enumerations on both an element and its named type.
For instance, you cannot specify a static enumeration in an element and a dynamic enumeration
in its named type.
• It is not possible to simultaneously define a pattern facet in both an element and its named type.
The above types are defined by the internal schema common-1.0.xsd. They are defined as follows:
• includes: Specifies the dataspaces that can be referenced by this field. At least one include must be defined.
• branch
osd:UDA: User Defined Attribute. This type allows any user, according to their access rights, to define a value associated with an attribute defined in a dictionary called a UDA Catalog.
• xs:dateTime
• xs:time
• xs:date
• xs:anyURI
• xs:Name
• xs:int
• osd:html
• osd:email
• osd:password
• osd:locale
• osd:text
</xs:annotation>
</xs:element>
<xs:element name="monthly" type="xs:int">
<xs:annotation>
<xs:documentation>
<osd:label>Monthly payment </osd:label>
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="cost" type="xs:int">
<xs:annotation>
<xs:documentation>
<osd:label>Cost</osd:label>
</xs:documentation>
</xs:annotation>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
Aggregated lists have a dedicated editor in EBX, which allows you to add or delete occurrences.
Attention
Deciding whether to add an osd:table declaration to an element with maxOccurs > 1 is an important
consideration during the design process. An aggregated list is severely limited with respect to the
many features supported by tables. Features unsupported on aggregated lists but supported on
tables include:
• Performance and memory optimization;
• Lookups, filters and searches;
• Sorting, view and display in hierarchies;
• Identity constraints (primary keys and uniqueness constraints);
• Detailed permissions for creation, modification, deletion and particular permissions at the record
level;
• Detailed comparison and merge.
Thus, aggregated lists should be used only for small volumes of simple data (one or two dozen
occurrences), with no advanced requirements. For larger volumes of data or more advanced
functionalities, it is strongly advised to use an osd:table declaration.
For more information on table declarations, see Tables and relationships [p 459].
The attribute schemaLocation is mandatory and must specify either an absolute or a relative path to
the XML Schema Document to include.
The inclusion of XML Schema Documents is not namespace aware, thus all included data types must
belong to the same namespace. As a consequence, including XML Schema Documents that define
data types of the same name is not supported.
EBX includes extensions with specific URNs for including embedded data models and data models
packaged in modules.
To include an embedded data model in a model, specify the URN defined by EBX. For example:
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:osd="urn:ebx-schemas:common_1.0" xmlns:fmt="urn:ebx-schemas:format_1.0">
<xs:include schemaLocation="urn:ebx:publication:myPublication"/>
...
</xs:schema>
To include a data model packaged in a module, specify the specific URN defined by EBX. For
example:
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:osd="urn:ebx-schemas:common_1.0" xmlns:fmt="urn:ebx-schemas:format_1.0">
<xs:include schemaLocation="urn:ebx:module:aModuleName:/WEB-INF/ebx/schema/myDataModel.xsd"/>
...
</xs:schema>
See SchemaLocation API for more information about the specific URNs supported by EBX.
Note
If the packaged data model uses Java resources, the class loader of the module containing
the data model will be used at runtime for resolving these resources.
CHAPTER 77
Tables and relationships
This chapter contains the following topics:
1. Tables
2. Foreign keys
3. Associations
77.1 Tables
Overview
EBX supports the features of relational database tables, including the handling of large volumes of
records, and identification by primary key.
Tables provide many benefits that are not offered by aggregated lists [p 456]. Beyond relational
capabilities, some features that tables provide are:
• filters and searches;
• sorting, views and hierarchies;
• identity constraints: primary keys, foreign keys [p 464] and uniqueness constraints [p 481];
• specific permissions for creation, modification, and deletion;
• dynamic and contextual permissions at the individual record level;
• detailed comparison and merge;
• ability to have inheritance at the record level (see dataset inheritance [p 270]);
• performance and memory optimization.
See also
Foreign keys [p 464]
Associations [p 467]
Working with existing datasets [p 121]
Simple tabular views [p 114]
Hierarchical views [p 115]
History [p 249]
Declaration
A table element, which is an element with maxOccurs > 1, is declared by adding the following
annotation:
<xs:annotation>
<xs:appinfo>
<osd:table>
<primaryKeys>/pathToField1 /pathToField...n</primaryKeys>
</osd:table>
</xs:appinfo>
</xs:annotation>
Common properties
defaultLabel (optional): Defines the end-user display of records. Multiple variants can be specified:
• A static non-localized expression is defined using the defaultLabel element, for example:
<defaultLabel>Product: ${./productCode}</defaultLabel>
Note: The priority of the tags when displaying the user interface is the following:
1. defaultLabel tags with a JavaBean (but it is not allowed to define several renderers of the same type);
2. defaultLabel tags with a static localized expression using the xml:lang attribute;
index (optional): Specifies an index for speeding up requests that match this index (see performances [p 293]). The attribute name is mandatory. Each field of the index must be denoted by its absolute XPath notation, which starts just under the root element of the table. If there are multiple fields in the index, the list is delimited by whitespace.
Note:
• Indexing only concerns semantic and relational tables. History and replica tables are not affected.
• It is possible to define multiple indexes on a table.
• It is not possible to define two indexes with the same name.
recordForm (optional): Defines a specific component for customizing the record form in a dataset. This component is defined using a JavaBean that extends UIForm API or implements UserServiceRecordFormFactory API.
Example
Below is an example of a product catalog:
<xs:element name="Products" minOccurs="0" maxOccurs="unbounded">
<xs:annotation>
<xs:documentation>
<osd:label>Product Table </osd:label>
<osd:description>List of products in Catalog </osd:description>
</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:annotation>
<xs:appinfo>
<osd:table>
<primaryKeys>./productRange /productCode</primaryKeys>
<index name="indexProductCode">/productCode</index>
</osd:table>
</xs:appinfo>
</xs:annotation>
<xs:sequence>
<xs:element name="productRange" type="xs:string"/><!-- key -->
<xs:element name="productCode" type="xs:string"/><!-- key -->
<xs:element name="productLabel" type="xs:string"/>
<xs:element name="productDescription" type="xs:string"/>
<xs:element name="productWeight" type="xs:int"/>
<xs:element name="productType" type="xs:string"/>
<xs:element name="productCreationDate" type="xs:date"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</osd:table>
</xs:appinfo>
</xs:annotation>
<xs:sequence>
<xs:element name="catalogId" type="xs:string"/><!-- key -->
<xs:element name="catalogLabel" type="xs:string"/>
<xs:element name="catalogDescription" type="xs:string"/>
<xs:element name="catalogType" type="xs:string"/>
<xs:element name="catalogPublicationDate" type="xs:date"/>
</xs:sequence>
</xs:complexType>
</xs:element>
onDelete-deleteOccultingChildren: Specifies whether, upon record deletion, child records in occulting mode are also to be deleted. (Optional; default is never.)
mayCreateRoot: Specifies whether root record creation is allowed. The expression must follow the syntax below. See definition modes [p 270]. (Optional; default is always.)
mayCreateOverwriting: Specifies whether records are allowed to be overwritten in child datasets. The expression must follow the syntax below. See definition modes [p 270]. (Optional; default is always.)
mayCreateOcculting: Specifies whether records are allowed to be occulted in child datasets. The expression must follow the syntax below. See definition modes [p 270]. (Optional; default is always.)
mayDuplicate: Specifies whether record duplication is allowed. The expression must follow the syntax below. (Optional; default is always.)
mayDelete: Specifies whether record deletion is allowed. The expression must follow the syntax below. (Optional; default is always.)
The may... expressions specify when the action is possible, though the ultimate availability of the
action also depends on the user access rights. The expressions have the following syntax:
expression ::= always | never | <condition>*
condition ::= [root:yes | root:no]
"always": the operation is "always" possible (but user rights may restrict this).
"never": the operation is never possible.
"root:yes": the operation is possible if the record is in a root instance.
"root:no": the operation is not possible if the record is in a root instance.
If the record does not define any specific conditions, the default is used.
Using toolbars
It is possible to define the toolbars to display in the user interface using the element defaultView/
toolbars under xs:annotation/appinfo/osd:table. A toolbar allows customizing the buttons and
menus displayed in a table view, a hierarchical view, or a record form.
The elements below can be defined under defaultView/toolbars.
tabularViewTop (optional): Defines the toolbar to use on top of the default table view.
tabularViewRow (optional): Defines the toolbar to use on each row of the default table view.
recordTop (optional): Defines the toolbar to use on top of the record form.
hierarchyViewTop (optional): Defines the toolbar to use in the default hierarchy view of the table.
Example
Below is an example of custom toolbars used by a product catalog:
<xs:element name="Products" minOccurs="0" maxOccurs="unbounded">
<xs:annotation>
<xs:documentation>
<osd:label>Product Table </osd:label>
<osd:description>List of products in Catalog </osd:description>
</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:annotation>
<xs:appinfo>
<osd:table>
<primaryKeys>./productRange /productCode</primaryKeys>
<defaultView>
<toolbars>
<tabularViewTop>toolbar_name_for_tabularViewTop</tabularViewTop>
<tabularViewRow>toolbar_name_for_tabularViewRow</tabularViewRow>
<recordTop>toolbar_name_for_recordTop</recordTop>
<hierarchyViewTop>toolbar_name_for_hierarchyViewTop</hierarchyViewTop>
</toolbars>
</defaultView>
</osd:table>
</xs:appinfo>
</xs:annotation>
...
</xs:complexType>
</xs:element>
Note
If a toolbar does not exist or is not available for a specific location, then no toolbar is
displayed at the corresponding location in the user interface.
syntax API. This extended facet is also interpreted as an enumeration whose values refer to the records
of the target table.
container: Reference of the dataset that contains the target table. Required only if the dataspace element is defined; otherwise, the default is the current dataset.
branch: Reference of the dataspace that contains the container dataset. Optional; the default is the current dataspace or snapshot.
display: Custom display for presenting the selected foreign key in the current record and the sorted list of possible keys. Two variants can be specified: either pattern-based expressions, or a JavaBean if the needs are very specific. Optional; if the display property is not specified, the table's record rendering [p 461] is used.
• Static expressions are specified using the display and pattern elements. These static expressions can be localized using the additional attribute xml:lang on the pattern element, for example:
<display>
<pattern>Product : ${./productCode}</pattern>
</display>
• A JavaBean is specified using the osd:class attribute, for example:
<display osd:class="com.wombat.MyLabel"></display>
filter: Specifies an additional constraint that filters the records of the target table. Optional. Two types of filters are available:
• An XPath filter is an XPath predicate in the target table context. It is specified using the predicate element. For example:
<filter><predicate>type = ${../refType}</predicate></filter>
Note:
The attribute osd:class and the property predicate cannot be set simultaneously.
The validation search XPath functions are forbidden on a tableRef filter.
Attention
You can create a dataset which has a foreign key to a container that does not exist in the repository.
However, the content of this dataset will not be available until the container is created. After the
creation of the container, a data model refresh is required to make the dataset available. When
creating a dataset that refers to a container that does not yet exist, the following limitations apply:
• Triggers defined at the dataset level are not executed.
• Default values for fields that are not contained in tables are not initialized.
• During an archive import, it is not possible to create a dataset that refers to a container that does
not exist.
Example
The example below specifies a foreign key in the 'Products' table to a record of the 'Catalogs' table.
<xs:element name="catalog_ref" type="xs:string">
<xs:annotation>
<xs:appinfo>
<osd:otherFacets>
<osd:tableRef>
<tablePath>/root/Catalogs</tablePath>
<display>
<pattern xml:lang="en-US">Catalog: ${./catalogId}</pattern>
<pattern xml:lang="fr-FR">Catalogue : ${./catalogId}</pattern>
</display>
<validation>
<severity>error</severity>
<blocksCommit>onInsertUpdateOrDelete</blocksCommit>
<message>A default error message</message>
<message xml:lang="en-US">A localized error message</message>
<message xml:lang="fr-FR">Un message d'erreur localisé</message>
</validation>
</osd:tableRef>
</osd:otherFacets>
</xs:appinfo>
</xs:annotation>
</xs:element>
See also
Table definition [p 459]
Primary key syntax: PrimaryKey.syntax API
77.3 Associations
Overview
An association provides an abstraction over an existing relationship in the data model, and allows an
easy model-driven integration of associated objects in the user interface and in data services.
Several types of associations are supported:
• 'By foreign key' specifies the inverse relationship of an existing foreign key field [p 464].
• 'Over a link table' specifies a relationship based on an intermediate link table (such tables are often
called "join tables"). This link table has to define two foreign keys, one referring to the 'source'
table (the table holding the association element) and another one referring to the 'target' table.
• 'By an XPath predicate' specifies a relationship based on an XPath predicate.
For an association, it is also possible to:
• Filter associated objects by specifying an additional XPath filter.
• Configure a tabular view to define the fields that must be displayed in the associated table.
• Define how associated objects are to be rendered in forms.
• Hide/show associated objects in the data service 'select' operation. See Hiding a field in Data
Services [p 506].
• Specify the minimum and maximum number of associated objects that are required.
• Add validation constraints using XPath predicates for restricting associated objects.
See also
SchemaNode.getAssociationLink API
SchemaNode.isAssociationNode API
AssociationLink API
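As an illustration, a sketch using the entries above to inspect an association node; the path is borrowed from the catalog example below, and the package names are assumptions to be checked against the API reference:
import com.onwbp.adaptation.Adaptation;
import com.orchestranetworks.schema.Path;
import com.orchestranetworks.schema.SchemaNode;
import com.orchestranetworks.schema.info.AssociationLink;

public final class AssociationInspector
{
 public static AssociationLink getLink(Adaptation aDataset)
 {
  final SchemaNode node = aDataset.getSchemaNode().getNode(Path.parse("/root/Catalogs/products"));
  return node.isAssociationNode() ? node.getAssociationLink() : null;
 }
}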
Declaration
Associations are defined in the data model using the XML Schema element osd:association under
xs:annotation/appInfo.
Restrictions:
• An association must be a simple element of type xs:string.
• An association can only be defined inside a table.
Note
The "official" cardinality constraints (minOccurs="0" maxOccurs="0") are required
because, from an instance of XML Schema, the corresponding node is absent. In other
words, an association has no value and is considered as a "virtual" element as far as XML
and XML Schema is concerned.
The table below presents the elements that can be defined under xs:annotation/appInfo/
osd:association.
Note
The validation search XPath functions are forbidden for an association filter.
<recordForm osd:class="com.wombat.MyRecordFormFactory"/>
It is possible to refer to another dataset. For that, the following properties must be defined under
the element tableRefInverse, linkTable or xpathLink, depending on the type of the association:
schemaLocation (required): Defines the data model containing the fields used by the association. The data model is defined using a specific URN that allows referring to embedded data models and data models packaged in modules. See SchemaLocation API for more information about the specific URNs supported by EBX.
Important: A dataset can define an association to a container that does not yet exist in the repository.
However, the content of this dataset will not be available immediately upon creation. After the absent
container is created, a data model refresh is required in order to make the dataset available. When
creating a dataset that refers to a container that does not yet exist, the following limitations apply:
• Triggers defined at the dataset level are not executed.
• Default values on fields outside tables are not initialized.
• During an archive import, it is not possible to create a dataset that refers to a container that does
not exist.
The following example specifies that associated objects are to be rendered in a specific tab:
<xs:element name="products" minOccurs="0" maxOccurs="0" type="xs:string">
<xs:annotation>
<xs:appinfo>
<osd:association>
<tableRefInverse>
<fieldToSource>/root/Products/catalog_ref</fieldToSource>
</tableRefInverse>
</osd:association>
<osd:defaultView>
<displayMode>tab</displayMode>
</osd:defaultView>
</xs:appinfo>
</xs:annotation>
</xs:element>
Using toolbars
It is possible to define the toolbars to display in the user interface using the element osd:defaultView/
toolbars under xs:annotation/appinfo. A toolbar allows customizing the buttons and menus
displayed in the tabular view of an association.
The elements below can be defined under osd:defaultView/toolbars.
tabularViewTop (optional): Defines the toolbar to use in the default table view of this association.
tabularViewRow (optional): Defines the toolbar to use for each row of the default view of this association.
The following example shows how to use toolbars from the previous association between a catalog
and its products:
<xs:element name="Products" minOccurs="0" maxOccurs="0" type="xs:string">
<xs:annotation>
<xs:appinfo>
<osd:association>
<tableRefInverse>
<fieldToSource>/root/Products/catalog_ref</fieldToSource>
</tableRefInverse>
</osd:association>
<osd:defaultView>
<toolbars>
<tabularViewTop>toolbar_name_for_tabularViewTop</tabularViewTop>
<tabularViewRow>toolbar_name_for_tabularViewRow</tabularViewRow>
</toolbars>
</osd:defaultView>
</xs:appinfo>
</xs:annotation>
</xs:element>
Note
It is only possible to use the toolbars defined in the data model containing the target table
of the association. That is, if the target table of the association is defined in another data
model, then it is only possible to reference a toolbar defined in this data model and not
in the one holding the association.
The table below shows the elements that can be defined under osd:defaultView/tabularView.
The following example shows how to define a tabular view from the previous association between
a catalog and its products:
<xs:element name="Products" minOccurs="0" maxOccurs="0" type="xs:string">
<xs:annotation>
<xs:appinfo>
<osd:association>
<tableRefInverse>
<fieldToSource>/root/Products/catalog_ref</fieldToSource>
</tableRefInverse>
</osd:association>
<osd:defaultView>
<tabularView>
<column>/productRange</column>
<column>/productCode</column>
<column>/productLabel</column>
<column>/productDescription</column>
<sort>
<nodePath>/productLabel</nodePath>
<isAscending>true</isAscending>
</sort>
</tabularView>
</osd:defaultView>
</xs:appinfo>
</xs:annotation>
</xs:element>
• Associate: associates an existing object with the current record. In the case of an association over
a link table, a record in the link table is automatically created to materialize the link between the
current record and the existing object.
• Move: associates the selected objects with a record other than the current one. In the case of an
association over a link table, the previous link record is automatically deleted and a new record
in the link table is automatically created to materialize the link between the selected objects and
their new parent record.
• Delete: deletes selected associated objects in the target table of the association.
• Detach: breaks the semantic link between the current record and the selected associated objects.
In the case of an association over a link table, the records in the link table are automatically
deleted, to break the links between the current record and associated objects.
Note
The actions associate and detach are not available when the association is defined using
an XPath predicate (element xpathLink).
The table below shows the elements that can be defined under osd:defaultView/associationViews.
The following example shows how to define views from the previous association between a catalog
and its products:
<xs:element name="Products" minOccurs="0" maxOccurs="0" type="xs:string">
<xs:annotation>
<xs:appinfo>
<osd:association>
<tableRefInverse>
<fieldToSource>/root/Products/catalog_ref</fieldToSource>
</tableRefInverse>
</osd:association>
<osd:defaultView>
<associationViews>
<viewForAssociateAction>view_name_for_catalogs</viewForAssociateAction>
<viewForMoveAction>view_name_for_products</viewForMoveAction>
</associationViews>
</osd:defaultView>
</xs:appinfo>
</xs:annotation>
</xs:element>
Validation
Some controls can be defined on associations, in order to restrict associated objects. These controls
are defined under the element osd:association.
The table below presents the controls that can be defined under xs:annotation/appInfo/osd:association.
service select operation. If this property is not defined then, by default, associated objects will be
shown in the Data service select operation.
See also
Hiding a field in Data Services [p 506]
Association field [p 582]
Examples
For example, the product catalog data model defined previously [p 462] specifies that a product
belongs to a catalog (explicitly defined by a foreign key in the 'Products' table). The reverse
relationship (that a catalog has certain products) is not easily represented in XML Schema, unless the
'Catalogs' table includes the following association that is the inverse of a foreign key:
<xs:element name="products" minOccurs="0" maxOccurs="0" type="xs:string">
<xs:annotation>
<xs:appinfo>
<osd:association>
<tableRefInverse>
<fieldToSource>/root/Products/catalog_ref</fieldToSource>
</tableRefInverse>
</osd:association>
</xs:appinfo>
</xs:annotation>
</xs:element>
For an association over a link table, consider a variant of the previous example: the foreign key in
the 'Products' table is removed, and the relationship between a product and a catalog is instead defined
by a link table (named 'Catalogs_Products') whose primary key is composed of two foreign keys: one
referring to the 'Products' table (named 'productRef') and the other to the 'Catalogs' table (named
'catalogRef'). The following example shows how to define an association over a link table from this
relationship:
<xs:element name="products" minOccurs="0" maxOccurs="0" type="xs:string">
<xs:annotation>
<xs:appinfo>
<osd:association>
<linkTable>
<table>/root/Catalogs_Products</table>
<fieldToSource>./catalogRef</fieldToSource>
<fieldToTarget>./productRef</fieldToTarget>
</linkTable>
</osd:association>
</xs:appinfo>
</xs:annotation>
</xs:element>
The following example shows an association that refers to a foreign key in another dataset. In this
example, the 'Products' and 'Catalogs' tables are not in the same dataset:
<xs:element name="products" minOccurs="0" maxOccurs="0" type="xs:string">
<xs:annotation>
<xs:appinfo>
<osd:association>
<tableRefInverse>
<schemaLocation>urn:ebx:module:aModuleName:/WEB-INF/ebx/schema/products.xsd</schemaLocation>
<dataSet>Products</dataSet>
<fieldToSource>/root/Products/catalog_ref</fieldToSource>
</tableRefInverse>
</osd:association>
</xs:appinfo>
</xs:annotation>
</xs:element>
The following example defines an XPath filter to associate only products of the 'Technology' type:
<xs:element name="products" minOccurs="0" maxOccurs="0" type="xs:string">
<xs:annotation>
<xs:appinfo>
<osd:association>
<tableRefInverse>
<fieldToSource>/root/Products/catalog_ref</fieldToSource>
</tableRefInverse>
<filter>
<predicate>./productType = 'Technology'</predicate>
<checkOnAssociatedRecordCreation>
<message>A default message</message>
<message xml:lang="en-US">A localized message</message>
<message xml:lang="fr-FR">Un message localisé</message>
</checkOnAssociatedRecordCreation>
</filter>
</osd:association>
</xs:appinfo>
</xs:annotation>
</xs:element>
The following example specifies the minimum number of products that are required for a catalog:
<xs:element name="products" minOccurs="0" maxOccurs="0" type="xs:string">
<xs:annotation>
<xs:appinfo>
<osd:association>
<tableRefInverse>
<fieldToSource>/root/Products/catalog_ref</fieldToSource>
</tableRefInverse>
<minOccurs>
<value>1</value>
<validation>
<severity>warning</severity>
<message xml:lang="en-US">One product should at least be associated to this catalog.</message>
<message xml:lang="fr-FR">Un produit doit au moins être associé à ce catalogue.</message>
</validation>
</minOccurs>
</osd:association>
</xs:appinfo>
</xs:annotation>
</xs:element>
The following example specifies that a catalog must contain at most ten products:
<xs:element name="products" minOccurs="0" maxOccurs="0" type="xs:string">
<xs:annotation>
<xs:appinfo>
<osd:association>
<tableRefInverse>
<fieldToSource>/root/Products/catalog_ref</fieldToSource>
</tableRefInverse>
<maxOccurs>
<value>10</value>
<validation>
<severity>warning</severity>
<message xml:lang="en-US">Too much products for this catalog.</message>
<message xml:lang="fr-FR">Ce catalogue a trop de produits.</message>
</validation>
</maxOccurs>
</osd:association>
</xs:appinfo>
</xs:annotation>
</xs:element>
CHAPTER 78
Constraints, triggers and functions
Facets allow you to define data constraints in your data models. EBX supports XML Schema facets
and provides extended and programmatic facets for advanced data controls.
This chapter contains the following topics:
1. XML Schema supported facets
2. Extended facets
3. Programmatic facets
4. Control policy
5. Triggers and functions
[The facet-support matrices originally displayed here could not be recovered from this version of the document. They indicate, for each built-in type (xs:string, xs:boolean, xs:decimal, xs:dateTime, xs:time, xs:date, xs:anyURI, xs:Name, xs:integer, osd:resource [p 448], osd:dataspaceKey [p 450], osd:datasetName [p 452], osd:color [p 450]), which XML Schema facets are supported.]
Example:
<xs:element name="loanRate">
<xs:simpleType>
<xs:restriction base="xs:decimal">
<xs:minInclusive value="4.5" />
<xs:maxExclusive value="17.5" />
</xs:restriction>
</xs:simpleType>
</xs:element>
Uniqueness constraint
It is possible to define a uniqueness constraint, using the standard XML Schema element xs:unique.
This constraint indicates that a value or a set of values has to be unique inside a table.
Example:
In the example below, a uniqueness constraint is defined on the 'publisher' table, for the target field
'name'. This means that no two records in the 'publisher' table can have the same name.
<xs:element name="publisher">
...
<xs:complexType>
<xs:sequence>
...
<xs:element name="name" type="xs:string" />
...
</xs:sequence>
</xs:complexType>
<xs:unique name="uniqueName">
<xs:annotation>
<xs:appinfo>
<osd:validation>
<severity>error</severity>
<message>Name must be unique in table.</message>
<message xml:lang="en-US">Name must be unique in table.</message>
<message xml:lang="fr-FR">Le nom doit être unique dans la table.</message>
</osd:validation>
</xs:appinfo>
</xs:annotation>
<xs:selector xpath="." />
<xs:field xpath="name" />
</xs:unique>
</xs:element>
A uniqueness constraint has to be defined within a table and has the following properties:
• xs:selector element (required): indicates the table to which the uniqueness constraint applies, using a restricted XPath expression ('..' is forbidden). It can also indicate an element within the table (without changing the meaning of the constraint).
• xs:field element (required): indicates the field in the context whose values must be unique, using a restricted XPath expression. It is possible to indicate that a set of values must be unique by defining multiple xs:field elements.
Note
Undefined values (null values) are ignored on uniqueness constraints applied to single
fields. On multiple fields, undefined values are taken into account. That is, sets of values
are considered as being duplicated if they have the same defined and undefined values.
Additional localized validation messages can be defined using the element osd:validation under the
elements annotation/appinfo. If no custom validation messages are defined, a built-in validation
message will be used.
Limitations:
1. The target of the xs:field element must be in a table.
2. The uniqueness constraint does not apply to fields inside an aggregated list.
3. The uniqueness constraint does not apply to computed fields.
Foreign keys
EBX allows creating a reference to an existing table by means of a specific facet. See Foreign keys
[p 464] for more information.
Dynamic constraints
Dynamic constraint facets retain the semantics of XML Schema, but the value attribute is replaced
with a path attribute that allows fetching the value from another element. The available dynamic
constraints are:
• length
• minLength
• maxLength
• maxInclusive
• maxExclusive
• minInclusive
• minExclusive
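The snippet referenced by the next sentence is missing from this excerpt; below is a reconstruction using the osd:minInclusive form with a path attribute, as shown later in this chapter (the 'amount' element name is illustrative):
<xs:element name="amount" type="xs:decimal">
  <xs:annotation>
    <xs:appinfo>
      <osd:otherFacets>
        <osd:minInclusive path="/domain/Loan/Pricing/AmountMini/amount" />
      </osd:otherFacets>
    </xs:appinfo>
  </xs:annotation>
</xs:element>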
In this example, the boundary of the facet minInclusive is not statically defined. The value of the
boundary comes from the node /domain/Loan/Pricing/AmountMini/amount.
Restrictions:
• Target field cannot be an aggregated list. That is, it cannot define maxOccurs > 1.
• Data type of the target field must be compatible with the facet. That is, it must be:
• of type integer for facets length, minLength and maxLength.
• compatible with the data type of the field holding the facet for facets maxInclusive,
maxExclusive, minInclusive and minExclusive.
• Target field cannot be in a table if the field holding the facet is not in a table.
• Target field must be in the same table or outside a table if the field holding the facet is in a table.
• If the target field is under one or more aggregated lists, the field holding the facet must also be
under these aggregated lists. That is: the field holding the facet must be in the same list occurrence
as the target field, or in a parent occurrence, so that the target field refers to a single value, from
an XPath perspective.
FacetOResource constraint
This facet must be defined for every definition using the type osd:resource, to specify the subset
of available packaged resource files as an enumeration. For more information on this type, see
osd:resource type [p 450]. Its attributes are osd:moduleName, osd:resourceType and osd:relativePath, as shown in the example below.
This facet has the same behavior as an enumeration facet: the values are collected by recursively listing all the files under the relative path, in the specified resource type directory of the specified module.
Example:
<xs:element name="promotion" type="osd:resource">
<xs:annotation>
<xs:appinfo>
<osd:otherFacets>
<osd:FacetOResource osd:moduleName="wbp"
osd:resourceType="ext-images" osd:relativePath="promotion/" />
</osd:otherFacets>
</xs:appinfo>
</xs:annotation>
</xs:element>
For an overview of the standard directory structure of an EBX module (Java EE web application),
see Module structure [p 427].
excludeValue constraint
This facet verifies that a value is not the same as the specified excluded value.
Example:
In this example, the empty string is excluded from the allowed values.
<xs:element name="roleName">
<xs:annotation>
<xs:appinfo>
<osd:otherFacets>
<osd:excludeValue value="">
<osd:validation>
<severity>error</severity>
<message>Please select address role(s).</message>
</osd:validation>
</osd:excludeValue>
</osd:otherFacets>
</xs:appinfo>
</xs:annotation>
<xs:simpleType>
<xs:restriction base="xs:string" />
</xs:simpleType>
</xs:element>
excludeSegment constraint
This facet verifies that a value is not included in a range of values. Boundaries are excluded.
Example:
In this example, values between 20000 and 20999 are not allowed.
<xs:element name="zipCode">
<xs:annotation>
<xs:appinfo>
<osd:otherFacets>
<osd:excludeSegment minValue="20000" maxValue="20999">
<osd:validation>
<severity>error</severity>
<message>Postal code not valid.</message>
</osd:validation>
</osd:excludeSegment>
</osd:otherFacets>
</xs:appinfo>
</xs:annotation>
<xs:simpleType>
<xs:restriction base="xs:string" />
</xs:simpleType>
</xs:element>
An enumeration can also be defined dynamically, by fetching its values from another node: the osd:enumeration facet, defined under osd:otherFacets, carries an osd:path attribute pointing to the node that holds the values. In the example below, the allowed values of 'CountryChoice' are the values of the sibling 'CountryList' element (the enclosing element, absent from this excerpt, is restored with an illustrative name):
<xs:element name="Countries">
<xs:complexType>
<xs:sequence>
<xs:element name="CountryList" maxOccurs="unbounded">
<xs:simpleType>
<xs:restriction base="xs:string">
<xs:enumeration value="DE" osd:label="Germany" />
<xs:enumeration value="AT" osd:label="Austria" />
<xs:enumeration value="BE" osd:label="Belgium" />
<xs:enumeration value="JP" osd:label="Japan" />
<xs:enumeration value="KR" osd:label="Korea" />
<xs:enumeration value="CN" osd:label="China" />
</xs:restriction>
</xs:simpleType>
</xs:element>
<xs:element name="CountryChoice" type="xs:string">
<xs:annotation>
<xs:appinfo>
<osd:otherFacets>
<osd:enumeration osd:path="../CountryList" />
</osd:otherFacets>
</xs:appinfo>
</xs:annotation>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
Programmatic constraints
A programmatic constraint is defined by a Java class that implements the interface Constraint.
As additional parameters can be defined, the implementing Java class must conform to the JavaBean protocol.
Example:
In the example below, the Java class must define the methods: getParam1(), setParam1(String),
getParamX(), setParamX(String), etc.
<xs:element name="amount">
<xs:annotation>
<xs:appinfo>
<osd:otherFacets>
<osd:constraint class="com.foo.CheckAmount">
<param1>...</param1>
<param...n>...</param...n>
</osd:constraint>
</osd:otherFacets>
</xs:appinfo>
</xs:annotation>
...
</xs:element>
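As an illustration, here is a minimal sketch of what com.foo.CheckAmount could look like, assuming the EBX Constraint interface and a single illustrative 'param1' threshold parameter (the check itself is hypothetical):
import java.math.BigDecimal;
import java.util.Locale;
import com.orchestranetworks.instance.*;
import com.orchestranetworks.schema.*;

public class CheckAmount implements Constraint<BigDecimal>
{
  // JavaBean parameter, populated from the <param1> element of the data model.
  private String param1;
  public String getParam1() { return this.param1; }
  public void setParam1(String aValue) { this.param1 = aValue; }

  @Override
  public void setup(ConstraintContext aContext)
  {
    // Check the configuration and declare dependencies here if needed.
  }

  @Override
  public void checkOccurrence(BigDecimal aValue, ValueContextForValidation aValidationContext)
  {
    // Illustrative check: reject amounts above the configured threshold.
    if (aValue != null && aValue.compareTo(new BigDecimal(this.param1)) > 0)
      aValidationContext.addError("Amount must not exceed " + this.param1 + ".");
  }

  @Override
  public String toUserDocumentation(Locale aLocale, ValueContext aContext)
  {
    return "Checks that the amount does not exceed " + this.param1 + ".";
  }
}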
Example of a programmatic enumeration constraint (osd:constraintEnumeration):
<xs:element name="amount">
<xs:annotation>
<xs:appinfo>
<osd:otherFacets>
<osd:constraintEnumeration class="com.foo.CheckAmountInEnumeration">
<param1>...</param1>
<param...n>...</param...n>
</osd:constraintEnumeration>
</osd:otherFacets>
</xs:appinfo>
</xs:annotation>
...
</xs:element>
3. The XML Schema cardinality attributes must specify that the element is optional (minOccurs="0" and maxOccurs="1").
Note
By default, constraints on 'null' values are not checked upon user input. In order to enable
a check at the input, the 'checkNullInput' property [p 491] must be set. Also, if the element
is terminal, the dataset must also be activated.
Example:
<xs:element name="amount" minOccurs="0" maxOccurs="1">
<xs:annotation>
<xs:appinfo>
<osd:otherFacets>
<osd:constraint class="com.foo.CheckIfNull">
<param1>...</param1>
<param...n>...</param...n>
</osd:constraint>
</osd:otherFacets>
</xs:appinfo>
</xs:annotation>
...
</xs:element>
Constraints on table
A constraint on table is defined by a Java class that implements the interface ConstraintOnTable.
Example (the opening lines of this snippet, absent from this excerpt, are restored with illustrative names):
<xs:element name="myTable" type="MyTableType" minOccurs="0" maxOccurs="unbounded">
<xs:annotation>
<xs:appinfo>
<osd:otherFacets>
<osd:constraint class="com.foo.checkTable">
<param1>...</param1>
<param...n>...</param...n>
</osd:constraint>
</osd:otherFacets>
</xs:appinfo>
</xs:annotation>
</xs:element>
Attention
For performance reasons, constraints on tables are only checked when getting the validation report
of a dataset or table. This means that these constraints are not checked when updates, such as record
insertions, deletions or modifications, occur on tables. However, the internal incremental validation
framework will optimize the validation cost of these constraints if dependencies are defined. For
more information, see Validation [p 298].
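For reference, a skeleton of what com.foo.checkTable could look like, assuming the EBX ConstraintOnTable interface (the method bodies are left as comments, since the actual check is application-specific; the class name matches the example above):
import java.util.Locale;
import com.orchestranetworks.instance.*;
import com.orchestranetworks.schema.*;

public class checkTable implements ConstraintOnTable
{
  @Override
  public void setup(ConstraintContextOnTable aContext)
  {
    // Declare dependencies here, so that the incremental validation
    // framework can optimize when this constraint is re-checked.
  }

  @Override
  public void checkTable(ValueContextForValidationOnTable aContext)
  {
    // Iterate over the records of the table and add validation
    // messages for the anomalies found.
  }

  @Override
  public String toUserDocumentation(Locale aLocale, ValueContext aContext)
  {
    return "Table-level consistency check (illustrative).";
  }
}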
Control policy
A control policy specifies whether a constraint violation must block the commit of updates, or whether it is only reported by the dataset validation report (so that it can be corrected later). The element blocksCommit within the element osd:validation allows this specification; the supported values are onInsertUpdateOrDelete, onUserSubmit-checkModifiedValues and never, whose applicability is detailed below.
On foreign key constraints, the control policy that blocks all operations does not apply to filtered
records. That is, a foreign key constraint is not blocking if a referenced record exists but does not
satisfy a foreign key filter. In this case, updates are not rejected and a validation error occurs.
It is not possible to specify a control policy on structural constraints that are defined on relational data models or in mapped tables. That is, this property is not available for fixed length, maximum length, maximum number of digits, and decimal place constraints, because the validation policy of the underlying RDBMS makes these constraints blocking.
This property does not apply to archive imports. That is, all blocking constraints, except structural
constraints, are always disabled when importing archives.
See also
Facet validation message with severity [p 500]
Foreign keys [p 464]
Relational mode [p 243]
EBX facet
The control policy is described by the element osd:validation under the definition of the facet (which
is defined in annotation/appinfo/otherFacets).
The control policy with values onInsertUpdateOrDelete and onUserSubmit-checkModifiedValues is
only available on osd:excludeSegment, osd:excludeValue and osd:tableRef EBX facets.
The control policy with the value never can be defined on all EBX facets. On programmatic constraints, the control policy with the value never can only be set directly during the setup of the corresponding constraint; see ConstraintContext.setBlocksCommitToNever in the API reference.
Example:
<xs:element name="price" type="xs:decimal">
<xs:annotation>
<xs:appinfo>
<osd:otherFacets>
<osd:minInclusive path="../priceMin">
<osd:validation>
<blocksCommit>onInsertUpdateOrDelete</blocksCommit>
</osd:validation>
</osd:minInclusive>
</osd:otherFacets>
</xs:appinfo>
</xs:annotation>
</xs:element>
Note
A value is mandatory if the data model specifies a mandatory element, either statically,
using minOccurs="1", or dynamically, using a constraint on 'null'. For terminal elements,
mandatory values are only checked for an activated dataset. For non-terminal elements,
the dataset does not need to be activated.
Example:
<xs:element name="amount" osd:checkNullInput="true" minOccurs="1">
...
</xs:element>
See also
Constraint on 'null' [p 487]
Whitespace management [p 491]
Empty string management [p 493]
replace All occurrences of #x9 (tab), #xA (line feed) and #xD
(carriage return) are replaced with #x20 (space).
• For other fields (non-xs:string type), whitespaces are always collapsed and empty strings are
converted to null.
Attention
Exceptions:
• For fields of type osd:html or osd:password, whitespaces are always preserved and empty
strings are converted to null.
• For fields of type xs:string that define the property osd:checkNullInput="true", an empty
string is interpreted as null at user input by EBX.
Attention
Exceptions:
• For fields of type osd:password, whitespaces are not trimmed upon user input.
• For foreign key fields, whitespaces are not trimmed upon user input.
It is possible to indicate in a data model that whitespaces should not be trimmed upon user input.
The attribute osd:trim="disable" can be set on the fields that allow leading and trailing whitespaces
upon user input.
Example:
<xs:element name="field" osd:trim="disable" type="xs:string">
...
</xs:element>
Default conversion
For nodes of type xs:string, no distinction is made at user input between an empty string and a null
value. That is, an empty string value is automatically converted to null at user input.
Attention
In relational mode, the Oracle database does not support the distinction between empty strings and
null values, and these specific cases are not supported.
Functions
A computed value is defined by a Java class that implements the interface ValueFunction; it is declared in the data model using the element osd:function under annotation/appinfo. Additional parameters may be specified at the data model level, in which case the JavaBean convention is applied.
Example:
<xs:element name="computedValue">
<xs:annotation>
<xs:appinfo>
<osd:function class="com.foo.ComputeValue">
<param1>...</param1>
<param...n>...</param...n>
</osd:function>
</xs:appinfo>
</xs:annotation>
...
</xs:element>
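A minimal sketch of what com.foo.ComputeValue could look like, assuming it implements the EBX ValueFunction interface (the computed expression and the './label' path are illustrative):
import com.onwbp.adaptation.Adaptation;
import com.orchestranetworks.schema.*;

public class ComputeValue implements ValueFunction
{
  // JavaBean parameters (param1, ..., paramN) would be declared here,
  // as for the programmatic constraints above.

  @Override
  public void setup(ValueFunctionContext aContext)
  {
    // Validate the configuration of the function here if needed.
  }

  @Override
  public Object getValue(Adaptation aRecord)
  {
    // Illustrative computation based on another field of the record.
    String label = aRecord.getString(Path.parse("./label"));
    return label == null ? null : label.toUpperCase();
  }
}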
In some cases, it can be useful to disable the validation of computed values if the execution of a
function is time-consuming. Indeed, if the function is attached to a table with N records, then it will
be called N times when validating this table. The property osd:disableValidation="true" specified
in the data model allows disabling the validation of a computed value (see example below).
Example:
<xs:element name="computedValue" osd:disableValidation="true">
<xs:annotation>
<xs:appinfo>
<osd:function class="com.foo.ComputeValue">
<param1>...</param1>
<param...n>...</param...n>
</osd:function>
</xs:appinfo>
</xs:annotation>
...
</xs:element>
Triggers
Datasets or table records can be associated with methods that are automatically executed when some
operations are performed, such as creations, updates, or deletions.
In the data model, these triggers must be declared under the annotation/appinfo element using the
osd:trigger element.
For dataset triggers, a Java class that extends the abstract class InstanceTrigger must be declared inside the osd:trigger element.
For table record triggers, a Java class that extends the abstract class TableTrigger must be defined inside the osd:trigger element. It is advised to define the annotation/appinfo/osd:trigger elements just under the element describing the associated table or table type.
Examples:
On a table element:
<xs:element name="myTable" type="MyTableType" minOccurs="0" maxOccurs="unbounded">
<xs:annotation>
<xs:appinfo>
<osd:table>
<primaryKeys>/key</primaryKeys>
</osd:table>
<osd:trigger class="com.foo.MyTableTrigger" />
</xs:appinfo>
</xs:annotation>
</xs:element>
As additional parameters can be defined, the implemented Java class must conform to the JavaBean protocol: if parameter elements <param1>...<paramX> are nested inside osd:trigger, the Java class must define the corresponding methods getParam1(), setParam1(String), ..., getParamX(), setParamX(String). A sketch of the trigger class itself is given below.
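A minimal sketch of the trigger class referenced above, assuming the com.orchestranetworks.schema.trigger API (only one of the available callbacks is shown; handleAfterModify, handleAfterDelete, etc. follow the same pattern):
import com.orchestranetworks.schema.trigger.*;
import com.orchestranetworks.service.OperationException;

public class MyTableTrigger extends TableTrigger
{
  @Override
  public void setup(TriggerSetupContext aContext)
  {
    // Resolve paths and check trigger parameters here.
  }

  @Override
  public void handleAfterCreate(AfterCreateOccurrenceContext aContext) throws OperationException
  {
    // React to the creation of a record, for example to update related data.
  }
}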
Auto-incremented values
It is possible to define auto-incremented values. Auto-incremented values are only allowed inside
tables, and they must be of the type xs:int or xs:integer.
An auto-increment is specified in the data model using the element osd:autoIncrement under the
element annotation/appinfo.
Example:
<xs:element name="autoIncrementedValue" type="xs:int">
<xs:annotation>
<xs:appinfo>
<osd:autoIncrement />
</xs:appinfo>
</xs:annotation>
</xs:element>
• The computation and allocation of the field value are performed whenever a new record is inserted
and the field value is undefined.
• No allocation is performed if a programmatic insertion already specifies a non-null value. For
example, if an archive import or an XML import specifies the value, that value is preserved.
Consequently, the allocation is not performed for a record insertion in occulting or overwriting
modes.
• A newly allocated value is, whenever possible, unique in the scope of the repository. More
precisely, the uniqueness of the allocation spans over all the datasets of the data model, and it
also spans over all the dataspaces. The latter case allows the merge of a dataspace into its parent
with a reasonable guarantee that there will be no conflict if the osd:autoIncrement is part of the
records' primary key.
This principle has a very specific limitation: when a mass update transaction that specifies values
is performed at the same time as a transaction that allocates a value on the same field, it is possible
that the latter transaction will allocate a value that will be set by the first transaction (there is no
locking between different dataspaces).
Internally, the auto-increment value is stored in the 'Auto-increments' table of the repository. In the user
interface, it can be accessed by administrators in the 'Administration' area. This field is automatically
updated so that it defines the greatest value ever set on the associated osd:autoIncrement field, in
any instance or dataspace in the repository. This value is computed, taking into account the max value
found in the table being updated.
In certain cases, for example when multiple environments have to be managed (development, test,
production), each with different auto-increment ranges, it may be required to avoid this "max value"
check. This particular behavior can be achieved using the disableMaxTableCheck property. It is
generally not recommended to enable this property unless it is absolutely necessary, as this could
generate conflicts in the auto-increment values. However, this property can be set in the following
ways:
• Locally, by setting a parameter element in the auto-increment declaration:
<disableMaxTableCheck>true</disableMaxTableCheck>;
• Globally, for the whole repository, by enabling the corresponding administration option (the note below applies when the option is enabled globally).
Note
When this option is enabled globally, it becomes possible to create records in the table
of auto-increments, for example by importing from XML or CSV. If this option is not
selected, creating records in the table of auto-increments is prohibited to ensure the
integrity of the repository.
CHAPTER 79
Labels and messages
EBX allows to have custom labels and error messages for data models to be displayed in the interface.
This chapter contains the following topics:
1. Label and description
2. Enumeration labels
3. Mandatory error message (osd:mandatoryErrorMessage)
4. Conversion error message
5. Facet validation message with severity
The content of xs:documentation can take two forms:
• Simple: the label is extracted from the text content, ending at the first period ('.'), with a maximum of 60 characters. The description uses the remainder of the text.
• Structured: the label and the description are respectively provided by the elements osd:label and osd:description (as in the 'misc2' example below).
The description may also have a hyperlink, either a standard HTML href to an external document, or
a link to another node of the adaptation within EBX.
• When using the href notation or any other HTML, it must be properly escaped.
• EBX link notation is not escaped and must specify the path of the target, for example:
<osd:link path="../misc1">Link to another node in the adaptation</osd:link>
Example:
<xs:element name="misc1" type="xs:string">
<xs:annotation>
<xs:documentation>
Miscellaneous 1. This is the description of miscellaneous element #1.
Click <a href="http://www.orchestranetworks.com" target="_blank">here</a>
to learn more.
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="misc2" type="xs:string">
<xs:annotation>
<xs:documentation>
<osd:label>
Miscellaneous 2
</osd:label>
<osd:description>
This is the miscellaneous element #2 and here is a
<osd:link path="../misc1"> link to another node in the
adaptation</osd:link>.
</osd:description>
</xs:documentation>
</xs:annotation>
</xs:element>
If a node points to a named type, then the label of the node replaces the label of the named type. The
same mechanism applies to the description of the node (element osd:description).
Note
Regarding whitespace management, the label of a node is always collapsed when
displayed. That is, contiguous sequences of blanks are collapsed to a single blank,
and leading and trailing blanks are removed. In descriptions, however, whitespaces are
always preserved.
The labels and descriptions that are provided programmatically take precedence over the ones defined
locally on individual nodes.
Attention
Labels defined for an enumeration element are always collapsed when displayed.
Example:
<xs:element name="Service" maxOccurs="unbounded">
<xs:simpleType>
<xs:restriction base="xs:string">
<xs:enumeration value="1" osd:label="Blue" />
<xs:enumeration value="2" osd:label="Red" />
<xs:enumeration value="3" osd:label="White" />
</xs:restriction>
</xs:simpleType>
</xs:element>
It is also possible to fully localize the labels using the standard xs:documentation element. If both
non-localized and localized labels are added to an enumeration element, the non-localized label will
be displayed in any locale that does not have a label defined.
Example:
<xs:element name="access" minOccurs="0">
<xs:simpleType>
<xs:restriction base="xs:string">
<xs:enumeration value="readOnly">
<xs:annotation>
<xs:documentation xml:lang="en-US">
read only
</xs:documentation>
<xs:documentation xml:lang="fr-FR">
lecture seule
</xs:documentation>
</xs:annotation>
</xs:enumeration>
<xs:enumeration value="readWrite">
<xs:annotation>
<xs:documentation xml:lang="en-US">
read/write
</xs:documentation>
<xs:documentation xml:lang="fr-FR">
lecture écriture
</xs:documentation>
</xs:annotation>
</xs:enumeration>
<xs:enumeration value="hidden">
<xs:annotation>
<xs:documentation xml:lang="en-US">
hidden
</xs:documentation>
<xs:documentation xml:lang="fr-FR">
masqué
</xs:documentation>
</xs:annotation>
</xs:enumeration>
</xs:restriction>
</xs:simpleType>
</xs:element>
Note
Regarding whitespace management, the enumeration labels are always collapsed when
displayed.
Example:
<xs:element name="Gender">
<xs:annotation>
<xs:appinfo>
<osd:enumerationValidation>
<severity>error</severity>
<message>Non-localized message.</message>
<message xml:lang="en-US">English error message.</message>
<message xml:lang="fr-FR">Message d'erreur en français.</message>
</osd:enumerationValidation>
</xs:appinfo>
</xs:annotation>
<xs:simpleType>
<xs:restriction base="xs:string">
<xs:enumeration value="0" osd:label="male" />
<xs:enumeration value="1" osd:label="female" />
</xs:restriction>
</xs:simpleType>
</xs:element>
CHAPTER 80
Additional properties
This chapter contains the following topics:
1. Default values
2. Access properties
3. Information
4. Default view
5. Comparison mode
6. Apply last modifications policy
7. Categories
This node is a complex type that is only displayed in EBX if it has one child node that is
also an adaptation terminal node. It has no value of its own. When accessed using the method
Adaptation.get(), it returns null.
3. Non-adaptable node
This node is not an adaptation terminal node and has no child adaptation terminal nodes. This
node is never displayed in EBX. When accessed using the method Adaptation.get(), it returns the node default value if one is defined; otherwise it returns null.
Example:
In this example, the element is adaptable because it is an adaptation terminal node.
<xs:element name="proxyIpAddress" type="xs:string" osd:access="RW"/>
80.3 Information
The element osd:information allows specifying additional information. This information can then be used by the integration code, for any purpose, by calling the method SchemaNode.getInformation (a sketch follows the example below).
Example:
<xs:element name="misc" type="xs:string">
<xs:annotation>
<xs:appinfo>
<osd:information>
This is the text information of miscellaneous element.
</osd:information>
</xs:appinfo>
</xs:annotation>
</xs:element>
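As announced above, a sketch of how integration code can read this information; the '/root/misc' path matches the example, while obtaining the dataset (aDataset) is left to the caller:
import com.onwbp.adaptation.Adaptation;
import com.orchestranetworks.schema.*;

public class InformationSample
{
  // Returns the osd:information text of the 'misc' element (illustrative).
  public static String getMiscInformation(Adaptation aDataset)
  {
    SchemaNode miscNode = aDataset.getSchemaNode().getNode(Path.parse("/root/misc"));
    return miscNode.getInformation();
  }
}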
Attention
• If an element is configured to be hidden in the default view of a dataset, then the access
permissions associated with this field will not be evaluated.
• It is possible to display a field that is hidden in the default view of a dataset by defining a view.
Only in this case will the access permissions associated with this field be evaluated to determine
whether the field will be displayed or not.
• It is not possible to display a table that is hidden in the default view of a dataset (in the navigation
pane).
Example:
In this example, the element is hidden in the default view of a dataset.
<xs:element name="hiddenField" type="xs:string" minOccurs="0"/>
<xs:annotation>
<xs:appinfo>
<osd:defaultView>
<hidden>true</hidden>
</osd:defaultView>
</xs:appinfo>
</xs:annotation>
</xs:element>
If this property is set to true, then the element will not be selectable when creating a custom view. As
a consequence, the element will not be displayed in all views of a table in a dataset.
If a group is configured as hidden in views, then none of the fields nested under this group will be displayed in the views of the table.
If this property is set to true, then the field will not be selectable in the text and typed search tools
of a dataset.
If this property is set to textSearchOnly, then the field will not be selectable in the text search of a dataset, but will remain selectable in the typed search.
Note
If a group is configured as hidden in search tools or only in the text search, then all the fields
nested under this group will not be displayed respectively in the search tools or only in the
text search.
Example:
<xs:element name="hiddenFieldInSearch" type="xs:string" minOccurs="0"/>
<xs:annotation>
<xs:appinfo>
<osd:defaultView>
<hiddenInSearch>true</hiddenInSearch>
</osd:defaultView>
</xs:appinfo>
</xs:annotation>
</xs:element>
In this example, the element is hidden in the text and typed search tools of a dataset.
<xs:element name="hiddenFieldOnlyInTextSearch" type="xs:string" minOccurs="0"/>
<xs:annotation>
<xs:appinfo>
<osd:defaultView>
<hiddenInSearch>textSearchOnly</hiddenInSearch>
</osd:defaultView>
</xs:appinfo>
</xs:annotation>
</xs:element>
In this example, the element is hidden only in the text search tool of a dataset.
Note
• If a group is configured as being hidden, then all the fields nested under this group will
be considered as hidden by data services.
Example:
<xs:element name="hiddenFieldInDataService" type="xs:string" minOccurs="0"/>
<xs:annotation>
<xs:appinfo>
<osd:defaultView>
<hiddenInDataServices>true</hiddenInDataServices>
</osd:defaultView>
</xs:appinfo>
</xs:annotation>
</xs:element>
In this example, the element is hidden in the Data Service select operation.
Example:
In this example, the name of a published view is defined to display the target table of a foreign key
in the advanced selection.
<xs:element name="catalog_ref" type="xs:string" minOccurs="0"/>
<xs:annotation>
<xs:appinfo>
<osd:otherFacets>
<osd:tableRef>
<tablePath>/root/Catalogs</tablePath>
</osd:tableRef>
</osd:otherFacets>
<osd:defaultView>
<widget>
<viewForAdvancedSelection>catalogView</viewForAdvancedSelection>
</widget>
</osd:defaultView>
</xs:appinfo>
</xs:annotation>
</xs:element>
Note
• If a group is configured as being ignored during comparisons, then all the fields nested
under this group will also be ignored.
• If a terminal field does not include the attribute osd:comparison, then it will be included
by default during comparisons.
Restrictions:
• This property cannot be defined on non-terminal fields.
• Primary key fields cannot be ignored during comparison.
Example:
In this example, the first element is explicitly ignored during comparison, the second element is
explicitly included.
<xs:element name="fieldExplicitlyIgnoredInComparison"
type="xs:string" minOccurs="0" osd:comparison="ignored"/>
<xs:element name="fieldExplicitlyNotIgnoredInComparison"
type="xs:string" minOccurs="0" osd:comparison="default"/>
Note
• If a group is configured as being ignored by the 'apply last modifications' service, then all
fields nested under this group will also be ignored.
• If a terminal field does not include the attribute osd:applyLastModification, then it will
be included by default in the apply last modifications service.
Restriction:
• This property cannot be defined on non-terminal fields.
Example:
In this example, the first element is explicitly ignored in the 'apply last modifications' service, the
second element is explicitly included.
<xs:element name="fieldExplicitlyIgnoredInApplyLastModification"
type="xs:string" minOccurs="0" osd:applyLastModification="ignored"/>
<xs:element name="fieldExplicitlyNotIgnoredApplyLastModification"
type="xs:string" minOccurs="0" osd:applyLastModification="default"/>
80.7 Categories
Categories can be used for "filtering", by restricting the display of data model elements.
To create a category, add the attribute osd:category to a table node in the data model XSD.
Filters on data
In the example below, the attribute osd:category is added to the node in order to create a category
named mycategory.
<xs:element name="rebate" osd:category="mycategory">
<xs:complexType>
<xs:sequence>
<xs:element name="label" type="xs:string"/>
<xs:element name="beginDate" type="xs:date"/>
<xs:element name="endDate" type="xs:date"/>
<xs:element name="rate" type="xs:decimal"/>
</xs:sequence>
</xs:complexType>
</xs:element>
To activate a defined category filter on a dataset in the user interface, select Actions > Categories >
<category name> from the navigation pane.
Predefined categories
Two categories with localized labels are predefined:
• Hidden
An instance node, including a table node itself, is hidden in the default view, but can be revealed
by selecting Actions > Categories > [hidden nodes] from the navigation pane.
A table record node is always hidden.
• Constraint (deprecated)
Restriction
Categories do not apply to table record nodes, except the category 'Hidden'.
CHAPTER 81
Data services
This chapter details how WSDL operations' names related to a table are defined and managed by EBX.
This chapter contains the following topics:
1. Definition
2. Configuration
3. Publication
4. WSDL and table operations
5. Limitations
81.1 Definition
EBX generates a WSDL that complies with the W3C Web Services Description Language 1.1
standard. By default, WSDL operations refer to a table using the last element of the table path. A
WSDL operation name is composed of an action name (prefix) and a table name (suffix); for example, a 'select' operation on the table /root/Products uses 'Products' as its suffix by default. It is possible to refer to tables in WSDL operations using unique names instead of the last element of their paths, by overriding the operation name suffixes.
See also Data services using the Data Model Assistant [p 41]
81.2 Configuration
Embedded data model
The WSDL operation name suffixes are stored in the EBX repository and linked to a publication. That is, when publishing an embedded data model, the list of operation name suffixes can be defined in the data model definition, under the 'Configuration > Data services' table, and is managed by EBX.
81.3 Publication
The operation name suffixes are validated at compilation time; they consist of a list of pairs, each associating a table Path with a unique table name.
SOAP operations
When a table operation request is invoked through the SOAP connector, the target table is resolved by name, in the following order of priority; the name corresponds to:
1. an overridden table name,
2. the last step of the table path.
81.5 Limitations
Overridden WSDL operation names are not available with external data models.
CHAPTER 82
Toolbars
This chapter details how toolbars are defined and managed by EBX.
This chapter contains the following topics:
1. Definition
2. Using toolbars
82.1 Definition
Toolbars allow customizing the buttons and menus displayed when accessing a table view, a
hierarchical view, or a record form.
Toolbars can only be created and published using the Data Model Assistant and are available only on
embedded and packaged data models.
For embedded data models, toolbars are embedded in EBX's repository and linked to a publication.
That is, when publishing an embedded data model, the toolbars defined in the data model are embedded
with the publication of the data model and managed by EBX.
For packaged data models, toolbars are defined in a dedicated XML document named after the data model, with the suffix _toolbars. For instance, if a data model is named
catalog.xsd then the XML document containing the definition of the toolbars must be named
catalog_toolbars.xml. This XML document must also be placed in the same location as the data model.
The toolbar document is automatically loaded by EBX if a file complying with this pattern is found
when compiling a data model.
See also
Configuring toolbars using the Data Model Assistant [p 73]
Using toolbars in data models [p 463]
Toolbar API
ToolbarFactory API
See also
Using toolbars [p 463]
Associations [p 467]
CHAPTER 83
Workflow model
The workflow offers two types of steps: 'library' or 'specific'.
'Library' is a bean defined in module.xml and is reusable. Using the 'library' bean improves the
ergonomics: parameters are dynamically displayed in the definition screens.
A 'specific' object is a bean defined only by its class name. In this case, the display is not dynamic.
This chapter contains the following topics:
1. Bean categories
2. Sample of ScriptTask
3. Sample of ScriptTaskBean
4. Samples of UserTask
5. Samples of Condition
6. Sample of ConditionBean
7. Sample of SubWorkflowsInvocationBean
8. Sample of WaitTaskBean
9. Sample of ActionPermissionsOnWorkflow
10.Sample of WorkflowTriggerBean
11.Sample of trigger starting a process instance
The bean categories and their classes are:
• Scripts: ScriptTask, ScriptTaskBean
• Conditions: Condition, ConditionBean
• User task: UserTask
this.setNewBranch(branchCreate.getKey().getName());
}
}
<label>Service workflow</label>
<description>
The purpose of this service is ...
</description>
</documentation>
<properties>
<property name="param1" input="true">
<documentation xml:lang="fr-FR">
<label>Param1</label>
<description>Param1 ...</description>
</documentation>
</property>
<property name="param2" output="true">
</property>
</properties>
</service>
<serviceLink serviceName="adaptationService">
<importFromSchema>
/WEB-INF/ebx/schema/schema.xsd
</importFromSchema>
</serviceLink>
</services>
</beans>
</module>
// Conditional launching.
if (aContext.getVariableString("productType").equals("book"))
{
final ProcessLauncher subWorkflow3 = aContext.registerSubWorkflow(
AdaptationName.forName("generateISBN"),
"generateISBN");
subWorkflow3.setLabel(UserMessage.createInfo("Generate ISBN"));
subWorkflow3.setInputParameter(
"workingBranch",
aContext.getVariableString("workingBranch"));
subWorkflow3.setInputParameter("code", aContext.getVariableString("code"));
}
aContext.launchSubWorkflows();
}
@Override
public void handleCompleteAllSubWorkflows(SubWorkflowsCompletionContext aContext)
throws OperationException
{
aContext.getCompletedSubWorkflows();
final ProcessInstance validateProductMarketing = aContext.getCompletedSubWorkflow("validateProduct1");
final ProcessInstance validateProductDirection = aContext.getCompletedSubWorkflow("validateProduct2");
if (aContext.getVariableString("productType").equals("book"))
{
final ProcessInstance generateISBN = aContext.getCompletedSubWorkflow("generateISBN");
aContext.setVariableString("isbn", generateISBN.getDataContext().getVariableString(
"newCode"));
}
if (validateProductMarketing.getDataContext().getVariableString("Accepted").equals("true")
&& validateProductDirection.getDataContext().getVariableString("Accepted").equals(
"true"))
aContext.setVariableString("validation", "ok");
}
}
</module>
@Override
public void onResume(WaitTaskOnResumeContext aContext) throws OperationException
{
// Defines a specific mapping.
aContext.setVariableString("code", aContext.getOutputParameters().get("isbn"));
aContext.setVariableString("comment", aContext.getOutputParameters().get("isbnComment"));
}
}
import com.orchestranetworks.service.*;
import com.orchestranetworks.workflow.*;
import com.orchestranetworks.workflow.ProcessExecutionContext.*;
/**
*/
public class MyDynamicPermissions extends ActionPermissionsOnWorkflow
{
spec.sendMail(Locale.US);
}
@Override
public void handleBeforeProcessInstanceTermination(
WorkflowTriggerBeforeProcessInstanceTerminationContext aContext) throws OperationException
{
final DisplayPolicy policy = DisplayPolicyFactory.getPolicyForSession(aContext.getSession());
spec.sendMail(Locale.US);
}
@Override
public void handleAfterWorkItemCreation(WorkflowTriggerAfterWorkItemCreationContext aContext)
throws OperationException
{
DisplayPolicy policy = DisplayPolicyFactory.getPolicyForSession(aContext.getSession());
if (workItem.getOfferedTo() != null)
body += "\n The role is :" + workItem.getOfferedTo().format();
if (workItem.getUserReference() != null)
body += "\n The user is :" + workItem.getUserReference().format();
spec.setBody(body);
spec.sendMail(Locale.US);
}
@Override
spec.sendMail(Locale.US);
}
@Override
public void handleBeforeWorkItemAllocation(
WorkflowTriggerBeforeWorkItemAllocationContext aContext) throws OperationException
{
DisplayPolicy policy = DisplayPolicyFactory.getPolicyForSession(aContext.getSession());
spec.sendMail(Locale.US);
}
@Override
public void handleBeforeWorkItemDeallocation(
WorkflowTriggerBeforeWorkItemDeallocationContext aContext) throws OperationException
{
DisplayPolicy policy = DisplayPolicyFactory.getPolicyForSession(aContext.getSession());
spec.sendMail(Locale.US);
}
@Override
public void handleBeforeWorkItemReallocation(
WorkflowTriggerBeforeWorkItemReallocationContext aContext) throws OperationException
{
DisplayPolicy policy = DisplayPolicyFactory.getPolicyForSession(aContext.getSession());
+ DirectoryHandler.getInstance(aContext.getRepository()).displayUser(
aContext.getWorkItem().getUserReference(),
aContext.getSession().getLocale()) + "'.");
spec.sendMail(Locale.US);
}
@Override
public void handleBeforeWorkItemTermination(
WorkflowTriggerBeforeWorkItemTerminationContext aContext) throws OperationException
{
DisplayPolicy policy = DisplayPolicyFactory.getPolicyForSession(aContext.getSession());
spec.sendMail(Locale.US);
}
}
//Starts process
launcher.launchProcess();
}
//...
}
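For context, a sketch of how a process instance can be started programmatically. ProcessLauncher.setInputParameter and launchProcess appear in the fragment above; obtaining the WorkflowEngine from a repository and session, and the 'productApproval' publication name, are assumptions to be checked against the API reference:
import com.onwbp.adaptation.*;
import com.orchestranetworks.instance.*;
import com.orchestranetworks.service.*;
import com.orchestranetworks.workflow.*;

public class StartWorkflowSample
{
  public void startProcess(Repository aRepository, Session aSession) throws OperationException
  {
    // Obtain the engine and a launcher for a published workflow model
    // (publication name is illustrative).
    WorkflowEngine engine = WorkflowEngine.getFromRepository(aRepository, aSession);
    ProcessLauncher launcher = engine.getProcessLauncher(PublishedProcessKey.forName("productApproval"));

    // Set input parameters declared by the workflow model (illustrative names).
    launcher.setInputParameter("workingBranch", "myDataspace");

    // Starts the process instance.
    launcher.launchProcess();
  }
}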
User interface
CHAPTER 84
Interface customization
The EBX graphical interface can be customized through various EBX APIs.
This chapter contains the following topics:
1. How to embed a Web Component
2. User services
3. Form layout
4. Custom widgets
5. Table filter
6. Record label
7. CSS and JavaScript
See also
UIForm API
UIFormPaneWriter API
UIWidget API
UIFormHeader API
UIFormBottomBar API
See also
UITableFilter API
See also
Module registration [p 428]
Development recommendations [p 557]
UIDependencyRegisterer API
CHAPTER 85
Overview
A user service is an extension to EBX that provides a graphical user interface (GUI) allowing users
to access specific or advanced functionalities.
An API is available allowing the development of powerful custom user services using the same visual
components and data validation mechanisms as standard EBX user interfaces.
This chapter contains the following topics:
1. Nature
2. Declaration
3. Display
4. Legacy user services
85.1 Nature
User services exist in different types called natures. The nature defines the minimal elements
(dataspace, dataset, table, record...) that need to be selected to execute the service. The following table
lists the available natures.
• Dataspace: can be launched from the actions menu of a dataspace (branch or snapshot), or from any context where the current selection implies selecting a dataspace.
• Dataset: can be launched from the actions menu of a dataset, or from any context where the current selection implies selecting a dataset.
• TableView: can be launched from the toolbar of a table, regardless of the selected view, or from any context where the current selection implies selecting a table.
• Record: can be launched from the toolbar of a record form, or from any context where the current selection implies selecting a single record.
• Hierarchy: can be launched from the toolbar of a table when a hierarchy view is selected.
• HierarchyNode: can be launched from the menu of a table hierarchy view node. Currently, only record hierarchy nodes are supported.
• Association: can be launched from the target table view of an association, or from any context where the current selection implies selecting the target table view of an association.
• AssociationRecord: can be launched from the form of a target record of an association node, or from any context where the current selection implies selecting a single association target record.
85.2 Declaration
A user service can be declared at two levels:
• Module,
• Data model.
A service declared by a data model can only be launched when the current selection includes a dataset
of this model. The user service cannot be of the Dataspace nature.
A service declared by a module may be launched for any dataspace or dataset.
The declaration can add restrictions on selections that are valid for the user service.
85.3 Display
On the following figure are displayed the functional areas of a user service.
A. Header 1. Breadcrumb
B. Form 2. Message box button
3. Top toolbar
4. Navigation buttons
5. Help button
6. Close button (pop-ups only)
7. Tabs
8. Form pane (one per tab)
9. Bottom buttons
Most areas are optional and customizable. Refer to Quick start [p 533], Implementing a user service
[p 537] and Declaring a user service [p 551] for more details.
CHAPTER 86
Quick start
This chapter contains the following topics:
1. Main classes
2. Hello world
@Override
public void setupDisplay(
UserServiceSetupDisplayContext<DatasetEntitySelection> aContext,
UserServiceDisplayConfigurator aConfigurator)
{
// Set bottom bar
UIButtonSpecNavigation closeButton = aConfigurator.newCloseButton();
closeButton.setDefaultButton(true);
aConfigurator.setLeftButtons(closeButton);
// Set the content callback (this line and the writePane signature are
// restored to complete the fragment).
aConfigurator.setContent(this::writePane);
}
private void writePane(UserServicePaneContext aPaneContext, UserServicePaneWriter aWriter)
{
// Display Hello World!
aWriter.add("<div ");
aWriter.addSafeAttribute("class", UICSSClasses.CONTAINER_WITH_TEXT_PADDING);
aWriter.add(">");
aWriter.add("Hello World!");
aWriter.add("</div>");
}
@Override
public void setupObjectContext(
UserServiceSetupObjectContext<DatasetEntitySelection> aContext,
UserServiceObjectContextBuilder aBuilder)
{
// No context yet.
}
@Override
public void validate(UserServiceValidateContext<DatasetEntitySelection> aContext)
{
// No custom validation is necessary.
}
@Override
public UserServiceEventOutcome processEventOutcome(
UserServiceProcessEventOutcomeContext<DatasetEntitySelection> aContext,
UserServiceEventOutcome anEventOutcome)
{
// By default do not modify the outcome.
return anEventOutcome;
}
}
// Class declaration and service key restored to complete the fragment (key name is illustrative).
public class HelloWorldServiceDeclaration implements UserServiceDeclaration.OnDataset
{
private static final ServiceKey serviceKey = ServiceKey.forName("HelloWorldService");
public HelloWorldServiceDeclaration()
{
}
@Override
public ServiceKey getServiceKey()
{
return serviceKey;
}
@Override
public UserService<DatasetEntitySelection> createUserService()
{
// Creates an instance of the user service.
return new HelloWordService();
}
@Override
public void defineActivation(ActivationContextOnDataset aContext)
{
// The service is activated for all datasets instanciated with
// the associated data model (see next example).
}
@Override
public void defineProperties(UserServicePropertiesDefinitionContext aContext)
{
// This label is displayed in menus that can execute the user service.
aContext.setLabel("Hello World Service");
}
@Override
public void declareWebComponent(WebComponentDeclarationContext aContext)
{
}
}
In this sample, the user service is registered by a data model. The data model needs to define a schema
extension that implements the following code:
public class CustomSchemaExtensions implements SchemaExtensions
{
@Override
public void defineExtensions(SchemaExtensionsContext aContext)
{
// Register the service.
aContext.registerUserService(new HelloWorldServiceDeclaration());
}
}
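For this registration to take effect, the data model must declare the extension class. A sketch of the schema-level declaration, assuming the osd:extensions element used by EBX data models (the class name matches the sample above):
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:osd="urn:ebx-schemas:common_1.0">
  <xs:annotation>
    <xs:appinfo>
      <osd:extensions class="CustomSchemaExtensions" />
    </xs:appinfo>
  </xs:annotation>
  ...
</xs:schema>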
CHAPTER 87
Implementing a user service
This chapter contains the following topics:
1. Implementation interface
2. Life cycle and threading model
3. Object Context
4. Display setup
5. Database updates
6. Ajax
7. REST data services
8. File upload
9. File download
10.User service without display
The implementation interface for each nature:
• Dataspace: UserService<DataspaceEntitySelection>
• Dataset: UserService<DatasetEntitySelection>
• TableView: UserService<TableViewEntitySelection>
• Record: UserService<RecordEntitySelection>
• Hierarchy: UserService<HierarchyEntitySelection>
• HierarchyNode: UserService<HierarchyNodeEntitySelection>
• Association: UserService<AssociationEntitySelection>
• AssociationRecord: UserService<AssociationRecordEntitySelection>
• Discarded when the current page goes out of scope or when the session times out.
Access to this class is synchronized by EBX to make sure that only one HTTP request is processed
at a time. Therefore, the class does not need to be thread-safe.
The user service may have attributes. The state of these attributes will be preserved between HTTP
requests. However, developers must be aware that these attributes should have moderate use of
resources, such as memory, not to overload the EBX server.
In the following sample, an event callback gets the value of the attribute of an object:
// Get value of customer's last name.
ValueContext customerValueContext = aValueContext.getValueContext(customerKey);
String lastName = customerValueContext.getValue(Path.parse("lastName"));
A dynamic object is an object whose schema is defined by the user service itself. An API is provided
to define the schema programmatically. This API allows defining only instance elements (instance
nodes). Defining tables is not supported. It supports most other features available with standard
EBX data models, such as types, labels, custom widgets, enumerations and constraints, including
programmatic ones.
The following sample defines two objects having the same schema:
public class SampleService implements UserService<TableViewEntitySelection>
{
// Define an object key per object:
private final static ObjectKey _PersonObjectKey = ObjectKey.forName("person");
private final static ObjectKey _PartnerObjectKey = ObjectKey.forName("partner");
...
aBuilder.registerBean(_PersonObjectKey, def);
aBuilder.registerBean(_PartnerObjectKey, def);
}
...
}
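The elided part of setupObjectContext typically builds the shared definition. A sketch, assuming the BeanDefinition and BeanElement classes of the userservice schema API (the './lastName' element echoes the earlier ValueContext sample; names are illustrative):
@Override
public void setupObjectContext(
  UserServiceSetupObjectContext<TableViewEntitySelection> aContext,
  UserServiceObjectContextBuilder aBuilder)
{
  // Build a schema shared by both dynamic objects.
  BeanDefinition def = aBuilder.createBeanDefinition();
  BeanElement lastName = def.createElement(Path.parse("./lastName"), SchemaTypeName.XS_STRING);
  // Labels, constraints or widgets can be set on the returned BeanElement.

  aBuilder.registerBean(_PersonObjectKey, def);
  aBuilder.registerBean(_PartnerObjectKey, def);
}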
The method UserService.setupDisplay is called at each request and can set the following:
• The title (the default is the label specified by the user service declaration),
• The contextual help URL,
• The breadcrumbs,
• The toolbar,
• The bottom buttons.
If necessary, the header and the bottom buttons can be hidden.
The display setup is not persisted and, at each HTTP request, is reset to default before calling the method UserService.setupDisplay.
Bottom buttons
Buttons may be of two types: action and submit.
An action button triggers an action event without submitting the form. By default, the user needs to
acknowledge that, by leaving the page, the last changes will be lost. This behavior can be customized.
A submit button triggers a submit event that always submits the form.
The content callback can also be an instance of UserServiceRootTabbedPane, to render an EBX form with tabs. For specific cases, the callback can implement UserServiceRawPane. This interface has restrictions, but is useful when one wants to implement an HTML form that is not managed by EBX.
Toolbars
Toolbars are optional and come in two flavors: the form style and the table view style. The style is selected automatically: toolbars defined for a record use the form style, and toolbars defined for a table use the table view style.
Samples
The following sample implements a button that closes the current user service and redirects the user
back to the current selection, only if saving the data was successful:
public class SampleService implements UserService<...>
{
private final static ObjectKey _RecordObjectKey = ObjectKey.forName("record");
...
@Override
public void setupDisplay(
UserServiceSetupDisplayContext<RecordEntitySelection> aContext,
UserServiceDisplayConfigurator aConfigurator)
{
...
// Define a "save and close" button with callback onSave().
aConfigurator.setLeftButtons(aConfigurator.newSaveCloseButton(this::onSave));
}
// Event callback (signature restored to complete the fragment).
private UserServiceEventOutcome onSave(UserServiceEventContext anEventContext)
{
ProcedureResult result = anEventContext.save(_RecordObjectKey);
if (result.hasFailed())
{
// Save has failed. Redisplay the user message.
return null;
}
The following sample is compatible with the Java 6 syntax. Only differences with the previous code
are shown:
public class SampleService implements UserService<...>
{
...
@Override
public void setupDisplay(
UserServiceSetupDisplayContext<RecordEntitySelection> aContext,
UserServiceDisplayConfigurator aConfigurator)
{
...
// Define a "save and close" button with callback onSave().
aConfigurator.setLeftButtons(aConfigurator.newSaveCloseButton(new UserServiceEvent() {
@Override
public UserServiceEventOutcome processEvent(UserServiceEventContext anEventContext)
{
return onSave(anEventContext);
}
}));
}
}
The following sample implements a URL that closes the service and redirects the current user to
another user service:
public class SampleService implements UserService<...>
{
...
private void writePane(UserServicePaneContext aPaneContext, UserServicePaneWriter aWriter)
{
// Displays a URL that redirects the current user.
String url = aWriter.getURLForAction(this::goElsewhere);
aWriter.add("<a ");
aWriter.addSafeAttribute("href", url);
aWriter.add(">Go elsewhere</a");
}
The following sample is a more complex "wizard" service that includes three steps, each having its
own UserService.setupDisplay method:
...
@Override
public void setupDisplay(
UserServiceSetupDisplayContext<DataspaceEntitySelection> aContext,
UserServiceDisplayConfigurator aConfigurator)
{
...
@Override
public UserServiceEventOutcome processEventOutcome(
UserServiceProcessEventOutcomeContext<DataspaceEntitySelection> aContext,
UserServiceEventOutcome anEventOutcome)
{
// Custom outcome value processing. The opening of the switch, absent from this
// excerpt, is restored assuming a custom enum implementing UserServiceEventOutcome.
switch ((WizardStep) anEventOutcome)
{
case displayStep2:
this.step = new WizardStep2();
break;
case displayStep3:
this.step = new WizardStep3();
break;
}
...
The following sample updates the database using a Procedure:
import com.orchestranetworks.service.*;
import com.orchestranetworks.userservice.*;
// Event callback.
private UserServiceEventOutcome onUpdateSomething(UserServiceEventContext aContext)
{
Procedure procedure = new Procedure()
{
public void execute(ProcedureContext aContext) throws Exception
{
// Code that updates database should be here.
...
}
};
return null;
}
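Creating the Procedure object does not run it: it must be executed against a dataspace. A sketch of one way to do so, assuming the ProgrammaticService API (obtaining the session and the target dataspace is left to the caller):
import com.onwbp.adaptation.AdaptationHome;
import com.orchestranetworks.service.*;

public class ProcedureRunner
{
  // Executes a procedure on the given dataspace on behalf of the given session.
  public static ProcedureResult run(Session aSession, AdaptationHome aDataspace, Procedure aProcedure)
  {
    ProgrammaticService svc = ProgrammaticService.createForSession(aSession, aDataspace);
    return svc.execute(aProcedure);
  }
}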
87.6 Ajax
A user service can implement Ajax callbacks. An Ajax callback must implement the interface UserServiceAjaxRequest.
The client calls an Ajax callback using the URL generated by UserServiceResourceLocator.getURLForAjaxRequest.
To facilitate the use of Ajax components, EBX provides the JavaScript prototype EBX_AJAXResponseHandler for sending the request and handling the response. For more information on EBX_AJAXResponseHandler, see UserServiceAjaxRequest.
The following sample implements an Ajax callback that returns partial HTML:
public class AjaxSampleService implements UserService<DataspaceEntitySelection>
{
...
@Override
public void setupDisplay(
UserServiceSetupDisplayContext<DataspaceEntitySelection> aContext,
UserServiceDisplayConfigurator aConfigurator)
{
aConfigurator.setLeftButtons(aConfigurator.newCloseButton());
aConfigurator.setContent(this::writePane);
}
/**
* Displays a URL that will execute the callback
* and display the returned partial HTML inside a <div> tag.
*/
private void writePane(UserServicePaneContext aPaneContext, UserServicePaneWriter aWriter)
{
// Generate the URL of the Ajax callback.
String url = aWriter.getURLForAjaxRequest(this::ajaxCallback);
// The id of the <div> that will display the partial HTML returned by the Ajax callback.
String divId = "sampleId";
aWriter.add("<div ");
aWriter.addSafeAttribute("class", UICSSClasses.CONTAINER_WITH_TEXT_PADDING);
aWriter.add(">");
// Output the <div> tag that will display the partial HTML returned by the callback.
aWriter.add("<div ");
aWriter.addSafeAttribute("id", divId);
aWriter.add("></div>");
aWriter.add("</div>");
aWriter.addJS_cr(" ajaxHandler.sendRequest(url);");
aWriter.addJS_cr("}");
}
/**
* The Ajax callback that returns partial HTML.
*/
private void ajaxCallback(
UserServiceAjaxContext anAjaxContext,
UserServiceAjaxResponse anAjaxResponse)
{
UserServiceWriter writer = anAjaxResponse.getWriter();
writer.add("<p style=\"color:green\">Ajax callback succeeded!</p>");
writer.add("<p>Current data and time is: ");
writer.add("</p>");
}
}
For more information on REST data services see the Built-in RESTful services [p 623].
The following sample implements a REST data service call whose response is printed in a textarea:
public class RestCallSampleService implements UserService<DataspaceEntitySelection>
{
...
@Override
public void setupDisplay(
UserServiceSetupDisplayContext<DataspaceEntitySelection> aContext,
UserServiceDisplayConfigurator aConfigurator)
{
aConfigurator.setLeftButtons(aConfigurator.newCloseButton());
aConfigurator.setContent(this::writePane);
}
...
@Override
public void setupObjectContext(
UserServiceSetupObjectContext<DataspaceEntitySelection> aContext,
UserServiceObjectContextBuilder aBuilder)
{
if (aContext.isInitialDisplay())
{
// Create a definition for the "model" object.
BeanElement element;
@Override
public void setupDisplay(
UserServiceSetupDisplayContext<DataspaceEntitySelection> aContext,
UserServiceDisplayConfigurator aConfigurator)
{
aConfigurator.setTitle("File upload service");
aConfigurator.setLeftButtons(aConfigurator.newSubmitButton("Upload", this::onUpload),
aConfigurator.newCancelButton());
aConfigurator.setContent(this::writePane);
}
...
private void writePane(UserServicePaneContext aPaneContext, UserServicePaneWriter aWriter)
{
aWriter.setCurrentObject(_File_ObjectKey);
aWriter.startTableFormRow();
// Title input.
aWriter.addFormRow(_Title);
... // The file upload widget row is elided in this extract.
aWriter.endTableFormRow();
}
...
// Event callback invoked by the "Upload" button.
private UserServiceEventOutcome onUpload(UserServiceEventContext anEventContext)
{
... // Retrieval of the uploaded file object is elided in this extract.
InputStream in;
try
{
in = file.getInputStream();
}
catch (IOException e)
{
// Should not happen.
anEventContext.addError("Cannot read file.");
return null;
}
... // Processing of the uploaded stream is elided.
}
The following extract implements a file download service:
@Override
public void setupDisplay(
UserServiceSetupDisplayContext<DataspaceEntitySelection> aContext,
UserServiceDisplayConfigurator aConfigurator)
{
aConfigurator.setLeftButtons(aConfigurator.newCloseButton());
aConfigurator.setContent(this::writePane);
}
aWriter.add("<a ");
aWriter.addSafeAttribute("href", downloadURL);
aWriter.add(">Click here to download a sample file</a>");
aWriter.add("</div>");
}
out.println("Hello !");
out.println("This is a sample text file downloaded on " + format.format(now)
+ ", from EBX.");
out.close();
}
}
A user service can also run without display. This type of service must implement the interface
UserServiceExtended and perform its work directly in its initialize method.
The following sample deletes selected records in the current table view:
public class DeleteRecordsService implements UserServiceExtended<TableViewEntitySelection>
{
...
@Override
public UserServiceEventOutcome initialize(
UserServiceInitializeContext<TableViewEntitySelection> aContext)
{
final List<AdaptationName> records = new ArrayList<>();
... // The deletion of the selected records and the returned outcome are elided.
}
@Override
public void setupObjectContext(
UserServiceSetupObjectContext<TableViewEntitySelection> aContext,
UserServiceObjectContextBuilder aBuilder)
{
//Do nothing.
}
@Override
public void setupDisplay(
UserServiceSetupDisplayContext<TableViewEntitySelection> aContext,
UserServiceDisplayConfigurator aConfigurator)
{
//Do nothing.
}
@Override
public void validate(UserServiceValidateContext<TableViewEntitySelection> aContext)
{
//Do nothing.
}
@Override
public UserServiceEventOutcome processEventOutcome(
UserServiceProcessEventOutcomeContext<TableViewEntitySelection> aContext,
UserServiceEventOutcome anEventOutcome)
{
return anEventOutcome;
}
}
Known limitation
If such a service is called in the context of a Web component, an association, a perspective action or
a hierarchy node, the service will be launched, initialized and closed, but the service's target entity
will still be displayed.
CHAPTER 88
Declaring a user service
This chapter contains the following topics:
1. Declaration interface
2. Life cycle and threading model
3. Registration
4. Service properties
5. Service activation scope
6. Web component declaration
7. User service groups
A user service declaration must implement the interface that corresponds to the nature of its target selection:
Dataspace: UserServiceDeclaration.OnDataspace
Dataset: UserServiceDeclaration.OnDataset
TableView: UserServiceDeclaration.OnTableView
Record: UserServiceDeclaration.OnRecord
Hierarchy: UserServiceDeclaration.OnHierarchy
HierarchyNode: UserServiceDeclaration.OnHierarchyNode
Association: UserServiceDeclaration.OnAssociation
AssociationRecord: UserServiceDeclaration.OnAssociationRecord
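For illustration, here is a minimal sketch of a complete table-level declaration, combining the fragments shown later in this chapter. The getServiceKey and createUserService methods, the ServiceKey factory method, the setLabel call and the import of the declaration package are assumptions based on the declaration contract, not confirmed by this document:
import com.orchestranetworks.service.*;
import com.orchestranetworks.userservice.*;
import com.orchestranetworks.userservice.declaration.*; // package name is an assumption

public class CustomServiceDeclaration implements UserServiceDeclaration.OnTableView
{
	@Override
	public ServiceKey getServiceKey() // method name is an assumption
	{
		// Identifies the service within its module (factory method name is an assumption).
		return ServiceKey.forModuleServiceName("customModule", "customService");
	}

	@Override
	public UserService<TableViewEntitySelection> createUserService() // assumption
	{
		// A new service instance is created for each execution.
		return new DeleteRecordsService();
	}

	@Override
	public void defineProperties(UserServicePropertiesDefinitionContext aContext)
	{
		aContext.setLabel("Delete the selected records"); // setLabel is an assumption
	}

	@Override
	public void defineActivation(ActivationContextOnTableView aContext)
	{
		// Activate on all dataspaces and require a non-empty record selection.
		aContext.includeAllDataspaces(DataspaceType.BRANCH);
		aContext.forbidEmptyRecordSelection();
	}

	@Override
	public void declareWebComponent(WebComponentDeclarationContext aContext)
	{
		aContext.setAvailableAsWorkflowUserTask(true);
	}
}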
88.3 Registration
A user service declaration must be registered by a module or a data model.
Registration by a module is achieved in the module registration servlet, with code similar to:
public class CustomRegistrationServlet extends ModuleRegistrationServlet
{
@Override
public void handleServiceRegistration(ModuleServiceRegistrationContext aContext)
{
// Register custom user service declaration.
aContext.registerUserService(new CustomServiceDeclaration());
}
}
For more information on the module registration servlet, see module registration [p 428] and
ModuleRegistrationServlet.
@Override
public void defineActivation(ActivationContextOnTableView aContext)
{
// Activates the service in all dataspaces except the "Reference" branch.
aContext.includeAllDataspaces(DataspaceType.BRANCH);
aContext.excludeDataspacesMatching(Repository.REFERENCE, DataspaceChildrenPolicy.NONE);
aContext.forbidEmptyRecordSelection();
}
For more information about declaring the activation scope, see
UserServiceDeclaration.defineActivation.
For more information about the resolution of the user service availability, see Resolving permissions
on services [p 285].
@Override
public void declareWebComponent(WebComponentDeclarationContext aContext)
{
// Makes this web component available when configuring a workflow user task.
aContext.setAvailableAsWorkflowUserTask(true);
}
A user service extension is registered by a module in a similar way:
@Override
public void handleServiceRegistration(ModuleServiceRegistrationContext aContext)
{
// Register user service extension declaration.
aContext.registerUserServiceExtension(new ServiceExtensionDeclaration());
}
}
For more information on the module registration servlet, see module registration [p 428] and
ModuleRegistrationServlet.
Registration by a data model is achieved in its schema extension:
@Override
public void defineExtensions(SchemaExtensionsContext aContext)
{
// Register user service extension declaration.
aContext.registerUserServiceExtension(new ServiceExtensionDeclaration());
}
}
For more information on the data model extension, see SchemaExtensions.
@Override
public void handleServiceRegistration(ModuleServiceRegistrationContext aContext)
{
// In CustomModuleConstants,
// CUSTOM_SERVICE_GROUP_KEY = ServiceGroupKey.forServiceGroupInModule("customModule", "customGroup")
...
}
@Override
public void defineProperties(UserServicePropertiesDefinitionContext aContext)
{
// associates the current service to the CUSTOM_SERVICE_GROUP_KEY group
aContext.setGroup(CustomModuleConstants.CUSTOM_SERVICE_GROUP_KEY);
}
}
A service can be associated with either a built-in or a custom service group. When a service is
associated with a built-in group, it is displayed in that group, just like the other built-in services
belonging to it.
CHAPTER 89
Development recommendations
This chapter contains the following topics:
1. HTML
2. CSS
3. JavaScript
89.1 HTML
It is recommended to minimize the inclusion of specific HTML styles and tags to allow the default
styles of EBX to apply to custom interfaces. The approach of the API is to automatically apply a
standardized style to all elements on HTML pages, while simplifying the implementation process for
the developer.
XHTML
EBX is a Rich Internet Application developed in XHTML 1.0 Transitional. This means that the
HTML structure is strict XML and that all tags must be closed, including "br" tags. This structure
allows for greater control over CSS rules, with fewer differences in browser rendering.
iFrames
Using iFrames is allowed in EBX, especially in combination with a URL provided by a
UIHttpManagerComponent. For technical reasons, it is advised to set the src attribute of an iFrame
using JavaScript only. In this way, the iFrame will be loaded once the page is fully rendered and when
all the built-in HTML components are ready.
Example
The following line, written from any UIComponentWriter, sets the src of an iFrame to a URL built
with a UIHttpManagerComponent:
writer.addJS("document.getElementById(\"").addJS(iFrameId).addJS("\").src = \"").addJS(iFrameURL).addJS("\";");
89.2 CSS
Public CSS classes
The constant catalog UICSSClasses offers the main CSS classes used by the software to style its
components. These CSS classes ensure a proper long-term integration into the software: they follow
the background colors, borders and text styles customizable in the administration; the margins and
paddings that vary according to the configured display density; the style of the icons; etc.
Advanced CSS
EBX allows integrating one or more external Cascading Style Sheets into all of its pages. These
external CSS files, considered as resources, need to be declared in the Module registration [p 428].
In order to ensure the proper functioning of your CSS rules and properties without altering the
software, the following recommendations should be respected. Failure to respect these rules could
lead to:
• Improper functioning of the software, both aesthetically and functionally: some data may no
longer be displayed and some input components may stop working.
• Improper functioning of your CSS rules and properties, since the native CSS rules will impact
your CSS implementation.
If you do not prefix your CSS selectors, as in the example below, they will cause conflicts and
corrupt the EBX user interface.
Do
#myCustomComponent li.selected {
background-color: red;
}
89.3 JavaScript
Public JS functions
The catalog of JavaScript functions JavaScriptCatalog offers a list of functions to use directly
(through copy-paste) in JS files.
This JavaScript is executed once the whole page is loaded. It is possible to directly manipulate the
HTML elements written with UIBodyWriter.add. Setting on-load functions (such as window.onload
= myFunctionToCallOnload;) is not supported, because the execution context comes after the on-load
event.
Advanced JavaScript
EBX allows including one or more external JavaScript files. These external JavaScript files,
considered as resources, need to be declared in the Module registration [p 428]. For performance reasons,
it is recommended to include a JavaScript resource only when necessary (in a user service or a
specific form, for example). The UIDependencyRegisterer API allows a developer to specify the
conditions under which the JavaScript resources are integrated into a given page, according to its
context.
In order to ensure the proper functioning of your JavaScript resources without altering the software,
the following recommendations should be respected. Failure to respect them could lead to:
• Improper functioning of the software: if functions or global variables of the software were to be
erased, some input or display components (including the whole screen) may stop working.
• Improper functioning of your JavaScript instructions, since global variables or function names
could be erased.
Reserved JS prefixes
The following prefixes are reserved and should not be used to create variables, functions, methods,
classes, etc.
CHAPTER 90
Introduction
This chapter contains the following topics:
1. Overview
2. Activation and configuration
3. Interactions
4. Security
5. Monitoring
6. SOAP and REST comparative
7. Limitations
90.1 Overview
Data services allow external systems to interact with the data governed in the EBX repository using
the SOAP/Web Services Description Language (WSDL) standards.
In order to invoke SOAP operations [p 581], for an integration use case, a WSDL [p 573] must be
generated from a data model. It will be possible to perform operations such as:
• Selecting, inserting, updating, deleting, or counting records
• Selecting or counting history records
• Selecting dataset values
• Getting the differences on a table between dataspaces or snapshots, or between two datasets based
on the same data model
• Getting the credentials of records
Other generic WSDLs can be generated and allow performing operations such as:
• Creating, merging, or closing a dataspace
• Creating or closing a snapshot
• Validating a dataset, dataspace, or a snapshot
Note
See SOAP and REST comparative [p 570].
90.3 Interactions
Input and output message encoding
All input messages must be exclusively in UTF-8. All output messages are in UTF-8.
Tracking information
Depending on the data services operation being called, it may be possible to specify session tracking
information.
• For example, for a SOAP operation, the request header contains:
<SOAP-ENV:Header>
<!-- optional security header here -->
<m:session xmlns:m="urn:ebx-schemas:dataservices_1.0">
<trackingInformation>String</trackingInformation>
</m:session>
</SOAP-ENV:Header>
Session parameters
Depending on the data services operation being called, it is possible to specify session input
parameters. They are defined in the request body.
Input parameters are available on custom Java components with a session object, such as: triggers,
access rules, custom web services. They are also available on data workflow operations.
• For example, for a SOAP operation, the optional request header contains:
<SOAP-ENV:Header>
<!-- optional security header here -->
<m:session xmlns:m="urn:ebx-schemas:dataservices_1.0">
<!-- optional trackingInformation header here -->
<inputParameters>
<parameter>
<name>String</name>
<value>String</value>
</parameter>
<!-- for some other parameters, copy complex
element 'parameter' -->
</inputParameters>
</m:session>
</SOAP-ENV:Header>
Exception handling
In case of unexpected server error upon execution of:
• A SOAP operation, a SOAP exception response is returned to the caller via the soap:Fault
element. For example:
<soapenv:Fault>
<faultcode>soapenv:java.lang.IllegalArgumentException</faultcode>
<faultstring />
<faultactor>admin</faultactor>
<detail>
<m:StandardException xmlns:m="urn:ebx-schemas:dataservices_1.0">
<code>java.lang.IllegalArgumentException</code>
<label/>
<description>java.lang.IllegalArgumentException:
Parent home not found at
com.orchestranetworks.XX.YY.ZZ.AA.BB(AA.java:44) at
com.orchestranetworks.XX.YY.ZZ.CC.DD(CC.java:40) ...
</description>
</m:StandardException>
</detail>
</soapenv:Fault>
Using JMS
It is possible to access SOAP operations using JMS instead of HTTP. The JMS architecture relies
on one JMS request queue (mandatory), one JMS failure queue (optional), and JMS response
queues; see the JMS configuration [p 333]. The mandatory queue is the input queue: request messages
must be put in the input queue, and response messages are put by EBX in the replyTo queue of the
JMS request. The optional failure queue allows you to replay an input message if necessary: if it is
set and activated in the configuration file and an exception occurs while handling a request message,
the input message is copied to the failure queue.
The relationship between a request and a response is made by copying the messageId message
identifier field of the JMS request into the correlId correlation identifier field of the response.
JMS location points must be defined in the Lineage administration in order to specialize the generated
WSDL. If no specific location point is given, the default value will be jms:queue:jms/EBX_QueueIn.
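To illustrate this correlation, here is a minimal client-side sketch (not part of EBX), assuming a JMS 2.0 provider and an already configured ConnectionFactory; the reply queue name is hypothetical and error handling is omitted:
import javax.jms.*;

public class EbxJmsCall
{
	public static String call(ConnectionFactory aFactory, String aSoapRequest) throws JMSException
	{
		try (JMSContext context = aFactory.createContext())
		{
			Queue requestQueue = context.createQueue("jms/EBX_QueueIn"); // default input queue
			Queue replyQueue = context.createQueue("jms/EBX_QueueOut");  // hypothetical reply queue

			// Send the SOAP request, indicating where the response should be put.
			TextMessage request = context.createTextMessage(aSoapRequest);
			request.setJMSReplyTo(replyQueue);
			context.createProducer().send(requestQueue, request);

			// EBX copies the request's messageId into the response's correlId:
			// select the response whose correlation id matches the request's message id.
			String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
			try (JMSConsumer consumer = context.createConsumer(replyQueue, selector))
			{
				TextMessage response = (TextMessage) consumer.receive(30_000); // 30 s timeout
				return response == null ? null : response.getText();
			}
		}
	}
}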
90.4 Security
Authentication
Authentication is mandatory to access data. Several authentication methods are available and
described below. The descriptions are ordered by priority (EBX applies the highest priority
authentication method first).
• The 'Basic Authentication Scheme' method is based on the HTTP header Authorization in base 64
encoding, as described in RFC 2617 (Basic Authentication Scheme); see the sketch after this list.
If the user agent wishes to send the userid "Alibaba" and password "open sesame",
it will use the following header field:
> Authorization: Basic QWxpYmFiYTpvcGVuIHNlc2FtZQ==
Note
For REST operations only, the WWW-Authenticate [p 628] header can be valued
with this method.
• 'Standard Authentication Scheme' is based on the HTTP Request. User and password are extracted
from request parameters. For more information on request parameters, see Request parameters
[p 576] section.
• The 'SOAP Security Header Authentication Scheme' method is based on the Web Services
Security UsernameToken Profile 1.0 specification.
By default, the type PasswordText is supported. This is done with the following SOAP-Header
defined in the WSDL:
<SOAP-ENV:Header>
<wsse:Security xmlns:wsse="http://schemas.xmlsoap.org/ws/2002/04/secext">
<wsse:UsernameToken>
<wsse:Username>String</wsse:Username>
<wsse:Password Type="wsse:PasswordText">String</wsse:Password>
</wsse:UsernameToken>
</wsse:Security>
</SOAP-ENV:Header>
Note
Only available for SOAP operations [p 581].
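As an illustration of the 'Basic Authentication Scheme' above, here is a minimal sketch computing the header value for the example credentials; this is standard Java 8+, not an EBX API:
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader
{
	public static void main(String[] args)
	{
		// Credentials taken from the example above.
		String credentials = "Alibaba:open sesame";
		String headerValue = "Basic "
			+ Base64.getEncoder().encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
		// Prints: Authorization: Basic QWxpYmFiYTpvcGVuIHNlc2FtZQ==
		System.out.println("Authorization: " + headerValue);
	}
}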
Global permissions
Global access permissions can be independently defined for the SOAP and WSDL connector accesses.
For more information see Global permissions [p 357].
To override the default security header, use the 'SOAP Header Security declaration' configuration
settings under Administration > Lineage, which include the following fields:
Schema location: The URI of the Security XML Schema to import into the WSDL.
Root element name: The root element name of the security header. The name must be the same as
the one declared in the schema.
The purpose of overriding the default security header is to change the declaration of the WSDL
message matching the security header so that it contains the following:
<wsdl:definitions ... xmlns:MyPrefix="MyTargetNameSpace" ...
...
<xs:schema ...>
<xs:import namespace="MyTargetNameSpace" schemaLocation="MySchemaURI"/>
...
</xs:schema>
...
<wsdl:message name="MySecurityMessage">
<wsdl:part name="MyPartElementName" element="MyPrefix:MySecurityRootElement"/>
</wsdl:message>
...
<wsdl:operation name="...">
<soap:operation soapAction="..." style="document"/>
<wsdl:input>
<soap:body use="literal"/>
<soap:header message="impl:MySecurityMessage" part="MyPartElementName" use="literal"/>
...
</wsdl:operation>
</wsdl:definitions>
A SOAP message using the XML schema and configuration above would have the following header:
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<SOAP-ENV:Header>
<m:MySecurityRootElement xmlns:m="MyNameSpace">
<AuthToken>String</AuthToken>
</m:MySecurityRootElement>
...
</SOAP-ENV:Header>
<SOAP-ENV:Body>
...
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
Note
Only available for SOAP operations [p 581].
Lookup mechanism
Because EBX offers several authentication methods, a lookup mechanism based on conditions
determines which method is applied to a given request. The application conditions of each method
are evaluated in the order of the authentication scheme priorities; if the conditions are not satisfied,
the server evaluates the next method. The available authentication methods for each supported
protocol, together with their application conditions, are ordered from the highest priority to the
lowest.
If multiple authentication methods are present in the same request, EBX will return an HTTP code
401 Unauthorized.
90.5 Monitoring
Data service events can be monitored through the log category ebx.dataServices, as declared in
the EBX main configuration file. For example, ebx.log4j.category.log.dataServices= INFO,
ebxFile:dataservices.
See also
Configuring the EBX logs [p 330]
EBX main configuration file [p 325]
90.6 SOAP and REST comparative
[Table: availability of the operations (for example, validating a dataset or locking a dataspace) in SOAP and in REST, grouped by area: Data, Dataspaces, Workflow, Administration, Other.]
90.7 Limitations
Date, time & dateTime format
Data services only support the following date and time formats:
CHAPTER 91
WSDL generation
This chapter contains the following topics:
1. Supported standard
2. Operation types
3. Supported access methods
4. WSDL download from the data services user interfaces
5. WSDL download using a HTTP request
91.2 Operation types
directory: WSDL for default EBX directory operations. It is also possible to filter data using the
tablePaths [p 577] or operations [p 577] parameters.
userInterface: Deprecated since version 5.8.1. This operation type has been replaced by
administration. While the user interface management operations are still available for backward
compatibility reasons, it is recommended to no longer use this type. WSDL for user interface
management operations (these operations can only be accessed by administrators).
91.3 Supported access methods
userInterface: Deprecated since version 5.8.1. This operation type has been replaced by
administration. While the user interface management operations are still available for backward
compatibility reasons, it is recommended to no longer use this type. Built-in administrator role or
delegated administrator profiles, if all conditions are valid:
• Global access permissions are defined for the administration.
• 'User interface' dataset permissions have write access for the current profile.
administration: Built-in administrator role or delegated administrator profiles, if all conditions are
valid:
• Global access permissions are defined for the administration.
• 'Administration' dataset permissions have write access for the current profile.
Note
See Generating a WSDL for dataspace operations [p 182] in the user guide for more
information.
Request details
• HTTP(S) URL format:
http[s]://<host>[:<port>]/ebx-dataservices/<pathInfo>?<key - values>
Both <pathInfo> and <key - values> are mandatory. For more information on possible values
see: Request parameters [p 576]. An HTTP code is always returned, errors are indicated by an
error code above 400.
• HTTP(S) response status codes:
200 (OK): The WSDL content was successfully generated and is returned by the request (optionally
in an attachment [p 578]).
Note
See WSDL and table operations [p 512] for more information.
500 (Internal error): The request generated an error (a stack trace and a detailed error message are returned).
Note
A detailed error message is returned for the HTTP response with status code 4xx.
POST requests are limited to 2 megabytes or more (depending on the servlet/JSP container). Each
parameter is limited to a value containing 1024 characters.
Request parameters
A request parameter can be specified by one of the following methods:
• a path info on the URL (recommended)
• key values in a standard HTTP parameter.
For more detail, refer to the following table (some parameters do not have a path info representation):
• I = Insert record(s)
• U = Update record(s)
• R = Read operations (equivalent to CEGS)
• S = Select record(s)
• W = Write operations (equivalent to DIU)
namespaceURI (path info: yes; required: yes (**)): Unique namespace URI of the custom web
service. String type value.
(**) Required when the type parameter is set to custom; otherwise ignored.
Request examples
Some of the following examples are displayed in two formats: path info and key - values.
• The WSDL will contain all repository operations.
http[s]://<host>[:<port>]/ebx-dataservices/repository?
WSDL&login=<login>&password=<password>
• The WSDL will contain all tables operations for the dataset 'dataset1' in dataspace
'dataspace1'.
Path info
http[s]://<host>[:<port>]/ebx-dataservices/tables/dataspace1/dataset1?
WSDL&login=<login>&password=<password>
Key - values
http[s]://<host>[:<port>]/ebx-dataservices/tables?
WSDL&login=<login>&password=<password>&branch=<dataspace1>&instance=<dataset1>
• The WSDL will contain all tables, with read operations only, for the dataset 'dataset1' in
dataspace 'dataspace1'.
Path info
http[s]://<host>[:<port>]/ebx-dataservices/tables/dataspace1/dataset1?
WSDL&login=<login>&password=<password>&operations=R
Key - values
http[s]://<host>[:<port>]/ebx-dataservices/tables?
WSDL&login=<login>&password=<password>&
branch=dataspace1&instance=dataset1&operations=R
• The WSDL will contain two selected tables operations for the dataset 'dataset1' in dataspace
'dataspace1'.
Path info
http[s]://<host>[:<port>]/ebx-dataservices/tables/dataspace1/dataset1?
WSDL&login=<login>&password=<password>&tablePaths=/root/table1,/root/table2
Key - values
http[s]://<host>[:<port>]/ebx-dataservices/tables?
WSDL&login=<login>&password=<password>&
branch=dataspace1&instance=dataset1&tablePaths=/root/table1,/root/table2
• The WSDL will contain custom web service operations for the dedicated URI.
Path info
http[s]://<host>[:<port>]/ebx-dataservices/custom/urn:ebx-
test:com.orchestranetworks.dataservices.WSDemo?
WSDL&login=<login>&password=<password>
Key - values
http[s]://<host>[:<port>]/ebx-dataservices/custom?
WSDL&login=<login>&password=<password>&namespaceURI=urn:ebx-
test:com.orchestranetworks.dataservices.WSDemo
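As an illustration, here is a minimal sketch downloading one of the WSDLs above with the JDK HTTP client (Java 11+); the host, credentials, dataspace and dataset names are placeholders:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WsdlDownload
{
	public static void main(String[] args) throws Exception
	{
		// Placeholder host, dataspace, dataset and credentials.
		String url = "http://localhost:8080/ebx-dataservices/tables/dataspace1/dataset1"
			+ "?WSDL&login=admin&password=admin";
		HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
		HttpResponse<String> response = HttpClient.newHttpClient()
			.send(request, HttpResponse.BodyHandlers.ofString());
		// 200 means the WSDL was generated; codes of 400 and above indicate errors.
		System.out.println(response.statusCode());
		System.out.println(response.body());
	}
}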
CHAPTER 92
SOAP operations
This chapter contains the following topics:
1. Operations generated from a data model
2. Operations on datasets and dataspaces
3. Operations on data workflows
4. Administrative services
Attention
Since the WSDL and the SOAP operations tightly depend on the data model structure, it is important
to redistribute the up-to-date WSDL after any data model change.
Content policy
Access to the content of records, and the presence or absence of XML elements, depends on the
resolved permissions [p 273] of the authenticated user session. Additional aspects, detailed below,
can impact the content.
• On response, if the schema property has been changed to false, WSDL validation, if activated,
will return an error. This 'Default view' setting is defined inside the data model.
See also
Hiding a field in Data Services [p 506]
Permissions [p 273]
Association field
Read access on table records can export the association fields as displayed in the user interface. This
feature can be combined with the 'hiddenInDataServices' model parameter.
Note
Limitations: change and update operations do not manage association fields. Also, the
select operation only exports the first level of association elements (the content of
associated objects cannot contain association elements).
branch: The identifier of the dataspace to which the dataset belongs. (Required: either this parameter
or the 'version' parameter must be defined; required for the 'insert', 'update' and 'delete'
operations.)
version: The identifier of the snapshot to which the dataset belongs. (Required: either this parameter
or the 'branch' parameter must be defined.)
instance: The unique name of the dataset which contains the table to query. (Required: yes)
predicate: XPath predicate [p 225] that defines the records on which the request is applied. If empty,
all records will be retrieved, except for the 'delete' operation where this field is mandatory.
(Required: only for the 'delete' operation.)
data: Contains the records to be modified, represented within the structure of their data model. The
whole operation is equivalent to an XML import. The details of the operations performed on data
content are specified in the section Import [p 213]. (Required: only for the 'insert' and 'update'
operations.)
viewPublication: This parameter can be combined with the predicate [p 583] parameter as a logical
AND operation. The behavior of this parameter is described in the section EBX as a Web Component
[p 194]. It cannot be used if the 'viewId' parameter is used, and cannot be used on hierarchical views.
(Required: no)
viewId: Deprecated since version 5.2.3. This parameter has been replaced by the parameter
'viewPublication'. While it remains available for backward compatibility, it will eventually be
removed in a future version. This parameter cannot be used if the 'viewPublication' parameter is
used. (Required: no)
blockingConstraintsDisabled: This property is available for all table update data service operations.
If true, the validation process disables blocking constraints defined in the data model. If this
parameter is not present, the default is false. See Blocking and non-blocking constraints [p 488] for
more information. (Required: no)
Select operations
with:
includesTechnicalData: If true, the response will contain technical data; see also the optimistic
locking [p 600] section. Each returned record will contain additional attributes for this technical
information, for instance:
...<table
ebxd:lastTime="2010-06-28T10:10:31.046"
ebxd:lastUser="Uadmin"
ebxd:uuid="9E7D0530-828C-11DF-B733-0012D01B6E76">...
(Required: no)
exportCredentials: If true, the select will also return the credentials for each record. (Required: no)
pageSize (nested under the pagination element): When pagination is enabled, defines the number of
records to retrieve. (Required: yes, when pagination is enabled)
previousPageLastRecordPredicate (nested under the pagination element): When pagination is
enabled, an XPath predicate that defines the record after which the page must be fetched; this value
is provided by the previous response, as the element lastRecordPredicate. If the passed record is
not found, the first page will be returned. (Required: no)
<c>W</c>
<d>W</d>
</TableName>
</XX>
</credentials>
<lastRecordPredicate>./a='key1'</lastRecordPredicate>
</ns1:select_{TableName}Response>
with:
data: Content of the records, displayed following the table path.
credentials: Contains the access rights for each node of each record.
lastRecordPredicate: Only returned if pagination is enabled; identifies the last record of the page,
to be used in the next request in the element previousPageLastRecordPredicate.
Delete operation
Deletes records or, for a child dataset, defines the record state as "occulting" or "inherited" according
to the record context. Records are selected by the predicate parameter.
Delete request
<m:delete_{TableName} xmlns:m="urn:ebx-schemas:dataservices_1.0">
<branch>String</branch>
<version>String</version>
<instance>String</instance>
<predicate>String</predicate>
<includeOcculting>boolean</includeOcculting>
<inheritIfInOccultingMode>boolean</inheritIfInOccultingMode>
<checkNotChangedSinceLastTime>dateTime</checkNotChangedSinceLastTime>
<blockingConstraintsDisabled>boolean</blockingConstraintsDisabled>
<details locale="Locale"/>
</m:delete_{TableName}>
with:
occultIfInherit: Deprecated since version 5.7.0. Occults the record if it is in inherited mode. Default
value is false. (Required: no)
checkNotChangedSinceLastTime: Timestamp used to ensure that the record has not been modified
since the last read. See also the optimistic locking [p 600] section. (Required: no)
Delete response
If one of the provided parameters is illegal, if a required parameter is missing, if the action is not
authorized or if no record is selected, an exception is returned. Otherwise, the specific response is
returned:
<ns1:delete_{TableName}Response xmlns:ns1="urn:ebx-schemas:dataservices_1.0">
<status>String</status>
<blockingConstraintMessage>String</blockingConstraintMessage>
</ns1:delete_{TableName}Response>
with:
status: '00' indicates that the operation has been executed successfully.
'95' indicates that at least one operation has violated a blocking constraint, resulting in the overall
operation being aborted.
Count operation
Count request
<m:count_{TableName} xmlns:m="urn:ebx-schemas:dataservices_1.0">
<branch>String</branch>
<version>String</version>
<instance>String</instance>
<predicate>String</predicate>
</m:count_{TableName}>
with the same 'branch', 'version', 'instance' and 'predicate' elements as described for the select
operation.
Count response
<ns1:count_{TableName}Response xmlns:ns1="urn:ebx-schemas:dataservices_1.0">
<count>Integer</count>
</ns1:count_{TableName}Response>
with:
count: The number of records matching the request.
Update operation
Update request
<m:update_{TableName} xmlns:m="urn:ebx-schemas:dataservices_1.0">
<branch>String</branch>
<version>String</version>
<instance>String</instance>
<updateOrInsert>boolean</updateOrInsert>
<byDelta>boolean</byDelta>
<blockingConstraintsDisabled>boolean</blockingConstraintsDisabled>
<details locale="Locale"/>
<data>
<XX>
<TableName>
<a>String</a>
<b>String</b>
<c>String</c>
<d>String</d>
...
</TableName>
</XX>
</data>
</m:update_{TableName}>
with:
updateOrInsert: If true and the record does not currently exist, the operation creates the record.
Boolean type; the default value is false. (Required: no)
byDelta: If true and an element does not currently exist in the incoming message, the target value
is not changed. If false and the node is declared hiddenInDataServices, the target value is not
changed. The complete behavior is described in the section Insert and update operations [p 214].
(Required: no)
Update response
<ns1:update_{TableName}Response xmlns:ns1="urn:ebx-schemas:dataservices_1.0">
<status>String</status>
<blockingConstraintMessage>String</blockingConstraintMessage>
</ns1:update_{TableName}Response>
with:
status: '00' indicates that the operation has been executed successfully.
'95' indicates that at least one operation has violated a blocking constraint, resulting in the overall
operation being aborted.
Insert operation
Insert request
<m:insert_{TableName} xmlns:m="urn:ebx-schemas:dataservices_1.0">
<branch>String</branch>
<version>String</version>
<instance>String</instance>
<byDelta>boolean</byDelta>
<blockingConstraintsDisabled>boolean</blockingConstraintsDisabled>
<details locale="Locale"/>
<data>
<XX>
<TableName>
<a>String</a>
<b>String</b>
<c>String</c>
<d>String</d>
...
</TableName>
</XX>
</data>
</m:insert_{TableName}>
with:
byDelta: If true and an element does not currently exist in the incoming message, the target value
is not changed. If false and the node is declared hiddenInDataServices, the target value is not
changed. The complete behavior is described in the section Insert and update operations [p 214].
(Required: no)
Insert response
<ns1:insert_{TableName}Response xmlns:ns1="urn:ebx-schemas:dataservices_1.0">
<status>String</status>
<blockingConstraintMessage>String</blockingConstraintMessage>
<inserted>
<predicate>./a='String'</predicate>
</inserted>
</ns1:insert_{TableName}Response>
with:
status: '00' indicates that the operation has been executed successfully.
'95' indicates that at least one operation has violated a blocking constraint, resulting in the overall
operation being aborted.
blockingConstraintMessage: This element is present if the status is equal to '95', with a localized
message. The locale of the message is retrieved from the request parameter or from the user session.
predicate: A predicate matching the primary key of the inserted record. When several records are
inserted, the predicates follow the declaration order of the records in the input message.
with:
compareWithBranch: The identifier of the dataspace with which to compare. (Required: one of either
this parameter or the 'compareWithVersion [p 593]' parameter must be defined.)
compareWithVersion: The identifier of the snapshot with which to compare. (Required: one of either
this parameter or the 'compareWithBranch [p 593]' parameter must be defined.)
includeInstanceUpdates: Defines if the content updates of the dataset are included. Default is false.
(Required: no)
pageSize (nested under the pagination element): Defines the maximum number of records in each
page. Minimal size is 50. (Required: no; only for creation)
context (nested under the pagination element): Defines the content of the pagination context.
(Required: no; only for next)
Note
If none of the compareWithBranch or compareWithVersion parameters are specified, the
comparison will be made with their parent:
• if the current dataspace or snapshot is a dataspace, the comparison is made with its initial
snapshot (includes all changes made in the dataspace);
• if the current dataspace or snapshot is a snapshot, the comparison is made with its parent
dataspace (includes all changes made in the parent dataspace since the current snapshot
was created);
• returns an exception if the current dataspace is the 'Reference' dataspace.
<b>BVALUE2.1</b>
<c>CVALUE2.1</c>
<d>DVALUE2</d>
</TableName>
</XX>
</data>
</updated>
<deleted>
<predicate>./a='AVALUE1'</predicate>
</deleted>
</ns1:getChanges_{TableName}Response>
with:
changes (nested under the updated element): Only the groups of fields that have been updated.
(Always returned: yes)
change (nested under the changes element): A group of fields that has been updated, with the
record's own XPath predicate attribute. (Always returned: yes)
data (nested under the updated element): The content under this element corresponds to an XML
export of the dataset or of the updated records. (Always returned: no)
context (nested under the pagination element): Defines the content of the pagination context.
(Always returned: yes; only for next and last)
identifier (nested under the context element): Pagination context identifier. Not defined on the last
returned page. (Always returned: no)
For creation:
Extract of request:
...
<pagination>
<!-- on first request for creation -->
<pageSize>Integer</pageSize>
</pagination>
...
Extract of response:
...
<pagination>
<!-- on next request to continue -->
<context>
<identifier>String</identifier>
<pageNumber>Integer</pageNumber>
<totalPages>Integer</totalPages>
</context>
</pagination>
...
For next:
Extract of request:
...
<pagination>
<context>
<identifier>String</identifier>
</context>
</pagination>
...
Extract of response:
...
<pagination>
<!-- on next request to continue -->
<context>
<identifier>String</identifier>
<pageNumber>Integer</pageNumber>
<totalPages>Integer</totalPages>
</context>
</pagination>
...
For last:
Extract of request:
...
<pagination>
<context>
<identifier>String</identifier>
</context>
</pagination>
...
Extract of response:
...
<pagination>
<context>
<pageNumber>Integer</pageNumber>
<totalPages>Integer</totalPages>
</context>
</pagination>
...
Multi operation
A multi operation executes several operations sequentially, within a single transaction. When an error
occurs during one operation in the sequence, all updates are rolled back and the client receives a
StandardException error message with details.
See Concurrency and isolation levels [p 432].
<m:multi_ xmlns:m="urn:ebx-schemas:dataservices_1.0">
<branch>String</branch>
<version>String</version>
<instance>String</instance>
<blockingConstraintsDisabled>boolean</blockingConstraintsDisabled>
<details locale="Locale"/>
<request id="id1">
<{operation}_{TableName}>
...
</{operation}_{TableName}>
</request>
<request id="id2">
<{operation}_{TableName}>
...
</{operation}_{TableName}>
</request>
</m:multi_>
with:
request: This element contains one operation, like a single operation without the branch, version
and instance parameters. This element can be repeated multiple times for additional operations.
Each request can be identified by an 'id' attribute; in a response, this 'id' attribute is returned for
identification purposes. The operations can be count, select, getChanges, getCredentials, insert,
delete or update. (Required: yes)
Notes:
• There is no limit on the number of request elements.
• The request id attribute must be unique within a multi-operation request.
• If all operations are read-only (count, select, getChanges, or getCredentials), then the whole
transaction is set as read-only for performance considerations.
Limitations:
• The multi operation applies to one model and one dataset (parameter instance).
• The select operation cannot use the pagination parameter.
See also
Procedure
Repository
Optimistic locking
To prevent an update or a delete operation on a record that was read earlier but may have changed in
the meantime, an optimistic locking mechanism is provided.
A select request can include technical information by adding the element includesTechnicalData:
<m:select_{TableName} xmlns:m="urn:ebx-schemas:dataservices_1.0">
<branch>String</branch>
<version>String</version>
<instance>String</instance>
<predicate>String</predicate>
<includesTechnicalData>boolean</includesTechnicalData>
</m:select_{TableName}>
The value of the lastTime attribute can then be used in the following update request. If the record has
been changed since the specified time, the update will be cancelled. The attribute lastTime has to be
added on the record to prevent the update of a modified record.
<m:update_{TableName} xmlns:m="urn:ebx-schemas:dataservices_1.0">
<branch>String</branch>
<version>String</version>
<instance>String</instance>
<updateOrInsert>true</updateOrInsert>
<data>
<XX>
<TableName ebxd:lastTime="2010-06-28T10:10:31.046">
<a>String</a>
<b>String</b>
<c>String</c>
<d>String</d>
...
</TableName>
</XX>
</data>
</m:update_{TableName}>
The value of the lastTime attribute can also be used to prevent deletion on a modified record:
<m:delete_{TableName} xmlns:m="urn:ebx-schemas:dataservices_1.0">
<branch>String</branch>
<version>String</version>
<instance>String</instance>
<predicate>String</predicate>
<checkNotChangedSinceLastTime>2010-06-28T10:10:31.046</checkNotChangedSinceLastTime>
</m:delete_{TableName}>
Note
The element checkNotChangedSinceLastTime may be used more than once but only for
the same record. This implies that if the predicate element returns more than one record,
the request will fail.
Common parameters
branch: Identifier of the target dataspace on which the operation is applied. When not specified, the
'Reference' dataspace is used, except for the merge dataspace operation where this parameter is
required. (Required: one of either this parameter or the 'version' parameter must be defined;
required for the dataspace merge, locking, unlocking and replication refresh operations.)
version: Identifier of the target snapshot on which the operation is applied. (Required: one of either
this parameter or the 'branch' parameter must be defined.)
instance: The unique name of the dataset on which the operation is applied. (Required: for the
replication refresh operation.)
details: Defines if validation returns details. The optional attribute severityThreshold defines the
lowest severity level of messages to return; the possible values are, in descending order of severity,
'fatal', 'error', 'warning', or 'info'. For example, setting the value to 'error' will return error and
fatal validation messages. If this attribute is not defined, all levels of messages are returned by
default. The optional attribute locale (default 'en-US') defines the language in which the validation
messages are to be returned. (Required: no; if not specified, no details are returned.)
locale (nested under the documentation element): Locale of the documentation. (Required: only
when the documentation element is used.)
Validate a dataspace
Validate a dataset
<path>Path</path>
</subject>
</reportItem>
</details>
</validationReport>
</ns1:validateInstance_Response>
Create a dataspace
with:
status: '00' indicates that the operation has been executed successfully.
Create a snapshot
with:
status: '00' indicates that the operation has been executed successfully.
Locking a dataspace
with:
durationToWaitForLock: Defines the maximum duration (in seconds) that the operation waits for a
lock before aborting. (Required: no; by default the operation does not wait.)
message: User message of the lock. Multiple message elements may be used. (Required: no)
with:
status: '00' indicates that the operation has been executed successfully.
'94' indicates that the dataspace has already been locked by another user.
Otherwise, a SOAP exception is thrown.
Unlocking a dataspace
with:
status: '00' indicates that the operation has been executed successfully.
Otherwise, a SOAP exception is thrown.
Merge a dataspace
with:
deleteDataOnMerge: This parameter is available for the merge dataspace operation. Sets whether the
specified dataspace and its associated snapshots will be deleted upon merge. When this parameter
is not specified in the request, the default value is false. It is possible to redefine the default
value by specifying the property ebx.dataservices.dataDeletionOnCloseOrMerge.default in the
EBX main configuration file [p 332]. See Deleting data and history [p 352] for more information.
(Required: no)
deleteHistoryOnMerge: This parameter is available for the merge dataspace operation. Sets whether
the history associated with the specified dataspace will be deleted upon merge. When this parameter
is not specified in the request, the default value is false. It is possible to redefine the default
value by specifying the property ebx.dataservices.historyDeletionOnCloseOrMerge.default in
the EBX main configuration file [p 332]. See Deleting data and history [p 352] for more information.
(Required: no)
Note
The merge decision step is bypassed during merges performed through data services. In
such cases, the data in the child dataspace automatically overrides the data in the parent
dataspace.
with:
status: '00' indicates that the operation has been executed successfully.
Close a dataspace or a snapshot
with:
deleteDataOnClose: This parameter is available for the close dataspace and close snapshot
operations. Sets whether the specified snapshot, or dataspace and its associated snapshots, will be
deleted upon closure. When this parameter is not specified in the request, the default value is
false. It is possible to redefine this default value by specifying the property
ebx.dataservices.dataDeletionOnCloseOrMerge.default in the EBX main configuration file
[p 332]. See Deleting data and history [p 352] for more information. (Required: no)
deleteHistoryOnClose: This parameter is available for the close dataspace operation. Sets whether
the history associated with the specified dataspace will be deleted upon closure. When this
parameter is not specified in the request, the default value is false. It is possible to redefine the
default value by specifying the property
ebx.dataservices.historyDeletionOnCloseOrMerge.default in the EBX main configuration file
[p 332]. See Deleting data and history [p 352] for more information. (Required: no)
Replication refresh
with:
status: '00' indicates that the operation has been executed successfully.
parameters: Deprecated since version 5.7.0. While it remains available for backward compatibility,
it will eventually be removed in a future major version. (Required: no)
Note
The parameters element is ignored if at least one session parameter has been defined.
Start a workflow
Start a workflow from a workflow launcher. It is possible to start a workflow with localized
documentation and specific input parameters (with name and optional value).
Note
The workflow creator is initialized from the session and the workflow priority is retrieved
from the last published version.
Sample request:
<m:workflowProcessInstanceStart xmlns:m="urn:ebx-schemas:dataservices_1.0">
<publishedProcessKey>String</publishedProcessKey>
<documentation>
<locale>Locale</locale>
<label>String</label>
<description>String</description>
</documentation>
</m:workflowProcessInstanceStart>
with:
parameters: Deprecated since version 5.7.0. See the description under Common parameters [p 609].
(Required: no)
Sample response:
<m:workflowProcessInstanceStart_Response xmlns:m="urn:ebx-schemas:dataservices_1.0">
<processInstanceKey>String</processInstanceKey>
</m:workflowProcessInstanceStart_Response>
with:
processInstanceId: Deprecated since version 5.6.1. This parameter has been replaced by the
'processInstanceKey' parameter. While it remains available for backward compatibility, it will
eventually be removed in a future major version. (Returned: no)
Resume a workflow
Resume a workflow in a wait step from a resume identifier. It is possible to define specific input
parameters (with name and optional value).
Sample request:
<m:workflowProcessInstanceResume xmlns:m="urn:ebx-schemas:dataservices_1.0">
<resumeId>String</resumeId>
</m:workflowProcessInstanceResume>
with:
parameters: Deprecated since version 5.7.0. See the description under Common parameters [p 609].
(Required: no)
Sample response:
<m:workflowProcessInstanceResume_Response xmlns:m="urn:ebx-schemas:dataservices_1.0">
<status>String</status>
<processInstanceKey>String</processInstanceKey>
</m:workflowProcessInstanceResume_Response>
with:
status (always returned): '00' indicates that the operation has been executed successfully. '20'
indicates that the workflow has not been found. '21' indicates that the event has already been
received.
processInstanceKey (returned only if the operation has been executed successfully): Identifier of
the workflow.
End a workflow
End a workflow from its identifier.
Sample request:
<m:workflowProcessInstanceEnd xmlns:m="urn:ebx-schemas:dataservices_1.0">
<processInstanceKey>String</processInstanceKey>
</m:workflowProcessInstanceEnd>
with:
Sample response:
<m:workflowProcessInstanceEnd_Response xmlns:m="urn:ebx-schemas:dataservices_1.0">
<status>String</status>
</m:workflowProcessInstanceEnd_Response>
with:
status (always returned): '00' indicates that the operation has been executed successfully.
<firstName>firstname</firstName>
<email>[email protected]</email>
<password>***</password>
<passwordMustChange>true</passwordMustChange>
<builtInRoles>
<administrator>false</administrator>
<readOnly>false</readOnly>
</builtInRoles>
<comments>a comment</comments>
</users>
</directory>
</data>
</m:insert_user>
For the insert SOAP response syntax, see insert response [p 591] for more information.
Parameters
The following parameter is applicable.
details: Defines attributes that must be applied to response messages. The attribute locale (default:
the EBX default locale) defines the language in which the system item messages must be returned.
(Required: no; but if specified, the locale attribute must be provided.)
CHAPTER 93
Introduction
This chapter contains the following topics:
1. Overview
2. Activation and configuration
3. Interactions
4. Security
5. Monitoring
6. SOAP and REST comparative
7. Limitations
93.1 Overview
REST data services allow external systems to interact with the data governed in the EBX repository
using the built-in RESTful services.
Request and response syntax for built-in services are described in the chapter Built-in RESTful
services [p 623].
Built-in REST data services allow performing operations such as:
• Selecting, inserting, updating, deleting, or counting records
• Selecting or counting history records
• Selecting dataset values
• Getting the differences on a table between dataspaces or snapshots, or between two datasets based
on the same data model
• Getting the credentials of records
Note
See SOAP and REST comparative [p 620].
For specific deployments, for example behind a reverse proxy, the URL to ebx-dataservices must
be configured through the lineage administration.
Currently, only the HTTP(S) protocol is supported.
93.3 Interactions
Input and output message encoding
All input and output messages must be exclusively in UTF-8 for the built-in RESTful services.
Tracking information
Depending on the data services operation being called, it may be possible to specify session tracking
information.
• For example, for a RESTful operation, the JSON request contains:
{
"procedureContext": // JSON Object (optional)
{
"trackingInformation": "String" // JSON String (optional)
}, ...
}
Session parameters
Depending on the data services operation being called, it is possible to specify session input
parameters. They are defined in the request body.
Input parameters are available on custom Java components with a session object, such as: triggers,
access rules, custom web services. They are also available on data workflow operations.
• For example, for a RESTful operation, the JSON request contains:
{
"procedureContext": // JSON Object (optional)
{
"trackingInformation": "String", // JSON String (optional)
"inputParameters": // JSON Array (optional)
[
// JSON Object for each parameter
{
"name" : "String" // JSON String (required)
"value" : "String" // JSON String (optional)
},
...
]
}, ...
}
Exception handling
When an error occurs, a JSON exception response is returned to the caller. For example:
{
"code": 999, // JSON Number, HTTP status code
"errors": [
{
"severity": "...", // JSON String, severity (optional)
"rowIndex": 999, // JSON Number, request row index (optional)
"userCode": "...", // JSON String, user code (optional)
"message": "...", // JSON String, message
"details": "...", // JSON String, URL (optional)
The response contains an HTTP status code and a table of errors. The severity of each error is specified
by a character, with one of the possible values (F: fatal, E: error, W: warning, I: information).
The HTTP error 422 (Unprocessable entity) corresponds to a functional error. It contains a user code
under the userCode key and is a JSON String type.
93.4 Security
Authentication
Authentication is mandatory to access built-in services. Several authentication methods are available
and described below. The descriptions are ordered by priority (EBX applies the highest priority
authentication method first).
• The 'Token Authentication Scheme' method is based on the HTTP header Authorization, as
described in RFC 2617 (see the sketch after this list):
> Authorization: <tokenType> <accessToken>
For more information on this authentication scheme, see Token authentication operations [p 632].
• 'Basic Authentication Scheme' method is based on the HTTP-Header Authorization in base 64
encoding, as described in RFC 2617 (Basic Authentication Scheme).
If the user agent wishes to send the userid "Alibaba" and password "open sesame",
it will use the following header field:
> Authorization: Basic QWxpYmFiYTpvcGVuIHNlc2FtZQ==
Note
The WWW-Authenticate [p 628] header can be valued with this method.
• 'Standard Authentication Scheme' is based on the HTTP Request. User and password are extracted
from request parameters. For more information on request parameters, see Request parameters
[p 576] section.
Note
The 'REST Forward Authentication Scheme' is only available for the Built-in
RESTful services [p 623].
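Here is the minimal call sketch referenced in the list above (Java 11+); the URL is a placeholder, and the token type and access token are placeholders obtained from the token creation operation:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestTokenCall
{
	public static void main(String[] args) throws Exception
	{
		// Placeholder URL of a built-in RESTful service.
		String url = "http://localhost:8080/ebx-dataservices/rest/...";
		HttpRequest request = HttpRequest.newBuilder(URI.create(url))
			// tokenType and accessToken come from the token creation operation.
			.header("Authorization", "tokenType accessToken")
			.GET()
			.build();
		HttpResponse<String> response = HttpClient.newHttpClient()
			.send(request, HttpResponse.BodyHandlers.ofString());
		System.out.println(response.statusCode());
		System.out.println(response.body());
	}
}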
Global permissions
Global access permissions can be independently defined for the REST built-in services access. For
more information see Global permissions [p 357].
Lookup mechanism
Because EBX offers several authentication methods, a lookup mechanism based on conditions
determines which method is applied to a given request. The application conditions of each method
are evaluated in the order of the authentication scheme priorities; if the conditions are not satisfied,
the server evaluates the next method. The available authentication methods for each supported
protocol, together with their application conditions, are ordered from the highest priority to the
lowest.
If multiple authentication methods are present in the same request, EBX will return an HTTP code
401 Unauthorized.
93.5 Monitoring
Data service events can be monitored through the log category ebx.dataServices, as declared in
the EBX main configuration file. For example, ebx.log4j.category.log.dataServices= INFO,
ebxFile:dataservices.
See also
Configuring the EBX logs [p 330]
EBX main configuration file [p 325]
93.6 SOAP and REST comparative
[Table: availability of the operations (for example, validating a dataset or locking a dataspace) in SOAP and in REST, grouped by area: Data, Dataspaces, Workflow, Administration, Other.]
93.7 Limitations
Date, time & dateTime format
Data services only support the following date and time formats:
JMS
• The JMS protocol is not supported.
CHAPTER 94
Built-in RESTful services
This chapter contains the following topics:
1. Introduction
2. Request
3. Response
4. Administration operations
5. Token authentication operations
6. Data operations
7. Limitations
94.1 Introduction
The architecture used is ROA (Resource-Oriented Architecture); it can be an alternative to SOA
(Service-Oriented Architecture). The exposed resources are readable and/or writable by third-party
systems, according to the request content.
The HATEOAS approach of the built-in RESTful services also provides intuitive and straightforward
navigation: the details of a piece of data can be obtained through a link.
Note
All operations are stateless.
94.2 Request
This chapter describes the elements used to build a conforming REST request: the HTTP method, the
URL format, the header fields and the message body.
See also
Interactions [p 617]
Security [p 618]
HTTP method
The HTTP methods considered for built-in RESTful services are:
• GET: used to select the master data defined in the URL (the URL size limit depends on the
application server or on the browser, and must be lower than or equal to 2KB).
• POST: used to insert one or more records in a table, or to select the master data defined in the URL
(the size limit is 2MB or more, depending on the application server; each parameter is limited to
a value containing 1024 characters).
• PUT: used to update the master data defined in the URL.
• DELETE: used to delete either the record defined in the URL, or multiple records defined by the
table URL and the record table in the message body.
URL
REST URL contains:
http[s]://<host>[:<port>]/ebx-dataservices/rest/{category}/{categoryVersion}/
{specificPath}[:{extendedAction}]?{queryParameters}
Where:
• {category} corresponds to the operation category [p 625].
• {categoryVersion} corresponds to the category version: current value is v1.
• {specificPath} corresponds to a specific path inside the category.
• {extendedAction} corresponds to the extended action name (optional).
• {queryParameters} corresponds to common [p 627] or dedicated operation parameters passed
by the URL.
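For example, the following illustrative URL (the dataspace, dataset and table path are hypothetical) could select records of a table through the data category; the prefix of the dataspace segment is also an assumption:
http[s]://<host>[:<port>]/ebx-dataservices/rest/data/v1/Bdataspace1/dataset1/root/table1?pageSize=10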
Operation category
The operation category specializes the operation. It is added to the URL path as {category} and takes
one of the following values:
Header fields
These header field definitions are used by EBX.
Accept-Language: Used for specifying the preferred locale for the response. The supported locales
are defined in the data model. If none of the preferred locales is supported, the default locale of the
current model is used.
Content-Type: Used to specify the request body media type. The supported types are
application/json and application/x-www-form-urlencoded. The request value is checked; if it is
not supported, an HTTP error message is returned with the code 415 (Unsupported media type).
See RFC2616 for more information about HTTP Header Field Definitions.
Common parameters
These optional parameters are available for all data service operations.
indent: Specifies if the response should be indented, to be easier for a human to read. Boolean type;
default value is false.
Message body
It contains the request data using the JSON format, see JSON Request body [p 658].
Note
Requests may define a message body only when using POST or PUT HTTP methods.
94.3 Response
This chapter describes the responses returned by built-in RESTful services.
• See Exception handling [p 617] for details on standard error handling (where the HTTP code is
greater than or equal to 300).
Header fields
These header field definitions are used by EBX.
Content-Language Indicates the locale used in the response for labels and
descriptions.
See also
Request header X-Requested-With [p 626]
Authentication [p 618]
HTTP codes
201 (Created) A new record has been created. In this case, the header field Location is returned
with its resource URL.
204 (No content) The request has been successfully handled but no response body is returned.
400 (Bad request) The request URL or body is not well-formed or contains invalid content.
403 (Forbidden) Permission was denied to read or modify the specified resource for the authenticated
user.
This error is also returned when the user:
• is not allowed to modify a field mentioned in the request message body.
• is not allowed to access the REST connector.
For more details, see Global permissions [p 619].
404 (Not found) The resource specified in the URL cannot be found.
406 (Not acceptable) Content type defined in the request's Accept parameter is not supported. This error
can be returned only if the EBX property ebx.rest.request.checkAccept is set to
true.
415 (Unsupported media type) The request content is not supported: the request header value Content-Type is not
supported by the operation.
422 (Unprocessable entity) The new resource's content cannot be accepted for semantic reasons.
500 (Internal error) Unexpected error thrown by the application. Error details can usually be found in
EBX logs.
Message body
The format of the response body content depends on the HTTP code value:
• HTTP codes from 200 (inclusive) to 300 (exclusive): the content format depends on the associated
request (JSON [p 661] samples), with the exception of code 204 (No content), which returns no
body.
• HTTP codes greater than or equal to 300: the content describes the error. See JSON [p 617] for
details on the format.
Note
The administration category and administration dataspaces can only be used by
administrators.
Directory operations
The EBX default directory configuration can be managed with the built-in RESTful services. This
concerns the users and roles tables, the mailing lists and other objects. For more information, see Users
and roles directory [p 373].
Note
Triggers are present on the directory's tables, ensuring data consistency.
As for the 'usersRoles' association table, the management of role inclusions requires manual
operations. Each table is self-descriptive when its metadata is requested.
See select [p 635], update [p 646], insert [p 643] and delete [p 648] operations for more information.
See also
EBX main configuration file [p 325]
Repository administration [p 346]
Parameters
The following parameter is applicable.
Parameter Description
systemInformationMode Specifies the representation of the system information returned: flat or hierarchical.
HTTP codes
400 (Bad request) The request is not correct; it contains one of the following errors:
• the HTTP method is neither GET nor POST,
• the HTTP parameter systemInformationMode is not correct,
• the operation is not supported,
• the request path is invalid.
Response body
It is returned if, and only if, the HTTP code is 200 (OK). The content structure depends on the provided
parameter systemInformationMode or its default value.
See the JSON [p 661] example of the flat representation.
See the JSON [p 662] example of the hierarchical representation.
Note
The token timeout is modifiable through the administration property
ebx.dataservices.rest.auth.token.timeout [p 332] (the default value is 30 minutes).
Note
No other authentication information is required in the HTTP headers.
Message body
The message body must be defined in the request. It necessarily contains a login and a password value
(both are of String type).
See the JSON [p 658] example of a token creation request.
HTTP codes
Response body
If the HTTP code is 200 (OK), the body holds the token value and its type.
See the JSON [p 663] example of a token creation response.
The token can later be used to authenticate a user by setting the HTTP-Header Authorization
accordingly.
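As a sketch, assuming the accessToken and tokenType values returned by the token creation response,
the header would combine them as follows (the exact concatenation below is an illustration based on
those two fields):
Authorization: <tokenType> <accessToken>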
Message body
The message body must be defined in the request. It necessarily contains a password and a
passwordNew; the login is optional (all are of String type).
See the JSON [p 658] example of a password change and token creation request.
HTTP codes
Response body
If HTTP code 204 (No content) is returned, then the password has been modified.
Header fields
HTTP codes
Select operation
Select operation may use one of the following methods:
• GET HTTP method,
• POST HTTP method without message body or
• POST HTTP method with message body and optionally session parameters [p 617].
URL formats are:
• Dataset tree, depending on operation category [p 625]:
The data category returns the hierarchy of the selected dataset, this includes group and table
nodes.
The history category returns the hierarchy of the selected history dataset, this includes the pruned
groups for history table nodes only.
http[s]://<host>[:<port>]/ebx-dataservices/rest/{category}/v1/{dataspace}/
{dataset}
Note
Terminal nodes and sub-nodes are not included.
• Dataset node: the data category returns the terminal nodes contained in the selected node.
http[s]://<host>[:<port>]/ebx-dataservices/rest/data/v1/{dataspace}/{dataset}/
{pathInDataset}
Note
Not applicable with the history category.
Note
The record access by the primary key (primaryKey parameter) is limited to its root
node. It is recommended to use the encoded primary key, available in the details
field, in order to overcome this limitation. Similarly, for a history record, use the
encoded primary key available in the historyDetails field.
The history category returns the field history record content, whose structure depends on its type.
http[s]://<host>[:<port>]/ebx-dataservices/rest/{category}/v1/{dataspace}/
{dataset}/{pathInDataset}/{encodedPrimaryKey}/{pathInRecord}
Note
The field must be either an association node, a selection node, a terminal node or
above.
Where:
• {category} corresponds to the operation category [p 625].
• {dataspace} corresponds to B followed by the dataspace identifier or to V followed by the snapshot
identifier.
• {dataset} corresponds to the dataset identifier.
• {pathInDataset} corresponds to the path of the dataset node, that can be a group node or a table
node.
• {encodedPrimaryKey} corresponds to the encoded representation of the primary key.
• {xpathExpression} corresponds to the record primary key, using the XPath expression.
• {pathInRecord} corresponds to the path starting from the table node.
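As an illustration, the following hypothetical URLs select a table and then one of its records through
the encoded primary key (dataspace, dataset and path values are placeholders):
http://localhost:8080/ebx-dataservices/rest/data/v1/BReference/myDataset/root/table1
http://localhost:8080/ebx-dataservices/rest/data/v1/BReference/myDataset/root/table1/{encodedPrimaryKey}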
Parameters
The following parameters are applicable to the select operation.
Parameter Description
includeContent Includes the content field with the content corresponding to the selection.
Boolean type, default value is true.
includeDetails Includes the details field in the metadata and the response, for each indirectly
reachable resource. The returned value corresponds to its resource URL.
Type Boolean, default value is true.
Note
The includeHistory parameter is ignored in the history
category; there, its default value is true.
See also
includeMeta [p 638]
includeDetails [p 638]
includeLabel Includes the label field associated with each simple type content.
Possible values are:
• yes: the label is included for the foreign key, enumeration, record and selector
values.
• all: the label field is included, as for the yes value and also for the Content of
simple type [p 674].
Note
The label field is not included if it is equal to the content field.
includeMeta Includes the meta field corresponding to the description of the structure returned in
the content field.
Boolean type, default value is false.
See also
includeHistory [p 638]
includeDetails [p 638]
includeSelector Includes the selector field in the response, for each indirectly reachable resource.
The returned value corresponds to its resource URL.
Type Boolean, default value is true.
includeSortCriteria Includes the sortCriteria field corresponding to the list of sort criteria applied.
The sort criteria parameters are added by using:
• sort [p 641]
• sortOnLabel [p 641]
• viewPublication [p 642]
Boolean type, default value is false.
Example JSON [p 670]
Note
This parameter is ignored with the history category.
Table parameters
The following parameters are applicable to tables, associations and selection nodes.
Parameter Description
filter An XPath predicate [p 225] expression that defines the field values to which the
request applies. If empty, all records are retrieved.
String type value.
Note
The history operation code value can be used through the ebx-
operationCode path field from the meta section associated with
this field.
Note
This parameter is ignored with the data category.
primaryKey Searches for a record by its primary key, using an XPath expression. The XPath
predicate [p 225] expression must contain all the fields of the primary key, and only
those fields. Fields are separated by the and operator. A field is expressed in one of
the following ways, according to its simple type:
• For the date, time or dateTime types: use the date-equal(path, value) function.
• For other types: indicate the path, the = operator and the value.
Example with a composite primary key: ./pk1i=1 and date-equal(./
pk2d,'2015-11-13')
The response will only contain the corresponding record; otherwise, an error is
returned. Consequently, the other table parameters are ignored (such as filter [p 640],
viewPublication [p 642], sort [p 641], etc.)
String type value.
pageRecordFilter Specifies the record XPath predicate [p 225] expression filter of the page.
String type value.
sort Specifies that the operation result will be sorted according to the specified criteria.
One or more criteria may be given; the result is sorted by priority from the left. A
criterion is composed of the field path and, optionally, the sorting order (ascending
or descending, on value or on label). This parameter can be combined with:
1. the sortOnLabel [p 641] parameter, as a new criterion added after the sort.
2. the viewPublication [p 642] parameter, as a new criterion added after the sort.
The value structure is as follows:
<path1>:<order>;...;<pathN>:<order>
Where:
• <path1> corresponds to the field path in priority 1.
• <order> corresponds to the sorting order, with one of the following values:
• asc: ascending order on value (default),
• desc: descending order on value,
• lasc: ascending order on label,
• ldesc: descending order on label.
String type, the default value orders according to the primary key fields (ascending
order on value).
Note
The history operation code value can be used through the ebx-
operationCode path field from the meta section associated with
this field.
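For example, assuming hypothetical /lastName and /birthDate fields, the following value sorts by
ascending last name and then by descending birth date:
sort=/lastName:asc;/birthDate:desc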
sortOnLabel Specifies that the operation result will be sorted according to the record label. This
parameter can be combined with:
1. the sort [p 641] parameter, as a new criterion added before the sortOnLabel.
2. the viewPublication [p 642] parameter, as a new criterion added after the
sortOnLabel.
The value corresponds to the sorting order, with one of the following values:
• lasc: ascending order on label,
• ldesc: descending order on label.
The behavior of this parameter is described in the section defaultLabel [p 461].
String type value.
viewPublication Specifies the name of the published view. This parameter can be combined with:
1. the filter [p 640] parameter, as a logical and operation.
2. the sort [p 641] parameter, as a new criterion added before the viewPublication.
3. the sortOnLabel [p 641] parameter, as a new criterion added before the
viewPublication.
The behavior of this parameter is described in the section EBX as a Web Component
[p 194].
Selector parameters
The following parameters are only applicable to fields that return an enumeration, foreign key or
osd:resource (Example JSON [p 677]). By default, a pagination mechanism is enabled.
Parameter Description
Note
This parameter is ignored with the history category.
firstElementIndex Specifies the index of the first element returned by the selector. Must be an integer
greater than or equal to 0.
Integer type, default value is 0.
HTTP codes
403 (Forbidden) The selected resource is hidden for the authenticated user.
Response body
After a successful dataset, table, record or field selection, the result is returned in the response body.
The content depends on the provided parameters and selected data.
Example: JSON [p 663].
Insert operation
The insert operation uses the POST HTTP method. A message body is required to specify the data.
This operation supports the insertion of one or more records in a single transaction. Moreover, it is
also possible to update existing records, by setting the updateOrInsert parameter.
• Record: inserts a new record or modifies an existing one in the selected table.
• Record table: inserts or modifies one or more records in the selected table, while ensuring a
consistent response. Operations are executed sequentially, in the order defined on the client side.
When an error occurs during a table operation, all updates are cancelled and the client receives
an error message with detailed information.
http[s]://<host>[:<port>]/ebx-dataservices/rest/data/v1/{dataspace}/{dataset}/
{pathInDataset}
Where:
• {dataspace} corresponds to B followed by the dataspace identifier, or to V followed by the snapshot
identifier.
• {dataset} corresponds to the dataset identifier.
• {pathInDataset} corresponds to the path of the table node.
Parameters
The following parameters are applicable with the insert operation.
Parameter Description
includeDetails Includes the details field in the response, giving access to the details of the data.
The returned value corresponds to the resource URL.
Type Boolean, the default value is false.
Note
Only applicable on the record table.
includeForeignKey Includes the foreignKey field in the response for each record. The returned value
corresponds to the value that a foreign key field referencing this record would have.
Boolean type, the default value is false.
Note
Only applicable on the record table.
includeLabel Includes the label field in the response for each record.
Possible values are:
• yes: the label field is included.
• no: the label field is not included (use case: integration).
String type, the default value is no.
Note
Only applicable on the record table.
updateOrInsert Specifies the behavior when the record to insert already exists:
• If true: the existing record is updated with new data.
For a request on a record table, the code field is added to the report in order to
specify whether this is an insert (201) or an update (204).
• If false (default value): a client error is returned and the operation is aborted.
Boolean type value.
blockingConstraintsDisabled Specifies whether blocking constraints are ignored. If true, the operation is
committed regardless of any blocking validation errors; otherwise, the operation is
aborted.
Boolean type, default value is false.
See Blocking and non-blocking constraints [p 488] for more information.
Message body
The request must define a message body. The format depends on the inserted object type:
• Record: similar to the select operation of a record but without the record's header (example JSON
[p 658]).
• Record table: Similar to the select operation on a table but without the pagination information
(example JSON [p 660]).
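As a minimal sketch, a single-record insert request body posted to the table URL could look as follows
(the lastName field is a hypothetical example; see the JSON format chapter for complete samples):
{
"content": {
"lastName": {
"content": "Ravel"
}
}
}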
HTTP codes
400 (Bad request) The request is incorrect. This occurs when the message body structure does not
comply with the format described in Message body [p 644].
403 (Forbidden) Authenticated user is not allowed to create a record or the request body contains a
read-only field.
409 (Conflict) Concurrent modification; only possible if updateOrInsert is true, Optimistic
locking [p 653] is activated and the content has changed in the meantime. The
content must be reloaded before the update.
422 (Unprocessable entity) The request cannot be processed. This occurs when:
• A blocking validation error occurs (only available if
blockingConstraintsDisabled is false).
• The record cannot be inserted because a record with the same primary key
already exists (only available if updateOrInsert is false).
• The record cannot be inserted because the definition of the primary key is either
non-existent or incomplete.
• The record cannot be updated because the value of the primary key cannot be
modified.
Response body
The response body format depends on the inserted object type:
• Record: is empty if the operation was executed successfully. The header field Location is returned
with its URL resource.
• Record table: (optional) contains a table of element(s), corresponding to the insert operation
report (example JSON [p 677]). This report is automatically included in the response body, if at
least one of the following options is set:
• includeForeignKey
• includeLabel
• includeDetails
Update operation
This operation allows the modification of a single dataset or record. The PUT HTTP method must be
used. Available URL formats are:
• Dataset node: modifies the values of terminal nodes contained in the selected node.
http[s]://<host>[:<port>]/ebx-dataservices/rest/data/v1/{dataspace}/{dataset}/
{pathInDataset}
Note
Also available with the POST HTTP method. In this case, the URL must correspond
to the table, and the parameter updateOrInsert must be set to true.
Note
The field must be either a terminal node or above.
Where:
• {dataspace} corresponds to B followed by the dataspace identifier or to V followed by the snapshot
identifier.
• {dataset} corresponds to the dataset identifier.
• {pathInDataset} corresponds to the path of the dataset node:
• For dataset node operations, this must be any terminal node or above except table node,
• For record and field operations, this corresponds to the table node.
• {encodedPrimaryKey} corresponds to the encoded representation of the primary key.
Parameters
The following parameters are applicable to the update operation.
Parameter Description
blockingConstraintsDisabled Specifies whether blocking constraints are ignored. If true, the operation is
committed regardless of any blocking validation errors; otherwise, the operation is
aborted.
Boolean type, default value is false.
See Blocking and non-blocking constraints [p 488] for more information.
byDelta Specifies the behavior for setting the value of nodes that are not defined in the
request body. This is described in the Update modes [p 679] section.
Boolean type, the default value is true.
checkNotChangedSinceLastUpdateDate Timestamp in datetime format used to ensure that the record has not been modified
since the last read. Also see the Optimistic locking [p 653] section.
DateTime type value.
Message body
The request must define a message body.
The structure is the same as for:
• the dataset node (sample JSON [p 658]),
• the record (sample JSON [p 658]),
• the record fields (sample JSON [p 659]),
depending on the updated scope, by only keeping the content entry.
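For instance, a record update restricted to a single hypothetical field could send the following body;
with byDelta left to its default value of true, the other fields remain unchanged:
{
"content": {
"lastName": {
"content": "Debussy"
}
}
}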
HTTP codes
204 (No content) The record, field or dataset node has been successfully updated.
400 (Bad request) The request is incorrect. This occurs when the request body structure does not
comply with the expected format.
403 (Forbidden) Authenticated user is not allowed to update the specified resource or the request
body contains a read-only field.
409 (Conflict) Concurrent modification; Optimistic locking [p 653] is activated and the content
has changed in the meantime. The content must be reloaded before the update.
422 (Unprocessable entity) The request cannot be processed. This occurs when:
• A blocking validation error occurs (only available if
blockingConstraintsDisabled is false).
• The record cannot be updated because the value of the primary key cannot be
modified.
Delete operation
The operation uses the DELETE HTTP method.
Two URL formats are available:
• Record: delete a record specified in the URL.
http[s]://<host>[:<port>]/ebx-dataservices/rest/data/v1/{dataspace}/{dataset}/
{pathInDataset}/{encodedPrimaryKey}
• Record table: deletes several records in the specified table, while ensuring a consistent response.
This mode requires a message body containing a record table. The deletions are executed
sequentially, according to the order defined in the table. When an error occurs during a table
operation, all deletions are cancelled and an error message is returned with detailed information.
http[s]://<host>[:<port>]/ebx-dataservices/rest/data/v1/{dataspace}/{dataset}/
{pathInDataset}
Where:
• {dataspace} corresponds to B followed by the dataspace identifier or to V followed by the snapshot
identifier.
• {dataset} corresponds to the dataset identifier.
• {pathInDataset} corresponds to the path of the table node.
• {encodedPrimaryKey} corresponds to the encoded representation of the primary key.
In a child dataset context, this operation modifies the inheritanceMode property value of the record
as follows:
Parameters
The following parameters are applicable to the delete operation.
Parameter Description
inheritIfInOccultingMode Deprecated since version 5.8.1 While it remains available for backward
compatibility reasons, it will eventually be removed in a future version.
Inherits the record if it is in occulting mode.
Boolean type, the default value is true.
checkNotChangedSinceLastUpdateDate Timestamp in datetime format used to ensure that the record has not been modified
since the last read. Also see the Optimistic locking [p 653] section.
DateTime type value.
blockingConstraintsDisabled Specifies whether blocking constraints are ignored. If true, the operation is
committed regardless of any blocking validation errors; otherwise, the operation is
aborted.
Boolean type, default value is false.
See Blocking and non-blocking constraints [p 488] for more information.
Message body
The request must define a message body only when deleting several records:
• Record table: the message contains a table of elements, each related to a record; each element
contains one of the following properties:
• details: corresponds to the record URL, as returned by the select operation.
• primaryKey: corresponds to the primary key of the record, expressed as an XPath expression.
• foreignKey: corresponds to the value that a foreign key field would have if it referred to this record.
HTTP codes
200 (OK) The operation has been executed successfully. A report is returned in the response
body.
403 (Forbidden) Authenticated user is not allowed to delete or occult the specified record.
404 (Not found) The selected record is not found. In a child dataset context, it may be necessary
to use the includeOcculting parameter.
409 (Conflict) Concurrent modification; Optimistic locking [p 653] is activated and the content
has changed in the meantime. The content must be reloaded before deleting the record.
This code is also returned when the checkNotChangedSinceLastUpdateDate
parameter value exists but does not correspond to the actual last update date of the record.
422 (Unprocessable entity) Only available if blockingConstraintsDisabled is false, the operation fails
because of a blocking validation error.
Response body
After a successful record deletion or occulting, a report is returned in the response body. It contains
the number of deleted, occulted and inherited record(s).
Example JSON [p 678].
Count operation
Count operation may use one of the following methods:
• GET HTTP method,
• POST HTTP method without message body or
• POST HTTP method with message body but without content field on root.
The URL formats are:
• Dataset node: the data category returns the number of terminal nodes contained in the selected
node.
http[s]://<host>[:<port>]/ebx-dataservices/rest/data/v1/{dataspace}/{dataset}/
{pathInDataset}?count=true
Note
Not applicable with the history category.
Note
The field must be either an association node, a selection node, a terminal node or
above.
Where:
• {category} corresponds to the operation category [p 625].
• {dataspace} corresponds to B followed by the dataspace identifier or to V followed by the snapshot
identifier.
• {dataset} corresponds to the dataset identifier.
• {pathInDataset} corresponds to the path of the dataset node, that can be a group node or a table
node.
• {encodedPrimaryKey} corresponds to the encoded representation of the primary key.
Parameters
The following parameters are applicable to the count operation.
Parameter Description
count This parameter is used to specify whether this is a count operation or a selection
operation.
Boolean type, default value is false.
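For example, the following hypothetical URL counts the records of a table (a filter parameter could
be appended to restrict the count):
http://localhost:8080/ebx-dataservices/rest/data/v1/BReference/myDataset/root/table1?count=true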
Table parameters
The following parameters are applicable to tables, associations and selection nodes.
Parameter Description
filter An XPath predicate [p 225] expression that defines the field values to which the
request applies. If empty, all records are considered.
String type value.
Note
The history operation code value can be used through the ebx-
operationCode path field from the meta section associated with
this field.
Note
This parameter is ignored with the data category.
viewPublication Specifies the name of the published view to be considered during the count
execution. This parameter can be combined with:
• the filter [p 652] parameter as the logical and operation.
The behavior of this parameter is described in the section EBX as a Web Component
[p 194].
Selector parameters
The following parameters are only applicable to fields that return an enumeration, foreign key or
osd:resource.
Parameter Description
Note
This parameter is ignored with the history category.
HTTP codes
403 (Forbidden) The selected resource is hidden for the authenticated user.
Optimistic locking
To prevent an update or a delete operation on a record that was previously read but may have changed
in the meantime, an optimistic locking mechanism is provided.
To enable optimistic locking, a select request must set the parameter includeTechnicals to true.
See Technical data [p 679] for more information.
The value of the lastUpdateDate property must be included in the following update request. If the
record has been changed since the specified time, the update or delete will be cancelled.
The lastUpdateDate property value can also be used in the checkNotChangedSinceLastUpdateDate
request URL parameter, to prevent the deletion of a record modified in the meantime.
Note
The checkNotChangedSinceLastUpdateDate parameter may be used more than once but
only on the same record. This implies that if the request URL returns more than one
record, the request will fail.
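As a sketch of the whole flow, with hypothetical URLs and values: a first request selects the record
with includeTechnicals=true and reads the returned lastUpdateDate; a second request passes that
value back when deleting.
http://localhost:8080/.../root/table1/pk1?includeTechnicals=true
returns, among other properties: "lastUpdateDate": "2015-12-25T00:00:00.001"
http://localhost:8080/.../root/table1/pk1?checkNotChangedSinceLastUpdateDate=2015-12-25T00:00:00.001 (with the DELETE method)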
Inheritance
EBX inheritance features are supported by built-in RESTful services using specific properties and
automatic behaviors. In most cases, the inheritance state will be automatically computed by the server
according to the record and field definition or content. Every action that modifies a record or a
field may have an indirect impact on those states. In order to fully handle the inheritance life cycle,
direct modifications of the state are allowed under certain conditions. Forbidden or incoherent explicit
alteration attempts are ignored.
Inheritance properties
The following table describes properties related to the EBX inheritance features.
inheritance record or table metadata Specifies if dataset inheritance is activated for the table. The
value is computed from the data model and cannot be modified
through built-in RESTful services.
inheritedField field metadata Specifies the field's value source. The source data are directly
taken from the data model and cannot be modified through
built-in RESTful services.
inheritanceMode record in child dataset Specifies the record's inheritance state. To set a record's
inheritance from overwrite to inherit, its inheritanceMode
value must be explicitly provided in the request. In this specific
case, the content property will be ignored if present. occult
and root explicit values are always ignored. An overwrite
explicit value without a content property is ignored.
Note
Inherited record's fields are necessarily
inherit.
Note
Root records in child dataset will always be
root.
inheritanceMode field in overwrite record Specifies the field's inheritance state. To set a field's inheritance
to inherit, its inheritanceMode value must be explicitly
provided in the request. The content property will be ignored
in this case. overwrite explicit value without a content
property is ignored.
Note
inheritanceMode at field level does not
appear for root, inherit and occult records.
Note
inheritedFieldMode and inheritanceMode
properties cannot be both set on the same
field.
inheritedFieldMode inherited field Specifies the inherited field's inheritance state. To set a field's
inheritance to inherit, its inheritedFieldMode value must be
explicitly provided in the request.
Note
inheritedFieldMode and inheritanceMode
properties cannot be both set on the same
field.
Note
inheritedFieldMode has priority over the
inheritanceMode property.
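For instance, to revert an overwritten record back to inheritance, an update request could send only
the inheritanceMode property; this is a sketch, and any content property sent along would be ignored
in this case, as described above:
{
"inheritanceMode": "inherit"
}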
94.7 Limitations
General limitations
• Indexes in the request URL {pathInDataset} or {pathInRecord} are not supported.
• Nested aggregated lists are not supported.
• Dataset nodes and field operations applied to nodes that are sub-terminal are not supported.
See Access properties [p 503] for more information about terminal nodes.
Read operations
• Within the selector, the pagination context is limited to the nextPage property.
• Within the viewPublication parameter, the hierarchical view is not supported.
• The sortOnLabel parameter ignores programmatic labels.
• The system information response's properties cannot be browsed through the REST URL with
the hierarchical representation.
See System information operation [p 631] for more information.
Write operations
• Association fields cannot be updated, therefore, the list of associated records cannot be modified
directly.
• Control policy onUserSubmit-checkModifiedValues of the user interface is not supported.
To retrieve validation errors, invoke the select operation on the resource by including the
includeValidation parameter.
Directory operations
• Changing or resetting a password for a user is not supported.
CHAPTER 95
JSON format
This chapter contains the following topics:
1. Introduction
2. Global structure
3. Meta-data
4. Sort criteria information
5. Validation
6. Content
7. Update modes
8. Known limitations
95.1 Introduction
JSON (JavaScript Object Notation) is the data-interchange format used by the EBX RESTful
operations [p 623].
This format is lightweight and self-describing, and can be used to design UIs or to integrate EBX
into a company's information system.
• The data context is exhaustive, except for association fields and selection nodes, which are not
directly returned. However, these fields are included in the response with a URL link named
details (included by default), which can be used to retrieve their content indirectly.
• The volume of data is limited by a pagination mechanism, activated by default, which can be
configured or disabled.
URL formatted links allow retrieving:
• Tables, records, dataset non-terminal nodes, foreign keys, resource fields (property details).
• Possible values for foreign keys or enumerations (selector parameter).
Note
JSON data are always encoded in UTF-8.
Auth category
The request body holds several properties directly placed in the root JSON Object.
• Token creation
Specifies the login and password to use for an authentication token creation attempt.
{
"login": "...", // JSON String
"password": "..." // JSON String
}
• Password change
Specifies the login, password and passwordNew to use for the password change.
{
"login": "...", // JSON String
"password": "...", // JSON String
"passwordNew": "..." // JSON String
}
Data category
The request body contains at least a content property where master data values will be defined.
• Dataset node
Specifies the target values of terminal nodes under the specified node. This request is used on the
dataset node update operation.
{
"content": {
"nodeName1": {
"content": true
},
"nodeName2": {
"content": 2
},
"nodeName3": {
"content": "Hello"
}
}
}
• Record
Specifies the target record content by setting the value for each field. For missing fields, the
behavior depends on the request parameter byDelta. This structure is used on table record insert
or on record update.
Some technical data can be added beside the content property such as lastUpdateDate.
{
...
"lastUpdateDate": "2015-12-25T00:00:00.001",
...
"content": {
"gender": {
"content": "Mr."
},
"lastName": {
"content": "Chopin"
},
"lastName-en": {
"content": "Chopin",
"inheritedFieldMode": "inherit"
},
"firstName": {
"content": "Fryderyk"
},
"firstName-en": {
"content": "Frédéric",
"inheritedFieldMode": "overwrite"
},
"birthDate": {
"content": "1810-03-01"
},
"deathDate": {
"content": "1849-10-17"
},
"jobs": {
"content": [
{
"content": "CM"
},
{
"content": "PI"
}
]
},
"infos": {
"content": [
{
"content": "https://en.wikipedia.org/wiki/Chopin"
}
]
}
}
}
See also
Insert operation [p 643]
Update operation [p 646]
• Record fields
Specifies the target values of fields under the record terminal node by setting the value of each
field. For missing fields, the behavior depends on the request parameter byDelta. This structure
is only used for table record updates.
"content": [
{
"content": "CM"
},
{
"content": "PI"
}
]
}
• Record table
Defines the content of one or more records by indicating the value of each field. For missing
fields, the behavior depends on the parameter of the request byDelta. This structure is used upon
insert or update of records in the table.
{
"rows": [
{
"content": {
"gender": {
"content": "M"
},
"lastName": {
"content": "Saint-Saëns"
},
"firstName": {
"content": "Camille"
},
"birthDate": {
"content": "1835-10-09"
},
...
}
},
{
"content": {
"gender": {
"content": "M"
},
"lastName": {
"content": "Debussy"
},
"firstName": {
"content": "Claude"
},
"birthDate": {
"content": "1862-10-22"
},
...
}
}
]
}
See also
Insert operation [p 643]
Update operation [p 646]
• Record table to be deleted
Defines one or more records. This structure is used upon deleting several records from the same
table.
{
"rows": [
{
"details": "http://.../root/table/1"
},
{
"details": "http://.../root/table/2"
},
{
"primaryKey": "./oid=3"
},
{
"foreignKey": "4"
},
...
]
}
• Field
Specifies the target field content. This request is used on the field update.
The request has the same structure as defined in node value [p 672] by only keeping the content
entry. Other entries are simply ignored.
Only writable fields can be mentioned in the request; this excludes the following cases:
• Association node,
• Selection node,
• Value function,
• JavaBean field that does not have a setter,
• Unwritable permission on node for authenticated user.
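A minimal field update body therefore only carries the content entry, for example for a hypothetical
string field:
{
"content": "Hello"
}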
Admin category
The selection operation for this category only provides the values corresponding to the request under
a content property.
• System information
Contains EBX instance's system information. The representation of these data can be flat or
hierarchical.
Flat representation:
{
"content": {
"bootInfoEBX": {
"label": "EBX configuration",
"content": {
"product.version": {
"label": "EBX product version",
"content": "5.8.1 [...] Enterprise Edition"
},
"product.configuration.file": {
"label": "EBX main configuration file",
"content": "System property [ebx.properties=./ebx.properties]"
}
// other keys
}
},
"repositoryInfo": {
"label": "Repository information",
"content": {
"repository.identity": {
"label": "Repository identity",
"content": "00905A5753FD"
},
"repository.label": {
"label": "Repository label",
"content": "My repository"
}
// other keys
}
},
"bootInfoVM": {
"label": "System information",
"content": {
"java.home": {
"label": "Java installation directory",
"content": "C:\\JTools\\jdk1.8.0\\jre"
},
"java.vendor": {
"label": "Java vendor",
"content": "Oracle Corporation"
}
// other keys
}
}
}
}
Hierarchical representation:
{
"content": {
"bootInfoEBX": {
"label": "EBX configuration",
"content": {
"product": {
"content": {
"version": {
"label": "EBX product version",
"content": "5.8.1 [...] Enterprise Edition"
},
"configuration": {
"content": {
"file": {
"label": "EBX main configuration file",
"content": "System property [ebx.properties=./ebx.properties]"
}
}
}
}
},
"vm": {
"content": {
"startTime": {
"label": "VM start time",
"content": "2017/09/11-10:04:17-0729 CEST"
},
"identifier": {
"label": "VM identifier",
"content": "1"
}
}
}
// other hierarchical keys
}
}
}
}
Auth category
The response body contains several properties directly placed in its root JSON object.
• Token creation
Contains the token value and its type.
{
"accessToken": "...", // JSON String
"tokenType": "..." // JSON String
}
Data category
The selection operation contains two different parts.
The first part, named meta, contains the exhaustive structure of the response.
The other part, comprising content, rows, pagination, etc., contains the values corresponding to the
request.
• Dataset tree
Contains the hierarchy of table and non-terminal group nodes.
{
"meta": {
"fields": [
{
"name": "rootName",
"label": "Localized label",
"description": "Localized description",
"type": "group",
"pathInDataset": "/rootName",
"fields": [
{
"name": "settings",
"label": "Settings",
"type": "group",
"pathInDataset": "/rootName/settings",
"fields": [
{
"name": "settingA",
"label": "A settings label",
"type": "group",
"pathInDataset": "/rootName/settings/settingA"
},
{
"name": "settingB",
"label": "B settings label",
"type": "group",
"pathInDataset": "/rootName/settings/settingB"
}
]
},
{
"name": "table1",
"label": "Table1 localized label",
"type": "table",
"minOccurs": 0,
"maxOccurs": "unbounded",
"pathInDataset": "/rootName/table1"
},
{
"name": "table2",
"label": "Table2 localized label",
"type": "table",
"minOccurs": 0,
"maxOccurs": "unbounded",
"pathInDataset": "/rootName/table2"
}
]
}
]
},
"validation": [
{
"level": "error",
"message": "Value must be greater than or equal to 0.",
"details": "http://.../rootName/settings/settingA/settingA1?includeValidation=true"
},
{
"level": "error",
"message": "Field 'Settings A2' is mandatory.",
"details": "http://.../rootName/settings/settingA/settingA2?includeValidation=true"
}
],
"content": {
"rootName": {
"details": "http://.../rootName",
"content": {
"settings": {
"details": "http://.../rootName/settings",
"content": {
"weekTimeSheet": {
"details": "http://.../rootName/settings/settingA"
},
"vacationRequest": {
"details": "http://.../rootName/settings/settingB"
}
}
},
"table1": {
"details": "http://.../rootName/table1"
},
"table2": {
"details": "http://.../rootName/table2"
}
}
}
}
}
See also
Meta-data [p 668]
Validation [p 671]
Select operation [p 635]
• Dataset node
Contains the list of terminal nodes under the specified node.
{
"meta": {
"fields": [
{
"name": "nodeName1",
"label": "Localized label of the field node 1",
"description": "Localized description",
"type": "boolean",
"minOccurs": 1,
"maxOccurs": 1,
"pathInDataset": "/rootName/.../nodeName1"
},
{
"name": "nodeName2",
"label": "Localized label of the field node 2",
"type": "int",
"minOccurs": 1,
"maxOccurs": 1,
"pathInDataset": "/rootName/.../nodeName2"
}
]
},
"content": {
"nodeName1": {
"content": true
},
"nodeName2": {
"content": -5,
"validation": [
{
"level": "error",
"message": "Value must be greater than or equal to 0."
}
]
}
}
}
• Table
JSON Object containing the properties:
• (Optional) The table meta data [p 668],
• (Optional) The sort criteria applied,
• (Optional) The table validation report,
• rows containing a table of the selected records. Each record is represented by an object; if no
record is selected, the table is empty.
• (Optional) pagination containing pagination [p 676] information if activated.
{
"rows": [
{
"label": "Claude Levi-Strauss",
"details": "http://.../root/individu/1",
"content": {
"id": {
"content": 1
},
...
}
},
{
"label": "Sigmoud Freud",
"details": "http://.../root/individu/5",
"content": {
"id": {
"content": 2
},
...
}
},
...
{
"label": "Alfred Dreyfus",
"details": "http://.../root/individu/10",
"content": {
"id": {
"content": 30
},
...
}
}
],
"sortCriteria": [
{
"path": "/name",
"order": "lasc"
},
...
],
"pagination": {
"firstPage": null,
"previousPage": null,
"nextPage": "http://.../root/individu?pageRecordFilter=./id=9&pageSize=9&pageAction=next",
"lastPage": "http://.../root/individu?pageSize=9&pageAction=last"
}
}
• Record
JSON object containing:
• The label,
• (Optional) The record URL,
• (Optional) The technical data [p 679],
• (Optional) The table metadata [p 668],
• (Optional) The record validation report,
• (Optional) The inheritance mode of the record is: root, inherit, overwrite or occult. This
value is available for a child dataset,
See also
Record lookup mechanism [p 270]
Inheritance [p 654]
• The record content.
{
"label": "Name1",
"details": "http://.../rootName/table1/pk1",
"creationDate": "2015-02-02T19:00:53.142",
"creationUser": "admin",
"lastUpdateDate": "2015-09-01T17:22:24.684",
"lastUpdateUser": "admin",
"inheritanceMode": "root",
"meta": {
"name": "table1",
"label": "Table1 localized label",
"type": "table",
"minOccurs": 0,
"maxOccurs": "unbounded",
"primaryKeys": [
"/pk"
],
"inheritance": "true",
"fields": [
{
"name": "pk",
"label": "Identifier",
"type": "string",
"minOccurs": 1,
"maxOccurs": 1,
"pathInRecord": "pk",
"filterable": true,
"sortable": true
},
{
"name": "name",
"label": "Name",
"type": "string",
"minOccurs": 1,
"maxOccurs": 1,
"pathInRecord": "name",
"filterable": true,
"sortable": true
},
{
"name": "name-fr",
"label": "Nom",
"type": "string",
"minOccurs": 1,
"maxOccurs": 1,
"inheritedField": {
"sourceNode": "./name"
},
"pathInRecord": "name-fr",
"filterable": true,
"sortable": true
},
{
"name": "parent",
"label": "Parent",
"description": "Localized description.",
"type": "foreignKey",
"minOccurs": 1,
"maxOccurs": 1,
"foreignKey": {
"tablePath": "/rootName/table1",
"details": "http://.../rootName/table1"
},
"enumeration": "foreignKey",
"pathInRecord": "parent",
"filterable": true,
"sortable": true
}
]
},
"content": {
"pk": {
"content": "pk1"
},
"name": {
"content": "Name1"
},
"name-fr": {
"content": "Name1",
"inheritedFieldMode": "inherit"
},
"parent": {
"content": null,
"validation":
[
{
"level": "error",
"message": "Field 'Parent' is mandatory."
}
]
}
},
"validation": {
...
}
}
• Fields
For association or selection nodes, contains the target table with associated records if, and only
if, the includeDetails parameter is set to true.
For other kinds of nodes, contains the current node value [p 672], by adding meta entry if enabled.
Note
Nodes, records and fields in meta, rows and content may be hidden, depending on
their resolved permissions (see permissions [p 273]).
95.3 Meta-data
This section can be activated on demand with the includeMeta [p 638] parameter. It describes the
structure and the JSON typing of the content section.
This section is deactivated by default for selection operations.
Structure of table
Table meta-data is represented by a JSON object with the following properties:
label String Table label. If undefined, the name of the schema node is returned. Required: Yes
history Boolean Specifies if the table content is historized: true if history is activated, false otherwise. Required: No
primaryKeyFields Array Array of the paths corresponding to the primary key. Required: Yes
fields Array Array of the fields that are direct children of the record node. Each field may also recursively contain sub-fields. Required: Yes
Structure of field
Each authorized field is represented by a JSON object with the following properties:
label String Node label. If undefined, the name of the schema node is returned. Required: Yes
type String Node type: simple type [p 674], group, table, foreignKey, etc. Required: Yes
inheritedField Object Holds information related to the inherited field's value source. Required: No
"inheritedField": {
"sourceRecord": "/path/to/record", // (optional)
"sourceNode": "./path/to/Node"
}
enumeration String Specifies if the field is an enumeration value. Possible values are: foreignKey, static, dynamic, programmatic, nomenclature, resource. Required: No
pathInDataset String Relative field path starting from the schema node. Required: No (**)
pathInRecord String Relative field path starting from the table node. Required: No (*)
filterable Boolean Specifies whether the field can be used for filtering records using the filter request parameter. Required: No (*)
sortable Boolean Specifies whether the field can be used in sort criteria using the sort request parameter. Required: No (*)
fields Array of Object elements Contains the structure and typing of each group field. Required: No
(*) Only available for table, record and record field operations.
(**) Only available for dataset tree operations.
95.4 Sort criteria information
This section can be activated with the includeSortCriteria parameter. Each criterion is represented
by a JSON object with the following properties:
path String Path of the sorted field. Required: Yes
order String Possible values are: asc, lasc, desc or ldesc. Required: Yes
95.5 Validation
The validation can be activated on demand with the includeValidation [p 639] parameter (deactivated
by default). If it is activated, validation properties are directly added on target nodes with one or
several messages. For messages without a target node path, a validation property is added on the
root node.
A validation property contains a JSON Array and for each message, corresponding to a validation
item, a JSON Object with properties:
level String Severity of the validation item; the possible values are: info, warn, error. Required: Yes
95.6 Content
This section can be deactivated on demand with the includeContent [p 638] parameter (activated by
default). It provides the content of the record values, dataset, or field of one of the content fields for
an authorized user. It also has additional information, including labels, technical information, URLs...
The content is represented by a JSON Object with a property set for each sub-node.
Node value
content Contains the node value, as a Content of simple type [p 674] or a Content of group and list [p 676]. Available for all nodes except association and selection; however, their content can be retrieved by invoking the URL provided in details. Required: No
See also
inheritedFieldMode [p 672]
inheritanceMode [p 672]
Content of simple type
Boolean, null and numeric values use the native JSON representations, for example: false, null,
20.001, 15, -1e-13 (values of the xs:integer type are represented as JSON numbers). Time values
use the "HH:mm:ss" or "HH:mm:ss.SSS" formats, for example "11:55:00.000". The dateTime values
use the "yyyy-MM-ddTHH:mm:ss" or "yyyy-MM-ddTHH:mm:ss.SSS" formats, for example
"2015-04-13T11:55:00.000".
Pagination
This feature allows returning a limited and parameterizable amount of data. Pagination can be applied
to data of the following types: records, association values, selection node values and selectors. A
context named pagination is returned only if the feature has been activated. This context allows
browsing data similarly to the UI.
Pagination is activated by default.
firstPage String or null (*) URL to access the first page. Required: Yes (**)
previousPage String or null (*) URL to access the previous page. Required: Yes (**)
nextPage String or null (*) URL to access the next page. Required: Yes
lastPage String or null (*) URL to access the last page. Required: Yes
Note
(*) Defined only if data is available in this context.
Note
(**) Not present on selector.
Selector
Invoking the URL represented by the selector property, on a field that provides an enumeration,
returns a JSON Object containing the following properties:
• rows containing an Array of JSON Objects, where each one contains two entries: the content
that can be persisted, and the corresponding label. The list of possible items is established
depending on the current context.
• (Optional) pagination containing pagination [p 676] information (activated by default).
{
"rows": [
{
"content": "F",
"label": "feminine"
},
{
"content": "M",
"label": "masculine"
}
],
"pagination": {
"nextPage": null
}
}
The elements of a record table report may contain the following properties:
• foreignKey contains a string of the JSON String type, corresponding to the content to be
used as a foreign key for this record. This property is included if, and only if, the parameter
includeForeignKey is set to true.
• label contains a string of the JSON String type, and allows retrieving the record label. This
property is included if, and only if, the parameter includeLabel is set to yes.
• details contains a string of the JSON String type, corresponding to the resource URL. This
property is included if, and only if, the parameter includeDetails is set to true.
{
"rows": [
{
"code": 204,
"foreignKey": "62",
"label": "Claude Debussy",
"details": "http://.../root/individu/62"
},
{
"code": 201,
"foreignKey": "195",
"label": "Camille Saint-Saëns",
"details": "http://.../root/individu/195"
}
]
}
Technical data
Each returned record is completed with the properties corresponding to its technical data, containing:
{
...
"creationDate": "2015-12-24T19:00:53.158",
"creationUser": "admin",
"lastUpdateDate": "2015-12-25T00:00:00.001",
"lastUpdateUser": "admin",
...
}
95.7 Update modes
See the RESTful data services operations update [p 646] and insert [p 643], as well as
ImportSpec.setByDelta in the Java API, for more information.
The property does not exist in the source document:
• If the byDelta mode is activated (default):
• For the update operation, the field value remains unchanged.
• For the insert operation, the behavior is the same as when the byDelta mode is disabled.
• If the byDelta mode is disabled through the RESTful operation parameter, the target field is set
to one of the following values:
• If the element defines a default value, the target field is set to that default value.
• If the element is of a type other than a string or list, the target field value is set to null.
• If the element is an aggregated list, the target field value is set to an empty list value.
• If the element is a string that differentiates null from an empty string, the target field value is
set to null. If it is a string that does not differentiate the two, it is set to an empty string.
• If the element (simple or complex) is hidden in the data services, the target value remains
unchanged.
Note
The user performing the import must have the required permissions to create or change
the target field value. Otherwise, the operation will be aborted.
The element is present and its value is null (for example, "content": null):
The target field is always set to null, except for lists, for which this is not supported.
CHAPTER 96
REST Toolkit
This chapter contains the following topics:
1. Introduction
2. Application definitions
3. Service and operation definitions
4. Authentication and lookup mechanism
5. REST authentication and permissions
6. URI builders
7. Exception handling
8. Monitoring
9. Packaging and registration
96.1 Introduction
EBX offers the possibility to develop custom REST services using the REST Toolkit. The REST
Toolkit supports JAX-RS 2.1 (JSR 370) and JSON-B (JSR 367).
A REST service is implemented by a Java class and its operations are implemented by Java methods.
The response can be generated by serializing POJO objects. The request input can be deserialized
to POJOs. Various input and output formats, including JSON, are supported. For more details on
supported formats, see media types [p 682].
The REST Toolkit supports the following:
• Injectable objects
EBX provides injectable objects useful to authenticate the request's user, to access the EBX
repository, or to build URIs without worrying about the configuration (for example, reverse-proxy
[p 616] or REST forward [p 544] modes);
See also JAX-RS: JavaTM API for RESTful Web Services 2.1
An application is defined by a Java class that extends the RESTApplicationAbstract
class. The minimum requirement is to define the base URL, using the @ApplicationPath annotation,
and the set of packages to scan for REST service classes.
The application path cannot be "/" and must not collide with an existing resource
from the module. It is recommended to use "/rest" (the value of the constant
RESTApplicationAbstract.REST_DEFAULT_APPLICATION_PATH).
The optional @Documentation annotation provides a label displayed under 'Technical
configuration' > 'Modules and data models' or when logging and debugging.
import javax.ws.rs.*;
import com.orchestranetworks.rest.*;
import com.orchestranetworks.rest.annotation.*;
@ApplicationPath(RESTApplicationAbstract.REST_DEFAULT_APPLICATION_PATH)
@Documentation("My REST sample application")
public final class RESTApplication extends RESTApplicationAbstract
{
public RESTApplication()
{
// Adds one or more package names which will be used to scan for components.
super((cfg) -> cfg.addPackages(this.getClass().getPackage()));
}
}
The supported media types include:
• application/json (MediaType.APPLICATION_JSON_TYPE)
• application/octet-stream (MediaType.APPLICATION_OCTET_STREAM_TYPE)
• application/x-www-form-urlencoded
(MediaType.APPLICATION_FORM_URLENCODED_TYPE)
• multipart/form-data (MediaType.MULTIPART_FORM_DATA_TYPE)
• text/css
• text/html (MediaType.TEXT_HTML_TYPE)
• text/plain (MediaType.TEXT_PLAIN_TYPE)
Valid HTTP(S) methods are specified by JAX-RS annotations @GET, @POST, @PUT, etc. Only one of
these annotations can be set on each Java method (this means that a Java method can support only
one HTTP method).
Warning: query parameters with a name prefixed with ebx- are reserved by the REST Toolkit and
must not be defined by custom REST services, unless explicitly authorized by the EBX documentation.
The following REST Toolkit service sample provides methods to query and manage track data:
import java.net.*;
import java.util.*;
import java.util.concurrent.*;
import javax.servlet.http.*;
import javax.ws.rs.*;
import javax.ws.rs.container.*;
import javax.ws.rs.core.*;
import com.orchestranetworks.rest.annotation.*;
import com.orchestranetworks.rest.inject.*;
/**
* The REST Toolkit Track service v1.
*/
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@Path("/track/v1")
@Documentation("Track service")
public final class TrackService
{
@Context
private ResourceInfo resourceInfo;
@Context
private SessionContext sessionContext;
// In-memory track store used by the sample; this declaration is implied by the original sample.
private static final ConcurrentMap<Integer, TrackDTO> TRACKS = new ConcurrentHashMap<>();
/**
* Gets service description
*/
@GET
@Path("/description")
@Documentation("Gets service description")
@Consumes({ MediaType.TEXT_PLAIN, MediaType.APPLICATION_JSON })
@Produces({ MediaType.TEXT_PLAIN, MediaType.APPLICATION_JSON })
@AnonymousAccessEnabled
public String handleServiceDescription()
{
return this.resourceInfo.getResourceMethod().getAnnotation(Documentation.class).value();
}
/**
* Selects all tracks.
*/
@GET
@Path("/tracks")
@Documentation("Selects all tracks")
public Collection<TrackDTO> handleSelectTracks()
{
return TRACKS.values();
}
/**
* Counts all tracks.
*/
@GET
@Path("/tracks:count")
@Documentation("Counts all tracks")
public int handleCountTracks()
{
return TRACKS.size();
}
/**
* Selects a track by id.
*/
@GET
@Path("/tracks/{id}")
@Documentation("Selects a track by id")
public TrackDTO handleSelectTrackById(@PathParam("id") Integer id)
{
final TrackDTO track = TRACKS.get(id);
if (track == null)
throw new NotFoundException("Track id [" + id + "] was not found.");
return track;
}
/**
* Deletes a track by id.
*/
@DELETE
@Path("/tracks/{id}")
@Documentation("Deletes a track by id")
public void handleDeleteTrackById(@PathParam("id") Integer id)
{
if (!TRACKS.containsKey(id))
throw new NotFoundException("Track id [" + id + "] was not found.");
TRACKS.remove(id);
}
/**
* Inserts or updates one or several tracks.
* <p>
* The complex response structure corresponds to one of:
* <ul>
* <li>An empty content with the <code>location</code> HTTP header defined
* to the access URI.</li>
* <li>A JSON array of {@link ResultDetailsDTO} objects.</li>
* </ul>
*/
@POST
@Path("/tracks")
@Documentation("Inserts or updates one or several tracks")
public Response handleInsertOrUpdateTracks(List<TrackDTO> tracks)
{
// Loop reconstructed for completeness: the original sample elides these declarations.
final ResultDetailsDTO[] resultDetails = new ResultDetailsDTO[tracks.size()];
int resultIndex = 0;
for (final TrackDTO track : tracks)
{
// 201 when the track is inserted, 204 when an existing track is updated.
final int code = TRACKS.containsKey(track.id) ? 204 : 201;
final URI uri = URI.create("tracks/" + track.id);
TRACKS.put(track.id, track);
resultDetails[resultIndex++] = ResultDetailsDTO.create(
code,
null,
String.valueOf(track.id),
uri);
}
return Response.ok().entity(resultDetails).build();
}
/**
* Updates one track.
*/
@PUT
@Path("/tracks/{id}")
@Documentation("Update one track")
public void handleUpdateOneTrack(@PathParam("id") Integer id, TrackDTO aTrack)
{
final TrackDTO track = TRACKS.get(id);
if (track == null)
throw new NotFoundException("Track id [" + id + "] does not found.");
TRACKS.put(aTrack.id, aTrack);
}
}
This REST service uses the following Java classes to serialize and deserialize data:
/**
* DTO for a track.
*/
public final class TrackDTO
{
public Integer id;
public String singer;
public String title;
}
import java.net.*;
import javax.json.bind.annotation.*;
/**
* DTO for result details.
*/
@JsonbPropertyOrder({ "code", "label", "foreignKey", "details" })
public final class ResultDetailsDTO
{
public int code;
public String label;
public String foreignKey;
public URI details;
// Factory reconstructed from its use in TrackService above.
public static ResultDetailsDTO create(int code, String label, String foreignKey, URI details)
{
final ResultDetailsDTO dto = new ResultDetailsDTO();
dto.code = code; dto.label = label;
dto.foreignKey = foreignKey; dto.details = details;
return dto;
}
}
See also
Authentication [p 618]
Lookup mechanism [p 619]
Some methods may need to be restricted to given profiles. These methods may be annotated with
@Authorization to specify an authorization rule. An authorization rule is a Java class that implements
the authorization rule interface provided by the REST Toolkit Java API.
import javax.ws.rs.*;
import com.orchestranetworks.rest.annotation.*;
/**
* The REST Toolkit service v1.
*/
@Path("/service/v1")
@Documentation("Service")
public final class Service
{
...
/**
* Gets service description
*/
@GET
@AnonymousAccessEnabled
public String handleServiceDescription()
{
...
}
/**
* Gets restricted service
*/
@GET
@Authorization(IsUserAuthorized.class)
public RestrictedServiceDTO handleRestrictedService()
{
...
}
}
96.8 Monitoring
REST Toolkit events monitoring is similar to the data services log configuration. The difference is the
property key which must be ebx.log4j.category.log.restServices.
Some additional properties are available to configure the log messages. See Configuring REST toolkit
services [p 333] for further information.
import javax.servlet.annotation.*;
import com.orchestranetworks.module.*;
@WebListener
public final class RegistrationModule extends ModuleRegistrationListener
{
@Override
public void handleContextInitialized(final ModuleInitializedContext aContext)
{
// Registers dynamically a REST Toolkit application.
aContext.registerRESTApplication(RESTApplication.class);
}
}
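Since the listener is annotated with @WebListener, the servlet container instantiates it automatically at web application startup, so the REST Toolkit application is registered before any request is served.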
CHAPTER 97
Third-party Licenses
EBX includes several open source and third-party software components. The table below holds the full list
as well as the associated licenses.
Apache Commons BCEL [6.0]
Java Library for byte code engineering. This library is used jointly with the Apache CXF library. Released on July 10, 2016. All packages have been renamed so as to prevent any conflict when it is deployed. All related references to the renamed packages have been updated.
License: Apache License, Version 2.0
Download: bcel-6.0-src.zip

Apache Commons DBCP2 [2.1.1]
Java Library for Database Connection Pooling. Released on August 7, 2015. All packages have been renamed so as to prevent any conflicts when it is deployed. All related references to the renamed packages have been updated.
License: Apache License, Version 2.0
Download: commons-dbcp2-2.1.1-src.zip

Apache Commons file uploads [1.2.2]
Java Library for multipart file upload functionality support on servlets and web applications. Released on July 29, 2010. All packages have been renamed so as to prevent any conflicts when it is deployed.
License: Apache License, Version 2.0
Download: commons-fileupload-1.2.2-src.zip

Apache Commons IO [2.5]
Java Library holding utility classes and more. Released on April 21, 2016. All packages have been renamed so as to prevent any conflicts when it is deployed.
License: Apache License, Version 2.0
Download: commons-io-2.5-src.zip

Apache Commons logging [1.2]
Java Library for logging used by Apache Commons DBCP2. Released on July 9, 2014. All packages have been renamed so as to prevent any conflicts when it is deployed. All related references to the renamed packages have been updated.
License: Apache License, Version 2.0
Download: commons-logging-1.2-src.zip

Apache Commons Pool2 [2.4.2]
Java Library for object pooling. Released on August 1, 2015. All packages have been renamed so as to prevent any conflicts when it is deployed. All related references to the renamed packages have been updated.
License: Apache License, Version 2.0
Download: commons-pool2-2.4.2-src.zip

Apache CXF [3.2.4]
Java Library for creating web services according to the Representational State Transfer (REST) architectural style. This library version implements the JAX-RS 2.1 specification. Released on March 22, 2018. All packages have been renamed so as to prevent any conflicts when it is deployed. Dynamic services and String values referencing full package paths have been updated according to the renaming process.
License: Apache License, Version 2.0
Download: apache-cxf-3.2.4-src.zip

Apache Geronimo JSON [1.0]
Java Library for JSON serialization / deserialization. This library version implements the JSR-374 specification and is used as distributed. Released on April 16, 2017.
License: Apache License, Version 2.0
Download: geronimo-json_1.1_spec-1.0.zip

Apache GSON [2.3.1]
Java Library for JSON serialization / deserialization. Released on November 20, 2014. All packages have been renamed so as to prevent any conflicts when it is deployed.
License: Apache License, Version 2.0
Download: gson-2.3.1

Apache johnzon [1.1.7]
Java Library for JSON serialization / deserialization. This library version implements the JSR 367 specification. Released on March 1, 2018. All packages have been renamed so as to prevent any conflicts when it is deployed. Dynamic services and String values referencing full package paths have been updated according to the renaming process.
License: Apache License, Version 2.0
Download: johnzon-1.1.7-source-release.zip

Apache Log4J [1.1.3]
Java library for logging. It comes from project _xLog4J 1.1.3-allSources.2.patch4. All packages have been renamed so as to prevent any conflicts when it is deployed. All related references to the renamed packages have been updated.
License: The Apache Software License, Version 1.1
Download: jakarta-log4j-1.1.3.zip

Apache Lucene [2.9.4]
Java Library for data indexing and searching. Released on March 30, 2011. All packages have been renamed so as to prevent any conflicts when it is deployed.
License: Apache License, Version 2.0
Download: lucene-2.9.4-src.zip

Apache Oro [2.0.6]
Java library for text-processing. Released on January 17, 2002. All packages have been renamed so as to prevent any conflicts when it is deployed.
License: The Apache Software License, Version 1.1
Download: oro-2.0.6

Apache Quartz Scheduler [1.6.6]
Java library for job scheduling. Released on February 11, 2009. All packages have been renamed so as to prevent any conflicts when it is deployed. Some features were removed because they were not needed.
License: Apache License, Version 2.0
Download: Quartz-1.6.6.zip

Apache Xalan [2.7.0]
Java library for XML documents processing. Released on August 8, 2005. All packages have been renamed so as to prevent any conflicts when it is deployed. All related references to the renamed packages have been updated.
License: Apache License, Version 2.0
Download: xalan-j_2_7_0-src.zip

Apache Xerces [2.7.1]
Java library for XML documents processing. Released on August 8, 2005. All packages have been renamed so as to prevent any conflicts when it is deployed. All related references to the renamed packages have been updated.
License: Apache License, Version 2.0
Download: xercesImpl-2.7.1.jar

CKEditor [4.1.3]
JavaScript library for HTML rich text editing. It is under the Mozilla Public License Version 1.1 (the "MPL"). Released on June 20, 2014.
License: CKEditor License, under "MPL" 1.1 terms
Location: _OSS-Src/CKEditor.zip
Download: ckeditor_4.1.3_standard.zip

CKEditor [4.4.1]
JavaScript library for HTML rich text editing. It is under the Mozilla Public License Version 1.1 (the "MPL"). Released on June 20, 2014.
License: CKEditor License, under "MPL" 1.1 terms
Location: _OSS-Src/CKEditor.zip
Download: ckeditor_4.4.1_standard.zip

CKEditor [4.7.3]
JavaScript library for HTML rich text editing. It is under the Mozilla Public License Version 1.1 (the "MPL"). Released on September 12, 2017.
License: CKEditor License, under "MPL" 1.1 terms
Location: _OSS-Src/CKEditor.zip
Download: ckeditor_4.7.3_standard.zip

Core-js [1.6.1]
JavaScript library including polyfills, dictionaries, extended partial applications and other features. Released on August 31, 2017. It is used as it is provided.
License: The MIT License
Download: Core-js-2.5.1.zip

Dom4J [1.6.1]
Java Library for XML manipulations. It is used as it is provided.
License: Apache-style open source license
Download: dom4J 1.6.1

Fast Classpath Scanner [3.0.3]
Java Library for Java class scanning. Released on March 1, 2018. All packages have been renamed so as to prevent any conflicts when it is deployed. Dynamic services and String values referencing full package paths have been updated according to the renaming process.
License: The MIT License
Download: fast-classpath-scanner-3.0.3.zip

Font Awesome [4.2.0]
Icon set and toolkit for easy scalable vector graphics on websites. Released on August 26, 2014. It is used as it is provided.
License: The MIT License, SIL Open Font License 1.1
Locations: ebx-manager.war/www/common/stylesheets and ebx-manager.war/www/common/fonts
Download: font-awesome-v4.2.0.zip

Font Awesome [4.7.0]
Icon set and toolkit for easy scalable vector graphics on websites. Released on October 24, 2016. It is used as it is provided.
License: The MIT License, SIL Open Font License 1.1
Locations: ebx-manager.war/www/common/stylesheets and ebx-manager.war/www/common/fonts
Download: font-awesome-v4.7.0.zip

Font Droid Sans Mono [1.00 build 112]
Font library. Released in 2007. It is used as it is provided.
License: The Apache License, Version 2.0
Location: ebx-manager.war/www/common/fonts
Download: Font-Droid-Sans-Mono-1.00.ttf

Font Lato [2.015]
Font library. Released in 2010. It is used as it is provided.
License: SIL Open Font License 1.1
Location: ebx-manager.war/www/common/fonts
Download: font-lato-2.015.zip

Gagawa HTML Generator [1.0.1]
Java Library for HTML generation. Released on February 18, 2009. All packages have been renamed so as to prevent any conflicts when it is deployed.
License: The MIT License
Download: gagawa-1.0.1.jar

GoJS [1.7.25]
JavaScript library for custom interactive diagrams and complex visualizations across modern web browsers. Released on September 21, 2017. It is used as it is provided.
License: Northwoods Software License
Download: GoJs-1.7.25.zip

Javax Activation [1.1.1]
Java Library that implements JSR-925, providing the following services:
• It determines the type of arbitrary data.
• It encapsulates access to data.
• It discovers the operations available on a particular type of data.
• It instantiates the software component that corresponds to the desired operation on a particular piece of data.
Released on October 23, 2009. All packages under com.sun.activation have been renamed so as to prevent any conflicts when it is deployed. The ones under javax.activation remain unchanged. All related references to the renamed packages have been updated.
License: CDDL version 1.0
Location: _OSS-Src/javax.activation.zip
Download: activation-1.1.1.jar

Javax Annotations [1.3]
Java Library for common Java API annotations. Released on September 22, 2016. It is used as it is provided.
License: CDDL version 1.0
Location: _OSS-Src/javax.annotation.api.zip
Download: javax-annotations-1.3.zip

Javax JSON Bind [1.0]
Java Library for JSON-B specifications. Released on June 17, 2017. It is used as it is provided.
License: CDDL version 1.1
Location: _OSS-Src/javax.json.bind.zip
Download: javax-json-bind-1.0.zip

Javax SAAJ API [1.3.5]
Java Library for handling SOAP messages. It holds the SOAP with Attachments API for Java (SAAJ) specifications (JSR-67). Released on March 21, 2013. It is used as it is provided.
License: CDDL version 1.1
Location: _OSS-Src/javax.xml.soap.saaj.api.zip
Download: saaj-api-1.3.5.jar

Javax WS RS [2.1]
Java Library for RESTful Web Services development in Java SE and Java EE. It holds the JAX-RS 2.1 API specifications. Released on August 4, 2017. It is used as it is provided.
License: CDDL version 1.1
Location: _OSS-Src/javax.ws.rs.zip
Download: javax-ws-rs-2.1.zip

Javax XML Bind [2.2.8]
Java Library providing a runtime binding framework for client applications including unmarshalling, marshalling, and validation capabilities. It holds the Java Architecture for XML Binding (JAXB) API specifications (JSR-222). Released on March 21, 2013. It is used as it is provided.
License: CDDL version 1.1
Location: _OSS-Src/javax.xml.bind.zip
Download: jaxb-api-2.2.8.jar

Jaxen [1.1.3]
Java library holding a universal Java XPath engine. All packages have been renamed so as to prevent any conflicts when it is deployed.
License: The Werken Company License
Download: jaxen 1.1.3

JDOM [1.0]
Java Library for XML documents manipulations. All packages have been renamed so as to prevent any conflicts when it is deployed.
License: Apache-style open source license
Download: jdom-1.0

Jericho HTML Parser [2.6]
Java Library for analysis and manipulations of parts of an HTML document. Released on June 25, 2008. All packages have been renamed so as to prevent any conflicts when it is deployed.
License: Eclipse Public License - Version 1.0
Location: _OSS-Src/Jericho.zip
Download: jericho-html-2.6.zip

JSON
Java Library for JSON manipulations. Modified on February 10, 2011. All packages have been renamed so as to prevent any conflicts when it is deployed. Due to a limitation on nested objects, the class com.onwbp.org.json.JSONWriter has been modified.
License: The JSON License
Download: json

Mimepull [1.4]
Java Library providing a streaming API to access attachment parts in a MIME message. Released on June 4, 2009. All packages have been renamed so as to prevent any conflicts when it is deployed. All related references to the renamed packages have been updated.
License: CDDL version 1.0
Location: _OSS-Src/mimepull.zip
Download: mimepull-1.4.zip

Pikaday [1.6.1]
JavaScript library for date picking. Released on June 14, 2017. It is used as it is provided.
License: BSD and MIT license
Download: Pikaday-1.6.1.zip

Prop-types.js [15.6.1]
JavaScript library for runtime type checking. Released on February 26, 2018. It is used as it is provided.
License: MIT License
Download: Prop-types-15.6.1.zip

React.js [16.3.1]
JavaScript library for building user interfaces. Released on April 4, 2018. It is used as it is provided.
License: MIT License
Download: React-16.3.1.zip

Require.js [2.3.3]
JavaScript library for file and module loading. Released on February 19, 2017. It is used as it is provided.
License: MIT License
Download: Require-2.3.3.js

ResizeObserver.js [0.1.0]
JavaScript library for observing changes to the element size. Released on March 25, 2018. It is used as it is provided.
License: The Apache License, Version 2.0
Download: ResizeObserver-0.1.0.zip

SAAJ Impl [1.3.6]
Java Library for handling SOAP messages. It is the reference implementation of the SAAJ API (JSR-67). Released on December 9, 2010. All packages have been renamed so as to prevent any conflicts when it is deployed. All related references to the renamed packages have been updated.
License: CDDL version 1.1
Location: _OSS-Src/saaj.impl.zip
Download: saaj-impl-1.3.6.jar

Stax2 [3.1.4]
Java Library for XML processing. Released on February 28, 2014. All packages have been renamed so as to prevent any conflicts when it is deployed. All related references to the renamed packages have been updated.
License: BSD License
Download: Stax2-api-3.1.4.zip

SyntaxHighlighter [3.0.83]
JavaScript library for syntax highlighting. Released on July 2, 2010. It is used as it is provided.
License: The MIT License
Download: syntaxHighlighter-3.0.83.zip

Woodstox Core [5.0.3]
Java Library for high-performance XML processing, implementing Stax (JSR-173), SAX2 and Stax2 APIs. This library is used jointly with the Apache CXF library. Released on August 24, 2016. All packages have been renamed so as to prevent any conflicts when it is deployed. All related references to the renamed packages have been updated.
License: Apache License, Version 2.0
Download: woodstox-core-5.0.3.jar

Xmlschema Core [2.2.3]
Java Library to manipulate and generate XML schema representations. This library is used jointly with the Apache CXF library. Released on January 18, 2018. All packages have been renamed so as to prevent any conflicts when it is deployed. All related references to the renamed packages have been updated.
License: Apache License, Version 2.0
Download: xmlschema-core-2.2.3.jar

YUI [2.8.0r4]
JavaScript library for building richly interactive web applications. It is used as it is provided.
License: BSD License
Location: ebx.documentation/[ en - fr ]/[ simple - advanced ]/resources/yui
Download: yui_2.8.0.zip

YUI [2.9.0]
JavaScript library for building richly interactive web applications. It is used as it is provided.
License: BSD License
Location: ebx-manager.war/www/common/yui
Download: yui_2.9.0.zip